Oct 02 10:48:25 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 10:48:25 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 10:48:25 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:25 localhost kernel: BIOS-provided physical RAM map:
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 10:48:25 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 02 10:48:25 localhost kernel: NX (Execute Disable) protection: active
Oct 02 10:48:25 localhost kernel: APIC: Static calls initialized
Oct 02 10:48:25 localhost kernel: SMBIOS 2.8 present.
Oct 02 10:48:25 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 10:48:25 localhost kernel: Hypervisor detected: KVM
Oct 02 10:48:25 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 10:48:25 localhost kernel: kvm-clock: using sched offset of 7295613179 cycles
Oct 02 10:48:25 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 10:48:25 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 02 10:48:25 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 10:48:25 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 10:48:25 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 10:48:25 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 10:48:25 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 10:48:25 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 10:48:25 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 10:48:25 localhost kernel: Using GB pages for direct mapping
Oct 02 10:48:25 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 10:48:25 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 10:48:25 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 10:48:25 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:25 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:25 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:25 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 10:48:25 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:25 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:25 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 10:48:25 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 10:48:25 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 10:48:25 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 10:48:25 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 10:48:25 localhost kernel: No NUMA configuration found
Oct 02 10:48:25 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 10:48:25 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 02 10:48:25 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 02 10:48:25 localhost kernel: Zone ranges:
Oct 02 10:48:25 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 10:48:25 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 10:48:25 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:48:25 localhost kernel:   Device   empty
Oct 02 10:48:25 localhost kernel: Movable zone start for each node
Oct 02 10:48:25 localhost kernel: Early memory node ranges
Oct 02 10:48:25 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 10:48:25 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 10:48:25 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:48:25 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 10:48:25 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 10:48:25 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 10:48:25 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 10:48:25 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 10:48:25 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 10:48:25 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 10:48:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 10:48:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 10:48:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 10:48:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 10:48:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 10:48:25 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 10:48:25 localhost kernel: TSC deadline timer available
Oct 02 10:48:25 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 10:48:25 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 10:48:25 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 10:48:25 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 10:48:25 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 10:48:25 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 10:48:25 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 02 10:48:25 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 10:48:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 10:48:25 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 10:48:25 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 10:48:25 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 10:48:25 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 10:48:25 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 10:48:25 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 10:48:25 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 10:48:25 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 10:48:25 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:25 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 02 10:48:25 localhost kernel: random: crng init done
Oct 02 10:48:25 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 10:48:25 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 10:48:25 localhost kernel: Fallback order for Node 0: 0 
Oct 02 10:48:25 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 10:48:25 localhost kernel: Policy zone: Normal
Oct 02 10:48:25 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 10:48:25 localhost kernel: software IO TLB: area num 8.
Oct 02 10:48:25 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 10:48:25 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 10:48:25 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 10:48:25 localhost kernel: Dynamic Preempt: voluntary
Oct 02 10:48:25 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 10:48:25 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 10:48:25 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 10:48:25 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 10:48:25 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 10:48:25 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 10:48:25 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 10:48:25 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 10:48:25 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:25 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:25 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:25 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 10:48:25 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 10:48:25 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 10:48:25 localhost kernel: Console: colour VGA+ 80x25
Oct 02 10:48:25 localhost kernel: printk: console [ttyS0] enabled
Oct 02 10:48:25 localhost kernel: ACPI: Core revision 20230331
Oct 02 10:48:25 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 10:48:25 localhost kernel: x2apic enabled
Oct 02 10:48:25 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 10:48:25 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 10:48:25 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 02 10:48:25 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 10:48:25 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 10:48:25 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 10:48:25 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 10:48:25 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 10:48:25 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 10:48:25 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 10:48:25 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 10:48:25 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 10:48:25 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 10:48:25 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 10:48:25 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 10:48:25 localhost kernel: x86/bugs: return thunk changed
Oct 02 10:48:25 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 10:48:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 10:48:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 10:48:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 10:48:25 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 10:48:25 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 10:48:25 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 10:48:25 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 10:48:25 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 10:48:25 localhost kernel: landlock: Up and running.
Oct 02 10:48:25 localhost kernel: Yama: becoming mindful.
Oct 02 10:48:25 localhost kernel: SELinux:  Initializing.
Oct 02 10:48:25 localhost kernel: LSM support for eBPF active
Oct 02 10:48:25 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:48:25 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:48:25 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 10:48:25 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 10:48:25 localhost kernel: ... version:                0
Oct 02 10:48:25 localhost kernel: ... bit width:              48
Oct 02 10:48:25 localhost kernel: ... generic registers:      6
Oct 02 10:48:25 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 10:48:25 localhost kernel: ... max period:             00007fffffffffff
Oct 02 10:48:25 localhost kernel: ... fixed-purpose events:   0
Oct 02 10:48:25 localhost kernel: ... event mask:             000000000000003f
Oct 02 10:48:25 localhost kernel: signal: max sigframe size: 1776
Oct 02 10:48:25 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 10:48:25 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 10:48:25 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 10:48:25 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 10:48:25 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 10:48:25 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 10:48:25 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 02 10:48:25 localhost kernel: node 0 deferred pages initialised in 29ms
Oct 02 10:48:25 localhost kernel: Memory: 7765608K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616500K reserved, 0K cma-reserved)
Oct 02 10:48:25 localhost kernel: devtmpfs: initialized
Oct 02 10:48:25 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 10:48:25 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 10:48:25 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 10:48:25 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 10:48:25 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 10:48:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 10:48:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 10:48:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 10:48:25 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 10:48:25 localhost kernel: audit: type=2000 audit(1759402104.395:1): state=initialized audit_enabled=0 res=1
Oct 02 10:48:25 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 10:48:25 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 10:48:25 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 10:48:25 localhost kernel: cpuidle: using governor menu
Oct 02 10:48:25 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 10:48:25 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 10:48:25 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 10:48:25 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 10:48:25 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 10:48:25 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 10:48:25 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 10:48:25 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 10:48:25 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:48:25 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 10:48:25 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 10:48:25 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 10:48:25 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 10:48:25 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 10:48:25 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 10:48:25 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 10:48:25 localhost kernel: ACPI: Interpreter enabled
Oct 02 10:48:25 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 10:48:25 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 10:48:25 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 10:48:25 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 10:48:25 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 10:48:25 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 10:48:25 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [3] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [4] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [5] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [6] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [7] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [8] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [9] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [10] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [11] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [12] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [13] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [14] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [15] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [16] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [17] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [18] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [19] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [20] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [21] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [22] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [23] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [24] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [25] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [26] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [27] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [28] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [29] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [30] registered
Oct 02 10:48:25 localhost kernel: acpiphp: Slot [31] registered
Oct 02 10:48:25 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 10:48:25 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 10:48:25 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 10:48:25 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:48:25 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 10:48:25 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 02 10:48:25 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 10:48:25 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 10:48:25 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 10:48:25 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 10:48:25 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 10:48:25 localhost kernel: iommu: Default domain type: Translated
Oct 02 10:48:25 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 10:48:25 localhost kernel: SCSI subsystem initialized
Oct 02 10:48:25 localhost kernel: ACPI: bus type USB registered
Oct 02 10:48:25 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 10:48:25 localhost kernel: usbcore: registered new interface driver hub
Oct 02 10:48:25 localhost kernel: usbcore: registered new device driver usb
Oct 02 10:48:25 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 10:48:25 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 10:48:25 localhost kernel: PTP clock support registered
Oct 02 10:48:25 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 10:48:25 localhost kernel: NetLabel: Initializing
Oct 02 10:48:25 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 10:48:25 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 10:48:25 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 10:48:25 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 10:48:25 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 10:48:25 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 10:48:25 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 10:48:25 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 10:48:25 localhost kernel: vgaarb: loaded
Oct 02 10:48:25 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 10:48:25 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 10:48:25 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 10:48:25 localhost kernel: pnp: PnP ACPI init
Oct 02 10:48:25 localhost kernel: pnp 00:03: [dma 2]
Oct 02 10:48:25 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 10:48:25 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 10:48:25 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 10:48:25 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 10:48:25 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 10:48:25 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 10:48:25 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 10:48:25 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 10:48:25 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 10:48:25 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 10:48:25 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:48:25 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:48:25 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 10:48:25 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 10:48:25 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 10:48:25 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 10:48:25 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 10:48:25 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 82009 usecs
Oct 02 10:48:25 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 10:48:25 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 10:48:25 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 10:48:25 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 10:48:25 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 10:48:25 localhost kernel: Initialise system trusted keyrings
Oct 02 10:48:25 localhost kernel: Key type blacklist registered
Oct 02 10:48:25 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 10:48:25 localhost kernel: zbud: loaded
Oct 02 10:48:25 localhost kernel: integrity: Platform Keyring initialized
Oct 02 10:48:25 localhost kernel: integrity: Machine keyring initialized
Oct 02 10:48:25 localhost kernel: Freeing initrd memory: 86104K
Oct 02 10:48:25 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 10:48:25 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 10:48:25 localhost kernel: Key type asymmetric registered
Oct 02 10:48:25 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 10:48:25 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 10:48:25 localhost kernel: io scheduler mq-deadline registered
Oct 02 10:48:25 localhost kernel: io scheduler kyber registered
Oct 02 10:48:25 localhost kernel: io scheduler bfq registered
Oct 02 10:48:25 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 10:48:25 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 10:48:25 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 10:48:25 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 10:48:25 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 10:48:25 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 10:48:25 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 10:48:25 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 10:48:25 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 10:48:25 localhost kernel: Non-volatile memory driver v1.3
Oct 02 10:48:25 localhost kernel: rdac: device handler registered
Oct 02 10:48:25 localhost kernel: hp_sw: device handler registered
Oct 02 10:48:25 localhost kernel: emc: device handler registered
Oct 02 10:48:25 localhost kernel: alua: device handler registered
Oct 02 10:48:25 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 10:48:25 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 10:48:25 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 10:48:25 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 10:48:25 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 10:48:25 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 10:48:25 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 10:48:25 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 10:48:25 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 10:48:25 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 10:48:25 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 10:48:25 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 10:48:25 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 10:48:25 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 10:48:25 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 10:48:25 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 10:48:25 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 10:48:25 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 10:48:25 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 10:48:25 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 10:48:25 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:48:24 UTC (1759402104)
Oct 02 10:48:25 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 02 10:48:25 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 10:48:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 10:48:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 10:48:25 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 10:48:25 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 10:48:25 localhost kernel: usbhid: USB HID core driver
Oct 02 10:48:25 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 10:48:25 localhost kernel: Initializing XFRM netlink socket
Oct 02 10:48:25 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 10:48:25 localhost kernel: Segment Routing with IPv6
Oct 02 10:48:25 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 10:48:25 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 10:48:25 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 10:48:25 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 10:48:25 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 10:48:25 localhost kernel: sched_clock: Marking stable (1253004280, 148111080)->(1487135920, -86020560)
Oct 02 10:48:25 localhost kernel: registered taskstats version 1
Oct 02 10:48:25 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 10:48:25 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:48:25 localhost kernel: page_owner is disabled
Oct 02 10:48:25 localhost kernel: Key type .fscrypt registered
Oct 02 10:48:25 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 10:48:25 localhost kernel: Key type big_key registered
Oct 02 10:48:25 localhost kernel: Key type encrypted registered
Oct 02 10:48:25 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 10:48:25 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:48:25 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 10:48:25 localhost kernel: ima: No architecture policies found
Oct 02 10:48:25 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 10:48:25 localhost kernel: evm: security.selinux
Oct 02 10:48:25 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 10:48:25 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 10:48:25 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 10:48:25 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 10:48:25 localhost kernel: evm: security.apparmor (disabled)
Oct 02 10:48:25 localhost kernel: evm: security.ima
Oct 02 10:48:25 localhost kernel: evm: security.capability
Oct 02 10:48:25 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 10:48:25 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 10:48:25 localhost kernel: Running certificate verification RSA selftest
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 10:48:25 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 10:48:25 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 10:48:25 localhost kernel: clk: Disabling unused clocks
Oct 02 10:48:25 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 10:48:25 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 10:48:25 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 10:48:25 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 10:48:25 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 10:48:25 localhost kernel: Run /init as init process
Oct 02 10:48:25 localhost kernel:   with arguments:
Oct 02 10:48:25 localhost kernel:     /init
Oct 02 10:48:25 localhost kernel:   with environment:
Oct 02 10:48:25 localhost kernel:     HOME=/
Oct 02 10:48:25 localhost kernel:     TERM=linux
Oct 02 10:48:25 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 10:48:25 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 10:48:25 localhost systemd[1]: Detected virtualization kvm.
Oct 02 10:48:25 localhost systemd[1]: Detected architecture x86-64.
Oct 02 10:48:25 localhost systemd[1]: Running in initrd.
Oct 02 10:48:25 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 10:48:25 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 10:48:25 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 10:48:25 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 10:48:25 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 10:48:25 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 10:48:25 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 10:48:25 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 10:48:25 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 10:48:25 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 10:48:25 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 10:48:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 10:48:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 10:48:25 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 10:48:25 localhost systemd[1]: Reached target Local File Systems.
Oct 02 10:48:25 localhost systemd[1]: Reached target Path Units.
Oct 02 10:48:25 localhost systemd[1]: Reached target Slice Units.
Oct 02 10:48:25 localhost systemd[1]: Reached target Swaps.
Oct 02 10:48:25 localhost systemd[1]: Reached target Timer Units.
Oct 02 10:48:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 10:48:25 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 10:48:25 localhost systemd[1]: Listening on Journal Socket.
Oct 02 10:48:25 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 10:48:25 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 10:48:25 localhost systemd[1]: Reached target Socket Units.
Oct 02 10:48:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 10:48:25 localhost systemd[1]: Starting Journal Service...
Oct 02 10:48:25 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 10:48:25 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 10:48:25 localhost systemd[1]: Starting Create System Users...
Oct 02 10:48:25 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 10:48:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 10:48:25 localhost systemd[1]: Finished Create System Users.
Oct 02 10:48:25 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 10:48:25 localhost systemd-journald[309]: Journal started
Oct 02 10:48:25 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/8a1e3318b91c48d18474e3593dbdcd45) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:48:25 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct 02 10:48:25 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct 02 10:48:25 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 10:48:25 localhost systemd[1]: Started Journal Service.
Oct 02 10:48:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 10:48:25 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 10:48:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 10:48:25 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 10:48:25 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 10:48:25 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 10:48:25 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 10:48:25 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 10:48:25 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:25 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 10:48:25 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 10:48:25 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 10:48:25 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 10:48:25 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 10:48:25 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 10:48:25 localhost kernel: RPC: Registered udp transport module.
Oct 02 10:48:25 localhost kernel: RPC: Registered tcp transport module.
Oct 02 10:48:25 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 10:48:25 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 10:48:25 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 02 10:48:25 localhost rpc.statd[443]: Initializing NSM state
Oct 02 10:48:25 localhost rpc.idmapd[448]: Setting log level to 0
Oct 02 10:48:25 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 10:48:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 10:48:25 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 10:48:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 10:48:25 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 10:48:25 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 10:48:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 10:48:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 10:48:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:48:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 10:48:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:48:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:48:25 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:48:25 localhost systemd[1]: Reached target Network.
Oct 02 10:48:25 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:48:25 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 10:48:26 localhost kernel: libata version 3.00 loaded.
Oct 02 10:48:26 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 10:48:26 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 10:48:26 localhost kernel: scsi host0: ata_piix
Oct 02 10:48:26 localhost systemd-udevd[497]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:48:26 localhost kernel: scsi host1: ata_piix
Oct 02 10:48:26 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 10:48:26 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 10:48:26 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 10:48:26 localhost kernel:  vda: vda1
Oct 02 10:48:26 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 10:48:26 localhost kernel: ata1: found unknown device (class 0)
Oct 02 10:48:26 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 10:48:26 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 10:48:26 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 10:48:26 localhost systemd[1]: Reached target System Initialization.
Oct 02 10:48:26 localhost systemd[1]: Reached target Basic System.
Oct 02 10:48:26 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 10:48:26 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 10:48:26 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 10:48:26 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:48:26 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 10:48:26 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 10:48:26 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 10:48:26 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 10:48:26 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 10:48:26 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 10:48:26 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 10:48:26 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 10:48:26 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 10:48:26 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 10:48:26 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:48:26 localhost systemd[1]: Mounting /sysroot...
Oct 02 10:48:27 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 10:48:27 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 10:48:27 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 10:48:27 localhost systemd[1]: Mounted /sysroot.
Oct 02 10:48:27 localhost systemd[1]: Reached target Initrd Root File System.
Oct 02 10:48:27 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 10:48:27 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 10:48:27 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 10:48:27 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 10:48:27 localhost systemd[1]: Starting dracut mount hook...
Oct 02 10:48:27 localhost systemd[1]: Finished dracut mount hook.
Oct 02 10:48:27 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 10:48:27 localhost rpc.idmapd[448]: exiting on signal 15
Oct 02 10:48:27 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 10:48:27 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 10:48:27 localhost systemd[1]: Stopped target Network.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Timer Units.
Oct 02 10:48:27 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 10:48:27 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Basic System.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Path Units.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Slice Units.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Socket Units.
Oct 02 10:48:27 localhost systemd[1]: Stopped target System Initialization.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Swaps.
Oct 02 10:48:27 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 10:48:27 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 10:48:27 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 10:48:27 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 10:48:27 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 10:48:27 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 10:48:27 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 10:48:27 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 10:48:27 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 10:48:27 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 10:48:27 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 10:48:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 10:48:27 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 10:48:27 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Closed udev Control Socket.
Oct 02 10:48:27 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 10:48:27 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 10:48:27 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 10:48:27 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 10:48:27 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 10:48:27 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 10:48:27 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Stopped Create System Users.
Oct 02 10:48:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 10:48:27 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 10:48:27 localhost systemd[1]: Reached target Switch Root.
Oct 02 10:48:27 localhost systemd[1]: Starting Switch Root...
Oct 02 10:48:27 localhost systemd[1]: Switching root.
Oct 02 10:48:27 localhost systemd-journald[309]: Journal stopped
Oct 02 11:42:42 compute-0 python3.9[70460]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:42 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:42:43 compute-0 python3.9[70611]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:43 compute-0 sshd-session[69618]: Connection closed by 192.168.122.30 port 60724
Oct 02 11:42:43 compute-0 sshd-session[69615]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:42:43 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 02 11:42:43 compute-0 systemd[1]: session-18.scope: Consumed 5.922s CPU time.
Oct 02 11:42:43 compute-0 systemd-logind[820]: Session 18 logged out. Waiting for processes to exit.
Oct 02 11:42:43 compute-0 systemd-logind[820]: Removed session 18.
Oct 02 11:42:51 compute-0 sshd-session[70637]: Accepted publickey for zuul from 38.129.56.116 port 60878 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:42:51 compute-0 systemd-logind[820]: New session 19 of user zuul.
Oct 02 11:42:51 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 02 11:42:51 compute-0 sshd-session[70637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:42:51 compute-0 sudo[70713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbldrcgazqcjvzothbzxgyaodglpdftr ; /usr/bin/python3'
Oct 02 11:42:51 compute-0 sudo[70713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-0 useradd[70717]: new group: name=ceph-admin, GID=42478
Oct 02 11:42:52 compute-0 useradd[70717]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 02 11:42:52 compute-0 sudo[70713]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:52 compute-0 sudo[70799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-socfdxkydgsdwnpogivkgehxwbzokpyu ; /usr/bin/python3'
Oct 02 11:42:52 compute-0 sudo[70799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-0 sudo[70799]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-0 sudo[70872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnnpabplyqztlwvqverdcyizihgjwul ; /usr/bin/python3'
Oct 02 11:42:53 compute-0 sudo[70872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:53 compute-0 sudo[70872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-0 sudo[70922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaxdlyiertfhsssnjgnxoygyudatoybq ; /usr/bin/python3'
Oct 02 11:42:53 compute-0 sudo[70922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:53 compute-0 sudo[70922]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-0 sudo[70948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goextdpzvkjcqmmfuxjjweodicrpeavh ; /usr/bin/python3'
Oct 02 11:42:53 compute-0 sudo[70948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-0 sudo[70948]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:54 compute-0 sudo[70974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmeetmyjqocltzfqcmqlluedlbivnzxm ; /usr/bin/python3'
Oct 02 11:42:54 compute-0 sudo[70974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-0 sudo[70974]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:54 compute-0 sudo[71000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmnkehjyodthgqqbbfuilimqfzioggfc ; /usr/bin/python3'
Oct 02 11:42:54 compute-0 sudo[71000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-0 sudo[71000]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:55 compute-0 sudo[71078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfwnedlnvxshirsjiaxlenllsxgzffn ; /usr/bin/python3'
Oct 02 11:42:55 compute-0 sudo[71078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-0 sudo[71078]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:55 compute-0 sudo[71151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqvuqfrtmakkoiezmnidzaoesgvvjyoh ; /usr/bin/python3'
Oct 02 11:42:55 compute-0 sudo[71151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-0 sudo[71151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:56 compute-0 sudo[71253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdobfdotqfqcdvlimskilcmcetutsqxf ; /usr/bin/python3'
Oct 02 11:42:56 compute-0 sudo[71253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:56 compute-0 sudo[71253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:56 compute-0 sudo[71326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzyazxxqwppafujrkkcsnqxtziekcns ; /usr/bin/python3'
Oct 02 11:42:56 compute-0 sudo[71326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:56 compute-0 sudo[71326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:57 compute-0 sudo[71376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkugbumzxpsakneivdghsucdomltimht ; /usr/bin/python3'
Oct 02 11:42:57 compute-0 sudo[71376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:57 compute-0 python3[71378]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:58 compute-0 sudo[71376]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:58 compute-0 sudo[71472]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqpjqphqieyuetrnxhjbfgmcsnbbugz ; /usr/bin/python3'
Oct 02 11:42:58 compute-0 sudo[71472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:59 compute-0 python3[71474]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:43:00 compute-0 sudo[71472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:00 compute-0 sudo[71501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgxrwdvluwhkiwxhtohkejtqtmbeaupa ; /usr/bin/python3'
Oct 02 11:43:00 compute-0 sudo[71501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:00 compute-0 python3[71503]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:00 compute-0 sudo[71501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:00 compute-0 sudo[71527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsppirphotxzjmvepsmygdpvbypfzlar ; /usr/bin/python3'
Oct 02 11:43:00 compute-0 sudo[71527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:00 compute-0 sshd-session[71481]: Invalid user admin from 80.94.95.25 port 14653
Oct 02 11:43:01 compute-0 python3[71529]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:01 compute-0 kernel: loop: module loaded
Oct 02 11:43:01 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Oct 02 11:43:01 compute-0 sudo[71527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:01 compute-0 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnuwccncvswmigrtswwlgmbvzbcgkpsn ; /usr/bin/python3'
Oct 02 11:43:01 compute-0 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:01 compute-0 python3[71564]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:01 compute-0 lvm[71567]: PV /dev/loop3 not used.
Oct 02 11:43:01 compute-0 sshd-session[71481]: Received disconnect from 80.94.95.25 port 14653:11: Bye [preauth]
Oct 02 11:43:01 compute-0 sshd-session[71481]: Disconnected from invalid user admin 80.94.95.25 port 14653 [preauth]
Oct 02 11:43:01 compute-0 lvm[71569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:43:01 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 02 11:43:01 compute-0 lvm[71575]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 02 11:43:02 compute-0 lvm[71579]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:43:02 compute-0 lvm[71579]: VG ceph_vg0 finished
Oct 02 11:43:02 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 02 11:43:02 compute-0 sudo[71562]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:02 compute-0 sudo[71655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawaitgqvtcinyknvdlpdostunurtawh ; /usr/bin/python3'
Oct 02 11:43:02 compute-0 sudo[71655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:02 compute-0 python3[71657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:43:02 compute-0 sudo[71655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:02 compute-0 sudo[71728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxmjrmvjyuvtayaildbeqkrfzkifrgb ; /usr/bin/python3'
Oct 02 11:43:02 compute-0 sudo[71728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:02 compute-0 python3[71730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405382.238593-33445-24234421598910/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:02 compute-0 sudo[71728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:03 compute-0 sudo[71778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsmhydcyfrhyfjbvkzotdtlmtymijfhs ; /usr/bin/python3'
Oct 02 11:43:03 compute-0 sudo[71778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:03 compute-0 python3[71780]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:43:03 compute-0 systemd[1]: Reloading.
Oct 02 11:43:03 compute-0 systemd-sysv-generator[71813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:03 compute-0 systemd-rc-local-generator[71809]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:04 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 02 11:43:04 compute-0 bash[71821]: /dev/loop3: [64513]:4349021 (/var/lib/ceph-osd-0.img)
Oct 02 11:43:04 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 02 11:43:04 compute-0 lvm[71822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:43:04 compute-0 lvm[71822]: VG ceph_vg0 finished
Oct 02 11:43:04 compute-0 sudo[71778]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:06 compute-0 python3[71846]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:43:08 compute-0 sudo[71937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-somvawqlrlvyqfitxylvbawiqrefwwzi ; /usr/bin/python3'
Oct 02 11:43:08 compute-0 sudo[71937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:08 compute-0 python3[71939]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:43:10 compute-0 groupadd[71945]: group added to /etc/group: name=cephadm, GID=992
Oct 02 11:43:11 compute-0 groupadd[71945]: group added to /etc/gshadow: name=cephadm
Oct 02 11:43:11 compute-0 groupadd[71945]: new group: name=cephadm, GID=992
Oct 02 11:43:11 compute-0 useradd[71952]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 02 11:43:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:43:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:43:12 compute-0 sudo[71937]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:12 compute-0 sudo[72051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctaybiomeqmwaffxcugmzscqqbkccqz ; /usr/bin/python3'
Oct 02 11:43:12 compute-0 sudo[72051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:12 compute-0 python3[72053]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:12 compute-0 sudo[72051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:12 compute-0 sudo[72079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-binlurbildwwxwdenmxseuktsvynehqu ; /usr/bin/python3'
Oct 02 11:43:12 compute-0 sudo[72079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:12 compute-0 python3[72081]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:13 compute-0 sudo[72079]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:43:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:43:13 compute-0 systemd[1]: run-r41f3857a7bbe46d4adf88d2d0f4bc604.service: Deactivated successfully.
Oct 02 11:43:13 compute-0 sudo[72141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvvugzzztfsqagsbqdhydptgzvnnpnk ; /usr/bin/python3'
Oct 02 11:43:13 compute-0 sudo[72141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:13 compute-0 python3[72143]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:13 compute-0 sudo[72141]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:13 compute-0 sudo[72167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znswplngpkqrocbxqnqjpidspsffoziw ; /usr/bin/python3'
Oct 02 11:43:13 compute-0 sudo[72167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:14 compute-0 python3[72169]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:14 compute-0 sudo[72167]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:14 compute-0 sudo[72245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ircendywhpdcrzxqueaowhbgrihrlgsh ; /usr/bin/python3'
Oct 02 11:43:14 compute-0 sudo[72245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:14 compute-0 python3[72247]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:43:14 compute-0 sudo[72245]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:15 compute-0 sudo[72318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnufqdqtlrjqguqwncjswigswrtctsom ; /usr/bin/python3'
Oct 02 11:43:15 compute-0 sudo[72318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:15 compute-0 python3[72320]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405394.573448-33636-96599394663150/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:15 compute-0 sudo[72318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:15 compute-0 sudo[72420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coqubnkgslgbgvxqnrpewswlullajmzq ; /usr/bin/python3'
Oct 02 11:43:15 compute-0 sudo[72420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:15 compute-0 python3[72422]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:43:15 compute-0 sudo[72420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:16 compute-0 sudo[72493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspmomlvfkhtpkpffhhxqgtmkeoesdro ; /usr/bin/python3'
Oct 02 11:43:16 compute-0 sudo[72493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:16 compute-0 python3[72495]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405395.7132661-33654-247337975638220/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:16 compute-0 sudo[72493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:16 compute-0 sudo[72543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szuzpigfffqdgdctkpslpdbmqcddnwzo ; /usr/bin/python3'
Oct 02 11:43:16 compute-0 sudo[72543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:16 compute-0 python3[72545]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:16 compute-0 sudo[72543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:16 compute-0 sudo[72571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvlasriqwcqkuzktsictstpgmgogtyd ; /usr/bin/python3'
Oct 02 11:43:16 compute-0 sudo[72571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:17 compute-0 python3[72573]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:17 compute-0 sudo[72571]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:17 compute-0 sudo[72599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmfawmywzyjmlsuuttlslffgsbknacy ; /usr/bin/python3'
Oct 02 11:43:17 compute-0 sudo[72599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:17 compute-0 python3[72601]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:17 compute-0 sudo[72599]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:17 compute-0 sudo[72627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afgzqmwonrlbabbkaflzwjstrqzgnzya ; /usr/bin/python3'
Oct 02 11:43:17 compute-0 sudo[72627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:17 compute-0 python3[72629]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:17 compute-0 sshd-session[72644]: Accepted publickey for ceph-admin from 192.168.122.100 port 39476 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:43:17 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 11:43:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 11:43:17 compute-0 systemd-logind[820]: New session 20 of user ceph-admin.
Oct 02 11:43:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 11:43:17 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 11:43:17 compute-0 systemd[72648]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:43:18 compute-0 systemd[72648]: Queued start job for default target Main User Target.
Oct 02 11:43:18 compute-0 systemd[72648]: Created slice User Application Slice.
Oct 02 11:43:18 compute-0 systemd[72648]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:43:18 compute-0 systemd[72648]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:43:18 compute-0 systemd[72648]: Reached target Paths.
Oct 02 11:43:18 compute-0 systemd[72648]: Reached target Timers.
Oct 02 11:43:18 compute-0 systemd[72648]: Starting D-Bus User Message Bus Socket...
Oct 02 11:43:18 compute-0 systemd[72648]: Starting Create User's Volatile Files and Directories...
Oct 02 11:43:18 compute-0 systemd[72648]: Finished Create User's Volatile Files and Directories.
Oct 02 11:43:18 compute-0 systemd[72648]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:43:18 compute-0 systemd[72648]: Reached target Sockets.
Oct 02 11:43:18 compute-0 systemd[72648]: Reached target Basic System.
Oct 02 11:43:18 compute-0 systemd[72648]: Reached target Main User Target.
Oct 02 11:43:18 compute-0 systemd[72648]: Startup finished in 110ms.
Oct 02 11:43:18 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 11:43:18 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct 02 11:43:18 compute-0 sshd-session[72644]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:43:18 compute-0 sudo[72665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 02 11:43:18 compute-0 sudo[72665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:43:18 compute-0 sudo[72665]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:18 compute-0 sshd-session[72664]: Received disconnect from 192.168.122.100 port 39476:11: disconnected by user
Oct 02 11:43:18 compute-0 sshd-session[72664]: Disconnected from user ceph-admin 192.168.122.100 port 39476
Oct 02 11:43:18 compute-0 sshd-session[72644]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 02 11:43:18 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 02 11:43:18 compute-0 systemd-logind[820]: Session 20 logged out. Waiting for processes to exit.
Oct 02 11:43:18 compute-0 systemd-logind[820]: Removed session 20.
Oct 02 11:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2707602097-lower\x2dmapped.mount: Deactivated successfully.
Oct 02 11:43:28 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 02 11:43:28 compute-0 systemd[72648]: Activating special unit Exit the Session...
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped target Main User Target.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped target Basic System.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped target Paths.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped target Sockets.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped target Timers.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:43:28 compute-0 systemd[72648]: Closed D-Bus User Message Bus Socket.
Oct 02 11:43:28 compute-0 systemd[72648]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:43:28 compute-0 systemd[72648]: Removed slice User Application Slice.
Oct 02 11:43:28 compute-0 systemd[72648]: Reached target Shutdown.
Oct 02 11:43:28 compute-0 systemd[72648]: Finished Exit the Session.
Oct 02 11:43:28 compute-0 systemd[72648]: Reached target Exit the Session.
Oct 02 11:43:28 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 02 11:43:28 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 02 11:43:28 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 02 11:43:28 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 02 11:43:28 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 02 11:43:28 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 02 11:43:28 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 02 11:43:32 compute-0 podman[72702]: 2025-10-02 11:43:32.18662658 +0000 UTC m=+13.895972271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.25234868 +0000 UTC m=+0.038801927 container create 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:32 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 02 11:43:32 compute-0 systemd[1]: Started libpod-conmon-1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491.scope.
Oct 02 11:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.235376682 +0000 UTC m=+0.021829939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.340246459 +0000 UTC m=+0.126699726 container init 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.346581531 +0000 UTC m=+0.133034768 container start 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.351169503 +0000 UTC m=+0.137622760 container attach 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:32 compute-0 pedantic_agnesi[72809]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 11:43:32 compute-0 systemd[1]: libpod-1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491.scope: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.666993697 +0000 UTC m=+0.453446934 container died 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5132ca59c1d9cf80eba8c1eb9875b4a50865ae078f5d8b12bafb3cf862e7091c-merged.mount: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72793]: 2025-10-02 11:43:32.720203027 +0000 UTC m=+0.506656264 container remove 1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491 (image=quay.io/ceph/ceph:v18, name=pedantic_agnesi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:32 compute-0 systemd[1]: libpod-conmon-1d503943eff69aae0ba1728c9be106f057a8e51d42959d657a86371d02d7b491.scope: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.775245571 +0000 UTC m=+0.037113099 container create 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:43:32 compute-0 systemd[1]: Started libpod-conmon-235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6.scope.
Oct 02 11:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.825563308 +0000 UTC m=+0.087430856 container init 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.830644884 +0000 UTC m=+0.092512412 container start 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:32 compute-0 boring_raman[72840]: 167 167
Oct 02 11:43:32 compute-0 systemd[1]: libpod-235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6.scope: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.834165815 +0000 UTC m=+0.096033343 container attach 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.834923807 +0000 UTC m=+0.096791335 container died 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.759081036 +0000 UTC m=+0.020948564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:32 compute-0 podman[72824]: 2025-10-02 11:43:32.86734175 +0000 UTC m=+0.129209278 container remove 235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6 (image=quay.io/ceph/ceph:v18, name=boring_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:43:32 compute-0 systemd[1]: libpod-conmon-235b8ecfb68f796ad6740cb94d0c1028457451b705b53a57f9d8688ef8dd83b6.scope: Deactivated successfully.
Oct 02 11:43:32 compute-0 podman[72857]: 2025-10-02 11:43:32.917750929 +0000 UTC m=+0.032096085 container create ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:43:32 compute-0 systemd[1]: Started libpod-conmon-ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71.scope.
Oct 02 11:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:32 compute-0 podman[72857]: 2025-10-02 11:43:32.975429499 +0000 UTC m=+0.089774674 container init ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:32 compute-0 podman[72857]: 2025-10-02 11:43:32.980327399 +0000 UTC m=+0.094672554 container start ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:43:32 compute-0 podman[72857]: 2025-10-02 11:43:32.983402288 +0000 UTC m=+0.097747443 container attach ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:43:32 compute-0 elastic_brahmagupta[72874]: AQBkZd5oLsGROxAAjNSJoIkkKSRRvApBJU5nNQ==
Oct 02 11:43:33 compute-0 podman[72857]: 2025-10-02 11:43:32.903358886 +0000 UTC m=+0.017704061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:33 compute-0 systemd[1]: libpod-ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71.scope: Deactivated successfully.
Oct 02 11:43:33 compute-0 podman[72857]: 2025-10-02 11:43:33.004178195 +0000 UTC m=+0.118523350 container died ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:43:33 compute-0 podman[72857]: 2025-10-02 11:43:33.042204859 +0000 UTC m=+0.156550014 container remove ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71 (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:33 compute-0 systemd[1]: libpod-conmon-ba154eb7ccd07b3894a56dc1862e934cf73e85658155dcc3fc1bb83fd3649c71.scope: Deactivated successfully.
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.104067369 +0000 UTC m=+0.040322831 container create 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:33 compute-0 systemd[1]: Started libpod-conmon-80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745.scope.
Oct 02 11:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.166052012 +0000 UTC m=+0.102307484 container init 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.171145928 +0000 UTC m=+0.107401390 container start 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.17433825 +0000 UTC m=+0.110593912 container attach 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.083475376 +0000 UTC m=+0.019730838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:33 compute-0 hungry_gates[72910]: AQBlZd5obytQCxAAvtI/mqfaMb0ggQdmLZgumA==
Oct 02 11:43:33 compute-0 systemd[1]: libpod-80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745.scope: Deactivated successfully.
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.194064367 +0000 UTC m=+0.130319829 container died 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc794a6298bc4c1dd653807065b1d1d1fd014e1791a1f9b15cef6e86bfd63be2-merged.mount: Deactivated successfully.
Oct 02 11:43:33 compute-0 podman[72893]: 2025-10-02 11:43:33.223733581 +0000 UTC m=+0.159989043 container remove 80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745 (image=quay.io/ceph/ceph:v18, name=hungry_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:33 compute-0 systemd[1]: libpod-conmon-80e00d36138520bae8b6ca4581adf168c576bf6521e2cd29a00a3e29c0d93745.scope: Deactivated successfully.
Oct 02 11:43:33 compute-0 podman[72928]: 2025-10-02 11:43:33.280494533 +0000 UTC m=+0.037571581 container create 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:43:33 compute-0 systemd[1]: Started libpod-conmon-9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3.scope.
Oct 02 11:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:33 compute-0 podman[72928]: 2025-10-02 11:43:33.263273078 +0000 UTC m=+0.020350146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:34 compute-0 podman[72928]: 2025-10-02 11:43:34.00737126 +0000 UTC m=+0.764448338 container init 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:34 compute-0 podman[72928]: 2025-10-02 11:43:34.012472467 +0000 UTC m=+0.769549515 container start 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:34 compute-0 jolly_solomon[72945]: AQBmZd5orY8PAhAAsIkUHo8jd+SNxMIEHKVsSg==
Oct 02 11:43:34 compute-0 systemd[1]: libpod-9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 podman[72928]: 2025-10-02 11:43:34.162789891 +0000 UTC m=+0.919866939 container attach 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:34 compute-0 podman[72928]: 2025-10-02 11:43:34.163299465 +0000 UTC m=+0.920376523 container died 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:43:34 compute-0 podman[72928]: 2025-10-02 11:43:34.204286874 +0000 UTC m=+0.961363922 container remove 9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3 (image=quay.io/ceph/ceph:v18, name=jolly_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:34 compute-0 systemd[1]: libpod-conmon-9a873d14e714a7c638064aff8bc22d9ab8e4d3464b962afec1369dcc184eb9c3.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.268501782 +0000 UTC m=+0.043181494 container create a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:34 compute-0 systemd[1]: Started libpod-conmon-a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a.scope.
Oct 02 11:43:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabbd68f6a44c9d0a67628a0c33470610e6105a6cb2b70dfb2c79ae7422cb676/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.322471214 +0000 UTC m=+0.097150966 container init a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.32682794 +0000 UTC m=+0.101507652 container start a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.330472854 +0000 UTC m=+0.105152576 container attach a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.247484217 +0000 UTC m=+0.022163949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:34 compute-0 trusting_golick[72981]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 02 11:43:34 compute-0 trusting_golick[72981]: setting min_mon_release = pacific
Oct 02 11:43:34 compute-0 trusting_golick[72981]: /usr/bin/monmaptool: set fsid to 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:34 compute-0 trusting_golick[72981]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 02 11:43:34 compute-0 systemd[1]: libpod-a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.354369552 +0000 UTC m=+0.129049264 container died a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:43:34 compute-0 podman[72964]: 2025-10-02 11:43:34.392519239 +0000 UTC m=+0.167198951 container remove a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a (image=quay.io/ceph/ceph:v18, name=trusting_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:34 compute-0 systemd[1]: libpod-conmon-a2f1a039a66e4515c580229032b80b05a4bff00da379bd3d413f716d0d04907a.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.45127819 +0000 UTC m=+0.040534107 container create fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:43:34 compute-0 systemd[1]: Started libpod-conmon-fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c.scope.
Oct 02 11:43:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e242376826fb5c72e86fbf84d04711e1b5b3d98e9fd5ed4e01e7bb0c800b50/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e242376826fb5c72e86fbf84d04711e1b5b3d98e9fd5ed4e01e7bb0c800b50/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e242376826fb5c72e86fbf84d04711e1b5b3d98e9fd5ed4e01e7bb0c800b50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e242376826fb5c72e86fbf84d04711e1b5b3d98e9fd5ed4e01e7bb0c800b50/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.512938793 +0000 UTC m=+0.102194720 container init fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.517735331 +0000 UTC m=+0.106991238 container start fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.520683436 +0000 UTC m=+0.109939363 container attach fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.432878701 +0000 UTC m=+0.022134628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:34 compute-0 systemd[1]: libpod-fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.610192061 +0000 UTC m=+0.199447968 container died fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:43:34 compute-0 podman[72998]: 2025-10-02 11:43:34.647972807 +0000 UTC m=+0.237228714 container remove fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c (image=quay.io/ceph/ceph:v18, name=objective_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:34 compute-0 systemd[1]: libpod-conmon-fed1c8dc62d6432facd970e8fdf06bd0e20a87abbb9e26c811cd36c739fa1d4c.scope: Deactivated successfully.
Oct 02 11:43:34 compute-0 systemd[1]: Reloading.
Oct 02 11:43:34 compute-0 systemd-sysv-generator[73085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:34 compute-0 systemd-rc-local-generator[73082]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dabbd68f6a44c9d0a67628a0c33470610e6105a6cb2b70dfb2c79ae7422cb676-merged.mount: Deactivated successfully.
Oct 02 11:43:34 compute-0 systemd[1]: Reloading.
Oct 02 11:43:34 compute-0 systemd-rc-local-generator[73119]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:34 compute-0 systemd-sysv-generator[73122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:35 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 02 11:43:35 compute-0 systemd[1]: Reloading.
Oct 02 11:43:35 compute-0 systemd-sysv-generator[73160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:35 compute-0 systemd-rc-local-generator[73157]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:35 compute-0 systemd[1]: Reached target Ceph cluster 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:35 compute-0 systemd[1]: Reloading.
Oct 02 11:43:35 compute-0 systemd-sysv-generator[73197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:35 compute-0 systemd-rc-local-generator[73194]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:35 compute-0 systemd[1]: Reloading.
Oct 02 11:43:35 compute-0 systemd-rc-local-generator[73231]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:35 compute-0 systemd-sysv-generator[73235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:35 compute-0 systemd[1]: Created slice Slice /system/ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:35 compute-0 systemd[1]: Reached target System Time Set.
Oct 02 11:43:35 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 02 11:43:35 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:36 compute-0 podman[73292]: 2025-10-02 11:43:36.10058912 +0000 UTC m=+0.041580277 container create 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8113079a756d08634870396c286c7465fdc60ccfd0aad010051a81c1bc9df7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8113079a756d08634870396c286c7465fdc60ccfd0aad010051a81c1bc9df7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8113079a756d08634870396c286c7465fdc60ccfd0aad010051a81c1bc9df7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8113079a756d08634870396c286c7465fdc60ccfd0aad010051a81c1bc9df7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 podman[73292]: 2025-10-02 11:43:36.164117737 +0000 UTC m=+0.105108924 container init 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:43:36 compute-0 podman[73292]: 2025-10-02 11:43:36.16908091 +0000 UTC m=+0.110072077 container start 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:43:36 compute-0 podman[73292]: 2025-10-02 11:43:36.079307798 +0000 UTC m=+0.020298985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:36 compute-0 bash[73292]: 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b
Oct 02 11:43:36 compute-0 systemd[1]: Started Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:36 compute-0 ceph-mon[73312]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: pidfile_write: ignore empty --pid-file
Oct 02 11:43:36 compute-0 ceph-mon[73312]: load: jerasure load: lrc 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Git sha 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: DB SUMMARY
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: DB Session ID:  RBJHR254CG91HQS59BQF
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                                     Options.env: 0x562e2311dc40
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                                Options.info_log: 0x562e23b0aec0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                                 Options.wal_dir: 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                    Options.write_buffer_manager: 0x562e23b1ab40
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                               Options.row_cache: None
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                              Options.wal_filter: None
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.wal_compression: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.max_background_jobs: 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Compression algorithms supported:
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kZSTD supported: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:           Options.merge_operator: 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:        Options.compaction_filter: None
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562e23b0aaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562e23b031f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.compression: NoCompression
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.num_levels: 7
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 804ca3a6-55d1-491a-b05c-fd7bfcd46561
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405416208054, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405416347443, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "RBJHR254CG91HQS59BQF", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405416347728, "job": 1, "event": "recovery_finished"}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562e23b2ce00
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: DB pointer 0x562e23bb6000
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:43:36 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.139       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.139       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.139       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.14              0.00         1    0.139       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562e23b031f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:43:36 compute-0 ceph-mon[73312]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@-1(???) e0 preinit fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 02 11:43:36 compute-0 ceph-mon[73312]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 11:43:36 compute-0 ceph-mon[73312]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:43:34.554500Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864108,os=Linux}
Oct 02 11:43:36 compute-0 podman[73334]: 2025-10-02 11:43:36.452584345 +0000 UTC m=+0.072106326 container create 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).mds e1 new map
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mkfs 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:36 compute-0 podman[73334]: 2025-10-02 11:43:36.404535392 +0000 UTC m=+0.024057423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:43:36 compute-0 ceph-mon[73312]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 02 11:43:36 compute-0 ceph-mon[73312]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:36 compute-0 systemd[1]: Started libpod-conmon-07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57.scope.
Oct 02 11:43:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa71efbeedd016df96ff129b6260bb0d4c4a1bfc2db85d291e01596ee15b51bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa71efbeedd016df96ff129b6260bb0d4c4a1bfc2db85d291e01596ee15b51bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa71efbeedd016df96ff129b6260bb0d4c4a1bfc2db85d291e01596ee15b51bb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:36 compute-0 podman[73334]: 2025-10-02 11:43:36.648531231 +0000 UTC m=+0.268053232 container init 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:36 compute-0 podman[73334]: 2025-10-02 11:43:36.657046546 +0000 UTC m=+0.276568537 container start 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:36 compute-0 podman[73334]: 2025-10-02 11:43:36.663753559 +0000 UTC m=+0.283275590 container attach 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:43:37 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 11:43:37 compute-0 ceph-mon[73312]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207273425' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:   cluster:
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     id:     20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     health: HEALTH_OK
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:  
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:   services:
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     mon: 1 daemons, quorum compute-0 (age 0.681309s)
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     mgr: no daemons active
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     osd: 0 osds: 0 up, 0 in
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:  
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:   data:
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     pools:   0 pools, 0 pgs
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     objects: 0 objects, 0 B
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     usage:   0 B used, 0 B / 0 B avail
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:     pgs:     
Oct 02 11:43:37 compute-0 ecstatic_stonebraker[73367]:  
Oct 02 11:43:37 compute-0 systemd[1]: libpod-07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57.scope: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73334]: 2025-10-02 11:43:37.132955465 +0000 UTC m=+0.752477446 container died 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa71efbeedd016df96ff129b6260bb0d4c4a1bfc2db85d291e01596ee15b51bb-merged.mount: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73334]: 2025-10-02 11:43:37.17659321 +0000 UTC m=+0.796115191 container remove 07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:37 compute-0 systemd[1]: libpod-conmon-07cd3feb3b89e2ea156b82d217094adeace167a900e87fcd5f1bd61f7b5d2d57.scope: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73406]: 2025-10-02 11:43:37.243113623 +0000 UTC m=+0.048362892 container create f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:43:37 compute-0 systemd[1]: Started libpod-conmon-f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd.scope.
Oct 02 11:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:37 compute-0 podman[73406]: 2025-10-02 11:43:37.218813464 +0000 UTC m=+0.024062813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f4384386e2f551f9ea201c2dd93b695823b95069d9dc770e274b24b1c871a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f4384386e2f551f9ea201c2dd93b695823b95069d9dc770e274b24b1c871a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f4384386e2f551f9ea201c2dd93b695823b95069d9dc770e274b24b1c871a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f4384386e2f551f9ea201c2dd93b695823b95069d9dc770e274b24b1c871a2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 podman[73406]: 2025-10-02 11:43:37.325571705 +0000 UTC m=+0.130820984 container init f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:43:37 compute-0 podman[73406]: 2025-10-02 11:43:37.33060919 +0000 UTC m=+0.135858459 container start f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:43:37 compute-0 podman[73406]: 2025-10-02 11:43:37.334196703 +0000 UTC m=+0.139446022 container attach f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:43:37 compute-0 ceph-mon[73312]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:37 compute-0 ceph-mon[73312]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:43:37 compute-0 ceph-mon[73312]: fsmap 
Oct 02 11:43:37 compute-0 ceph-mon[73312]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:43:37 compute-0 ceph-mon[73312]: mgrmap e1: no daemons active
Oct 02 11:43:37 compute-0 ceph-mon[73312]: from='client.? 192.168.122.100:0/1207273425' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 11:43:37 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:43:37 compute-0 ceph-mon[73312]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/404998207' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:43:37 compute-0 ceph-mon[73312]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/404998207' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:43:37 compute-0 sleepy_allen[73422]: 
Oct 02 11:43:37 compute-0 sleepy_allen[73422]: [global]
Oct 02 11:43:37 compute-0 sleepy_allen[73422]:         fsid = 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:37 compute-0 sleepy_allen[73422]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 02 11:43:37 compute-0 systemd[1]: libpod-f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd.scope: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73448]: 2025-10-02 11:43:37.770236275 +0000 UTC m=+0.022739195 container died f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f4384386e2f551f9ea201c2dd93b695823b95069d9dc770e274b24b1c871a2-merged.mount: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73448]: 2025-10-02 11:43:37.809422922 +0000 UTC m=+0.061925822 container remove f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd (image=quay.io/ceph/ceph:v18, name=sleepy_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:37 compute-0 systemd[1]: libpod-conmon-f9d5fdbd38effabced79eba0bd01babbbdbc14f2b6ef99e48e42b146e10e37cd.scope: Deactivated successfully.
Oct 02 11:43:37 compute-0 podman[73463]: 2025-10-02 11:43:37.887202479 +0000 UTC m=+0.047862627 container create ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:37 compute-0 systemd[1]: Started libpod-conmon-ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537.scope.
Oct 02 11:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0808e9ab9848725ad9ea2ef45668e93c9cfe87cd125f37ed72be7747742edf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0808e9ab9848725ad9ea2ef45668e93c9cfe87cd125f37ed72be7747742edf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0808e9ab9848725ad9ea2ef45668e93c9cfe87cd125f37ed72be7747742edf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0808e9ab9848725ad9ea2ef45668e93c9cfe87cd125f37ed72be7747742edf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:37 compute-0 podman[73463]: 2025-10-02 11:43:37.954916107 +0000 UTC m=+0.115576255 container init ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:37 compute-0 podman[73463]: 2025-10-02 11:43:37.961041963 +0000 UTC m=+0.121702111 container start ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:43:37 compute-0 podman[73463]: 2025-10-02 11:43:37.867778541 +0000 UTC m=+0.028438719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:37 compute-0 podman[73463]: 2025-10-02 11:43:37.965972785 +0000 UTC m=+0.126632973 container attach ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:38 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:43:38 compute-0 ceph-mon[73312]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3108194421' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:38 compute-0 systemd[1]: libpod-ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537.scope: Deactivated successfully.
Oct 02 11:43:38 compute-0 podman[73463]: 2025-10-02 11:43:38.414948819 +0000 UTC m=+0.575608967 container died ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0808e9ab9848725ad9ea2ef45668e93c9cfe87cd125f37ed72be7747742edf-merged.mount: Deactivated successfully.
Oct 02 11:43:38 compute-0 ceph-mon[73312]: from='client.? 192.168.122.100:0/404998207' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:43:38 compute-0 ceph-mon[73312]: from='client.? 192.168.122.100:0/404998207' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:43:38 compute-0 ceph-mon[73312]: from='client.? 192.168.122.100:0/3108194421' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:43:38 compute-0 podman[73463]: 2025-10-02 11:43:38.587026238 +0000 UTC m=+0.747686386 container remove ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537 (image=quay.io/ceph/ceph:v18, name=musing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:38 compute-0 systemd[1]: libpod-conmon-ee250a951db4b46ac57b4a3ef5c414f6dc6b8dc41075f9319fedf36b2afe2537.scope: Deactivated successfully.
Oct 02 11:43:38 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:43:38 compute-0 ceph-mon[73312]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 11:43:38 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 11:43:38 compute-0 ceph-mon[73312]: mon.compute-0@0(leader) e1 shutdown
Oct 02 11:43:38 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0[73308]: 2025-10-02T11:43:38.763+0000 7f889b9ef640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 11:43:38 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0[73308]: 2025-10-02T11:43:38.763+0000 7f889b9ef640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 11:43:38 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 11:43:38 compute-0 ceph-mon[73312]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 11:43:38 compute-0 podman[73548]: 2025-10-02 11:43:38.784992512 +0000 UTC m=+0.051514393 container died 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e8113079a756d08634870396c286c7465fdc60ccfd0aad010051a81c1bc9df7-merged.mount: Deactivated successfully.
Oct 02 11:43:38 compute-0 podman[73548]: 2025-10-02 11:43:38.831461218 +0000 UTC m=+0.097983089 container remove 5721bb397f948dc12eacb2b9f2db9cf5443da1dca65ffdebb148568a007cb63b (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:38 compute-0 bash[73548]: ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0
Oct 02 11:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:43:38 compute-0 systemd[1]: ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mon.compute-0.service: Deactivated successfully.
Oct 02 11:43:38 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:38 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:43:39 compute-0 podman[73648]: 2025-10-02 11:43:39.172393365 +0000 UTC m=+0.040827136 container create 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1670c2408731d7142664c2ccd006b7c7fcef1d36d0a0f593d8dd104b27a8220f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1670c2408731d7142664c2ccd006b7c7fcef1d36d0a0f593d8dd104b27a8220f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1670c2408731d7142664c2ccd006b7c7fcef1d36d0a0f593d8dd104b27a8220f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1670c2408731d7142664c2ccd006b7c7fcef1d36d0a0f593d8dd104b27a8220f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 podman[73648]: 2025-10-02 11:43:39.225223284 +0000 UTC m=+0.093657075 container init 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:43:39 compute-0 podman[73648]: 2025-10-02 11:43:39.232875204 +0000 UTC m=+0.101308975 container start 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:39 compute-0 bash[73648]: 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05
Oct 02 11:43:39 compute-0 podman[73648]: 2025-10-02 11:43:39.15764275 +0000 UTC m=+0.026076531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:39 compute-0 systemd[1]: Started Ceph mon.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:39 compute-0 ceph-mon[73668]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: pidfile_write: ignore empty --pid-file
Oct 02 11:43:39 compute-0 ceph-mon[73668]: load: jerasure load: lrc 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Git sha 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: DB SUMMARY
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: DB Session ID:  J3W12KU9TJU0P77F61TZ
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55202 ; 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                                     Options.env: 0x563da73b1c40
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                                Options.info_log: 0x563da8e5d040
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                                 Options.wal_dir: 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                    Options.write_buffer_manager: 0x563da8e6cb40
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                               Options.row_cache: None
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                              Options.wal_filter: None
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.wal_compression: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.max_background_jobs: 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Compression algorithms supported:
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kZSTD supported: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:           Options.merge_operator: 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:        Options.compaction_filter: None
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563da8e5cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563da8e551f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.compression: NoCompression
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.num_levels: 7
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 804ca3a6-55d1-491a-b05c-fd7bfcd46561
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405419278268, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405419282519, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53377, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51019, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405419, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405419282628, "job": 1, "event": "recovery_finished"}
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563da8e7ee00
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: DB pointer 0x563da8f08000
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   55.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.37 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.37 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:43:39 compute-0 ceph-mon[73668]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???) e1 preinit fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).mds e1 new map
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 11:43:39 compute-0 ceph-mon[73668]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:43:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.302501877 +0000 UTC m=+0.041217167 container create 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:43:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:43:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 02 11:43:39 compute-0 systemd[1]: Started libpod-conmon-60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4.scope.
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:43:39 compute-0 ceph-mon[73668]: fsmap 
Oct 02 11:43:39 compute-0 ceph-mon[73668]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mgrmap e1: no daemons active
Oct 02 11:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb631373058eded09613d9c895fb1cd08dbfa9cd1dbcc6a1e5754dbf6fc41e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.284091797 +0000 UTC m=+0.022807117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb631373058eded09613d9c895fb1cd08dbfa9cd1dbcc6a1e5754dbf6fc41e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb631373058eded09613d9c895fb1cd08dbfa9cd1dbcc6a1e5754dbf6fc41e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.394859513 +0000 UTC m=+0.133574863 container init 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.406127568 +0000 UTC m=+0.144842888 container start 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.409424832 +0000 UTC m=+0.148140192 container attach 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:43:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 02 11:43:39 compute-0 systemd[1]: libpod-60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4.scope: Deactivated successfully.
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.829368981 +0000 UTC m=+0.568084271 container died 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fdb631373058eded09613d9c895fb1cd08dbfa9cd1dbcc6a1e5754dbf6fc41e-merged.mount: Deactivated successfully.
Oct 02 11:43:39 compute-0 podman[73669]: 2025-10-02 11:43:39.874787748 +0000 UTC m=+0.613503038 container remove 60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4 (image=quay.io/ceph/ceph:v18, name=awesome_jones, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:43:39 compute-0 systemd[1]: libpod-conmon-60ee05f33a3eefc27080029c0f641f15a2ed2ffc61e309cf487bf39d4dfd8ad4.scope: Deactivated successfully.
Oct 02 11:43:39 compute-0 podman[73761]: 2025-10-02 11:43:39.950234308 +0000 UTC m=+0.048016652 container create 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:39 compute-0 systemd[1]: Started libpod-conmon-533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33.scope.
Oct 02 11:43:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43434509365fa7468579672f295a61b84537f5de56a3af4fd627e201a6a01af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43434509365fa7468579672f295a61b84537f5de56a3af4fd627e201a6a01af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43434509365fa7468579672f295a61b84537f5de56a3af4fd627e201a6a01af7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:39.929855802 +0000 UTC m=+0.027638166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:40.033628486 +0000 UTC m=+0.131410850 container init 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:40.039196787 +0000 UTC m=+0.136979131 container start 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:40.043166051 +0000 UTC m=+0.140948415 container attach 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 02 11:43:40 compute-0 systemd[1]: libpod-533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33.scope: Deactivated successfully.
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:40.467536637 +0000 UTC m=+0.565318971 container died 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-43434509365fa7468579672f295a61b84537f5de56a3af4fd627e201a6a01af7-merged.mount: Deactivated successfully.
Oct 02 11:43:40 compute-0 podman[73761]: 2025-10-02 11:43:40.51841162 +0000 UTC m=+0.616193964 container remove 533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33 (image=quay.io/ceph/ceph:v18, name=ecstatic_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:43:40 compute-0 systemd[1]: libpod-conmon-533dbcc8bcc24e3b5bdce78e576b61a380a59d40d25f16ea4bcbdf284d570e33.scope: Deactivated successfully.
Oct 02 11:43:40 compute-0 systemd[1]: Reloading.
Oct 02 11:43:40 compute-0 systemd-rc-local-generator[73845]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:40 compute-0 systemd-sysv-generator[73848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:40 compute-0 systemd[1]: Reloading.
Oct 02 11:43:40 compute-0 systemd-rc-local-generator[73886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:40 compute-0 systemd-sysv-generator[73890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:41 compute-0 systemd[1]: Starting Ceph mgr.compute-0.unmtoh for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:43:41 compute-0 podman[73941]: 2025-10-02 11:43:41.277211036 +0000 UTC m=+0.041748672 container create 3c1cbf2aec50e75c2e43473e1cd7b70de91caf43d5e7588c3666f08894e14224 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d934b313052cbfbbdaceb6f4dcc9207756dbbe56f1289c0bc6b9225d860a6ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d934b313052cbfbbdaceb6f4dcc9207756dbbe56f1289c0bc6b9225d860a6ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d934b313052cbfbbdaceb6f4dcc9207756dbbe56f1289c0bc6b9225d860a6ef6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d934b313052cbfbbdaceb6f4dcc9207756dbbe56f1289c0bc6b9225d860a6ef6/merged/var/lib/ceph/mgr/ceph-compute-0.unmtoh supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 podman[73941]: 2025-10-02 11:43:41.33507091 +0000 UTC m=+0.099608576 container init 3c1cbf2aec50e75c2e43473e1cd7b70de91caf43d5e7588c3666f08894e14224 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:43:41 compute-0 podman[73941]: 2025-10-02 11:43:41.340445765 +0000 UTC m=+0.104983411 container start 3c1cbf2aec50e75c2e43473e1cd7b70de91caf43d5e7588c3666f08894e14224 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:41 compute-0 bash[73941]: 3c1cbf2aec50e75c2e43473e1cd7b70de91caf43d5e7588c3666f08894e14224
Oct 02 11:43:41 compute-0 podman[73941]: 2025-10-02 11:43:41.256998045 +0000 UTC m=+0.021535701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:41 compute-0 systemd[1]: Started Ceph mgr.compute-0.unmtoh for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: pidfile_write: ignore empty --pid-file
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.421532257 +0000 UTC m=+0.046543670 container create d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:43:41 compute-0 systemd[1]: Started libpod-conmon-d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b.scope.
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'alerts'
Oct 02 11:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f325f517601a326fb77da15c5d940e73af985145ba28588c0bd235cee22b61/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f325f517601a326fb77da15c5d940e73af985145ba28588c0bd235cee22b61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f325f517601a326fb77da15c5d940e73af985145ba28588c0bd235cee22b61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.402389116 +0000 UTC m=+0.027400559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.509204129 +0000 UTC m=+0.134215562 container init d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.515349796 +0000 UTC m=+0.140361209 container start d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.522186732 +0000 UTC m=+0.147198175 container attach d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:43:41 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'balancer'
Oct 02 11:43:41 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:41.821+0000 7fe70128a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:43:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159512488' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:41 compute-0 gallant_cray[74002]: 
Oct 02 11:43:41 compute-0 gallant_cray[74002]: {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "health": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "status": "HEALTH_OK",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "checks": {},
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "mutes": []
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "election_epoch": 5,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "quorum": [
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         0
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     ],
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "quorum_names": [
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "compute-0"
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     ],
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "quorum_age": 2,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "monmap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "epoch": 1,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "min_mon_release_name": "reef",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_mons": 1
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "osdmap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "epoch": 1,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_osds": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_up_osds": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "osd_up_since": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_in_osds": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "osd_in_since": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_remapped_pgs": 0
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "pgmap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "pgs_by_state": [],
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_pgs": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_pools": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_objects": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "data_bytes": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "bytes_used": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "bytes_avail": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "bytes_total": 0
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "fsmap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "epoch": 1,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "by_rank": [],
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "up:standby": 0
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "mgrmap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "available": false,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "num_standbys": 0,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "modules": [
Oct 02 11:43:41 compute-0 gallant_cray[74002]:             "iostat",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:             "nfs",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:             "restful"
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         ],
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "services": {}
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "servicemap": {
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "epoch": 1,
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:41 compute-0 gallant_cray[74002]:         "services": {}
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     },
Oct 02 11:43:41 compute-0 gallant_cray[74002]:     "progress_events": {}
Oct 02 11:43:41 compute-0 gallant_cray[74002]: }
Oct 02 11:43:41 compute-0 systemd[1]: libpod-d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b.scope: Deactivated successfully.
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.927249583 +0000 UTC m=+0.552261006 container died d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f325f517601a326fb77da15c5d940e73af985145ba28588c0bd235cee22b61-merged.mount: Deactivated successfully.
Oct 02 11:43:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3159512488' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:41 compute-0 podman[73962]: 2025-10-02 11:43:41.977603971 +0000 UTC m=+0.602615384 container remove d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b (image=quay.io/ceph/ceph:v18, name=gallant_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:41 compute-0 systemd[1]: libpod-conmon-d8d9c83730ff8b75829c7d80b86361d39b5ed72534cd8d992c197d022613470b.scope: Deactivated successfully.
Oct 02 11:43:42 compute-0 ceph-mgr[73961]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:43:42 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'cephadm'
Oct 02 11:43:42 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:42.085+0000 7fe70128a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:43:44 compute-0 podman[74054]: 2025-10-02 11:43:44.049857365 +0000 UTC m=+0.050661518 container create 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:44 compute-0 systemd[1]: Started libpod-conmon-0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e.scope.
Oct 02 11:43:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380fd10677c666d6d3ab657d7c44641711bb53c49ead47787c807ede19b6840e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380fd10677c666d6d3ab657d7c44641711bb53c49ead47787c807ede19b6840e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380fd10677c666d6d3ab657d7c44641711bb53c49ead47787c807ede19b6840e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:44 compute-0 podman[74054]: 2025-10-02 11:43:44.020303205 +0000 UTC m=+0.021107368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:44 compute-0 podman[74054]: 2025-10-02 11:43:44.138155795 +0000 UTC m=+0.138960018 container init 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:43:44 compute-0 podman[74054]: 2025-10-02 11:43:44.145035343 +0000 UTC m=+0.145839496 container start 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:44 compute-0 podman[74054]: 2025-10-02 11:43:44.149345137 +0000 UTC m=+0.150149330 container attach 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:43:44 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'crash'
Oct 02 11:43:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164723805' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:44 compute-0 epic_ritchie[74070]: 
Oct 02 11:43:44 compute-0 epic_ritchie[74070]: {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "health": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "status": "HEALTH_OK",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "checks": {},
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "mutes": []
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "election_epoch": 5,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "quorum": [
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         0
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     ],
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "quorum_names": [
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "compute-0"
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     ],
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "quorum_age": 5,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "monmap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "epoch": 1,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "min_mon_release_name": "reef",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_mons": 1
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "osdmap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "epoch": 1,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_osds": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_up_osds": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "osd_up_since": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_in_osds": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "osd_in_since": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_remapped_pgs": 0
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "pgmap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "pgs_by_state": [],
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_pgs": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_pools": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_objects": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "data_bytes": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "bytes_used": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "bytes_avail": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "bytes_total": 0
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "fsmap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "epoch": 1,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "by_rank": [],
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "up:standby": 0
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "mgrmap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "available": false,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "num_standbys": 0,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "modules": [
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:             "iostat",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:             "nfs",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:             "restful"
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         ],
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "services": {}
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "servicemap": {
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "epoch": 1,
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:         "services": {}
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     },
Oct 02 11:43:44 compute-0 epic_ritchie[74070]:     "progress_events": {}
Oct 02 11:43:44 compute-0 epic_ritchie[74070]: }
Oct 02 11:43:44 compute-0 systemd[1]: libpod-0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e.scope: Deactivated successfully.
Oct 02 11:43:44 compute-0 ceph-mgr[73961]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:43:44 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:44.571+0000 7fe70128a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:43:44 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'dashboard'
Oct 02 11:43:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4164723805' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:44 compute-0 podman[74096]: 2025-10-02 11:43:44.584932926 +0000 UTC m=+0.028085649 container died 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-380fd10677c666d6d3ab657d7c44641711bb53c49ead47787c807ede19b6840e-merged.mount: Deactivated successfully.
Oct 02 11:43:44 compute-0 podman[74096]: 2025-10-02 11:43:44.660234812 +0000 UTC m=+0.103387535 container remove 0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e (image=quay.io/ceph/ceph:v18, name=epic_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:43:44 compute-0 systemd[1]: libpod-conmon-0d2de3c1fe87876764587cdf0f865428db1a7c4f89c82edd668b9279a67f3b8e.scope: Deactivated successfully.
Oct 02 11:43:46 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'devicehealth'
Oct 02 11:43:46 compute-0 ceph-mgr[73961]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:43:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:46.382+0000 7fe70128a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:43:46 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 11:43:46 compute-0 podman[74110]: 2025-10-02 11:43:46.727921204 +0000 UTC m=+0.036019147 container create f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:46 compute-0 systemd[1]: Started libpod-conmon-f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e.scope.
Oct 02 11:43:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6144319ed528572303cd63244175e71c4df0b5836994693402e2bb6c98c62d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6144319ed528572303cd63244175e71c4df0b5836994693402e2bb6c98c62d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6144319ed528572303cd63244175e71c4df0b5836994693402e2bb6c98c62d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:46 compute-0 podman[74110]: 2025-10-02 11:43:46.797690681 +0000 UTC m=+0.105788634 container init f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:43:46 compute-0 podman[74110]: 2025-10-02 11:43:46.803434106 +0000 UTC m=+0.111532039 container start f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:43:46 compute-0 podman[74110]: 2025-10-02 11:43:46.806612497 +0000 UTC m=+0.114710420 container attach f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:46 compute-0 podman[74110]: 2025-10-02 11:43:46.712864191 +0000 UTC m=+0.020962114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 11:43:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 11:43:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   from numpy import show_config as show_numpy_config
Oct 02 11:43:46 compute-0 ceph-mgr[73961]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:43:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:46.947+0000 7fe70128a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:43:46 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'influx'
Oct 02 11:43:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12154032' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:47 compute-0 upbeat_banach[74126]: 
Oct 02 11:43:47 compute-0 upbeat_banach[74126]: {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "health": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "status": "HEALTH_OK",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "checks": {},
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "mutes": []
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "election_epoch": 5,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "quorum": [
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         0
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     ],
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "quorum_names": [
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "compute-0"
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     ],
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "quorum_age": 7,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "monmap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "epoch": 1,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "min_mon_release_name": "reef",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_mons": 1
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "osdmap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "epoch": 1,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_osds": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_up_osds": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "osd_up_since": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_in_osds": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "osd_in_since": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_remapped_pgs": 0
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "pgmap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "pgs_by_state": [],
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_pgs": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_pools": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_objects": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "data_bytes": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "bytes_used": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "bytes_avail": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "bytes_total": 0
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "fsmap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "epoch": 1,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "by_rank": [],
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "up:standby": 0
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "mgrmap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "available": false,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "num_standbys": 0,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "modules": [
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:             "iostat",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:             "nfs",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:             "restful"
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         ],
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "services": {}
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "servicemap": {
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "epoch": 1,
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:         "services": {}
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     },
Oct 02 11:43:47 compute-0 upbeat_banach[74126]:     "progress_events": {}
Oct 02 11:43:47 compute-0 upbeat_banach[74126]: }
Oct 02 11:43:47 compute-0 systemd[1]: libpod-f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e.scope: Deactivated successfully.
Oct 02 11:43:47 compute-0 conmon[74126]: conmon f51c01d055e919410b83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e.scope/container/memory.events
Oct 02 11:43:47 compute-0 podman[74110]: 2025-10-02 11:43:47.201615149 +0000 UTC m=+0.509713042 container died f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:47 compute-0 ceph-mgr[73961]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:43:47 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:47.205+0000 7fe70128a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:43:47 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'insights'
Oct 02 11:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6144319ed528572303cd63244175e71c4df0b5836994693402e2bb6c98c62d2-merged.mount: Deactivated successfully.
Oct 02 11:43:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/12154032' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:47 compute-0 podman[74110]: 2025-10-02 11:43:47.242701171 +0000 UTC m=+0.550799094 container remove f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e (image=quay.io/ceph/ceph:v18, name=upbeat_banach, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:47 compute-0 systemd[1]: libpod-conmon-f51c01d055e919410b83185bda67a89cac61f4943ceda55677fe4d4113d6e04e.scope: Deactivated successfully.
Oct 02 11:43:47 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'iostat'
Oct 02 11:43:47 compute-0 ceph-mgr[73961]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:43:47 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:47.674+0000 7fe70128a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:43:47 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'k8sevents'
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.295469094 +0000 UTC m=+0.032913927 container create c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:49 compute-0 systemd[1]: Started libpod-conmon-c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4.scope.
Oct 02 11:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbfebf2281573bb8ce2982f613d66a9808a03b4586f4b21ff799bf703a6b481/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbfebf2281573bb8ce2982f613d66a9808a03b4586f4b21ff799bf703a6b481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbfebf2281573bb8ce2982f613d66a9808a03b4586f4b21ff799bf703a6b481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.36624869 +0000 UTC m=+0.103693553 container init c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.37354919 +0000 UTC m=+0.110994023 container start c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.376745012 +0000 UTC m=+0.114189845 container attach c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.280928836 +0000 UTC m=+0.018373689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:49 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'localpool'
Oct 02 11:43:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792873746' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]: 
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]: {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "health": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "status": "HEALTH_OK",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "checks": {},
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "mutes": []
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "election_epoch": 5,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "quorum": [
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         0
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     ],
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "quorum_names": [
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "compute-0"
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     ],
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "quorum_age": 10,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "monmap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "epoch": 1,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "min_mon_release_name": "reef",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_mons": 1
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "osdmap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "epoch": 1,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_osds": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_up_osds": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "osd_up_since": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_in_osds": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "osd_in_since": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_remapped_pgs": 0
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "pgmap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "pgs_by_state": [],
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_pgs": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_pools": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_objects": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "data_bytes": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "bytes_used": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "bytes_avail": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "bytes_total": 0
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "fsmap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "epoch": 1,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "by_rank": [],
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "up:standby": 0
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "mgrmap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "available": false,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "num_standbys": 0,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "modules": [
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:             "iostat",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:             "nfs",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:             "restful"
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         ],
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "services": {}
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "servicemap": {
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "epoch": 1,
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:         "services": {}
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     },
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]:     "progress_events": {}
Oct 02 11:43:49 compute-0 suspicious_khayyam[74183]: }
Oct 02 11:43:49 compute-0 systemd[1]: libpod-c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4.scope: Deactivated successfully.
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.785930941 +0000 UTC m=+0.523375784 container died c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:43:49 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 11:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbfebf2281573bb8ce2982f613d66a9808a03b4586f4b21ff799bf703a6b481-merged.mount: Deactivated successfully.
Oct 02 11:43:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3792873746' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:49 compute-0 podman[74166]: 2025-10-02 11:43:49.833992624 +0000 UTC m=+0.571437467 container remove c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:43:49 compute-0 systemd[1]: libpod-conmon-c699c689213236bfeb2894ee1a553dab5da4773323f09ccdd33c56445818c8a4.scope: Deactivated successfully.
Oct 02 11:43:50 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'mirroring'
Oct 02 11:43:50 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'nfs'
Oct 02 11:43:51 compute-0 ceph-mgr[73961]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:43:51 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'orchestrator'
Oct 02 11:43:51 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:51.565+0000 7fe70128a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:43:51 compute-0 podman[74221]: 2025-10-02 11:43:51.905421135 +0000 UTC m=+0.045205172 container create 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:51 compute-0 systemd[1]: Started libpod-conmon-198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf.scope.
Oct 02 11:43:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7914dae6e45cad612bfcabce4a6b5c020bc85945718d5965257dd4f2410919c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7914dae6e45cad612bfcabce4a6b5c020bc85945718d5965257dd4f2410919c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7914dae6e45cad612bfcabce4a6b5c020bc85945718d5965257dd4f2410919c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:51 compute-0 podman[74221]: 2025-10-02 11:43:51.968702395 +0000 UTC m=+0.108486452 container init 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:43:51 compute-0 podman[74221]: 2025-10-02 11:43:51.975058568 +0000 UTC m=+0.114842605 container start 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:43:51 compute-0 podman[74221]: 2025-10-02 11:43:51.889664951 +0000 UTC m=+0.029449008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:51 compute-0 podman[74221]: 2025-10-02 11:43:51.979777303 +0000 UTC m=+0.119561360 container attach 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 11:43:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:52.299+0000 7fe70128a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:43:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807391214' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]: 
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]: {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "health": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "status": "HEALTH_OK",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "checks": {},
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "mutes": []
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "election_epoch": 5,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "quorum": [
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         0
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     ],
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "quorum_names": [
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "compute-0"
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     ],
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "quorum_age": 13,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "monmap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "epoch": 1,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "min_mon_release_name": "reef",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_mons": 1
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "osdmap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "epoch": 1,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_osds": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_up_osds": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "osd_up_since": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_in_osds": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "osd_in_since": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_remapped_pgs": 0
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "pgmap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "pgs_by_state": [],
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_pgs": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_pools": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_objects": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "data_bytes": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "bytes_used": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "bytes_avail": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "bytes_total": 0
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "fsmap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "epoch": 1,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "by_rank": [],
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "up:standby": 0
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "mgrmap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "available": false,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "num_standbys": 0,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "modules": [
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:             "iostat",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:             "nfs",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:             "restful"
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         ],
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "services": {}
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "servicemap": {
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "epoch": 1,
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:         "services": {}
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     },
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]:     "progress_events": {}
Oct 02 11:43:52 compute-0 adoring_vaughan[74237]: }
Oct 02 11:43:52 compute-0 systemd[1]: libpod-198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf.scope: Deactivated successfully.
Oct 02 11:43:52 compute-0 conmon[74237]: conmon 198331276e40a27a4b5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf.scope/container/memory.events
Oct 02 11:43:52 compute-0 podman[74221]: 2025-10-02 11:43:52.392553596 +0000 UTC m=+0.532337633 container died 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7914dae6e45cad612bfcabce4a6b5c020bc85945718d5965257dd4f2410919c8-merged.mount: Deactivated successfully.
Oct 02 11:43:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2807391214' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:52 compute-0 podman[74221]: 2025-10-02 11:43:52.437857919 +0000 UTC m=+0.577641956 container remove 198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf (image=quay.io/ceph/ceph:v18, name=adoring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:52 compute-0 systemd[1]: libpod-conmon-198331276e40a27a4b5df5d180dbb76e7c170032464d27fc488f6f9cd2d5cdbf.scope: Deactivated successfully.
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'osd_support'
Oct 02 11:43:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:52.576+0000 7fe70128a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:43:52 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 11:43:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:52.842+0000 7fe70128a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:43:53 compute-0 ceph-mgr[73961]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:43:53 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:53.152+0000 7fe70128a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:43:53 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'progress'
Oct 02 11:43:53 compute-0 ceph-mgr[73961]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:43:53 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'prometheus'
Oct 02 11:43:53 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:53.429+0000 7fe70128a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:43:54 compute-0 podman[74275]: 2025-10-02 11:43:54.517880886 +0000 UTC m=+0.048509076 container create 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:43:54 compute-0 ceph-mgr[73961]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:43:54 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:54.525+0000 7fe70128a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:43:54 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rbd_support'
Oct 02 11:43:54 compute-0 systemd[1]: Started libpod-conmon-4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025.scope.
Oct 02 11:43:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f926adc9e614e554dc3e7d6253eee2d707eb1ec5904dd0e9243316b3e2ed9eaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f926adc9e614e554dc3e7d6253eee2d707eb1ec5904dd0e9243316b3e2ed9eaa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f926adc9e614e554dc3e7d6253eee2d707eb1ec5904dd0e9243316b3e2ed9eaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:54 compute-0 podman[74275]: 2025-10-02 11:43:54.496934494 +0000 UTC m=+0.027562704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:54 compute-0 podman[74275]: 2025-10-02 11:43:54.651486469 +0000 UTC m=+0.182114679 container init 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:43:54 compute-0 podman[74275]: 2025-10-02 11:43:54.656459042 +0000 UTC m=+0.187087242 container start 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:54 compute-0 podman[74275]: 2025-10-02 11:43:54.666077489 +0000 UTC m=+0.196705699 container attach 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:43:54 compute-0 ceph-mgr[73961]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:43:54 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'restful'
Oct 02 11:43:54 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:54.847+0000 7fe70128a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147270719' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:55 compute-0 cool_solomon[74291]: 
Oct 02 11:43:55 compute-0 cool_solomon[74291]: {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "health": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "status": "HEALTH_OK",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "checks": {},
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "mutes": []
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "election_epoch": 5,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "quorum": [
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         0
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     ],
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "quorum_names": [
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "compute-0"
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     ],
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "quorum_age": 15,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "monmap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "epoch": 1,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "min_mon_release_name": "reef",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_mons": 1
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "osdmap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "epoch": 1,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_osds": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_up_osds": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "osd_up_since": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_in_osds": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "osd_in_since": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_remapped_pgs": 0
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "pgmap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "pgs_by_state": [],
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_pgs": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_pools": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_objects": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "data_bytes": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "bytes_used": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "bytes_avail": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "bytes_total": 0
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "fsmap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "epoch": 1,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "by_rank": [],
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "up:standby": 0
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "mgrmap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "available": false,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "num_standbys": 0,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "modules": [
Oct 02 11:43:55 compute-0 cool_solomon[74291]:             "iostat",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:             "nfs",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:             "restful"
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         ],
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "services": {}
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "servicemap": {
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "epoch": 1,
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:55 compute-0 cool_solomon[74291]:         "services": {}
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     },
Oct 02 11:43:55 compute-0 cool_solomon[74291]:     "progress_events": {}
Oct 02 11:43:55 compute-0 cool_solomon[74291]: }
Oct 02 11:43:55 compute-0 systemd[1]: libpod-4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025.scope: Deactivated successfully.
Oct 02 11:43:55 compute-0 podman[74275]: 2025-10-02 11:43:55.10485909 +0000 UTC m=+0.635487280 container died 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f926adc9e614e554dc3e7d6253eee2d707eb1ec5904dd0e9243316b3e2ed9eaa-merged.mount: Deactivated successfully.
Oct 02 11:43:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3147270719' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:55 compute-0 podman[74275]: 2025-10-02 11:43:55.241461019 +0000 UTC m=+0.772089209 container remove 4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025 (image=quay.io/ceph/ceph:v18, name=cool_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:55 compute-0 systemd[1]: libpod-conmon-4d59eb79cb12040a1177dbcb8223372f59857e51402a9942d0aeca88e4b3e025.scope: Deactivated successfully.
Oct 02 11:43:55 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rgw'
Oct 02 11:43:56 compute-0 ceph-mgr[73961]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:43:56 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rook'
Oct 02 11:43:56 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:56.392+0000 7fe70128a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.303675924 +0000 UTC m=+0.038283712 container create 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:43:57 compute-0 systemd[1]: Started libpod-conmon-1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7.scope.
Oct 02 11:43:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8254f05b6e3d9f8de0f4b55a8510fb542a0432c20b24bd7f8086d9a8417565ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8254f05b6e3d9f8de0f4b55a8510fb542a0432c20b24bd7f8086d9a8417565ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8254f05b6e3d9f8de0f4b55a8510fb542a0432c20b24bd7f8086d9a8417565ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.286397137 +0000 UTC m=+0.021004955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.392732325 +0000 UTC m=+0.127340143 container init 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.400112698 +0000 UTC m=+0.134720496 container start 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.403988689 +0000 UTC m=+0.138596517 container attach 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:43:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:43:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972081120' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:57 compute-0 nifty_burnell[74345]: 
Oct 02 11:43:57 compute-0 nifty_burnell[74345]: {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "health": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "status": "HEALTH_OK",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "checks": {},
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "mutes": []
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "election_epoch": 5,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "quorum": [
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         0
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     ],
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "quorum_names": [
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "compute-0"
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     ],
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "quorum_age": 18,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "monmap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "epoch": 1,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "min_mon_release_name": "reef",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_mons": 1
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "osdmap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "epoch": 1,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_osds": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_up_osds": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "osd_up_since": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_in_osds": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "osd_in_since": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_remapped_pgs": 0
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "pgmap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "pgs_by_state": [],
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_pgs": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_pools": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_objects": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "data_bytes": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "bytes_used": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "bytes_avail": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "bytes_total": 0
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "fsmap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "epoch": 1,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "by_rank": [],
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "up:standby": 0
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "mgrmap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "available": false,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "num_standbys": 0,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "modules": [
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:             "iostat",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:             "nfs",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:             "restful"
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         ],
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "services": {}
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "servicemap": {
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "epoch": 1,
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:         "services": {}
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     },
Oct 02 11:43:57 compute-0 nifty_burnell[74345]:     "progress_events": {}
Oct 02 11:43:57 compute-0 nifty_burnell[74345]: }
Oct 02 11:43:57 compute-0 systemd[1]: libpod-1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7.scope: Deactivated successfully.
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.852380256 +0000 UTC m=+0.586988054 container died 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8254f05b6e3d9f8de0f4b55a8510fb542a0432c20b24bd7f8086d9a8417565ce-merged.mount: Deactivated successfully.
Oct 02 11:43:57 compute-0 podman[74329]: 2025-10-02 11:43:57.891167612 +0000 UTC m=+0.625775410 container remove 1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7 (image=quay.io/ceph/ceph:v18, name=nifty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:43:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2972081120' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:43:57 compute-0 systemd[1]: libpod-conmon-1b238fcc04f91a812ea2710046204f4fac7bf77e1e9e6193120c3b420843fab7.scope: Deactivated successfully.
Oct 02 11:43:58 compute-0 ceph-mgr[73961]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:43:58 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'selftest'
Oct 02 11:43:58 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:58.557+0000 7fe70128a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:43:58 compute-0 ceph-mgr[73961]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:43:58 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'snap_schedule'
Oct 02 11:43:58 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:58.849+0000 7fe70128a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'stats'
Oct 02 11:43:59 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:59.148+0000 7fe70128a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'status'
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'telegraf'
Oct 02 11:43:59 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:59.717+0000 7fe70128a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 podman[74384]: 2025-10-02 11:43:59.956323313 +0000 UTC m=+0.043522413 container create d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:43:59 compute-0 systemd[1]: Started libpod-conmon-d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f.scope.
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:43:59 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'telemetry'
Oct 02 11:43:59 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:43:59.981+0000 7fe70128a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:44:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615cdc3b2c01d47ae053203f3a245b30697a8f01e06089fdfadb3e503e79aebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615cdc3b2c01d47ae053203f3a245b30697a8f01e06089fdfadb3e503e79aebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615cdc3b2c01d47ae053203f3a245b30697a8f01e06089fdfadb3e503e79aebe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:44:00.028991252 +0000 UTC m=+0.116190372 container init d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:43:59.938643164 +0000 UTC m=+0.025842304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:44:00.034259123 +0000 UTC m=+0.121458223 container start d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:44:00.045873767 +0000 UTC m=+0.133072917 container attach d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:44:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236601617' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:00 compute-0 trusting_burnell[74400]: 
Oct 02 11:44:00 compute-0 trusting_burnell[74400]: {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "health": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "status": "HEALTH_OK",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "checks": {},
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "mutes": []
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "election_epoch": 5,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "quorum": [
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         0
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     ],
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "quorum_names": [
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "compute-0"
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     ],
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "quorum_age": 21,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "monmap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "epoch": 1,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "min_mon_release_name": "reef",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_mons": 1
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "osdmap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "epoch": 1,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_osds": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_up_osds": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "osd_up_since": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_in_osds": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "osd_in_since": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_remapped_pgs": 0
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "pgmap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "pgs_by_state": [],
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_pgs": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_pools": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_objects": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "data_bytes": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "bytes_used": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "bytes_avail": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "bytes_total": 0
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "fsmap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "epoch": 1,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "by_rank": [],
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "up:standby": 0
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "mgrmap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "available": false,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "num_standbys": 0,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "modules": [
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:             "iostat",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:             "nfs",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:             "restful"
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         ],
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "services": {}
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "servicemap": {
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "epoch": 1,
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:         "services": {}
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     },
Oct 02 11:44:00 compute-0 trusting_burnell[74400]:     "progress_events": {}
Oct 02 11:44:00 compute-0 trusting_burnell[74400]: }
Oct 02 11:44:00 compute-0 systemd[1]: libpod-d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f.scope: Deactivated successfully.
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:44:00.464288492 +0000 UTC m=+0.551487602 container died d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-615cdc3b2c01d47ae053203f3a245b30697a8f01e06089fdfadb3e503e79aebe-merged.mount: Deactivated successfully.
Oct 02 11:44:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1236601617' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:00 compute-0 podman[74384]: 2025-10-02 11:44:00.521805787 +0000 UTC m=+0.609004887 container remove d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f (image=quay.io/ceph/ceph:v18, name=trusting_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:00 compute-0 systemd[1]: libpod-conmon-d7dbf02aea16d72b36c11f15976fae97d8a8019c63af423b1a1b132f9853c12f.scope: Deactivated successfully.
Oct 02 11:44:00 compute-0 ceph-mgr[73961]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:44:00 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 11:44:00 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:00.651+0000 7fe70128a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:44:01 compute-0 ceph-mgr[73961]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:01 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'volumes'
Oct 02 11:44:01 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:01.413+0000 7fe70128a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'zabbix'
Oct 02 11:44:02 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:02.206+0000 7fe70128a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:44:02 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:02.447+0000 7fe70128a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: ms_deliver_dispatch: unhandled message 0x562248ed6f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.unmtoh
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr handle_mgr_map Activating!
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr handle_mgr_map I am now activating
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.unmtoh(active, starting, since 0.0145289s)
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: balancer
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Manager daemon compute-0.unmtoh is now available
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer INFO root] Starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:44:02
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [balancer INFO root] No pools available
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: crash
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: devicehealth
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: iostat
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: nfs
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: orchestrator
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: pg_autoscaler
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: progress
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [progress INFO root] Loading...
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [progress INFO root] No stored events to load
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [progress INFO root] Loaded [] historic events
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] recovery thread starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] starting setup
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: rbd_support
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: restful
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: status
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [restful WARNING root] server not running: no certificate configured
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] PerfHandler: starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: telemetry
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TaskHandler: starting
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"} v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: [rbd_support INFO root] setup complete
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 02 11:44:02 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: volumes
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 02 11:44:02 compute-0 ceph-mon[73668]: Activating manager daemon compute-0.unmtoh
Oct 02 11:44:02 compute-0 ceph-mon[73668]: mgrmap e2: compute-0.unmtoh(active, starting, since 0.0145289s)
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: Manager daemon compute-0.unmtoh is now available
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"}]: dispatch
Oct 02 11:44:02 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:02 compute-0 podman[74517]: 2025-10-02 11:44:02.58393127 +0000 UTC m=+0.040767323 container create 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:02 compute-0 systemd[1]: Started libpod-conmon-2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2.scope.
Oct 02 11:44:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ecd6c38e854995037f32aa55b4cc97c51ab831daef6d96f87bd3d39891890a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ecd6c38e854995037f32aa55b4cc97c51ab831daef6d96f87bd3d39891890a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ecd6c38e854995037f32aa55b4cc97c51ab831daef6d96f87bd3d39891890a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:02 compute-0 podman[74517]: 2025-10-02 11:44:02.567877038 +0000 UTC m=+0.024713111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:02 compute-0 podman[74517]: 2025-10-02 11:44:02.669382398 +0000 UTC m=+0.126218481 container init 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:02 compute-0 podman[74517]: 2025-10-02 11:44:02.675186985 +0000 UTC m=+0.132023048 container start 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:02 compute-0 podman[74517]: 2025-10-02 11:44:02.679226921 +0000 UTC m=+0.136063004 container attach 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:44:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:44:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042643769' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]: 
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]: {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "health": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "status": "HEALTH_OK",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "checks": {},
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "mutes": []
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "election_epoch": 5,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "quorum": [
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         0
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     ],
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "quorum_names": [
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "compute-0"
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     ],
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "quorum_age": 23,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "monmap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "epoch": 1,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "min_mon_release_name": "reef",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_mons": 1
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "osdmap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "epoch": 1,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_osds": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_up_osds": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "osd_up_since": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_in_osds": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "osd_in_since": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_remapped_pgs": 0
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "pgmap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "pgs_by_state": [],
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_pgs": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_pools": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_objects": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "data_bytes": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "bytes_used": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "bytes_avail": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "bytes_total": 0
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "fsmap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "epoch": 1,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "by_rank": [],
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "up:standby": 0
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "mgrmap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "available": false,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "num_standbys": 0,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "modules": [
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:             "iostat",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:             "nfs",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:             "restful"
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         ],
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "services": {}
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "servicemap": {
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "epoch": 1,
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:         "services": {}
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     },
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]:     "progress_events": {}
Oct 02 11:44:03 compute-0 strange_goldwasser[74533]: }
Oct 02 11:44:03 compute-0 systemd[1]: libpod-2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2.scope: Deactivated successfully.
Oct 02 11:44:03 compute-0 podman[74517]: 2025-10-02 11:44:03.074026457 +0000 UTC m=+0.530862510 container died 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7ecd6c38e854995037f32aa55b4cc97c51ab831daef6d96f87bd3d39891890a-merged.mount: Deactivated successfully.
Oct 02 11:44:03 compute-0 podman[74517]: 2025-10-02 11:44:03.112356819 +0000 UTC m=+0.569192862 container remove 2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2 (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:03 compute-0 systemd[1]: libpod-conmon-2adbd1bfef59fdfd8dab9dc0dbc9f8ae392b9bb680556abbbedc75791ffa74f2.scope: Deactivated successfully.
Oct 02 11:44:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.unmtoh(active, since 1.02677s)
Oct 02 11:44:03 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:03 compute-0 ceph-mon[73668]: from='mgr.14102 192.168.122.100:0/1170196243' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1042643769' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:03 compute-0 ceph-mon[73668]: mgrmap e3: compute-0.unmtoh(active, since 1.02677s)
Oct 02 11:44:04 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.unmtoh(active, since 2s)
Oct 02 11:44:05 compute-0 podman[74573]: 2025-10-02 11:44:05.156880285 +0000 UTC m=+0.022260322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:05 compute-0 podman[74573]: 2025-10-02 11:44:05.37471986 +0000 UTC m=+0.240099877 container create 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:05 compute-0 systemd[1]: Started libpod-conmon-3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4.scope.
Oct 02 11:44:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f352a6c4e6b29736130a09217d90fa453de6aba667c3e15378e715e275c4c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f352a6c4e6b29736130a09217d90fa453de6aba667c3e15378e715e275c4c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f352a6c4e6b29736130a09217d90fa453de6aba667c3e15378e715e275c4c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:05 compute-0 podman[74573]: 2025-10-02 11:44:05.445320611 +0000 UTC m=+0.310700658 container init 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:05 compute-0 podman[74573]: 2025-10-02 11:44:05.451232121 +0000 UTC m=+0.316612138 container start 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:44:05 compute-0 podman[74573]: 2025-10-02 11:44:05.454708961 +0000 UTC m=+0.320089008 container attach 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:05 compute-0 ceph-mon[73668]: mgrmap e4: compute-0.unmtoh(active, since 2s)
Oct 02 11:44:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 11:44:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938819431' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:06 compute-0 cranky_gates[74589]: 
Oct 02 11:44:06 compute-0 cranky_gates[74589]: {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "health": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "status": "HEALTH_OK",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "checks": {},
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "mutes": []
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "election_epoch": 5,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "quorum": [
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         0
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     ],
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "quorum_names": [
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "compute-0"
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     ],
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "quorum_age": 26,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "monmap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "epoch": 1,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "min_mon_release_name": "reef",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_mons": 1
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "osdmap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "epoch": 1,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_osds": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_up_osds": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "osd_up_since": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_in_osds": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "osd_in_since": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_remapped_pgs": 0
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "pgmap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "pgs_by_state": [],
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_pgs": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_pools": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_objects": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "data_bytes": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "bytes_used": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "bytes_avail": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "bytes_total": 0
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "fsmap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "epoch": 1,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "by_rank": [],
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "up:standby": 0
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "mgrmap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "available": true,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "num_standbys": 0,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "modules": [
Oct 02 11:44:06 compute-0 cranky_gates[74589]:             "iostat",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:             "nfs",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:             "restful"
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         ],
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "services": {}
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "servicemap": {
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "epoch": 1,
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "modified": "2025-10-02T11:43:36.457954+0000",
Oct 02 11:44:06 compute-0 cranky_gates[74589]:         "services": {}
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     },
Oct 02 11:44:06 compute-0 cranky_gates[74589]:     "progress_events": {}
Oct 02 11:44:06 compute-0 cranky_gates[74589]: }
Oct 02 11:44:06 compute-0 systemd[1]: libpod-3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4.scope: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74573]: 2025-10-02 11:44:06.077324249 +0000 UTC m=+0.942704266 container died 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f352a6c4e6b29736130a09217d90fa453de6aba667c3e15378e715e275c4c3-merged.mount: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74573]: 2025-10-02 11:44:06.140418364 +0000 UTC m=+1.005798381 container remove 3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4 (image=quay.io/ceph/ceph:v18, name=cranky_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:44:06 compute-0 systemd[1]: libpod-conmon-3db3e6f304d6bc0e861be4163063bd86fe58084b96ad910e96513248e60591d4.scope: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.20040255 +0000 UTC m=+0.039072475 container create cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:06 compute-0 systemd[1]: Started libpod-conmon-cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea.scope.
Oct 02 11:44:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cfe11f1cb4b8256fbaf8b4cbef1c4ed125423cb65a8855db00ee795bba4882/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cfe11f1cb4b8256fbaf8b4cbef1c4ed125423cb65a8855db00ee795bba4882/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cfe11f1cb4b8256fbaf8b4cbef1c4ed125423cb65a8855db00ee795bba4882/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cfe11f1cb4b8256fbaf8b4cbef1c4ed125423cb65a8855db00ee795bba4882/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.182398282 +0000 UTC m=+0.021068237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.28663214 +0000 UTC m=+0.125302095 container init cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.29184502 +0000 UTC m=+0.130514945 container start cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.298943684 +0000 UTC m=+0.137613609 container attach cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:44:06 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2938819431' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 11:44:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:44:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/132081613' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:44:06 compute-0 systemd[1]: libpod-cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea.scope: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.881861881 +0000 UTC m=+0.720531806 container died cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-46cfe11f1cb4b8256fbaf8b4cbef1c4ed125423cb65a8855db00ee795bba4882-merged.mount: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74626]: 2025-10-02 11:44:06.927492433 +0000 UTC m=+0.766162358 container remove cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea (image=quay.io/ceph/ceph:v18, name=compassionate_thompson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:06 compute-0 systemd[1]: libpod-conmon-cf6717cb0e60ff76e75009d7de7747fc8a2943f15580cb05469d24df9cf963ea.scope: Deactivated successfully.
Oct 02 11:44:06 compute-0 podman[74682]: 2025-10-02 11:44:06.985281655 +0000 UTC m=+0.039671352 container create ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:44:07 compute-0 systemd[1]: Started libpod-conmon-ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2.scope.
Oct 02 11:44:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3643e07a1332bc104ec3bcbba78272df781252d113bbad40ab2287c2503f119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3643e07a1332bc104ec3bcbba78272df781252d113bbad40ab2287c2503f119/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3643e07a1332bc104ec3bcbba78272df781252d113bbad40ab2287c2503f119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:07.043794288 +0000 UTC m=+0.098184005 container init ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:07.048662648 +0000 UTC m=+0.103052345 container start ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:07.052720675 +0000 UTC m=+0.107110392 container attach ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:06.965700062 +0000 UTC m=+0.020089789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 02 11:44:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315532620' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 11:44:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/132081613' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:44:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/315532620' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 11:44:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315532620' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  1: '-n'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  2: 'mgr.compute-0.unmtoh'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  3: '-f'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  4: '--setuser'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  5: 'ceph'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  6: '--setgroup'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  7: 'ceph'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  8: '--default-log-to-file=false'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  9: '--default-log-to-journald=true'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: mgr respawn  exe_path /proc/self/exe
Oct 02 11:44:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.unmtoh(active, since 5s)
Oct 02 11:44:07 compute-0 systemd[1]: libpod-ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2.scope: Deactivated successfully.
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:07.82545105 +0000 UTC m=+0.879840747 container died ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3643e07a1332bc104ec3bcbba78272df781252d113bbad40ab2287c2503f119-merged.mount: Deactivated successfully.
Oct 02 11:44:07 compute-0 podman[74682]: 2025-10-02 11:44:07.86648424 +0000 UTC m=+0.920873937 container remove ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2 (image=quay.io/ceph/ceph:v18, name=heuristic_noether, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 11:44:07 compute-0 systemd[1]: libpod-conmon-ef1cabff79da162a11b76b5e9fb0821d3874d20385893983a47a5fe82d9dffd2.scope: Deactivated successfully.
Oct 02 11:44:07 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: ignoring --setuser ceph since I am not root
Oct 02 11:44:07 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: ignoring --setgroup ceph since I am not root
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 11:44:07 compute-0 ceph-mgr[73961]: pidfile_write: ignore empty --pid-file
Oct 02 11:44:07 compute-0 podman[74738]: 2025-10-02 11:44:07.942880758 +0000 UTC m=+0.058310389 container create 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:08 compute-0 systemd[1]: Started libpod-conmon-97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac.scope.
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:07.907316435 +0000 UTC m=+0.022746116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:08 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'alerts'
Oct 02 11:44:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247978ea04bc7cd945c7d3a1f4c3f69357909c9127c11e4616a00d5c15311171/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247978ea04bc7cd945c7d3a1f4c3f69357909c9127c11e4616a00d5c15311171/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247978ea04bc7cd945c7d3a1f4c3f69357909c9127c11e4616a00d5c15311171/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:08.030423726 +0000 UTC m=+0.145853387 container init 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:08.035241594 +0000 UTC m=+0.150671235 container start 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:08.038516748 +0000 UTC m=+0.153946409 container attach 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:44:08 compute-0 ceph-mgr[73961]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:44:08 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'balancer'
Oct 02 11:44:08 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:08.340+0000 7fa056b0c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:44:08 compute-0 ceph-mgr[73961]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:44:08 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'cephadm'
Oct 02 11:44:08 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:08.617+0000 7fa056b0c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:44:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 11:44:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958163598' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]: {
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]:     "epoch": 5,
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]:     "available": true,
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]:     "active_name": "compute-0.unmtoh",
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]:     "num_standby": 0
Oct 02 11:44:08 compute-0 angry_stonebraker[74776]: }
Oct 02 11:44:08 compute-0 systemd[1]: libpod-97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac.scope: Deactivated successfully.
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:08.661909109 +0000 UTC m=+0.777338750 container died 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-247978ea04bc7cd945c7d3a1f4c3f69357909c9127c11e4616a00d5c15311171-merged.mount: Deactivated successfully.
Oct 02 11:44:08 compute-0 podman[74738]: 2025-10-02 11:44:08.70887322 +0000 UTC m=+0.824302861 container remove 97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac (image=quay.io/ceph/ceph:v18, name=angry_stonebraker, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:08 compute-0 systemd[1]: libpod-conmon-97a2d720bf143a2b5fe206fe27bdd16d0ffdfd553dc3a3f4bcd774862787ecac.scope: Deactivated successfully.
Oct 02 11:44:08 compute-0 podman[74816]: 2025-10-02 11:44:08.765814818 +0000 UTC m=+0.039536298 container create 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:08 compute-0 systemd[1]: Started libpod-conmon-294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c.scope.
Oct 02 11:44:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/315532620' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 11:44:08 compute-0 ceph-mon[73668]: mgrmap e5: compute-0.unmtoh(active, since 5s)
Oct 02 11:44:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1958163598' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8a8008eea511ab7a91265a69397615641bf2f908d5128ad21dbb107bc0057b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8a8008eea511ab7a91265a69397615641bf2f908d5128ad21dbb107bc0057b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8a8008eea511ab7a91265a69397615641bf2f908d5128ad21dbb107bc0057b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:08 compute-0 podman[74816]: 2025-10-02 11:44:08.828445559 +0000 UTC m=+0.102167039 container init 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:44:08 compute-0 podman[74816]: 2025-10-02 11:44:08.833356791 +0000 UTC m=+0.107078251 container start 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:08 compute-0 podman[74816]: 2025-10-02 11:44:08.744707871 +0000 UTC m=+0.018429351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:08 compute-0 podman[74816]: 2025-10-02 11:44:08.844139691 +0000 UTC m=+0.117861181 container attach 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:10 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'crash'
Oct 02 11:44:10 compute-0 ceph-mgr[73961]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:44:10 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'dashboard'
Oct 02 11:44:10 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:10.947+0000 7fa056b0c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:44:12 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'devicehealth'
Oct 02 11:44:12 compute-0 ceph-mgr[73961]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:44:12 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 11:44:12 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:12.716+0000 7fa056b0c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:44:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 11:44:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 11:44:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   from numpy import show_config as show_numpy_config
Oct 02 11:44:13 compute-0 ceph-mgr[73961]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:44:13 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'influx'
Oct 02 11:44:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:13.268+0000 7fa056b0c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:44:13 compute-0 ceph-mgr[73961]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:44:13 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'insights'
Oct 02 11:44:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:13.513+0000 7fa056b0c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:44:13 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'iostat'
Oct 02 11:44:14 compute-0 ceph-mgr[73961]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:44:14 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'k8sevents'
Oct 02 11:44:14 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:14.015+0000 7fa056b0c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:44:15 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'localpool'
Oct 02 11:44:16 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 11:44:16 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'mirroring'
Oct 02 11:44:17 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'nfs'
Oct 02 11:44:17 compute-0 ceph-mgr[73961]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:44:17 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'orchestrator'
Oct 02 11:44:17 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:17.814+0000 7fa056b0c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 11:44:18 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:18.470+0000 7fa056b0c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'osd_support'
Oct 02 11:44:18 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:18.744+0000 7fa056b0c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:44:18 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 11:44:18 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:18.978+0000 7fa056b0c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:44:19 compute-0 ceph-mgr[73961]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:44:19 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:19.284+0000 7fa056b0c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:44:19 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'progress'
Oct 02 11:44:19 compute-0 ceph-mgr[73961]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:44:19 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:19.539+0000 7fa056b0c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:44:19 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'prometheus'
Oct 02 11:44:20 compute-0 ceph-mgr[73961]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:44:20 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rbd_support'
Oct 02 11:44:20 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:20.588+0000 7fa056b0c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:44:20 compute-0 ceph-mgr[73961]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:44:20 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'restful'
Oct 02 11:44:20 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:20.895+0000 7fa056b0c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:44:21 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rgw'
Oct 02 11:44:22 compute-0 ceph-mgr[73961]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:44:22 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'rook'
Oct 02 11:44:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:22.318+0000 7fa056b0c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:44:24 compute-0 ceph-mgr[73961]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:44:24 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'selftest'
Oct 02 11:44:24 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:24.517+0000 7fa056b0c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:44:24 compute-0 ceph-mgr[73961]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:44:24 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'snap_schedule'
Oct 02 11:44:24 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:24.763+0000 7fa056b0c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'stats'
Oct 02 11:44:25 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:25.014+0000 7fa056b0c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'status'
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'telegraf'
Oct 02 11:44:25 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:25.580+0000 7fa056b0c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:44:25 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'telemetry'
Oct 02 11:44:25 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:25.845+0000 7fa056b0c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:44:26 compute-0 ceph-mgr[73961]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:44:26 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 11:44:26 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:26.499+0000 7fa056b0c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:44:27 compute-0 ceph-mgr[73961]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:27 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:27.246+0000 7fa056b0c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:44:27 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'volumes'
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr[py] Loading python module 'zabbix'
Oct 02 11:44:28 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:28.014+0000 7fa056b0c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:44:28 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:44:28.282+0000 7fa056b0c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Active manager daemon compute-0.unmtoh restarted
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: ms_deliver_dispatch: unhandled message 0x56325a476420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.unmtoh
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr handle_mgr_map Activating!
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr handle_mgr_map I am now activating
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.unmtoh(active, starting, since 0.0130437s)
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Manager daemon compute-0.unmtoh is now available
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: balancer
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:44:28
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] No pools available
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: Active manager daemon compute-0.unmtoh restarted
Oct 02 11:44:28 compute-0 ceph-mon[73668]: Activating manager daemon compute-0.unmtoh
Oct 02 11:44:28 compute-0 ceph-mon[73668]: osdmap e2: 0 total, 0 up, 0 in
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mgrmap e6: compute-0.unmtoh(active, starting, since 0.0130437s)
Oct 02 11:44:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.unmtoh", "id": "compute-0.unmtoh"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mon[73668]: Manager daemon compute-0.unmtoh is now available
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: cephadm
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: crash
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: devicehealth
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: iostat
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: nfs
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: orchestrator
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: pg_autoscaler
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: progress
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [progress INFO root] Loading...
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [progress INFO root] No stored events to load
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [progress INFO root] Loaded [] historic events
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] recovery thread starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] starting setup
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: rbd_support
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: restful
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [restful WARNING root] server not running: no certificate configured
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: status
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: telemetry
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] PerfHandler: starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TaskHandler: starting
Oct 02 11:44:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"} v 0) v1
Oct 02 11:44:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"}]: dispatch
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] setup complete
Oct 02 11:44:28 compute-0 ceph-mgr[73961]: mgr load Constructed class from module: volumes
Oct 02 11:44:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923247 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.unmtoh(active, since 1.02875s)
Oct 02 11:44:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 11:44:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 11:44:29 compute-0 wonderful_hopper[74829]: {
Oct 02 11:44:29 compute-0 wonderful_hopper[74829]:     "mgrmap_epoch": 7,
Oct 02 11:44:29 compute-0 wonderful_hopper[74829]:     "initialized": true
Oct 02 11:44:29 compute-0 wonderful_hopper[74829]: }
Oct 02 11:44:29 compute-0 systemd[1]: libpod-294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c.scope: Deactivated successfully.
Oct 02 11:44:29 compute-0 podman[74816]: 2025-10-02 11:44:29.343279242 +0000 UTC m=+20.617000702 container died 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:44:29 compute-0 ceph-mon[73668]: Found migration_current of "None". Setting to last migration.
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/mirror_snapshot_schedule"}]: dispatch
Oct 02 11:44:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.unmtoh/trash_purge_schedule"}]: dispatch
Oct 02 11:44:29 compute-0 ceph-mon[73668]: mgrmap e7: compute-0.unmtoh(active, since 1.02875s)
Oct 02 11:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-af8a8008eea511ab7a91265a69397615641bf2f908d5128ad21dbb107bc0057b-merged.mount: Deactivated successfully.
Oct 02 11:44:29 compute-0 podman[74816]: 2025-10-02 11:44:29.392560098 +0000 UTC m=+20.666281558 container remove 294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c (image=quay.io/ceph/ceph:v18, name=wonderful_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:44:29 compute-0 systemd[1]: libpod-conmon-294501381f0c4e72068cd403cfa6ca17488bd561caa46c71510b4db993c6f82c.scope: Deactivated successfully.
Oct 02 11:44:29 compute-0 podman[74990]: 2025-10-02 11:44:29.446771817 +0000 UTC m=+0.036241646 container create c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:29 compute-0 systemd[1]: Started libpod-conmon-c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85.scope.
Oct 02 11:44:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d56a3acec29e5ae431d59616ea1838a4c453c52c99b4d17483a7c1ff3b0e51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d56a3acec29e5ae431d59616ea1838a4c453c52c99b4d17483a7c1ff3b0e51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d56a3acec29e5ae431d59616ea1838a4c453c52c99b4d17483a7c1ff3b0e51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:29 compute-0 podman[74990]: 2025-10-02 11:44:29.512130527 +0000 UTC m=+0.101600386 container init c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:44:29 compute-0 podman[74990]: 2025-10-02 11:44:29.517622164 +0000 UTC m=+0.107091993 container start c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:44:29 compute-0 podman[74990]: 2025-10-02 11:44:29.52079182 +0000 UTC m=+0.110261679 container attach c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:29 compute-0 podman[74990]: 2025-10-02 11:44:29.429260666 +0000 UTC m=+0.018730525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:30 compute-0 systemd[1]: libpod-c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85.scope: Deactivated successfully.
Oct 02 11:44:30 compute-0 podman[74990]: 2025-10-02 11:44:30.097053659 +0000 UTC m=+0.686523498 container died c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d56a3acec29e5ae431d59616ea1838a4c453c52c99b4d17483a7c1ff3b0e51-merged.mount: Deactivated successfully.
Oct 02 11:44:30 compute-0 podman[74990]: 2025-10-02 11:44:30.139760689 +0000 UTC m=+0.729230518 container remove c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85 (image=quay.io/ceph/ceph:v18, name=compassionate_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 11:44:30 compute-0 systemd[1]: libpod-conmon-c032380feb35ed93ed29dc5cca813c6b9de48cbb4ca4d4db1eb7335e92718f85.scope: Deactivated successfully.
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 podman[75044]: 2025-10-02 11:44:30.205537479 +0000 UTC m=+0.045059204 container create 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:30 compute-0 systemd[1]: Started libpod-conmon-8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868.scope.
Oct 02 11:44:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c03cc9bb989cbef006457d725e8beb2fcc17a3680fa84b2760241c4598c4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c03cc9bb989cbef006457d725e8beb2fcc17a3680fa84b2760241c4598c4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c03cc9bb989cbef006457d725e8beb2fcc17a3680fa84b2760241c4598c4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:30 compute-0 podman[75044]: 2025-10-02 11:44:30.18850478 +0000 UTC m=+0.028026525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:30 compute-0 podman[75044]: 2025-10-02 11:44:30.288651576 +0000 UTC m=+0.128173331 container init 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:44:30 compute-0 podman[75044]: 2025-10-02 11:44:30.293449235 +0000 UTC m=+0.132970960 container start 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:30 compute-0 podman[75044]: 2025-10-02 11:44:30.296755094 +0000 UTC m=+0.136276819 container attach 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: [cephadm INFO root] Set ssh ssh_user
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 02 11:44:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 02 11:44:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: [cephadm INFO root] Set ssh ssh_config
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 02 11:44:30 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 02 11:44:30 compute-0 quizzical_dirac[75061]: ssh user set to ceph-admin. sudo will be used
Oct 02 11:44:30 compute-0 systemd[1]: libpod-8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868.scope: Deactivated successfully.
Oct 02 11:44:30 compute-0 podman[75087]: 2025-10-02 11:44:30.878229394 +0000 UTC m=+0.021310935 container died 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44c03cc9bb989cbef006457d725e8beb2fcc17a3680fa84b2760241c4598c4c-merged.mount: Deactivated successfully.
Oct 02 11:44:30 compute-0 podman[75087]: 2025-10-02 11:44:30.915586909 +0000 UTC m=+0.058668440 container remove 8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868 (image=quay.io/ceph/ceph:v18, name=quizzical_dirac, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:30 compute-0 systemd[1]: libpod-conmon-8f672688f9e7ca1e77d0f561e41f3b39717c3e35e7dd4a916769c344644b4868.scope: Deactivated successfully.
Oct 02 11:44:30 compute-0 podman[75102]: 2025-10-02 11:44:30.984192255 +0000 UTC m=+0.046845201 container create 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:44:31 compute-0 systemd[1]: Started libpod-conmon-25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a.scope.
Oct 02 11:44:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:31.044267852 +0000 UTC m=+0.106920828 container init 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:30.957857327 +0000 UTC m=+0.020510293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:31.053929952 +0000 UTC m=+0.116582898 container start 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:31.056724438 +0000 UTC m=+0.119377384 container attach 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:44:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.unmtoh(active, since 2s)
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:44:31] ENGINE Bus STARTING
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:44:31] ENGINE Bus STARTING
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 02 11:44:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO root] Set ssh private key
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:44:31] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:44:31] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:44:31] ENGINE Client ('192.168.122.100', 38700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:44:31] ENGINE Client ('192.168.122.100', 38700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:44:31 compute-0 systemd[1]: libpod-25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a.scope: Deactivated successfully.
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:31.611716385 +0000 UTC m=+0.674369331 container died 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-af5185c4e0c32afa7e7fea54ef7c3a1032e39edf4582c87734adf3c980145d0a-merged.mount: Deactivated successfully.
Oct 02 11:44:31 compute-0 podman[75102]: 2025-10-02 11:44:31.652529623 +0000 UTC m=+0.715182569 container remove 25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a (image=quay.io/ceph/ceph:v18, name=confident_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:31 compute-0 systemd[1]: libpod-conmon-25fa48fe2077805296ab3d5c878d613ce91226735431cbcbde372f48ddceaa3a.scope: Deactivated successfully.
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:44:31] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:44:31] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: [cephadm INFO cherrypy.error] [02/Oct/2025:11:44:31] ENGINE Bus STARTED
Oct 02 11:44:31 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : [02/Oct/2025:11:44:31] ENGINE Bus STARTED
Oct 02 11:44:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:44:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:31 compute-0 podman[75179]: 2025-10-02 11:44:31.709752673 +0000 UTC m=+0.040166492 container create b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:44:31 compute-0 systemd[1]: Started libpod-conmon-b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410.scope.
Oct 02 11:44:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:31 compute-0 podman[75179]: 2025-10-02 11:44:31.760943751 +0000 UTC m=+0.091357600 container init b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:44:31 compute-0 podman[75179]: 2025-10-02 11:44:31.766243464 +0000 UTC m=+0.096657283 container start b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:44:31 compute-0 podman[75179]: 2025-10-02 11:44:31.769843711 +0000 UTC m=+0.100257530 container attach b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:31 compute-0 podman[75179]: 2025-10-02 11:44:31.691557824 +0000 UTC m=+0.021971673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:31 compute-0 ceph-mon[73668]: Set ssh ssh_user
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:31 compute-0 ceph-mon[73668]: Set ssh ssh_config
Oct 02 11:44:31 compute-0 ceph-mon[73668]: ssh user set to ceph-admin. sudo will be used
Oct 02 11:44:31 compute-0 ceph-mon[73668]: mgrmap e8: compute-0.unmtoh(active, since 2s)
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:32 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 02 11:44:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:32 compute-0 ceph-mgr[73961]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 02 11:44:32 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 02 11:44:32 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:32 compute-0 systemd[1]: libpod-b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410.scope: Deactivated successfully.
Oct 02 11:44:32 compute-0 conmon[75195]: conmon b4a21c0138a5f717154f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410.scope/container/memory.events
Oct 02 11:44:32 compute-0 podman[75179]: 2025-10-02 11:44:32.306440532 +0000 UTC m=+0.636854351 container died b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef057c0941fa841040d6ffac4d6c2266eafe3d60a54553dd4ef29295ff7969ce-merged.mount: Deactivated successfully.
Oct 02 11:44:32 compute-0 podman[75179]: 2025-10-02 11:44:32.345628606 +0000 UTC m=+0.676042415 container remove b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410 (image=quay.io/ceph/ceph:v18, name=relaxed_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:44:32 compute-0 systemd[1]: libpod-conmon-b4a21c0138a5f717154f9dbe12064205d7005c1e8da5052fcdabcbfb799e9410.scope: Deactivated successfully.
Oct 02 11:44:32 compute-0 podman[75231]: 2025-10-02 11:44:32.4018844 +0000 UTC m=+0.039703799 container create a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:32 compute-0 systemd[1]: Started libpod-conmon-a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060.scope.
Oct 02 11:44:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852bf03e46a375de57245a8db1c34fc95c6babd9c4b8d471c8b9614757d77251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852bf03e46a375de57245a8db1c34fc95c6babd9c4b8d471c8b9614757d77251/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852bf03e46a375de57245a8db1c34fc95c6babd9c4b8d471c8b9614757d77251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:32 compute-0 podman[75231]: 2025-10-02 11:44:32.459948953 +0000 UTC m=+0.097768362 container init a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:44:32 compute-0 podman[75231]: 2025-10-02 11:44:32.465536133 +0000 UTC m=+0.103355532 container start a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:32 compute-0 podman[75231]: 2025-10-02 11:44:32.469065698 +0000 UTC m=+0.106885127 container attach a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:44:32 compute-0 podman[75231]: 2025-10-02 11:44:32.382283663 +0000 UTC m=+0.020103082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:32 compute-0 ceph-mon[73668]: [02/Oct/2025:11:44:31] ENGINE Bus STARTING
Oct 02 11:44:32 compute-0 ceph-mon[73668]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:32 compute-0 ceph-mon[73668]: Set ssh ssh_identity_key
Oct 02 11:44:32 compute-0 ceph-mon[73668]: Set ssh private key
Oct 02 11:44:32 compute-0 ceph-mon[73668]: [02/Oct/2025:11:44:31] ENGINE Serving on https://192.168.122.100:7150
Oct 02 11:44:32 compute-0 ceph-mon[73668]: [02/Oct/2025:11:44:31] ENGINE Client ('192.168.122.100', 38700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 11:44:32 compute-0 ceph-mon[73668]: [02/Oct/2025:11:44:31] ENGINE Serving on http://192.168.122.100:8765
Oct 02 11:44:32 compute-0 ceph-mon[73668]: [02/Oct/2025:11:44:31] ENGINE Bus STARTED
Oct 02 11:44:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:33 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:33 compute-0 tender_black[75248]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcBMQ7UrCqD7AIthu4fQe2511mug7tnhyryVlPG3zqqBAkjQ3xd5/b/mSRR+jMJl7fIyHDKrr+jmVo8tgoEC9kgjzxiP3wom6QeFC7KPf2HsNQXseFuvfPtnq/rTvUlQaFjyiEbuToVLYa1sBma6mdaPOECTjCLber7yFzEm0RhLG7U5KMymA5bqmmYkJ1raAVNBSpZ8ycpbzlR2eVstGJbloHCocgXirfu3P6byUGJvQ55HGPv7oQf4fTU2pLyZ5tiBYSeT+HKpaXN9thDFruviIXfHcP4nPt60Lj11egqo2jzjZaJtyfoBReDfP5cUZPT5H6QTCNoDA8ZVSfr/Xwg98QT0Q6RoOlzq7RCdzgCO8zuzz4UepAj47bW/xQaLSA6JvxyzJbWSHQG0Z1nVB8RSacFhkogvxscWFyFM+CAynTkHMFVMHQThK9xQcBbGWiUww6KM7XUZJmQWBRgi9758jMirFMsOe837eeLFjQrTP9xCWjqxK4iMG1ql9PO50= zuul@controller
Oct 02 11:44:33 compute-0 systemd[1]: libpod-a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060.scope: Deactivated successfully.
Oct 02 11:44:33 compute-0 conmon[75248]: conmon a8482e51a10e357792b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060.scope/container/memory.events
Oct 02 11:44:33 compute-0 podman[75231]: 2025-10-02 11:44:33.01941018 +0000 UTC m=+0.657229579 container died a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-852bf03e46a375de57245a8db1c34fc95c6babd9c4b8d471c8b9614757d77251-merged.mount: Deactivated successfully.
Oct 02 11:44:33 compute-0 podman[75231]: 2025-10-02 11:44:33.062072739 +0000 UTC m=+0.699892138 container remove a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060 (image=quay.io/ceph/ceph:v18, name=tender_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:33 compute-0 systemd[1]: libpod-conmon-a8482e51a10e357792b1f9a8d7ed5be45c1df91d3d22a9502106a42a983c0060.scope: Deactivated successfully.
Oct 02 11:44:33 compute-0 podman[75286]: 2025-10-02 11:44:33.119536665 +0000 UTC m=+0.039242917 container create 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:44:33 compute-0 systemd[1]: Started libpod-conmon-2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9.scope.
Oct 02 11:44:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c01361a0f01a0a6f9abd5c9e2000abac35f7838806458dd343038647944fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c01361a0f01a0a6f9abd5c9e2000abac35f7838806458dd343038647944fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c01361a0f01a0a6f9abd5c9e2000abac35f7838806458dd343038647944fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:33 compute-0 podman[75286]: 2025-10-02 11:44:33.186732384 +0000 UTC m=+0.106438666 container init 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:33 compute-0 podman[75286]: 2025-10-02 11:44:33.194795531 +0000 UTC m=+0.114501793 container start 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:33 compute-0 podman[75286]: 2025-10-02 11:44:33.101998083 +0000 UTC m=+0.021704355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:33 compute-0 podman[75286]: 2025-10-02 11:44:33.198824919 +0000 UTC m=+0.118531171 container attach 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:44:33 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:33 compute-0 ceph-mon[73668]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:33 compute-0 ceph-mon[73668]: Set ssh ssh_identity_pub
Oct 02 11:44:33 compute-0 sshd-session[75328]: Accepted publickey for ceph-admin from 192.168.122.100 port 59504 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:33 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 11:44:33 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 11:44:33 compute-0 systemd-logind[820]: New session 22 of user ceph-admin.
Oct 02 11:44:33 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 11:44:33 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 11:44:33 compute-0 systemd[75332]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:34 compute-0 systemd[75332]: Queued start job for default target Main User Target.
Oct 02 11:44:34 compute-0 sshd-session[75345]: Accepted publickey for ceph-admin from 192.168.122.100 port 59508 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:34 compute-0 systemd-logind[820]: New session 24 of user ceph-admin.
Oct 02 11:44:34 compute-0 systemd[75332]: Created slice User Application Slice.
Oct 02 11:44:34 compute-0 systemd[75332]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:44:34 compute-0 systemd[75332]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:44:34 compute-0 systemd[75332]: Reached target Paths.
Oct 02 11:44:34 compute-0 systemd[75332]: Reached target Timers.
Oct 02 11:44:34 compute-0 systemd[75332]: Starting D-Bus User Message Bus Socket...
Oct 02 11:44:34 compute-0 systemd[75332]: Starting Create User's Volatile Files and Directories...
Oct 02 11:44:34 compute-0 systemd[75332]: Finished Create User's Volatile Files and Directories.
Oct 02 11:44:34 compute-0 systemd[75332]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:44:34 compute-0 systemd[75332]: Reached target Sockets.
Oct 02 11:44:34 compute-0 systemd[75332]: Reached target Basic System.
Oct 02 11:44:34 compute-0 systemd[75332]: Reached target Main User Target.
Oct 02 11:44:34 compute-0 systemd[75332]: Startup finished in 128ms.
Oct 02 11:44:34 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 11:44:34 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct 02 11:44:34 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 02 11:44:34 compute-0 sshd-session[75328]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:34 compute-0 sshd-session[75345]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:34 compute-0 sudo[75352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:34 compute-0 sudo[75352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:34 compute-0 sudo[75352]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:34 compute-0 sudo[75377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:34 compute-0 sudo[75377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:34 compute-0 sudo[75377]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053052 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:34 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:34 compute-0 sshd-session[75402]: Accepted publickey for ceph-admin from 192.168.122.100 port 59510 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:34 compute-0 systemd-logind[820]: New session 25 of user ceph-admin.
Oct 02 11:44:34 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 02 11:44:34 compute-0 sshd-session[75402]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:34 compute-0 sudo[75406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:34 compute-0 sudo[75406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:34 compute-0 sudo[75406]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:34 compute-0 sudo[75431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:44:34 compute-0 sudo[75431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:34 compute-0 sudo[75431]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:34 compute-0 ceph-mon[73668]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:34 compute-0 ceph-mon[73668]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:34 compute-0 sshd-session[75456]: Accepted publickey for ceph-admin from 192.168.122.100 port 59526 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:34 compute-0 systemd-logind[820]: New session 26 of user ceph-admin.
Oct 02 11:44:34 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 02 11:44:34 compute-0 sshd-session[75456]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:34 compute-0 sudo[75460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:34 compute-0 sudo[75460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:34 compute-0 sudo[75460]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 sudo[75485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:44:35 compute-0 sudo[75485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:35 compute-0 sudo[75485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 02 11:44:35 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 02 11:44:35 compute-0 sshd-session[75510]: Accepted publickey for ceph-admin from 192.168.122.100 port 59528 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:35 compute-0 systemd-logind[820]: New session 27 of user ceph-admin.
Oct 02 11:44:35 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 02 11:44:35 compute-0 sshd-session[75510]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:35 compute-0 sudo[75514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:35 compute-0 sudo[75514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:35 compute-0 sudo[75514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 sudo[75539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:35 compute-0 sudo[75539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:35 compute-0 sudo[75539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 sshd-session[75564]: Accepted publickey for ceph-admin from 192.168.122.100 port 59538 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:35 compute-0 systemd-logind[820]: New session 28 of user ceph-admin.
Oct 02 11:44:35 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 02 11:44:35 compute-0 sshd-session[75564]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:35 compute-0 sudo[75568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:35 compute-0 sudo[75568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:35 compute-0 sudo[75568]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 sudo[75593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:35 compute-0 sudo[75593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:35 compute-0 sudo[75593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:35 compute-0 ceph-mon[73668]: Deploying cephadm binary to compute-0
Oct 02 11:44:35 compute-0 sshd-session[75618]: Accepted publickey for ceph-admin from 192.168.122.100 port 59540 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:35 compute-0 systemd-logind[820]: New session 29 of user ceph-admin.
Oct 02 11:44:35 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 02 11:44:36 compute-0 sshd-session[75618]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:36 compute-0 sudo[75622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:36 compute-0 sudo[75622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75622]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 sudo[75647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:44:36 compute-0 sudo[75647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75647]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:36 compute-0 sshd-session[75672]: Accepted publickey for ceph-admin from 192.168.122.100 port 59542 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:36 compute-0 systemd-logind[820]: New session 30 of user ceph-admin.
Oct 02 11:44:36 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 02 11:44:36 compute-0 sshd-session[75672]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:36 compute-0 sudo[75676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:36 compute-0 sudo[75676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 sudo[75701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:36 compute-0 sudo[75701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75701]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 sshd-session[75726]: Accepted publickey for ceph-admin from 192.168.122.100 port 59550 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:36 compute-0 systemd-logind[820]: New session 31 of user ceph-admin.
Oct 02 11:44:36 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 02 11:44:36 compute-0 sshd-session[75726]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:36 compute-0 sudo[75730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:36 compute-0 sudo[75730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:36 compute-0 sudo[75755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:44:36 compute-0 sudo[75755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:36 compute-0 sudo[75755]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:37 compute-0 sshd-session[75780]: Accepted publickey for ceph-admin from 192.168.122.100 port 59566 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:37 compute-0 systemd-logind[820]: New session 32 of user ceph-admin.
Oct 02 11:44:37 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 02 11:44:37 compute-0 sshd-session[75780]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:37 compute-0 sshd-session[75807]: Accepted publickey for ceph-admin from 192.168.122.100 port 49642 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:37 compute-0 systemd-logind[820]: New session 33 of user ceph-admin.
Oct 02 11:44:37 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 02 11:44:37 compute-0 sshd-session[75807]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:37 compute-0 sudo[75811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:37 compute-0 sudo[75811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:37 compute-0 sudo[75811]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:37 compute-0 sudo[75836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:44:37 compute-0 sudo[75836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:37 compute-0 sudo[75836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:37 compute-0 sshd-session[75861]: Accepted publickey for ceph-admin from 192.168.122.100 port 49656 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:38 compute-0 systemd-logind[820]: New session 34 of user ceph-admin.
Oct 02 11:44:38 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 02 11:44:38 compute-0 sshd-session[75861]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:38 compute-0 sudo[75865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:38 compute-0 sudo[75865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 sudo[75865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:38 compute-0 sudo[75890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:44:38 compute-0 sudo[75890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:38 compute-0 sudo[75890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:38 compute-0 ceph-mgr[73961]: [cephadm INFO root] Added host compute-0
Oct 02 11:44:38 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 11:44:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:44:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:38 compute-0 cool_clarke[75302]: Added host 'compute-0' with addr '192.168.122.100'
Oct 02 11:44:38 compute-0 podman[75286]: 2025-10-02 11:44:38.465069924 +0000 UTC m=+5.384776176 container died 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:44:38 compute-0 systemd[1]: libpod-2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9.scope: Deactivated successfully.
Oct 02 11:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b06c01361a0f01a0a6f9abd5c9e2000abac35f7838806458dd343038647944fb-merged.mount: Deactivated successfully.
Oct 02 11:44:38 compute-0 sudo[75935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:38 compute-0 sudo[75935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 sudo[75935]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:38 compute-0 podman[75286]: 2025-10-02 11:44:38.512943032 +0000 UTC m=+5.432649284 container remove 2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9 (image=quay.io/ceph/ceph:v18, name=cool_clarke, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:44:38 compute-0 systemd[1]: libpod-conmon-2708d8b3da826b79c139f1af22d7a36440419eb277550ebfd5bda833519d6cc9.scope: Deactivated successfully.
Oct 02 11:44:38 compute-0 sudo[75974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:38 compute-0 sudo[75974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 sudo[75974]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:38 compute-0 podman[75978]: 2025-10-02 11:44:38.573729578 +0000 UTC m=+0.037501200 container create 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:44:38 compute-0 systemd[1]: Started libpod-conmon-7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413.scope.
Oct 02 11:44:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:38 compute-0 sudo[76012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:38 compute-0 sudo[76012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810bc2730b511f50f0a5c66695dc00c7820ddd9f826bbc03afe393797f6cb835/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810bc2730b511f50f0a5c66695dc00c7820ddd9f826bbc03afe393797f6cb835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810bc2730b511f50f0a5c66695dc00c7820ddd9f826bbc03afe393797f6cb835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:38 compute-0 sudo[76012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:38 compute-0 podman[75978]: 2025-10-02 11:44:38.650017131 +0000 UTC m=+0.113788773 container init 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:38 compute-0 podman[75978]: 2025-10-02 11:44:38.558431717 +0000 UTC m=+0.022203359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:38 compute-0 podman[75978]: 2025-10-02 11:44:38.658692785 +0000 UTC m=+0.122464407 container start 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:38 compute-0 podman[75978]: 2025-10-02 11:44:38.663612117 +0000 UTC m=+0.127383799 container attach 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:44:38 compute-0 sudo[76044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 02 11:44:38 compute-0 sudo[76044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:38 compute-0 podman[76098]: 2025-10-02 11:44:38.938776933 +0000 UTC m=+0.041871198 container create 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:38 compute-0 systemd[1]: Started libpod-conmon-97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e.scope.
Oct 02 11:44:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:38 compute-0 podman[76098]: 2025-10-02 11:44:38.998299485 +0000 UTC m=+0.101393780 container init 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:39 compute-0 podman[76098]: 2025-10-02 11:44:39.004156723 +0000 UTC m=+0.107250988 container start 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:44:39 compute-0 podman[76098]: 2025-10-02 11:44:39.007490882 +0000 UTC m=+0.110585167 container attach 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:44:39 compute-0 podman[76098]: 2025-10-02 11:44:38.919812673 +0000 UTC m=+0.022906958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 02 11:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:44:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 zealous_banach[76039]: Scheduled mon update...
Oct 02 11:44:39 compute-0 systemd[1]: libpod-7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413.scope: Deactivated successfully.
Oct 02 11:44:39 compute-0 podman[75978]: 2025-10-02 11:44:39.243602817 +0000 UTC m=+0.707374429 container died 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-810bc2730b511f50f0a5c66695dc00c7820ddd9f826bbc03afe393797f6cb835-merged.mount: Deactivated successfully.
Oct 02 11:44:39 compute-0 podman[75978]: 2025-10-02 11:44:39.286494982 +0000 UTC m=+0.750266604 container remove 7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413 (image=quay.io/ceph/ceph:v18, name=zealous_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:44:39 compute-0 systemd[1]: libpod-conmon-7e2f65dc7f5dea64602128747c5ce7d9fa7c6e5150018ab15013ebc704f2c413.scope: Deactivated successfully.
Oct 02 11:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:39 compute-0 angry_goldstine[76117]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 11:44:39 compute-0 systemd[1]: libpod-97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e.scope: Deactivated successfully.
Oct 02 11:44:39 compute-0 podman[76098]: 2025-10-02 11:44:39.31280592 +0000 UTC m=+0.415900195 container died 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a39beeb054277b664e989aa106fb90f110facd9a3c2ecd8d3760690ec6f5dec0-merged.mount: Deactivated successfully.
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.353253388 +0000 UTC m=+0.049059781 container create 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:44:39 compute-0 podman[76098]: 2025-10-02 11:44:39.371306064 +0000 UTC m=+0.474400329 container remove 97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e (image=quay.io/ceph/ceph:v18, name=angry_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:39 compute-0 systemd[1]: Started libpod-conmon-1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c.scope.
Oct 02 11:44:39 compute-0 systemd[1]: libpod-conmon-97eab3430622f69e832a816e42b54b81ac7def7ccd6807370a024b62fe995b0e.scope: Deactivated successfully.
Oct 02 11:44:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049308f6f5dd98ab9cd2726db8a49f0dcf069054281e5e390dbc0f6ced44662e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049308f6f5dd98ab9cd2726db8a49f0dcf069054281e5e390dbc0f6ced44662e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049308f6f5dd98ab9cd2726db8a49f0dcf069054281e5e390dbc0f6ced44662e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:39 compute-0 sudo[76044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.410252481 +0000 UTC m=+0.106058924 container init 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.416339145 +0000 UTC m=+0.112145548 container start 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.419247804 +0000 UTC m=+0.115054277 container attach 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.326724874 +0000 UTC m=+0.022531307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 ceph-mon[73668]: Added host compute-0
Oct 02 11:44:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:44:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 sudo[76189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:39 compute-0 sudo[76189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76189]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 sudo[76214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:39 compute-0 sudo[76214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76214]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 sudo[76239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:39 compute-0 sudo[76239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 sudo[76264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:44:39 compute-0 sudo[76264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 sudo[76327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:39 compute-0 sudo[76327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76327]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 02 11:44:39 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 02 11:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:44:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:39 compute-0 sudo[76352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:39 compute-0 nice_saha[76184]: Scheduled mgr update...
Oct 02 11:44:39 compute-0 sudo[76352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:39 compute-0 sudo[76352]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:39 compute-0 systemd[1]: libpod-1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c.scope: Deactivated successfully.
Oct 02 11:44:39 compute-0 podman[76155]: 2025-10-02 11:44:39.995978396 +0000 UTC m=+0.691784799 container died 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-049308f6f5dd98ab9cd2726db8a49f0dcf069054281e5e390dbc0f6ced44662e-merged.mount: Deactivated successfully.
Oct 02 11:44:40 compute-0 sudo[76379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:40 compute-0 sudo[76379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:40 compute-0 sudo[76379]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:40 compute-0 podman[76155]: 2025-10-02 11:44:40.041291955 +0000 UTC m=+0.737098368 container remove 1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c (image=quay.io/ceph/ceph:v18, name=nice_saha, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:44:40 compute-0 systemd[1]: libpod-conmon-1c4f9281491e76076b87af1339213c477fb5ad3ad7d294fe0abafa63c875f29c.scope: Deactivated successfully.
Oct 02 11:44:40 compute-0 sudo[76415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:44:40 compute-0 sudo[76415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.110490788 +0000 UTC m=+0.045634399 container create 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:40 compute-0 systemd[1]: Started libpod-conmon-0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2.scope.
Oct 02 11:44:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cceef6035571fe02662412c048e85cebd11a5edc4c5c87798bfa2d6256610800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cceef6035571fe02662412c048e85cebd11a5edc4c5c87798bfa2d6256610800/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cceef6035571fe02662412c048e85cebd11a5edc4c5c87798bfa2d6256610800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.183174484 +0000 UTC m=+0.118318095 container init 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.091549078 +0000 UTC m=+0.026692689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.190986354 +0000 UTC m=+0.126129945 container start 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.202144484 +0000 UTC m=+0.137288075 container attach 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:44:40 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:40 compute-0 podman[76530]: 2025-10-02 11:44:40.52114441 +0000 UTC m=+0.055923406 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:40 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:40 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service crash spec with placement *
Oct 02 11:44:40 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 02 11:44:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:44:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:40 compute-0 elated_germain[76456]: Scheduled crash update...
Oct 02 11:44:40 compute-0 systemd[1]: libpod-0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2.scope: Deactivated successfully.
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.749816764 +0000 UTC m=+0.684960365 container died 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cceef6035571fe02662412c048e85cebd11a5edc4c5c87798bfa2d6256610800-merged.mount: Deactivated successfully.
Oct 02 11:44:40 compute-0 podman[76420]: 2025-10-02 11:44:40.796754208 +0000 UTC m=+0.731897799 container remove 0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2 (image=quay.io/ceph/ceph:v18, name=elated_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:44:40 compute-0 systemd[1]: libpod-conmon-0ea4826bbf768eaf3652d3348bf0cab09cfa2c4f22aa584cf1755314d5fa25e2.scope: Deactivated successfully.
Oct 02 11:44:40 compute-0 podman[76530]: 2025-10-02 11:44:40.834448362 +0000 UTC m=+0.369227338 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:44:40 compute-0 podman[76584]: 2025-10-02 11:44:40.862954859 +0000 UTC m=+0.047255652 container create 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:44:40 compute-0 ceph-mon[73668]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:40 compute-0 ceph-mon[73668]: Saving service mon spec with placement count:5
Oct 02 11:44:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:40 compute-0 systemd[1]: Started libpod-conmon-622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba.scope.
Oct 02 11:44:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:40 compute-0 podman[76584]: 2025-10-02 11:44:40.838525122 +0000 UTC m=+0.022825965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6681422684c3969d7e9806a2c04c09185fc712efdbbccfe097bc0c928d9d9f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6681422684c3969d7e9806a2c04c09185fc712efdbbccfe097bc0c928d9d9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6681422684c3969d7e9806a2c04c09185fc712efdbbccfe097bc0c928d9d9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:40 compute-0 podman[76584]: 2025-10-02 11:44:40.9473359 +0000 UTC m=+0.131636693 container init 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:40 compute-0 podman[76584]: 2025-10-02 11:44:40.954342949 +0000 UTC m=+0.138643722 container start 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:44:40 compute-0 podman[76584]: 2025-10-02 11:44:40.958627694 +0000 UTC m=+0.142928497 container attach 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:44:41 compute-0 sudo[76415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:41 compute-0 sudo[76632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 sudo[76632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[76657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:41 compute-0 sudo[76657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76657]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[76682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 sudo[76682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[76707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:44:41 compute-0 sudo[76707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76764 (sysctl)
Oct 02 11:44:41 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 02 11:44:41 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 02 11:44:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 02 11:44:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1429254975' entity='client.admin' 
Oct 02 11:44:41 compute-0 systemd[1]: libpod-622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba.scope: Deactivated successfully.
Oct 02 11:44:41 compute-0 podman[76772]: 2025-10-02 11:44:41.551928312 +0000 UTC m=+0.024185561 container died 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:44:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6681422684c3969d7e9806a2c04c09185fc712efdbbccfe097bc0c928d9d9f-merged.mount: Deactivated successfully.
Oct 02 11:44:41 compute-0 podman[76772]: 2025-10-02 11:44:41.592070243 +0000 UTC m=+0.064327492 container remove 622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba (image=quay.io/ceph/ceph:v18, name=elastic_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:41 compute-0 systemd[1]: libpod-conmon-622bc2f5abca1551500e496d50ccbce3dea5c76776a9e1251569b3afeb1536ba.scope: Deactivated successfully.
Oct 02 11:44:41 compute-0 podman[76790]: 2025-10-02 11:44:41.652524 +0000 UTC m=+0.037990494 container create bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:41 compute-0 sudo[76707]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 systemd[1]: Started libpod-conmon-bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a.scope.
Oct 02 11:44:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b44d1d3192c02a9a2f7574f3118a85978cf197191c8bd076db7ff4105417bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b44d1d3192c02a9a2f7574f3118a85978cf197191c8bd076db7ff4105417bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b44d1d3192c02a9a2f7574f3118a85978cf197191c8bd076db7ff4105417bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:41 compute-0 podman[76790]: 2025-10-02 11:44:41.635359748 +0000 UTC m=+0.020826262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:41 compute-0 podman[76790]: 2025-10-02 11:44:41.737462536 +0000 UTC m=+0.122929160 container init bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:41 compute-0 podman[76790]: 2025-10-02 11:44:41.743468768 +0000 UTC m=+0.128935252 container start bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:41 compute-0 sudo[76820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 podman[76790]: 2025-10-02 11:44:41.747868206 +0000 UTC m=+0.133334690 container attach bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:41 compute-0 sudo[76820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76820]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[76848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:41 compute-0 sudo[76848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76848]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 sudo[76873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:41 compute-0 sudo[76873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:41 compute-0 sudo[76873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:41 compute-0 ceph-mon[73668]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:41 compute-0 ceph-mon[73668]: Saving service mgr spec with placement count:2
Oct 02 11:44:41 compute-0 ceph-mon[73668]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:41 compute-0 ceph-mon[73668]: Saving service crash spec with placement *
Oct 02 11:44:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1429254975' entity='client.admin' 
Oct 02 11:44:41 compute-0 sudo[76898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 11:44:41 compute-0 sudo[76898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:42 compute-0 sudo[76898]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:42 compute-0 sudo[76959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:42 compute-0 sudo[76959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:42 compute-0 sudo[76959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:42 compute-0 sudo[76984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:42 compute-0 sudo[76984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:42 compute-0 sudo[76984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:42 compute-0 sudo[77009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:42 compute-0 sudo[77009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:42 compute-0 sudo[77009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:42 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:42 compute-0 sudo[77034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:44:42 compute-0 sudo[77034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:42 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 02 11:44:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:42 compute-0 podman[76790]: 2025-10-02 11:44:42.356723533 +0000 UTC m=+0.742190027 container died bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:42 compute-0 systemd[1]: libpod-bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a.scope: Deactivated successfully.
Oct 02 11:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b44d1d3192c02a9a2f7574f3118a85978cf197191c8bd076db7ff4105417bf-merged.mount: Deactivated successfully.
Oct 02 11:44:42 compute-0 podman[76790]: 2025-10-02 11:44:42.404157789 +0000 UTC m=+0.789624273 container remove bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a (image=quay.io/ceph/ceph:v18, name=boring_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:42 compute-0 systemd[1]: libpod-conmon-bd7adf030fea3474bff8d3fa5227591b19f5b9cde9980db02d628ca731e0e26a.scope: Deactivated successfully.
Oct 02 11:44:42 compute-0 podman[77073]: 2025-10-02 11:44:42.46919129 +0000 UTC m=+0.042049053 container create 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:44:42 compute-0 systemd[1]: Started libpod-conmon-864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa.scope.
Oct 02 11:44:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8059adf9380c9c0fa8682341654343cb379ea4e51b839e9eb7ea63e9f9fa9ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8059adf9380c9c0fa8682341654343cb379ea4e51b839e9eb7ea63e9f9fa9ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8059adf9380c9c0fa8682341654343cb379ea4e51b839e9eb7ea63e9f9fa9ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:42 compute-0 podman[77073]: 2025-10-02 11:44:42.452070239 +0000 UTC m=+0.024928002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:42 compute-0 podman[77073]: 2025-10-02 11:44:42.549426279 +0000 UTC m=+0.122284062 container init 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:42 compute-0 podman[77073]: 2025-10-02 11:44:42.55467116 +0000 UTC m=+0.127528913 container start 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:42 compute-0 podman[77073]: 2025-10-02 11:44:42.55761946 +0000 UTC m=+0.130477243 container attach 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.664908357 +0000 UTC m=+0.040501501 container create 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:42 compute-0 systemd[1]: Started libpod-conmon-7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a.scope.
Oct 02 11:44:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.736611847 +0000 UTC m=+0.112205011 container init 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.741950011 +0000 UTC m=+0.117543165 container start 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.648246389 +0000 UTC m=+0.023839543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.744827468 +0000 UTC m=+0.120420632 container attach 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:44:42 compute-0 wizardly_lumiere[77148]: 167 167
Oct 02 11:44:42 compute-0 systemd[1]: libpod-7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a.scope: Deactivated successfully.
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.747222863 +0000 UTC m=+0.122816017 container died 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-56f9948e91e86dca7cfdfc1fc1d361a89a57a2a9e71ea604c89f9d77bc668f3e-merged.mount: Deactivated successfully.
Oct 02 11:44:42 compute-0 podman[77132]: 2025-10-02 11:44:42.788636617 +0000 UTC m=+0.164229771 container remove 7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:42 compute-0 systemd[1]: libpod-conmon-7d529f0df46f1d4b6aa65ac02cd5cc3e194a8ee1451e48533688cef1b224b77a.scope: Deactivated successfully.
Oct 02 11:44:43 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:43 compute-0 ceph-mgr[73961]: [cephadm INFO root] Added label _admin to host compute-0
Oct 02 11:44:43 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 02 11:44:43 compute-0 silly_rubin[77113]: Added label _admin to host compute-0
Oct 02 11:44:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:43 compute-0 ceph-mon[73668]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:43 compute-0 systemd[1]: libpod-864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa.scope: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77073]: 2025-10-02 11:44:43.124010313 +0000 UTC m=+0.696868066 container died 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8059adf9380c9c0fa8682341654343cb379ea4e51b839e9eb7ea63e9f9fa9ad-merged.mount: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77073]: 2025-10-02 11:44:43.164656267 +0000 UTC m=+0.737514020 container remove 864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa (image=quay.io/ceph/ceph:v18, name=silly_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:43 compute-0 systemd[1]: libpod-conmon-864c14b5f48e1aa406cf116d576cc035b2ad6788855f3c574cb605f6123e69aa.scope: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.219326508 +0000 UTC m=+0.035641510 container create 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:43 compute-0 systemd[1]: Started libpod-conmon-88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e.scope.
Oct 02 11:44:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69512ff4b56ba4f633452a2e5915064d673b9fbc12764501a0bfe5c1ee74e4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69512ff4b56ba4f633452a2e5915064d673b9fbc12764501a0bfe5c1ee74e4b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69512ff4b56ba4f633452a2e5915064d673b9fbc12764501a0bfe5c1ee74e4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.271271066 +0000 UTC m=+0.087586098 container init 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.281236034 +0000 UTC m=+0.097551036 container start 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.284274186 +0000 UTC m=+0.100589228 container attach 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.20566353 +0000 UTC m=+0.021978552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 02 11:44:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4173641253' entity='client.admin' 
Oct 02 11:44:43 compute-0 systemd[1]: libpod-88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e.scope: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.842999503 +0000 UTC m=+0.659314505 container died 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-69512ff4b56ba4f633452a2e5915064d673b9fbc12764501a0bfe5c1ee74e4b6-merged.mount: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77198]: 2025-10-02 11:44:43.87779745 +0000 UTC m=+0.694112452 container remove 88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e (image=quay.io/ceph/ceph:v18, name=hungry_gauss, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:43 compute-0 systemd[1]: libpod-conmon-88f639d8d32d5de7a9cb6f9c0a68122e2c7bdb6213b4710d555ab3bdb7ed5a3e.scope: Deactivated successfully.
Oct 02 11:44:43 compute-0 podman[77254]: 2025-10-02 11:44:43.937124227 +0000 UTC m=+0.042002582 container create 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:43 compute-0 systemd[1]: Started libpod-conmon-6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2.scope.
Oct 02 11:44:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4e6d8100eb1b1ce5e7667cd77df3e3e2edbff3f2540fb23990e9454a917f03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4e6d8100eb1b1ce5e7667cd77df3e3e2edbff3f2540fb23990e9454a917f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4e6d8100eb1b1ce5e7667cd77df3e3e2edbff3f2540fb23990e9454a917f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 podman[77254]: 2025-10-02 11:44:43.917987832 +0000 UTC m=+0.022866247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:44 compute-0 podman[77254]: 2025-10-02 11:44:44.024418786 +0000 UTC m=+0.129297191 container init 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:44:44 compute-0 podman[77254]: 2025-10-02 11:44:44.033723597 +0000 UTC m=+0.138601992 container start 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:44:44 compute-0 podman[77254]: 2025-10-02 11:44:44.037752245 +0000 UTC m=+0.142630710 container attach 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:44 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 02 11:44:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3628001155' entity='client.admin' 
Oct 02 11:44:44 compute-0 loving_hofstadter[77270]: set mgr/dashboard/cluster/status
Oct 02 11:44:44 compute-0 systemd[1]: libpod-6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2.scope: Deactivated successfully.
Oct 02 11:44:44 compute-0 conmon[77270]: conmon 6b7ef4e1bca147db22d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2.scope/container/memory.events
Oct 02 11:44:44 compute-0 podman[77296]: 2025-10-02 11:44:44.711154549 +0000 UTC m=+0.022754463 container died 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe4e6d8100eb1b1ce5e7667cd77df3e3e2edbff3f2540fb23990e9454a917f03-merged.mount: Deactivated successfully.
Oct 02 11:44:44 compute-0 podman[77296]: 2025-10-02 11:44:44.747838366 +0000 UTC m=+0.059438250 container remove 6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2 (image=quay.io/ceph/ceph:v18, name=loving_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:44:44 compute-0 systemd[1]: libpod-conmon-6b7ef4e1bca147db22d8dac68c149c153c8b7f95b2efe40917c99af2047cbda2.scope: Deactivated successfully.
Oct 02 11:44:44 compute-0 sudo[72627]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:44 compute-0 ceph-mon[73668]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:44 compute-0 ceph-mon[73668]: Added label _admin to host compute-0
Oct 02 11:44:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4173641253' entity='client.admin' 
Oct 02 11:44:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3628001155' entity='client.admin' 
Oct 02 11:44:44 compute-0 podman[77318]: 2025-10-02 11:44:44.93486001 +0000 UTC m=+0.037671595 container create 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:44:44 compute-0 systemd[1]: Started libpod-conmon-63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498.scope.
Oct 02 11:44:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20850e8bc594fa23fe5e8d297781fe50741a0bce26823de552affed85ed3a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20850e8bc594fa23fe5e8d297781fe50741a0bce26823de552affed85ed3a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20850e8bc594fa23fe5e8d297781fe50741a0bce26823de552affed85ed3a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20850e8bc594fa23fe5e8d297781fe50741a0bce26823de552affed85ed3a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:44 compute-0 podman[77318]: 2025-10-02 11:44:44.991002651 +0000 UTC m=+0.093814246 container init 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:44:45 compute-0 podman[77318]: 2025-10-02 11:44:45.001277147 +0000 UTC m=+0.104088732 container start 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:45 compute-0 podman[77318]: 2025-10-02 11:44:45.004603267 +0000 UTC m=+0.107414852 container attach 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:45 compute-0 podman[77318]: 2025-10-02 11:44:44.919949738 +0000 UTC m=+0.022761343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:45 compute-0 sudo[77362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adlxhkdwtvdstsptxolbhicthwepahzn ; /usr/bin/python3'
Oct 02 11:44:45 compute-0 sudo[77362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:45 compute-0 python3[77364]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.329217844 +0000 UTC m=+0.039430663 container create 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:44:45 compute-0 systemd[1]: Started libpod-conmon-7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39.scope.
Oct 02 11:44:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea23402f17c2aafd0481e1a5d33d36d7d2b6a9fdd6e64235f91b8a0a496edbc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea23402f17c2aafd0481e1a5d33d36d7d2b6a9fdd6e64235f91b8a0a496edbc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.311572309 +0000 UTC m=+0.021785158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.410738448 +0000 UTC m=+0.120951307 container init 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.417779397 +0000 UTC m=+0.127992216 container start 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.421122437 +0000 UTC m=+0.131335306 container attach 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 02 11:44:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797940735' entity='client.admin' 
Oct 02 11:44:45 compute-0 systemd[1]: libpod-7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39.scope: Deactivated successfully.
Oct 02 11:44:45 compute-0 podman[77365]: 2025-10-02 11:44:45.968477849 +0000 UTC m=+0.678690678 container died 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea23402f17c2aafd0481e1a5d33d36d7d2b6a9fdd6e64235f91b8a0a496edbc5-merged.mount: Deactivated successfully.
Oct 02 11:44:46 compute-0 podman[77365]: 2025-10-02 11:44:46.008386463 +0000 UTC m=+0.718599282 container remove 7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39 (image=quay.io/ceph/ceph:v18, name=priceless_jemison, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:46 compute-0 systemd[1]: libpod-conmon-7e44733c958ec944a4be97e07d4c06fdd25b25110da52fd97adf34337d1a7c39.scope: Deactivated successfully.
Oct 02 11:44:46 compute-0 sudo[77362]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]: [
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:     {
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "available": false,
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "ceph_device": false,
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "lsm_data": {},
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "lvs": [],
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "path": "/dev/sr0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "rejected_reasons": [
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "Has a FileSystem",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "Insufficient space (<5GB)"
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         ],
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         "sys_api": {
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "actuators": null,
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "device_nodes": "sr0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "devname": "sr0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "human_readable_size": "482.00 KB",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "id_bus": "ata",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "model": "QEMU DVD-ROM",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "nr_requests": "2",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "parent": "/dev/sr0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "partitions": {},
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "path": "/dev/sr0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "removable": "1",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "rev": "2.5+",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "ro": "0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "rotational": "0",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "sas_address": "",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "sas_device_handle": "",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "scheduler_mode": "mq-deadline",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "sectors": 0,
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "sectorsize": "2048",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "size": 493568.0,
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "support_discard": "2048",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "type": "disk",
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:             "vendor": "QEMU"
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:         }
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]:     }
Oct 02 11:44:46 compute-0 agitated_hofstadter[77334]: ]
Oct 02 11:44:46 compute-0 systemd[1]: libpod-63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498.scope: Deactivated successfully.
Oct 02 11:44:46 compute-0 podman[77318]: 2025-10-02 11:44:46.107457159 +0000 UTC m=+1.210268764 container died 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:44:46 compute-0 systemd[1]: libpod-63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498.scope: Consumed 1.084s CPU time.
Oct 02 11:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c20850e8bc594fa23fe5e8d297781fe50741a0bce26823de552affed85ed3a53-merged.mount: Deactivated successfully.
Oct 02 11:44:46 compute-0 podman[77318]: 2025-10-02 11:44:46.153883899 +0000 UTC m=+1.256695474 container remove 63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:46 compute-0 systemd[1]: libpod-conmon-63d727376b6d6a59ac61c0915b686b0c73a1347780306ddfb98ad9dd63122498.scope: Deactivated successfully.
Oct 02 11:44:46 compute-0 sudo[77034]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:44:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:44:46 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:44:46 compute-0 sudo[78442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78442]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 ceph-mgr[73961]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 11:44:46 compute-0 sudo[78467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:44:46 compute-0 sudo[78467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78467]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78492]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:44:46 compute-0 sudo[78544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78602]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:44:46 compute-0 sudo[78642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78642]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:46 compute-0 sudo[78698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78746]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aphwcwgyjshykcpkcdnmwqyqydkorgwu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405486.4005253-33695-7912943537797/async_wrapper.py j741099364808 30 /home/zuul/.ansible/tmp/ansible-tmp-1759405486.4005253-33695-7912943537797/AnsiballZ_command.py _'
Oct 02 11:44:46 compute-0 sudo[78834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:46 compute-0 sudo[78797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:44:46 compute-0 sudo[78797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 sudo[78865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:46 compute-0 sudo[78865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:46 compute-0 sudo[78865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:46 compute-0 ansible-async_wrapper.py[78839]: Invoked with j741099364808 30 /home/zuul/.ansible/tmp/ansible-tmp-1759405486.4005253-33695-7912943537797/AnsiballZ_command.py _
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/797940735' entity='client.admin' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:46 compute-0 ceph-mon[73668]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:44:46 compute-0 ansible-async_wrapper.py[78895]: Starting module and watcher
Oct 02 11:44:46 compute-0 ansible-async_wrapper.py[78895]: Start watching 78897 (30)
Oct 02 11:44:46 compute-0 ansible-async_wrapper.py[78897]: Start module (78897)
Oct 02 11:44:46 compute-0 ansible-async_wrapper.py[78839]: Return async_wrapper task started.
Oct 02 11:44:47 compute-0 sudo[78834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[78890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:44:47 compute-0 sudo[78890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[78890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[78920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[78920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[78920]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 python3[78901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:47 compute-0 sudo[78945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:44:47 compute-0 sudo[78945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[78945]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.195088941 +0000 UTC m=+0.037708046 container create 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:44:47 compute-0 sudo[78976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[78976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[78976]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 systemd[1]: Started libpod-conmon-80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9.scope.
Oct 02 11:44:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788fd3536688754a0ab2400faf07d07e55cec957c58c82be3529618eff5dc3bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:47 compute-0 sudo[79010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788fd3536688754a0ab2400faf07d07e55cec957c58c82be3529618eff5dc3bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:47 compute-0 sudo[79010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79010]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:44:47 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.176272365 +0000 UTC m=+0.018891490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.27382561 +0000 UTC m=+0.116444715 container init 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.280416908 +0000 UTC m=+0.123036013 container start 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.283805899 +0000 UTC m=+0.126424994 container attach 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:47 compute-0 sudo[79039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[79039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79039]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:44:47 compute-0 sudo[79064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79064]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[79089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:44:47 compute-0 sudo[79114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79114]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[79139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79139]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:44:47 compute-0 sudo[79164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[79189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79189]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:47 compute-0 sudo[79233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79233]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 sudo[79258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 sudo[79283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:44:47 compute-0 sudo[79283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79283]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:44:47 compute-0 gracious_liskov[79016]: 
Oct 02 11:44:47 compute-0 gracious_liskov[79016]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:44:47 compute-0 systemd[1]: libpod-80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9.scope: Deactivated successfully.
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.872508193 +0000 UTC m=+0.715127298 container died 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:44:47 compute-0 sudo[79331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-788fd3536688754a0ab2400faf07d07e55cec957c58c82be3529618eff5dc3bb-merged.mount: Deactivated successfully.
Oct 02 11:44:47 compute-0 sudo[79331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 podman[78968]: 2025-10-02 11:44:47.917975647 +0000 UTC m=+0.760594752 container remove 80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:44:47 compute-0 systemd[1]: libpod-conmon-80957c9cdcf86c98a08786712f808045a4d407cccc833fb054c0b8c214c283c9.scope: Deactivated successfully.
Oct 02 11:44:47 compute-0 ansible-async_wrapper.py[78897]: Module complete (78897)
Oct 02 11:44:47 compute-0 sudo[79371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:44:47 compute-0 sudo[79371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:47 compute-0 sudo[79371]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:47 compute-0 ceph-mon[73668]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:44:47 compute-0 ceph-mon[73668]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:44:48 compute-0 sudo[79396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79396]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:44:48 compute-0 sudo[79421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79469]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:44:48 compute-0 sudo[79494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79494]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:44:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:44:48 compute-0 sudo[79519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79519]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hynjawgmpnevlfmseuxuaudiyuiwqbyf ; /usr/bin/python3'
Oct 02 11:44:48 compute-0 sudo[79572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:48 compute-0 ceph-mgr[73961]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 02 11:44:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 11:44:48 compute-0 sudo[79564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:44:48 compute-0 sudo[79564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79564]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79595]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 python3[79590]: ansible-ansible.legacy.async_status Invoked with jid=j741099364808.78839 mode=status _async_dir=/root/.ansible_async
Oct 02 11:44:48 compute-0 sudo[79620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:44:48 compute-0 sudo[79620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79620]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79572]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79645]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:44:48 compute-0 sudo[79693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqofnokvrszpnivviqtvobauhydikdip ; /usr/bin/python3'
Oct 02 11:44:48 compute-0 sudo[79739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:48 compute-0 sudo[79743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:48 compute-0 sudo[79769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79769]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 python3[79746]: ansible-ansible.legacy.async_status Invoked with jid=j741099364808.78839 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:44:48 compute-0 sudo[79739]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79794]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:44:48 compute-0 sudo[79819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79819]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:48 compute-0 sudo[79867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 sudo[79892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:44:48 compute-0 sudo[79892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:48 compute-0 sudo[79892]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:48 compute-0 ceph-mon[73668]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:44:48 compute-0 ceph-mon[73668]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:48 compute-0 ceph-mon[73668]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 11:44:49 compute-0 sudo[79920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[79920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[79963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igfkpyjlakdlkrjmazjkxlayelyptbcj ; /usr/bin/python3'
Oct 02 11:44:49 compute-0 sudo[79920]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[79963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:49 compute-0 sudo[79968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:44:49 compute-0 sudo[79968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[79968]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[79993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[79993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[79993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 python3[79967]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:44:49 compute-0 sudo[79963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 02 11:44:49 compute-0 sudo[80018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80018]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:44:49 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:44:49 compute-0 sudo[80045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80045]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:49 compute-0 sudo[80070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:44:49 compute-0 sudo[80070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80070]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80095]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:44:49 compute-0 sudo[80120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80120]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvlfeynfcevoxkjtkvyxyymjqchgddac ; /usr/bin/python3'
Oct 02 11:44:49 compute-0 sudo[80188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:49 compute-0 sudo[80149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80149]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:44:49 compute-0 sudo[80196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80196]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80221]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 python3[80193]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:49 compute-0 sudo[80246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:49 compute-0 sudo[80246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80246]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 podman[80252]: 2025-10-02 11:44:49.705012513 +0000 UTC m=+0.044962521 container create 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:44:49 compute-0 systemd[1]: Started libpod-conmon-40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab.scope.
Oct 02 11:44:49 compute-0 sudo[80284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e633743bdb1d9866d73b3330c38cef2e19a7860e514bd335929db02d0d097e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e633743bdb1d9866d73b3330c38cef2e19a7860e514bd335929db02d0d097e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e633743bdb1d9866d73b3330c38cef2e19a7860e514bd335929db02d0d097e1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:49 compute-0 podman[80252]: 2025-10-02 11:44:49.687865221 +0000 UTC m=+0.027815239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:49 compute-0 podman[80252]: 2025-10-02 11:44:49.789200099 +0000 UTC m=+0.129150127 container init 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:49 compute-0 podman[80252]: 2025-10-02 11:44:49.795136068 +0000 UTC m=+0.135086086 container start 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:49 compute-0 podman[80252]: 2025-10-02 11:44:49.798130849 +0000 UTC m=+0.138080877 container attach 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:44:49 compute-0 sudo[80315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:44:49 compute-0 sudo[80315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80315]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:49 compute-0 sudo[80364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80364]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 sudo[80389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:44:49 compute-0 sudo[80389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:49 compute-0 sudo[80389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:49 compute-0 ceph-mon[73668]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:44:50 compute-0 sudo[80414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:50 compute-0 sudo[80414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 sudo[80439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:44:50 compute-0 sudo[80439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 sudo[80473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:50 compute-0 sudo[80473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80473]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 sudo[80508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:44:50 compute-0 sudo[80508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:50 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 58907075-e27c-43b3-9934-bd7890c86ad6 (Updating crash deployment (+1 -> 1))
Oct 02 11:44:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:44:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:50 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 02 11:44:50 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 02 11:44:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:50 compute-0 sudo[80533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:50 compute-0 sudo[80533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:44:50 compute-0 elastic_archimedes[80311]: 
Oct 02 11:44:50 compute-0 elastic_archimedes[80311]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:44:50 compute-0 sudo[80558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:50 compute-0 sudo[80558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 systemd[1]: libpod-40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab.scope: Deactivated successfully.
Oct 02 11:44:50 compute-0 podman[80252]: 2025-10-02 11:44:50.384063338 +0000 UTC m=+0.724013336 container died 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e633743bdb1d9866d73b3330c38cef2e19a7860e514bd335929db02d0d097e1-merged.mount: Deactivated successfully.
Oct 02 11:44:50 compute-0 sudo[80585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:50 compute-0 sudo[80585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80585]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 podman[80252]: 2025-10-02 11:44:50.436586451 +0000 UTC m=+0.776536449 container remove 40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab (image=quay.io/ceph/ceph:v18, name=elastic_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:50 compute-0 systemd[1]: libpod-conmon-40d88e0a7f038a1e8a857497708be47adddfdc6806c27c5c76f0e475abbbe6ab.scope: Deactivated successfully.
Oct 02 11:44:50 compute-0 sudo[80188]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:50 compute-0 sudo[80624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:50 compute-0 sudo[80624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:50 compute-0 sudo[80698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vniqublgmwrmkrzusyjmlwxxzkpeslcb ; /usr/bin/python3'
Oct 02 11:44:50 compute-0 sudo[80698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.828263493 +0000 UTC m=+0.037274304 container create b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:44:50 compute-0 systemd[1]: Started libpod-conmon-b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb.scope.
Oct 02 11:44:50 compute-0 python3[80704]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.812006665 +0000 UTC m=+0.021017496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.936419244 +0000 UTC m=+0.145430075 container init b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.942481827 +0000 UTC m=+0.151492638 container start b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.945263482 +0000 UTC m=+0.154274313 container attach b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:50 compute-0 relaxed_hodgkin[80731]: 167 167
Oct 02 11:44:50 compute-0 systemd[1]: libpod-b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb.scope: Deactivated successfully.
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.946470425 +0000 UTC m=+0.155481236 container died b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be64fd03b378c2e0dd59367147e3453f7367a84d34bb700c016909328cf0910-merged.mount: Deactivated successfully.
Oct 02 11:44:50 compute-0 podman[80715]: 2025-10-02 11:44:50.977031307 +0000 UTC m=+0.186042118 container remove b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hodgkin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:44:50 compute-0 systemd[1]: libpod-conmon-b43c4cca9cb6143969f9785904a324b088a8a0cc72f2159a0a8a6fb4fbcef8bb.scope: Deactivated successfully.
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:51.010258931 +0000 UTC m=+0.114238205 container create 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:51 compute-0 systemd[1]: Reloading.
Oct 02 11:44:51 compute-0 systemd-rc-local-generator[80792]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:51 compute-0 systemd-sysv-generator[80796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:50.995036212 +0000 UTC m=+0.099015506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:51 compute-0 ceph-mon[73668]: Deploying daemon crash.compute-0 on compute-0
Oct 02 11:44:51 compute-0 ceph-mon[73668]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:51 compute-0 ceph-mon[73668]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:44:51 compute-0 systemd[1]: Started libpod-conmon-47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556.scope.
Oct 02 11:44:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e57f202e3b877b232a9868edc0230e3352670e7c04313920539bce0dc520543/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e57f202e3b877b232a9868edc0230e3352670e7c04313920539bce0dc520543/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e57f202e3b877b232a9868edc0230e3352670e7c04313920539bce0dc520543/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:51.305200299 +0000 UTC m=+0.409179603 container init 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:44:51 compute-0 systemd[1]: Reloading.
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:51.315208829 +0000 UTC m=+0.419188113 container start 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:51.325195078 +0000 UTC m=+0.429174372 container attach 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:44:51 compute-0 systemd-rc-local-generator[80836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:44:51 compute-0 systemd-sysv-generator[80840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:44:51 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:44:51 compute-0 podman[80915]: 2025-10-02 11:44:51.796171803 +0000 UTC m=+0.036868523 container create 1344746b2b67dde87e63fe330abed21e283e3bf52f3de06ea0459e4af8df0377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5650253fbe8697a462b7e655c4db0f2902a888d108a4368b26023669606beafd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5650253fbe8697a462b7e655c4db0f2902a888d108a4368b26023669606beafd/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5650253fbe8697a462b7e655c4db0f2902a888d108a4368b26023669606beafd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5650253fbe8697a462b7e655c4db0f2902a888d108a4368b26023669606beafd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:51 compute-0 podman[80915]: 2025-10-02 11:44:51.856537938 +0000 UTC m=+0.097234648 container init 1344746b2b67dde87e63fe330abed21e283e3bf52f3de06ea0459e4af8df0377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:44:51 compute-0 podman[80915]: 2025-10-02 11:44:51.864312327 +0000 UTC m=+0.105009027 container start 1344746b2b67dde87e63fe330abed21e283e3bf52f3de06ea0459e4af8df0377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:51 compute-0 bash[80915]: 1344746b2b67dde87e63fe330abed21e283e3bf52f3de06ea0459e4af8df0377
Oct 02 11:44:51 compute-0 podman[80915]: 2025-10-02 11:44:51.778050826 +0000 UTC m=+0.018747526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:51 compute-0 systemd[1]: Started Ceph crash.compute-0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:44:51 compute-0 sudo[80624]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 58907075-e27c-43b3-9934-bd7890c86ad6 (Updating crash deployment (+1 -> 1))
Oct 02 11:44:51 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 58907075-e27c-43b3-9934-bd7890c86ad6 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b8d103c9-035a-4c3a-bee4-99226e06980f does not exist
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2958923880' entity='client.admin' 
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6d0fc08a-f475-4813-8cec-f4649fc94230 does not exist
Oct 02 11:44:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:44:51 compute-0 systemd[1]: libpod-47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556.scope: Deactivated successfully.
Oct 02 11:44:51 compute-0 podman[80734]: 2025-10-02 11:44:51.958643336 +0000 UTC m=+1.062622620 container died 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ansible-async_wrapper.py[78895]: Done in kid B.
Oct 02 11:44:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e57f202e3b877b232a9868edc0230e3352670e7c04313920539bce0dc520543-merged.mount: Deactivated successfully.
Oct 02 11:44:52 compute-0 sudo[80938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:52 compute-0 podman[80734]: 2025-10-02 11:44:52.036348467 +0000 UTC m=+1.140327741 container remove 47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556 (image=quay.io/ceph/ceph:v18, name=lucid_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:52 compute-0 sudo[80938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 systemd[1]: libpod-conmon-47064a4b5e2ac85a9c1d0e9d140fd1a5583e6cec9d1a5f256b735df3002c2556.scope: Deactivated successfully.
Oct 02 11:44:52 compute-0 sudo[80938]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[80698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[80972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:44:52 compute-0 sudo[80972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[80972]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 02 11:44:52 compute-0 sudo[80998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:52 compute-0 sudo[80998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[80998]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[81065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxzdgyhvzvfhvshcbaejeulzhiwyitdm ; /usr/bin/python3'
Oct 02 11:44:52 compute-0 sudo[81065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:52 compute-0 sudo[81036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:52 compute-0 sudo[81036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[81036]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 sudo[81075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:52 compute-0 sudo[81075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 sudo[81075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.329+0000 7f41ebe7f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.329+0000 7f41ebe7f640 -1 AuthRegistry(0x7f41e4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.330+0000 7f41ebe7f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.330+0000 7f41ebe7f640 -1 AuthRegistry(0x7f41ebe7e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.331+0000 7f41e9bf4640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: 2025-10-02T11:44:52.331+0000 7f41ebe7f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 02 11:44:52 compute-0 sudo[81100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:44:52 compute-0 sudo[81100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:52 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-0[80930]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 02 11:44:52 compute-0 python3[81072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:52 compute-0 podman[81135]: 2025-10-02 11:44:52.412664856 +0000 UTC m=+0.043014939 container create e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:44:52 compute-0 systemd[1]: Started libpod-conmon-e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363.scope.
Oct 02 11:44:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56895aba1ae266c088c1bf04f92641e43b3030b5b13f5c97d373fde76bfc4236/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56895aba1ae266c088c1bf04f92641e43b3030b5b13f5c97d373fde76bfc4236/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56895aba1ae266c088c1bf04f92641e43b3030b5b13f5c97d373fde76bfc4236/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:52 compute-0 podman[81135]: 2025-10-02 11:44:52.392229786 +0000 UTC m=+0.022579909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:52 compute-0 podman[81135]: 2025-10-02 11:44:52.519639125 +0000 UTC m=+0.149989258 container init e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:52 compute-0 podman[81135]: 2025-10-02 11:44:52.526790577 +0000 UTC m=+0.157140660 container start e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:52 compute-0 podman[81135]: 2025-10-02 11:44:52.532065909 +0000 UTC m=+0.162416012 container attach e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:52 compute-0 podman[81225]: 2025-10-02 11:44:52.784083022 +0000 UTC m=+0.055564957 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:44:52 compute-0 podman[81225]: 2025-10-02 11:44:52.888933824 +0000 UTC m=+0.160415739 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2958923880' entity='client.admin' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 sudo[81100]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4004199834' entity='client.admin' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:44:53 compute-0 systemd[1]: libpod-e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363.scope: Deactivated successfully.
Oct 02 11:44:53 compute-0 podman[81135]: 2025-10-02 11:44:53.07940689 +0000 UTC m=+0.709756973 container died e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5a033b23-b108-489b-b70c-68680dc4d772 does not exist
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3412fa24-2d03-4e88-9698-5972671897bc does not exist
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b128e46e-b025-4693-8bdf-1fd1a65253e3 does not exist
Oct 02 11:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-56895aba1ae266c088c1bf04f92641e43b3030b5b13f5c97d373fde76bfc4236-merged.mount: Deactivated successfully.
Oct 02 11:44:53 compute-0 podman[81135]: 2025-10-02 11:44:53.124549455 +0000 UTC m=+0.754899548 container remove e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363 (image=quay.io/ceph/ceph:v18, name=silly_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 11:44:53 compute-0 systemd[1]: libpod-conmon-e01ea9db88fc477df7251b45dbda2a5cac99cd318bd1e897b324ae69d40d0363.scope: Deactivated successfully.
Oct 02 11:44:53 compute-0 sudo[81320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:53 compute-0 sudo[81320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81065]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 sudo[81320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 sudo[81349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:44:53 compute-0 sudo[81349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:44:53 compute-0 sudo[81374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:53 compute-0 sudo[81374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81374]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 sudo[81442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlmsabappekgdhrtwexbugaitqsquma ; /usr/bin/python3'
Oct 02 11:44:53 compute-0 sudo[81442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:53 compute-0 sudo[81405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:53 compute-0 sudo[81405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 1 completed events
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:44:53 compute-0 sudo[81450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:53 compute-0 sudo[81450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81450]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 sudo[81475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:53 compute-0 sudo[81475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 python3[81448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:53 compute-0 podman[81500]: 2025-10-02 11:44:53.515201309 +0000 UTC m=+0.036725759 container create 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:44:53 compute-0 systemd[1]: Started libpod-conmon-0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac.scope.
Oct 02 11:44:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9815297788ab333c303ad060034bdd31ea0ab233db291c702256bb22c29c8d03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9815297788ab333c303ad060034bdd31ea0ab233db291c702256bb22c29c8d03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9815297788ab333c303ad060034bdd31ea0ab233db291c702256bb22c29c8d03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:53 compute-0 podman[81500]: 2025-10-02 11:44:53.594398551 +0000 UTC m=+0.115923051 container init 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:44:53 compute-0 podman[81500]: 2025-10-02 11:44:53.498947322 +0000 UTC m=+0.020471792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:53 compute-0 podman[81500]: 2025-10-02 11:44:53.599690983 +0000 UTC m=+0.121215423 container start 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:53 compute-0 podman[81500]: 2025-10-02 11:44:53.602982052 +0000 UTC m=+0.124506502 container attach 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.66682793 +0000 UTC m=+0.031623802 container create 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:44:53 compute-0 systemd[1]: Started libpod-conmon-448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852.scope.
Oct 02 11:44:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.739445413 +0000 UTC m=+0.104241305 container init 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.744778267 +0000 UTC m=+0.109574139 container start 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:44:53 compute-0 intelligent_panini[81551]: 167 167
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.748161808 +0000 UTC m=+0.112957730 container attach 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:44:53 compute-0 systemd[1]: libpod-448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852.scope: Deactivated successfully.
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.750243344 +0000 UTC m=+0.115039216 container died 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.65383204 +0000 UTC m=+0.018627932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-429370b80529debd0c905140eef27476d6c2fdd3ffa5e593f87fadc113773326-merged.mount: Deactivated successfully.
Oct 02 11:44:53 compute-0 podman[81535]: 2025-10-02 11:44:53.782502292 +0000 UTC m=+0.147298184 container remove 448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:44:53 compute-0 systemd[1]: libpod-conmon-448435021aad09f99bb1e1699ecf0b63a264db3b4954ba73cbf06dcd3d615852.scope: Deactivated successfully.
Oct 02 11:44:53 compute-0 sudo[81475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.unmtoh (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.unmtoh (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:44:53 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:44:53 compute-0 sudo[81569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:53 compute-0 sudo[81569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81569]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:53 compute-0 ceph-mon[73668]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4004199834' entity='client.admin' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:53 compute-0 ceph-mon[73668]: Reconfiguring mgr.compute-0.unmtoh (unknown last config time)...
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:53 compute-0 ceph-mon[73668]: Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:44:53 compute-0 sudo[81613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:53 compute-0 sudo[81613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:53 compute-0 sudo[81613]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[81638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:54 compute-0 sudo[81638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[81638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[81663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:54 compute-0 sudo[81663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1441231807' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.309325241 +0000 UTC m=+0.037898381 container create b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:44:54 compute-0 systemd[1]: Started libpod-conmon-b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3.scope.
Oct 02 11:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.379453019 +0000 UTC m=+0.108026179 container init b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.385875621 +0000 UTC m=+0.114448761 container start b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.292685283 +0000 UTC m=+0.021258443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.388569414 +0000 UTC m=+0.117142554 container attach b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:44:54 compute-0 dazzling_mcclintock[81722]: 167 167
Oct 02 11:44:54 compute-0 systemd[1]: libpod-b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3.scope: Deactivated successfully.
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.390697061 +0000 UTC m=+0.119270201 container died b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-701e0afb66ca387b3f0a4d9a3ae59fe83afc64d7aaaf31410875c876d465633c-merged.mount: Deactivated successfully.
Oct 02 11:44:54 compute-0 podman[81705]: 2025-10-02 11:44:54.424739467 +0000 UTC m=+0.153312607 container remove b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcclintock, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:44:54 compute-0 systemd[1]: libpod-conmon-b9d1d72ce8bed67b339f38efbd5493e4cba088246eaa7f38175b3cf20c904bb3.scope: Deactivated successfully.
Oct 02 11:44:54 compute-0 sudo[81663]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 00714df4-917f-41cc-b65c-9aff33b702a6 does not exist
Oct 02 11:44:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 66159b58-c977-4959-8429-c41e84245906 does not exist
Oct 02 11:44:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 47ca4d79-7a65-4254-9015-58562c000f35 does not exist
Oct 02 11:44:54 compute-0 sudo[81740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:54 compute-0 sudo[81740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[81740]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 sudo[81765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:44:54 compute-0 sudo[81765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:54 compute-0 sudo[81765]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1441231807' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1441231807' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 11:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 02 11:44:54 compute-0 suspicious_poitras[81516]: set require_min_compat_client to mimic
Oct 02 11:44:54 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 02 11:44:54 compute-0 systemd[1]: libpod-0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac.scope: Deactivated successfully.
Oct 02 11:44:54 compute-0 podman[81500]: 2025-10-02 11:44:54.972548741 +0000 UTC m=+1.494073201 container died 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9815297788ab333c303ad060034bdd31ea0ab233db291c702256bb22c29c8d03-merged.mount: Deactivated successfully.
Oct 02 11:44:55 compute-0 podman[81500]: 2025-10-02 11:44:55.018786456 +0000 UTC m=+1.540310906 container remove 0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:44:55 compute-0 systemd[1]: libpod-conmon-0eb5601eae3d61ed850b624f212e1330fede9b950a28dd7b739377d3701bbdac.scope: Deactivated successfully.
Oct 02 11:44:55 compute-0 sudo[81442]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:55 compute-0 sudo[81825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdgwrgtybcxgnwmjnodczvydbozirfu ; /usr/bin/python3'
Oct 02 11:44:55 compute-0 sudo[81825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:44:55 compute-0 python3[81827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:44:55 compute-0 podman[81828]: 2025-10-02 11:44:55.721481618 +0000 UTC m=+0.063145710 container create f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 11:44:55 compute-0 systemd[1]: Started libpod-conmon-f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec.scope.
Oct 02 11:44:55 compute-0 podman[81828]: 2025-10-02 11:44:55.695516079 +0000 UTC m=+0.037180261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:44:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3984f39ef671d157851e395d6cbda1e6af93fa26ec0d0d327f5d77f7b876ec7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3984f39ef671d157851e395d6cbda1e6af93fa26ec0d0d327f5d77f7b876ec7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3984f39ef671d157851e395d6cbda1e6af93fa26ec0d0d327f5d77f7b876ec7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:44:55 compute-0 podman[81828]: 2025-10-02 11:44:55.813448623 +0000 UTC m=+0.155112735 container init f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:55 compute-0 podman[81828]: 2025-10-02 11:44:55.825927199 +0000 UTC m=+0.167591291 container start f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:44:55 compute-0 podman[81828]: 2025-10-02 11:44:55.829526526 +0000 UTC m=+0.171190638 container attach f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:44:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1441231807' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 11:44:55 compute-0 ceph-mon[73668]: osdmap e3: 0 total, 0 up, 0 in
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:56 compute-0 sudo[81867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:56 compute-0 sudo[81867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[81867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[81892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:56 compute-0 sudo[81892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[81892]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[81917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:56 compute-0 sudo[81917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[81917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 sudo[81942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 11:44:56 compute-0 sudo[81942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[81942]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: [cephadm INFO root] Added host compute-0
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:44:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 17c1e97c-aad4-4727-ac60-149c23842cb8 does not exist
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 225852ff-ca5b-4221-a9b7-8ed4e00785c8 does not exist
Oct 02 11:44:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cbb5df5d-d001-41fe-a763-2b0f955cb82a does not exist
Oct 02 11:44:56 compute-0 sudo[81987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:56 compute-0 sudo[81987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:56 compute-0 sudo[81987]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-0 sudo[82012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:44:57 compute-0 sudo[82012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:57 compute-0 sudo[82012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-0 ceph-mon[73668]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:57 compute-0 ceph-mon[73668]: Added host compute-0
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:44:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:44:59 compute-0 ceph-mon[73668]: Deploying cephadm binary to compute-1
Oct 02 11:44:59 compute-0 ceph-mon[73668]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:45:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:01 compute-0 ceph-mgr[73961]: [cephadm INFO root] Added host compute-1
Oct 02 11:45:01 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Added host compute-1
Oct 02 11:45:01 compute-0 ceph-mon[73668]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:02 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct 02 11:45:02 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct 02 11:45:02 compute-0 ceph-mon[73668]: Added host compute-1
Oct 02 11:45:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:03 compute-0 ceph-mon[73668]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:03 compute-0 ceph-mon[73668]: Deploying cephadm binary to compute-2
Oct 02 11:45:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:04 compute-0 ceph-mon[73668]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 11:45:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Added host compute-2
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Added host compute-2
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:45:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:45:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 02 11:45:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Added host 'compute-0' with addr '192.168.122.100'
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Added host 'compute-1' with addr '192.168.122.101'
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Added host 'compute-2' with addr '192.168.122.102'
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Scheduled mon update...
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Scheduled mgr update...
Oct 02 11:45:06 compute-0 eager_gagarin[81843]: Scheduled osd.default_drive_group update...
Oct 02 11:45:06 compute-0 systemd[1]: libpod-f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec.scope: Deactivated successfully.
Oct 02 11:45:06 compute-0 podman[82038]: 2025-10-02 11:45:06.460626039 +0000 UTC m=+0.026579917 container died f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3984f39ef671d157851e395d6cbda1e6af93fa26ec0d0d327f5d77f7b876ec7-merged.mount: Deactivated successfully.
Oct 02 11:45:06 compute-0 podman[82038]: 2025-10-02 11:45:06.509843473 +0000 UTC m=+0.075797311 container remove f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec (image=quay.io/ceph/ceph:v18, name=eager_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:45:06 compute-0 systemd[1]: libpod-conmon-f2bbbb91c30ad6b2a70331d24b56b6f374cec6270669f8e87a158bc2eec2e3ec.scope: Deactivated successfully.
Oct 02 11:45:06 compute-0 sudo[81825]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:06 compute-0 sudo[82076]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttbdupyoeuxhivtzmtacclfhacurcrl ; /usr/bin/python3'
Oct 02 11:45:06 compute-0 sudo[82076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:06 compute-0 python3[82078]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:06 compute-0 podman[82080]: 2025-10-02 11:45:06.974286073 +0000 UTC m=+0.041105817 container create f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:45:07 compute-0 systemd[1]: Started libpod-conmon-f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e.scope.
Oct 02 11:45:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce11a4787ba6eaa350145130c07e024bb1fd08980622709e030d5358606628fc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce11a4787ba6eaa350145130c07e024bb1fd08980622709e030d5358606628fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce11a4787ba6eaa350145130c07e024bb1fd08980622709e030d5358606628fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:07 compute-0 podman[82080]: 2025-10-02 11:45:07.044294127 +0000 UTC m=+0.111113911 container init f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:45:07 compute-0 podman[82080]: 2025-10-02 11:45:07.051642605 +0000 UTC m=+0.118462389 container start f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:45:07 compute-0 podman[82080]: 2025-10-02 11:45:06.955666962 +0000 UTC m=+0.022486726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:45:07 compute-0 podman[82080]: 2025-10-02 11:45:07.056078255 +0000 UTC m=+0.122898099 container attach f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:45:07 compute-0 ceph-mon[73668]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Added host compute-2
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Marking host: compute-1 for OSDSpec preview refresh.
Oct 02 11:45:07 compute-0 ceph-mon[73668]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 02 11:45:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:45:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196812703' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:45:07 compute-0 jolly_kalam[82096]: 
Oct 02 11:45:07 compute-0 jolly_kalam[82096]: {"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-02T11:43:36.457954+0000","services":{}},"progress_events":{}}
Oct 02 11:45:07 compute-0 systemd[1]: libpod-f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e.scope: Deactivated successfully.
Oct 02 11:45:07 compute-0 podman[82121]: 2025-10-02 11:45:07.718246996 +0000 UTC m=+0.026389571 container died f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce11a4787ba6eaa350145130c07e024bb1fd08980622709e030d5358606628fc-merged.mount: Deactivated successfully.
Oct 02 11:45:07 compute-0 podman[82121]: 2025-10-02 11:45:07.758665704 +0000 UTC m=+0.066808269 container remove f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e (image=quay.io/ceph/ceph:v18, name=jolly_kalam, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:07 compute-0 systemd[1]: libpod-conmon-f191a49b2d9f95665bf4909198b59cb04d78c16609e9a327359ba023a53a634e.scope: Deactivated successfully.
Oct 02 11:45:07 compute-0 sudo[82076]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1196812703' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:09 compute-0 ceph-mon[73668]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:11 compute-0 ceph-mon[73668]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:13 compute-0 ceph-mon[73668]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:15 compute-0 ceph-mon[73668]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:17 compute-0 ceph-mon[73668]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:45:18 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:45:18 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:45:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:19 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:45:19 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:45:19 compute-0 ceph-mon[73668]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:45:19 compute-0 ceph-mon[73668]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:45:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:20 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:45:20 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:45:20 compute-0 ceph-mon[73668]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:45:21 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:45:21 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:45:21 compute-0 ceph-mon[73668]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:21 compute-0 ceph-mon[73668]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 2e337ff1-399a-4467-a523-2c75243dbd26 (Updating crash deployment (+1 -> 2))
Oct 02 11:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:45:22.729+0000 7f9fe5d18640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: service_name: mon
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: placement:
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   hosts:
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-0
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-1
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-2
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:45:22.730+0000 7f9fe5d18640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: service_name: mgr
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: placement:
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   hosts:
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-0
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-1
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]:   - compute-2
Oct 02 11:45:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct 02 11:45:22 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct 02 11:45:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 02 11:45:22 compute-0 ceph-mon[73668]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:45:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:23 compute-0 ceph-mon[73668]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:23 compute-0 ceph-mon[73668]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:45:23 compute-0 ceph-mon[73668]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:23 compute-0 ceph-mon[73668]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:45:23 compute-0 ceph-mon[73668]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:23 compute-0 ceph-mon[73668]: Deploying daemon crash.compute-1 on compute-1
Oct 02 11:45:23 compute-0 ceph-mon[73668]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 02 11:45:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:24 compute-0 ceph-mon[73668]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:25 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 2e337ff1-399a-4467-a523-2c75243dbd26 (Updating crash deployment (+1 -> 2))
Oct 02 11:45:25 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 2e337ff1-399a-4467-a523-2c75243dbd26 (Updating crash deployment (+1 -> 2)) in 3 seconds
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:45:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:26 compute-0 sudo[82135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:26 compute-0 sudo[82135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-0 sudo[82135]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-0 sudo[82160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:26 compute-0 sudo[82160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-0 sudo[82160]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-0 sudo[82185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:26 compute-0 sudo[82185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-0 sudo[82185]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-0 sudo[82210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:45:26 compute-0 sudo[82210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.602598979 +0000 UTC m=+0.078349653 container create 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:26 compute-0 systemd[1]: Started libpod-conmon-2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6.scope.
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.554750138 +0000 UTC m=+0.030500842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.68551527 +0000 UTC m=+0.161265964 container init 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.69362593 +0000 UTC m=+0.169376604 container start 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:45:26 compute-0 practical_nightingale[82291]: 167 167
Oct 02 11:45:26 compute-0 systemd[1]: libpod-2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6.scope: Deactivated successfully.
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.71442918 +0000 UTC m=+0.190179854 container attach 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.714775639 +0000 UTC m=+0.190526313 container died 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:45:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-245dbf87e86df0e42ef4012cc9677ecf7d9b746e1a98c18e002b5b8de89fc509-merged.mount: Deactivated successfully.
Oct 02 11:45:26 compute-0 podman[82274]: 2025-10-02 11:45:26.878074956 +0000 UTC m=+0.353825630 container remove 2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:26 compute-0 systemd[1]: libpod-conmon-2d4078996fd9114b3d02a686c5cb1c739c9d1ee34d9d7ebc466fffd492c68ce6.scope: Deactivated successfully.
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:45:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:26 compute-0 ceph-mon[73668]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:27 compute-0 podman[82316]: 2025-10-02 11:45:27.052493351 +0000 UTC m=+0.051897398 container create 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:27 compute-0 systemd[1]: Started libpod-conmon-70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3.scope.
Oct 02 11:45:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:27 compute-0 podman[82316]: 2025-10-02 11:45:27.032991915 +0000 UTC m=+0.032396012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:27 compute-0 podman[82316]: 2025-10-02 11:45:27.140989136 +0000 UTC m=+0.140393203 container init 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:27 compute-0 podman[82316]: 2025-10-02 11:45:27.147191977 +0000 UTC m=+0.146596024 container start 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:45:27 compute-0 podman[82316]: 2025-10-02 11:45:27.150951795 +0000 UTC m=+0.150355842 container attach 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:27 compute-0 reverent_fermi[82332]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:45:27 compute-0 reverent_fermi[82332]: --> relative data size: 1.0
Oct 02 11:45:27 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3e590da2-9176-4197-8be9-66fc8d360a0c
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]': finished
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]': finished
Oct 02 11:45:28 compute-0 ceph-mon[73668]: osdmap e4: 1 total, 0 up, 1 in
Oct 02 11:45:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:45:28
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] No pools available
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 2 completed events
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]': finished
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:45:28 compute-0 lvm[82379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:45:28 compute-0 lvm[82379]: VG ceph_vg0 finished
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:28 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 02 11:45:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 11:45:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2339471251' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 11:45:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789933639' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:45:29 compute-0 reverent_fermi[82332]:  stderr: got monmap epoch 1
Oct 02 11:45:29 compute-0 reverent_fermi[82332]: --> Creating keyring file for osd.1
Oct 02 11:45:29 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 02 11:45:29 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 02 11:45:29 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 3e590da2-9176-4197-8be9-66fc8d360a0c --setuser ceph --setgroup ceph
Oct 02 11:45:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]': finished
Oct 02 11:45:29 compute-0 ceph-mon[73668]: osdmap e5: 2 total, 0 up, 2 in
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2339471251' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3789933639' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:45:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 11:45:30 compute-0 ceph-mon[73668]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 11:45:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:31 compute-0 reverent_fermi[82332]:  stderr: 2025-10-02T11:45:29.162+0000 7f1edbaaf740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-0 reverent_fermi[82332]:  stderr: 2025-10-02T11:45:29.162+0000 7f1edbaaf740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-0 reverent_fermi[82332]:  stderr: 2025-10-02T11:45:29.162+0000 7f1edbaaf740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-0 reverent_fermi[82332]:  stderr: 2025-10-02T11:45:29.163+0000 7f1edbaaf740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:31 compute-0 ceph-mon[73668]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 02 11:45:31 compute-0 reverent_fermi[82332]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 02 11:45:31 compute-0 systemd[1]: libpod-70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3.scope: Deactivated successfully.
Oct 02 11:45:31 compute-0 systemd[1]: libpod-70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3.scope: Consumed 2.419s CPU time.
Oct 02 11:45:31 compute-0 podman[82316]: 2025-10-02 11:45:31.466621053 +0000 UTC m=+4.466025120 container died 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-db69e580b131cd0b0a463aa47e4e88a7ed77b9859a18f6b6cb922d4d2d992181-merged.mount: Deactivated successfully.
Oct 02 11:45:31 compute-0 podman[82316]: 2025-10-02 11:45:31.520668365 +0000 UTC m=+4.520072402 container remove 70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:31 compute-0 systemd[1]: libpod-conmon-70ef7f8308c2b02d52ba6f7a61b4a146d08a800f4889053e7407740d001088f3.scope: Deactivated successfully.
Oct 02 11:45:31 compute-0 sudo[82210]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:31 compute-0 sudo[83311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:31 compute-0 sudo[83311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:31 compute-0 sudo[83311]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:31 compute-0 sudo[83336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:31 compute-0 sudo[83336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:31 compute-0 sudo[83336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:31 compute-0 sudo[83361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:31 compute-0 sudo[83361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:31 compute-0 sudo[83361]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:31 compute-0 sudo[83386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:45:31 compute-0 sudo[83386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.055791547 +0000 UTC m=+0.047126323 container create b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:45:32 compute-0 systemd[1]: Started libpod-conmon-b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759.scope.
Oct 02 11:45:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.029522346 +0000 UTC m=+0.020857142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.129348735 +0000 UTC m=+0.120683531 container init b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.136187393 +0000 UTC m=+0.127522169 container start b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:32 compute-0 brave_meninsky[83470]: 167 167
Oct 02 11:45:32 compute-0 systemd[1]: libpod-b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759.scope: Deactivated successfully.
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.154548879 +0000 UTC m=+0.145883655 container attach b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.15497782 +0000 UTC m=+0.146312596 container died b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f51eda1272d4966fcba72f031a24c7cff1ab25866fc5412ae16e8c1b922e432-merged.mount: Deactivated successfully.
Oct 02 11:45:32 compute-0 podman[83454]: 2025-10-02 11:45:32.188333036 +0000 UTC m=+0.179667812 container remove b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:45:32 compute-0 systemd[1]: libpod-conmon-b3d7eff2694d06777c0115f0a4d4eb26497503b1d127ccf2585a6cc865d43759.scope: Deactivated successfully.
Oct 02 11:45:32 compute-0 podman[83494]: 2025-10-02 11:45:32.337063414 +0000 UTC m=+0.043578701 container create 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:32 compute-0 systemd[1]: Started libpod-conmon-264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9.scope.
Oct 02 11:45:32 compute-0 podman[83494]: 2025-10-02 11:45:32.316746887 +0000 UTC m=+0.023262184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d5ad6d70c8dfdc2ef66a929d39a6c80f3a23310a69e72ce7d6b2441a82cda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d5ad6d70c8dfdc2ef66a929d39a6c80f3a23310a69e72ce7d6b2441a82cda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d5ad6d70c8dfdc2ef66a929d39a6c80f3a23310a69e72ce7d6b2441a82cda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d5ad6d70c8dfdc2ef66a929d39a6c80f3a23310a69e72ce7d6b2441a82cda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:32 compute-0 podman[83494]: 2025-10-02 11:45:32.430811776 +0000 UTC m=+0.137327063 container init 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:32 compute-0 podman[83494]: 2025-10-02 11:45:32.439007679 +0000 UTC m=+0.145522946 container start 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:32 compute-0 podman[83494]: 2025-10-02 11:45:32.442179531 +0000 UTC m=+0.148694828 container attach 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:45:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]: {
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:     "1": [
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:         {
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "devices": [
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "/dev/loop3"
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             ],
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "lv_name": "ceph_lv0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "lv_size": "7511998464",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "name": "ceph_lv0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "tags": {
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.cluster_name": "ceph",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.crush_device_class": "",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.encrypted": "0",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.osd_id": "1",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.type": "block",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:                 "ceph.vdo": "0"
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             },
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "type": "block",
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:             "vg_name": "ceph_vg0"
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:         }
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]:     ]
Oct 02 11:45:33 compute-0 dazzling_ptolemy[83510]: }
Oct 02 11:45:33 compute-0 systemd[1]: libpod-264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9.scope: Deactivated successfully.
Oct 02 11:45:33 compute-0 podman[83494]: 2025-10-02 11:45:33.257036909 +0000 UTC m=+0.963552186 container died 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d5ad6d70c8dfdc2ef66a929d39a6c80f3a23310a69e72ce7d6b2441a82cda2-merged.mount: Deactivated successfully.
Oct 02 11:45:33 compute-0 podman[83494]: 2025-10-02 11:45:33.790581941 +0000 UTC m=+1.497097208 container remove 264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ptolemy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:45:33 compute-0 systemd[1]: libpod-conmon-264a9a12285f26d865654a9ceb2458e8883df7c28b811085f109d8c927b68bf9.scope: Deactivated successfully.
Oct 02 11:45:33 compute-0 sudo[83386]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 02 11:45:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:45:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:33 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 02 11:45:33 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 02 11:45:33 compute-0 ceph-mon[73668]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:33 compute-0 sudo[83530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:33 compute-0 sudo[83530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:33 compute-0 sudo[83530]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:33 compute-0 sudo[83555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:33 compute-0 sudo[83555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:33 compute-0 sudo[83555]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-0 sudo[83580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:34 compute-0 sudo[83580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-0 sudo[83580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 02 11:45:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:45:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:45:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:34 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Oct 02 11:45:34 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Oct 02 11:45:34 compute-0 sudo[83605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:34 compute-0 sudo[83605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.438099069 +0000 UTC m=+0.037271508 container create 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:34 compute-0 systemd[1]: Started libpod-conmon-0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1.scope.
Oct 02 11:45:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.510841426 +0000 UTC m=+0.110013895 container init 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.517466928 +0000 UTC m=+0.116639367 container start 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.422763872 +0000 UTC m=+0.021936331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.520684672 +0000 UTC m=+0.119857131 container attach 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:45:34 compute-0 heuristic_borg[83686]: 167 167
Oct 02 11:45:34 compute-0 systemd[1]: libpod-0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1.scope: Deactivated successfully.
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.523631928 +0000 UTC m=+0.122804367 container died 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0113e7904a6ad06d966e91bb20cbd9c503d0b8241ddea315609577d8c49b7df9-merged.mount: Deactivated successfully.
Oct 02 11:45:34 compute-0 podman[83670]: 2025-10-02 11:45:34.558751799 +0000 UTC m=+0.157924248 container remove 0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:34 compute-0 systemd[1]: libpod-conmon-0f43d5733fb4cabe7b7103a2d6724d6c69c598aab95b462fe7c68d07b5b70ad1.scope: Deactivated successfully.
Oct 02 11:45:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:34 compute-0 podman[83717]: 2025-10-02 11:45:34.797813131 +0000 UTC m=+0.047476772 container create 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:45:34 compute-0 systemd[1]: Started libpod-conmon-509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22.scope.
Oct 02 11:45:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:45:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:34 compute-0 ceph-mon[73668]: Deploying daemon osd.1 on compute-0
Oct 02 11:45:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:45:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:45:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:34 compute-0 podman[83717]: 2025-10-02 11:45:34.78118587 +0000 UTC m=+0.030849531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:34 compute-0 podman[83717]: 2025-10-02 11:45:34.889331375 +0000 UTC m=+0.138995016 container init 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:34 compute-0 podman[83717]: 2025-10-02 11:45:34.899505009 +0000 UTC m=+0.149168650 container start 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:34 compute-0 podman[83717]: 2025-10-02 11:45:34.903087832 +0000 UTC m=+0.152751473 container attach 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:45:35 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test[83733]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 11:45:35 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test[83733]:                             [--no-systemd] [--no-tmpfs]
Oct 02 11:45:35 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test[83733]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 02 11:45:35 compute-0 systemd[1]: libpod-509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22.scope: Deactivated successfully.
Oct 02 11:45:35 compute-0 podman[83717]: 2025-10-02 11:45:35.557490259 +0000 UTC m=+0.807153920 container died 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5688aa57ab3829fe0753f66fde284633b3a1562e80bf199c66be0270a17e1ae1-merged.mount: Deactivated successfully.
Oct 02 11:45:35 compute-0 podman[83717]: 2025-10-02 11:45:35.611619823 +0000 UTC m=+0.861283464 container remove 509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:35 compute-0 systemd[1]: libpod-conmon-509451166d192c72dcba3e8a5334a9fcc14ecc954473ff76656949ad43ff4d22.scope: Deactivated successfully.
Oct 02 11:45:35 compute-0 systemd[1]: Reloading.
Oct 02 11:45:35 compute-0 ceph-mon[73668]: Deploying daemon osd.0 on compute-1
Oct 02 11:45:35 compute-0 ceph-mon[73668]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:35 compute-0 systemd-rc-local-generator[83791]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:35 compute-0 systemd-sysv-generator[83795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:36 compute-0 systemd[1]: Reloading.
Oct 02 11:45:36 compute-0 systemd-sysv-generator[83839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:36 compute-0 systemd-rc-local-generator[83836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:36 compute-0 systemd[1]: Starting Ceph osd.1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:45:36 compute-0 podman[83892]: 2025-10-02 11:45:36.570592671 +0000 UTC m=+0.041970119 container create 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-0 podman[83892]: 2025-10-02 11:45:36.635767872 +0000 UTC m=+0.107145350 container init 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:36 compute-0 podman[83892]: 2025-10-02 11:45:36.643970345 +0000 UTC m=+0.115347793 container start 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:45:36 compute-0 podman[83892]: 2025-10-02 11:45:36.551526027 +0000 UTC m=+0.022903505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:36 compute-0 podman[83892]: 2025-10-02 11:45:36.647969309 +0000 UTC m=+0.119346777 container attach 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:37 compute-0 bash[83892]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 11:45:37 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate[83907]: --> ceph-volume raw activate successful for osd ID: 1
Oct 02 11:45:37 compute-0 bash[83892]: --> ceph-volume raw activate successful for osd ID: 1
Oct 02 11:45:37 compute-0 systemd[1]: libpod-130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903.scope: Deactivated successfully.
Oct 02 11:45:37 compute-0 podman[83892]: 2025-10-02 11:45:37.636521273 +0000 UTC m=+1.107898721 container died 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:45:37 compute-0 systemd[1]: libpod-130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903.scope: Consumed 1.003s CPU time.
Oct 02 11:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb123c064b5bd6060d28f8a8ca49bf8bd667ce319a4ed4bbe0e69641507c5826-merged.mount: Deactivated successfully.
Oct 02 11:45:37 compute-0 podman[83892]: 2025-10-02 11:45:37.691882379 +0000 UTC m=+1.163259837 container remove 130f61ddb156e6d66576ec0de67989b7c1ea6f25f4eaa3fd418470f0db627903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:45:37 compute-0 ceph-mon[73668]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:37 compute-0 podman[84070]: 2025-10-02 11:45:37.891799906 +0000 UTC m=+0.044612839 container create c8b3723bdbc9437f2a37285dcb352d94505de3b42888fa0eb852256dd1832135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:45:37 compute-0 sudo[84105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fehforoiriekfvpusjhvhooyfpjabhql ; /usr/bin/python3'
Oct 02 11:45:37 compute-0 sudo[84105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5088e94728ac1571d9c1d41303365d9b3c83ca3c35ebdae4e5627973688bb9e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5088e94728ac1571d9c1d41303365d9b3c83ca3c35ebdae4e5627973688bb9e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5088e94728ac1571d9c1d41303365d9b3c83ca3c35ebdae4e5627973688bb9e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5088e94728ac1571d9c1d41303365d9b3c83ca3c35ebdae4e5627973688bb9e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5088e94728ac1571d9c1d41303365d9b3c83ca3c35ebdae4e5627973688bb9e5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:37 compute-0 podman[84070]: 2025-10-02 11:45:37.952299525 +0000 UTC m=+0.105112488 container init c8b3723bdbc9437f2a37285dcb352d94505de3b42888fa0eb852256dd1832135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:37 compute-0 podman[84070]: 2025-10-02 11:45:37.957676825 +0000 UTC m=+0.110489758 container start c8b3723bdbc9437f2a37285dcb352d94505de3b42888fa0eb852256dd1832135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:45:37 compute-0 podman[84070]: 2025-10-02 11:45:37.87000251 +0000 UTC m=+0.022815473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:37 compute-0 bash[84070]: c8b3723bdbc9437f2a37285dcb352d94505de3b42888fa0eb852256dd1832135
Oct 02 11:45:37 compute-0 systemd[1]: Started Ceph osd.1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:45:37 compute-0 ceph-osd[84115]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:45:37 compute-0 ceph-osd[84115]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 11:45:37 compute-0 ceph-osd[84115]: pidfile_write: ignore empty --pid-file
Oct 02 11:45:37 compute-0 ceph-osd[84115]: bdev(0x55a06c387800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06c387800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06c387800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d1c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d1c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d1c1800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d1c1800 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:45:38 compute-0 sudo[83605]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:38 compute-0 python3[84107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:45:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.138064334 +0000 UTC m=+0.048989921 container create 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:38 compute-0 sudo[84131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:38 compute-0 sudo[84131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-0 sudo[84131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-0 systemd[1]: Started libpod-conmon-0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1.scope.
Oct 02 11:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553ec912435833e4156fcc142e2f177b3dcd456aea80f0f41530951a094eff16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553ec912435833e4156fcc142e2f177b3dcd456aea80f0f41530951a094eff16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553ec912435833e4156fcc142e2f177b3dcd456aea80f0f41530951a094eff16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-0 sudo[84168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:38 compute-0 sudo[84168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.117755218 +0000 UTC m=+0.028680835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:45:38 compute-0 sudo[84168]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.215072172 +0000 UTC m=+0.125997779 container init 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.223556752 +0000 UTC m=+0.134482349 container start 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.229354003 +0000 UTC m=+0.140279590 container attach 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:38 compute-0 sudo[84199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:38 compute-0 sudo[84199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-0 sudo[84199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06c387800 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:45:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:38 compute-0 sudo[84224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:45:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:38 compute-0 sudo[84224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:38 compute-0 ceph-osd[84115]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 02 11:45:38 compute-0 ceph-osd[84115]: load: jerasure load: lrc 
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.626700661 +0000 UTC m=+0.036575230 container create 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:45:38 compute-0 systemd[1]: Started libpod-conmon-30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da.scope.
Oct 02 11:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.703405941 +0000 UTC m=+0.113280530 container init 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.609301259 +0000 UTC m=+0.019175858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.711318126 +0000 UTC m=+0.121192695 container start 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:45:38 compute-0 peaceful_lamarr[84331]: 167 167
Oct 02 11:45:38 compute-0 systemd[1]: libpod-30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da.scope: Deactivated successfully.
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.717220019 +0000 UTC m=+0.127094608 container attach 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.7176416 +0000 UTC m=+0.127516169 container died 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:45:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d7ecba335200f9674c26140cf6ef4ea9188b3f3a7b5d9503eb48ce887b72a3-merged.mount: Deactivated successfully.
Oct 02 11:45:38 compute-0 podman[84315]: 2025-10-02 11:45:38.767076373 +0000 UTC m=+0.176950942 container remove 30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:45:38 compute-0 systemd[1]: libpod-conmon-30aaaf92ce07d7b2ed4c46b4c3969d9ea2fbb145629a5c233ed02a6ab21cf1da.scope: Deactivated successfully.
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:38 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:45:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:45:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557356877' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:45:38 compute-0 admiring_tu[84183]: 
Oct 02 11:45:38 compute-0 admiring_tu[84183]: {"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1759405528,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T11:45:30.732673+0000","services":{}},"progress_events":{}}
Oct 02 11:45:38 compute-0 systemd[1]: libpod-0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1.scope: Deactivated successfully.
Oct 02 11:45:38 compute-0 podman[84129]: 2025-10-02 11:45:38.8745106 +0000 UTC m=+0.785436187 container died 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-553ec912435833e4156fcc142e2f177b3dcd456aea80f0f41530951a094eff16-merged.mount: Deactivated successfully.
Oct 02 11:45:39 compute-0 podman[84129]: 2025-10-02 11:45:39.027618712 +0000 UTC m=+0.938544289 container remove 0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1 (image=quay.io/ceph/ceph:v18, name=admiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:45:39 compute-0 systemd[1]: libpod-conmon-0458d636465232d418645d24e4b46901f0aaf0d925521101290075dd59da8bc1.scope: Deactivated successfully.
Oct 02 11:45:39 compute-0 sudo[84105]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:39 compute-0 podman[84363]: 2025-10-02 11:45:39.062505547 +0000 UTC m=+0.170452583 container create 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:39 compute-0 ceph-mon[73668]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3557356877' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:45:39 compute-0 ceph-osd[84115]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs mount
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs mount shared_bdev_used = 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Git sha 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DB SUMMARY
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DB Session ID:  26W21XA660E0DKAHAZJB
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                     Options.env: 0x55a06d213c70
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                Options.info_log: 0x55a06c404ba0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.write_buffer_manager: 0x55a06d324460
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.row_cache: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.wal_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.wal_compression: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Compression algorithms supported:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZSTD supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 systemd[1]: Started libpod-conmon-293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0.scope.
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8531be3d2fccbed739708fcce5b5e8bbb9e9a777d02d06f170f05059a627fba7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c404600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fadd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8531be3d2fccbed739708fcce5b5e8bbb9e9a777d02d06f170f05059a627fba7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8531be3d2fccbed739708fcce5b5e8bbb9e9a777d02d06f170f05059a627fba7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8531be3d2fccbed739708fcce5b5e8bbb9e9a777d02d06f170f05059a627fba7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c4045c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fa430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c4045c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fa430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 podman[84363]: 2025-10-02 11:45:39.13316257 +0000 UTC m=+0.241109626 container init 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c4045c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fa430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f05ab963-3f47-45b9-a05a-8b65e29ac5c6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539099812, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:45:39 compute-0 podman[84363]: 2025-10-02 11:45:39.139461983 +0000 UTC m=+0.247409019 container start 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539099995, "job": 1, "event": "recovery_finished"}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: freelist init
Oct 02 11:45:39 compute-0 ceph-osd[84115]: freelist _read_cfg
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs umount
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 11:45:39 compute-0 podman[84363]: 2025-10-02 11:45:39.044287004 +0000 UTC m=+0.152234070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:39 compute-0 podman[84363]: 2025-10-02 11:45:39.143024356 +0000 UTC m=+0.250971402 container attach 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:45:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bdev(0x55a06d24d400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs mount
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluefs mount shared_bdev_used = 4718592
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Git sha 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DB SUMMARY
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DB Session ID:  26W21XA660E0DKAHAZJA
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                     Options.env: 0x55a06c446690
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                Options.info_log: 0x55a06c4058a0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.write_buffer_manager: 0x55a06d324460
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.row_cache: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                              Options.wal_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.wal_compression: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Compression algorithms supported:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZSTD supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c3e1b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c405e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c405e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a06c405e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a06c3fb770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f05ab963-3f47-45b9-a05a-8b65e29ac5c6
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539368320, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539371670, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f05ab963-3f47-45b9-a05a-8b65e29ac5c6", "db_session_id": "26W21XA660E0DKAHAZJA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539374668, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f05ab963-3f47-45b9-a05a-8b65e29ac5c6", "db_session_id": "26W21XA660E0DKAHAZJA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539377953, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f05ab963-3f47-45b9-a05a-8b65e29ac5c6", "db_session_id": "26W21XA660E0DKAHAZJA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539379353, "job": 1, "event": "recovery_finished"}
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a06d22bc00
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: DB pointer 0x55a06d30da00
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 02 11:45:39 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:45:39 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 11:45:39 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 11:45:39 compute-0 ceph-osd[84115]: _get_class not permitted to load lua
Oct 02 11:45:39 compute-0 ceph-osd[84115]: _get_class not permitted to load sdk
Oct 02 11:45:39 compute-0 ceph-osd[84115]: _get_class not permitted to load test_remote_reads
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 load_pgs
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 load_pgs opened 0 pgs
Oct 02 11:45:39 compute-0 ceph-osd[84115]: osd.1 0 log_to_monitors true
Oct 02 11:45:39 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1[84110]: 2025-10-02T11:45:39.410+0000 7f747d6da740 -1 osd.1 0 log_to_monitors true
Oct 02 11:45:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 02 11:45:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 11:45:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 02 11:45:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 11:45:39 compute-0 vigilant_raman[84531]: {
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:         "osd_id": 1,
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:         "type": "bluestore"
Oct 02 11:45:39 compute-0 vigilant_raman[84531]:     }
Oct 02 11:45:39 compute-0 vigilant_raman[84531]: }
Oct 02 11:45:40 compute-0 systemd[1]: libpod-293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0.scope: Deactivated successfully.
Oct 02 11:45:40 compute-0 podman[84822]: 2025-10-02 11:45:40.041250868 +0000 UTC m=+0.021836758 container died 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8531be3d2fccbed739708fcce5b5e8bbb9e9a777d02d06f170f05059a627fba7-merged.mount: Deactivated successfully.
Oct 02 11:45:40 compute-0 podman[84822]: 2025-10-02 11:45:40.121596772 +0000 UTC m=+0.102182642 container remove 293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:40 compute-0 systemd[1]: libpod-conmon-293ce379cc00977fa1ae03b65df94a5efe23e4a4f32a4cdcb58bad9d5fb5c4d0.scope: Deactivated successfully.
Oct 02 11:45:40 compute-0 sudo[84224]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:40 compute-0 sudo[84837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct 02 11:45:40 compute-0 sudo[84837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-0 sudo[84837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-0,root=default}
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-1,root=default}
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:40 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:40 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 11:45:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 11:45:40 compute-0 sudo[84862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:45:40 compute-0 sudo[84862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-0 sudo[84862]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 sudo[84887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-0 sudo[84887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:40 compute-0 sudo[84887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 sudo[84912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:40 compute-0 sudo[84912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-0 sudo[84912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 sudo[84937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-0 sudo[84937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-0 sudo[84937]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-0 sudo[84962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:45:40 compute-0 sudo[84962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 done with init, starting boot process
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 start_boot
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 02 11:45:41 compute-0 ceph-osd[84115]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 11:45:41 compute-0 ceph-mon[73668]: osdmap e6: 2 total, 0 up, 2 in
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/993231012; not ready for session (expect reconnect)
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3815319485; not ready for session (expect reconnect)
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:41 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:41 compute-0 podman[85059]: 2025-10-02 11:45:41.428920567 +0000 UTC m=+0.067668457 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:45:41 compute-0 podman[85059]: 2025-10-02 11:45:41.537476323 +0000 UTC m=+0.176224183 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 sudo[84962]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 sudo[85142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[85142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[85142]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:42 compute-0 sudo[85167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[85167]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[85192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[85192]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:45:42 compute-0 sudo[85217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/993231012; not ready for session (expect reconnect)
Oct 02 11:45:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:42 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3815319485; not ready for session (expect reconnect)
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct 02 11:45:42 compute-0 ceph-mon[73668]: osdmap e7: 2 total, 0 up, 2 in
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:42 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:42 compute-0 sudo[85217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[85273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[85273]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:42 compute-0 sudo[85298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:42 compute-0 sudo[85298]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:45:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:42 compute-0 sudo[85323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:42 compute-0 sudo[85323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-0 sudo[85323]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:42 compute-0 sudo[85348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:45:42 compute-0 sudo[85348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.20298641 +0000 UTC m=+0.043380026 container create 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:43 compute-0 systemd[1]: Started libpod-conmon-0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30.scope.
Oct 02 11:45:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.183747891 +0000 UTC m=+0.024141527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.30935254 +0000 UTC m=+0.149746176 container init 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.316002062 +0000 UTC m=+0.156395678 container start 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:45:43 compute-0 peaceful_poitras[85429]: 167 167
Oct 02 11:45:43 compute-0 systemd[1]: libpod-0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30.scope: Deactivated successfully.
Oct 02 11:45:43 compute-0 conmon[85429]: conmon 0b811336e43a42355744 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30.scope/container/memory.events
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.346595946 +0000 UTC m=+0.186989592 container attach 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.347396417 +0000 UTC m=+0.187790043 container died 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:45:43 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/993231012; not ready for session (expect reconnect)
Oct 02 11:45:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:43 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcdb36a0e0397ff07e458abe0d8fb47249051d6bb6e20f98264d15877c2f3210-merged.mount: Deactivated successfully.
Oct 02 11:45:43 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3815319485; not ready for session (expect reconnect)
Oct 02 11:45:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:43 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:43 compute-0 ceph-mon[73668]: purged_snaps scrub starts
Oct 02 11:45:43 compute-0 ceph-mon[73668]: purged_snaps scrub ok
Oct 02 11:45:43 compute-0 ceph-mon[73668]: purged_snaps scrub starts
Oct 02 11:45:43 compute-0 ceph-mon[73668]: purged_snaps scrub ok
Oct 02 11:45:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:43 compute-0 ceph-mon[73668]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:43 compute-0 podman[85413]: 2025-10-02 11:45:43.424406504 +0000 UTC m=+0.264800120 container remove 0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:45:43 compute-0 systemd[1]: libpod-conmon-0b811336e43a4235574446ce626f8c96fdeb764e4be4d948bce95f7ea49e1b30.scope: Deactivated successfully.
Oct 02 11:45:43 compute-0 podman[85452]: 2025-10-02 11:45:43.58229211 +0000 UTC m=+0.049767952 container create 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:45:43 compute-0 systemd[1]: Started libpod-conmon-05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33.scope.
Oct 02 11:45:43 compute-0 podman[85452]: 2025-10-02 11:45:43.55760762 +0000 UTC m=+0.025083482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49764979e35bf384899713ca44b1f09b517b4ae78471ae03d8fe9e10fb64b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49764979e35bf384899713ca44b1f09b517b4ae78471ae03d8fe9e10fb64b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49764979e35bf384899713ca44b1f09b517b4ae78471ae03d8fe9e10fb64b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49764979e35bf384899713ca44b1f09b517b4ae78471ae03d8fe9e10fb64b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-0 podman[85452]: 2025-10-02 11:45:43.677949272 +0000 UTC m=+0.145425134 container init 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:45:43 compute-0 podman[85452]: 2025-10-02 11:45:43.686771561 +0000 UTC m=+0.154247403 container start 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:43 compute-0 podman[85452]: 2025-10-02 11:45:43.702282333 +0000 UTC m=+0.169758175 container attach 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.191 iops: 9008.888 elapsed_sec: 0.333
Oct 02 11:45:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [WRN] : OSD bench result of 9008.887835 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 0 waiting for initial osdmap
Oct 02 11:45:44 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1[84110]: 2025-10-02T11:45:44.216+0000 7f7479e71640 -1 osd.1 0 waiting for initial osdmap
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 check_osdmap_features require_osd_release unknown -> reef
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:45:44 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-1[84110]: 2025-10-02T11:45:44.235+0000 7f7474c82640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 set_numa_affinity not setting numa affinity
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/993231012; not ready for session (expect reconnect)
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3815319485; not ready for session (expect reconnect)
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012] boot
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:44 compute-0 ceph-osd[84115]: osd.1 8 state: booting -> active
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:45:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:44 compute-0 peaceful_galois[85469]: [
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:     {
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "available": false,
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "ceph_device": false,
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "lsm_data": {},
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "lvs": [],
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "path": "/dev/sr0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "rejected_reasons": [
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "Insufficient space (<5GB)",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "Has a FileSystem"
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         ],
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         "sys_api": {
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "actuators": null,
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "device_nodes": "sr0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "devname": "sr0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "human_readable_size": "482.00 KB",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "id_bus": "ata",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "model": "QEMU DVD-ROM",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "nr_requests": "2",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "parent": "/dev/sr0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "partitions": {},
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "path": "/dev/sr0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "removable": "1",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "rev": "2.5+",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "ro": "0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "rotational": "0",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "sas_address": "",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "sas_device_handle": "",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "scheduler_mode": "mq-deadline",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "sectors": 0,
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "sectorsize": "2048",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "size": 493568.0,
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "support_discard": "2048",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "type": "disk",
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:             "vendor": "QEMU"
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:         }
Oct 02 11:45:44 compute-0 peaceful_galois[85469]:     }
Oct 02 11:45:44 compute-0 peaceful_galois[85469]: ]
Oct 02 11:45:44 compute-0 systemd[1]: libpod-05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33.scope: Deactivated successfully.
Oct 02 11:45:44 compute-0 podman[85452]: 2025-10-02 11:45:44.847732928 +0000 UTC m=+1.315208810 container died 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:45:44 compute-0 systemd[1]: libpod-05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33.scope: Consumed 1.160s CPU time.
Oct 02 11:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f49764979e35bf384899713ca44b1f09b517b4ae78471ae03d8fe9e10fb64b82-merged.mount: Deactivated successfully.
Oct 02 11:45:44 compute-0 podman[85452]: 2025-10-02 11:45:44.926854971 +0000 UTC m=+1.394330823 container remove 05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 11:45:44 compute-0 systemd[1]: libpod-conmon-05ae590f4c603f65a51c2714e2704dab27a9a5508adba751bb168af2ee049d33.scope: Deactivated successfully.
Oct 02 11:45:44 compute-0 sudo[85348]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3815319485; not ready for session (expect reconnect)
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 11:45:45 compute-0 ceph-mon[73668]: OSD bench result of 9008.887835 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:45:45 compute-0 ceph-mon[73668]: osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012] boot
Oct 02 11:45:45 compute-0 ceph-mon[73668]: osdmap e8: 2 total, 1 up, 2 in
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mon[73668]: Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:45:45 compute-0 ceph-mon[73668]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485] boot
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Oct 02 11:45:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 11:45:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:46 compute-0 ceph-mgr[73961]: [devicehealth INFO root] creating mgr pool
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 02 11:45:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:45:46 compute-0 ceph-mon[73668]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:45:46 compute-0 ceph-mon[73668]: Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct 02 11:45:46 compute-0 ceph-mon[73668]: osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485] boot
Oct 02 11:45:46 compute-0 ceph-mon[73668]: osdmap e9: 2 total, 2 up, 2 in
Oct 02 11:45:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:45:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 11:45:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:45:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Oct 02 11:45:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 02 11:45:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:45:46 compute-0 ceph-osd[84115]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 11:45:46 compute-0 ceph-osd[84115]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 02 11:45:46 compute-0 ceph-osd[84115]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 11:45:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:45:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 11:45:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct 02 11:45:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 11:45:47 compute-0 ceph-mon[73668]: osdmap e10: 2 total, 2 up, 2 in
Oct 02 11:45:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:45:47 compute-0 ceph-mon[73668]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:45:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] creating main.db for devicehealth
Oct 02 11:45:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 11:45:47 compute-0 sudo[86576]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 02 11:45:47 compute-0 sudo[86576]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 11:45:47 compute-0 sudo[86576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 02 11:45:47 compute-0 sudo[86576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 11:45:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:45:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:45:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 02 11:45:48 compute-0 ceph-mon[73668]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:45:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 11:45:48 compute-0 ceph-mon[73668]: osdmap e11: 2 total, 2 up, 2 in
Oct 02 11:45:48 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 11:45:48 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 11:45:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:45:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct 02 11:45:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.unmtoh(active, since 80s)
Oct 02 11:45:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:45:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:45:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:45:49 compute-0 ceph-mon[73668]: mgrmap e9: compute-0.unmtoh(active, since 80s)
Oct 02 11:45:49 compute-0 ceph-mon[73668]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:45:49 compute-0 ceph-mon[73668]: pgmap v45: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:45:50 compute-0 ceph-mon[73668]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:45:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:51 compute-0 ceph-mon[73668]: pgmap v46: 1 pgs: 1 unknown; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:53 compute-0 ceph-mon[73668]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:45:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:55 compute-0 ceph-mon[73668]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:57 compute-0 ceph-mon[73668]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:45:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:59 compute-0 ceph-mon[73668]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:45:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:01 compute-0 anacron[1075]: Job `cron.monthly' started
Oct 02 11:46:01 compute-0 anacron[1075]: Job `cron.monthly' terminated
Oct 02 11:46:01 compute-0 anacron[1075]: Normal exit (3 jobs run)
Oct 02 11:46:01 compute-0 ceph-mon[73668]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:46:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:46:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:02 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:02 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:02 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:02 compute-0 ceph-mon[73668]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:02 compute-0 ceph-mon[73668]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:02 compute-0 ceph-mon[73668]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:03 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:46:03 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:46:04 compute-0 ceph-mon[73668]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:46:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:04 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:46:04 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:46:05 compute-0 ceph-mon[73668]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:05 compute-0 ceph-mon[73668]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:05 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 352cd8de-e3a1-4b53-8e38-a352a463719b (Updating mon deployment (+2 -> 3))
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:05 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct 02 11:46:05 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:06 compute-0 ceph-mon[73668]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:46:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:06 compute-0 ceph-mon[73668]: Deploying daemon mon.compute-2 on compute-2
Oct 02 11:46:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 02 11:46:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:46:07 compute-0 ceph-mon[73668]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 02 11:46:07 compute-0 ceph-mon[73668]: Cluster is now healthy
Oct 02 11:46:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct 02 11:46:08 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct 02 11:46:08 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:08 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 11:46:08 compute-0 ceph-mon[73668]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct 02 11:46:08 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:09 compute-0 sudo[86604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oruyorsyjljrhjecjoynwtnsjnurdgdt ; /usr/bin/python3'
Oct 02 11:46:09 compute-0 sudo[86604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:09 compute-0 python3[86606]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:09 compute-0 podman[86608]: 2025-10-02 11:46:09.36816552 +0000 UTC m=+0.051389684 container create cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:09 compute-0 systemd[1]: Started libpod-conmon-cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c.scope.
Oct 02 11:46:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:09 compute-0 podman[86608]: 2025-10-02 11:46:09.347091223 +0000 UTC m=+0.030315417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfc7b2b62edcfc9bdbdd14b30bffac1d67e16c858eab02295a8758a2bc09060/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfc7b2b62edcfc9bdbdd14b30bffac1d67e16c858eab02295a8758a2bc09060/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfc7b2b62edcfc9bdbdd14b30bffac1d67e16c858eab02295a8758a2bc09060/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-0 podman[86608]: 2025-10-02 11:46:09.456406989 +0000 UTC m=+0.139631183 container init cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:09 compute-0 podman[86608]: 2025-10-02 11:46:09.468960825 +0000 UTC m=+0.152184989 container start cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:09 compute-0 podman[86608]: 2025-10-02 11:46:09.472439415 +0000 UTC m=+0.155663609 container attach cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:09 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:09 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:09 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:09 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:10 compute-0 sshd-session[86639]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 11:46:10 compute-0 sshd-session[86639]: Connection reset by 45.140.17.97 port 43848
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:46:10 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:10 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:46:10 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:10 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:10 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:11 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:11 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:11 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:11 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:46:11 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:11 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:11 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:12 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:46:12 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:12 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:12 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:46:12 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:12 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:12 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:12 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 02 11:46:13 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:13 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:46:13 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3173851215; not ready for session (expect reconnect)
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:13 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 02 11:46:13 compute-0 ceph-mon[73668]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 02 11:46:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.unmtoh(active, since 105s)
Oct 02 11:46:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: Deploying daemon mon.compute-1 on compute-1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0 calling monitor election
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 02 11:46:14 compute-0 ceph-mon[73668]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:14 compute-0 ceph-mon[73668]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:14 compute-0 ceph-mon[73668]: fsmap 
Oct 02 11:46:14 compute-0 ceph-mon[73668]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mgrmap e9: compute-0.unmtoh(active, since 105s)
Oct 02 11:46:14 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 352cd8de-e3a1-4b53-8e38-a352a463719b (Updating mon deployment (+2 -> 3))
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 352cd8de-e3a1-4b53-8e38-a352a463719b (Updating mon deployment (+2 -> 3)) in 9 seconds
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993590339' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:46:14 compute-0 compassionate_shamir[86624]: 
Oct 02 11:46:14 compute-0 compassionate_shamir[86624]: {"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"reef","num_mons":2},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1759405545,"num_in_osds":2,"osd_in_since":1759405528,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55754752,"bytes_avail":14968242176,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T11:45:30.732673+0000","services":{}},"progress_events":{"352cd8de-e3a1-4b53-8e38-a352a463719b":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 161b2987-5bd5-42fd-b2c4-b299f554cca2 (Updating mgr deployment (+2 -> 3))
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 02 11:46:14 compute-0 ceph-mon[73668]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct 02 11:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:14 compute-0 systemd[1]: libpod-cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c.scope: Deactivated successfully.
Oct 02 11:46:14 compute-0 podman[86608]: 2025-10-02 11:46:14.536593489 +0000 UTC m=+5.219817653 container died cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bfc7b2b62edcfc9bdbdd14b30bffac1d67e16c858eab02295a8758a2bc09060-merged.mount: Deactivated successfully.
Oct 02 11:46:14 compute-0 podman[86608]: 2025-10-02 11:46:14.585057176 +0000 UTC m=+5.268281340 container remove cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c (image=quay.io/ceph/ceph:v18, name=compassionate_shamir, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:14 compute-0 systemd[1]: libpod-conmon-cdf4768deaecea0a44272a92d080b538cd93d4bcadcb408c0fe7ccfde746af7c.scope: Deactivated successfully.
Oct 02 11:46:14 compute-0 sudo[86604]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:14 compute-0 ceph-mgr[73961]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct 02 11:46:14 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:46:14.667+0000 7f9ff4535640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct 02 11:46:14 compute-0 sudo[86687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egvjrjhjnafqotjtamfjlaajvmagodtv ; /usr/bin/python3'
Oct 02 11:46:14 compute-0 sudo[86687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:15 compute-0 python3[86689]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:15 compute-0 podman[86690]: 2025-10-02 11:46:15.13295315 +0000 UTC m=+0.053286983 container create b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:15 compute-0 systemd[1]: Started libpod-conmon-b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6.scope.
Oct 02 11:46:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bfbd55b2dbcf5985b83c4b1129b18ef944ee6eabf19190c5b3e76fbceefd8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bfbd55b2dbcf5985b83c4b1129b18ef944ee6eabf19190c5b3e76fbceefd8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:15 compute-0 podman[86690]: 2025-10-02 11:46:15.114766488 +0000 UTC m=+0.035100341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:15 compute-0 podman[86690]: 2025-10-02 11:46:15.214318841 +0000 UTC m=+0.134652694 container init b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:46:15 compute-0 podman[86690]: 2025-10-02 11:46:15.220333957 +0000 UTC m=+0.140667800 container start b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:15 compute-0 podman[86690]: 2025-10-02 11:46:15.224235588 +0000 UTC m=+0.144569441 container attach b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:15 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:15 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:15 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:15 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:15 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:16 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:16 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:16 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:16 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:16 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:17 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:17 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:17 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:18 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 3 completed events
Oct 02 11:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:46:18 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:18 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 02 11:46:19 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 02 11:46:19 compute-0 ceph-mon[73668]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.unmtoh(active, since 111s)
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct 02 11:46:19 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct 02 11:46:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0 calling monitor election
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-2 calling monitor election
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-1 calling monitor election
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:46:19 compute-0 ceph-mon[73668]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:19 compute-0 ceph-mon[73668]: fsmap 
Oct 02 11:46:19 compute-0 ceph-mon[73668]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:19 compute-0 ceph-mon[73668]: mgrmap e9: compute-0.unmtoh(active, since 111s)
Oct 02 11:46:19 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:20 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1145943158; not ready for session (expect reconnect)
Oct 02 11:46:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 11:46:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 02 11:46:20 compute-0 ceph-mon[73668]: Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct 02 11:46:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:20 compute-0 ceph-mon[73668]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct 02 11:46:20 compute-0 stupefied_jepsen[86706]: pool 'vms' created
Oct 02 11:46:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct 02 11:46:20 compute-0 systemd[1]: libpod-b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6.scope: Deactivated successfully.
Oct 02 11:46:20 compute-0 podman[86690]: 2025-10-02 11:46:20.74275728 +0000 UTC m=+5.663091113 container died b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3bfbd55b2dbcf5985b83c4b1129b18ef944ee6eabf19190c5b3e76fbceefd8d-merged.mount: Deactivated successfully.
Oct 02 11:46:20 compute-0 podman[86690]: 2025-10-02 11:46:20.812027517 +0000 UTC m=+5.732361350 container remove b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6 (image=quay.io/ceph/ceph:v18, name=stupefied_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:46:20 compute-0 systemd[1]: libpod-conmon-b3e06a846001fb99cfe3f655b3c1cfbe8c7e4aa31e64e964e2fc59bf847dc7f6.scope: Deactivated successfully.
Oct 02 11:46:20 compute-0 sudo[86687]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:20 compute-0 sudo[86767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuodemzlpbhancoamzzeiflrdhpyoswh ; /usr/bin/python3'
Oct 02 11:46:20 compute-0 sudo[86767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:21 compute-0 python3[86769]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:21 compute-0 podman[86770]: 2025-10-02 11:46:21.172832177 +0000 UTC m=+0.043318644 container create 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:46:21 compute-0 systemd[1]: Started libpod-conmon-62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085.scope.
Oct 02 11:46:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cab02c6d2798987f4233c2f8f566fc4d85b22fb2add66b45e6ab80ab9b5c7d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cab02c6d2798987f4233c2f8f566fc4d85b22fb2add66b45e6ab80ab9b5c7d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:21 compute-0 podman[86770]: 2025-10-02 11:46:21.152974092 +0000 UTC m=+0.023460479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:21 compute-0 podman[86770]: 2025-10-02 11:46:21.264543127 +0000 UTC m=+0.135029514 container init 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:46:21 compute-0 podman[86770]: 2025-10-02 11:46:21.272203645 +0000 UTC m=+0.142690002 container start 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:21 compute-0 podman[86770]: 2025-10-02 11:46:21.275416839 +0000 UTC m=+0.145903206 container attach 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct 02 11:46:21 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct 02 11:46:21 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T11:46:21.526+0000 7f9ff4535640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct 02 11:46:21 compute-0 ceph-mgr[73961]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:21 compute-0 ceph-mon[73668]: osdmap e13: 2 total, 2 up, 2 in
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 02 11:46:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-0 unruffled_heyrovsky[86785]: pool 'volumes' created
Oct 02 11:46:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:22 compute-0 ceph-mon[73668]: Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct 02 11:46:22 compute-0 ceph-mon[73668]: osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-0 ceph-mon[73668]: pgmap v64: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:22 compute-0 ceph-mon[73668]: osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-0 systemd[1]: libpod-62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085.scope: Deactivated successfully.
Oct 02 11:46:22 compute-0 podman[86770]: 2025-10-02 11:46:22.764327725 +0000 UTC m=+1.634814142 container died 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cab02c6d2798987f4233c2f8f566fc4d85b22fb2add66b45e6ab80ab9b5c7d7-merged.mount: Deactivated successfully.
Oct 02 11:46:22 compute-0 podman[86770]: 2025-10-02 11:46:22.812614577 +0000 UTC m=+1.683100944 container remove 62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085 (image=quay.io/ceph/ceph:v18, name=unruffled_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:46:22 compute-0 sudo[86767]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:22 compute-0 systemd[1]: libpod-conmon-62c265684870d71b1b4b67e40278705fed5b88e4ee18953d0a0c2b591a9df085.scope: Deactivated successfully.
Oct 02 11:46:22 compute-0 sudo[86847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcabxukmzrhpcpwrrauxpjvmgyfzbwf ; /usr/bin/python3'
Oct 02 11:46:22 compute-0 sudo[86847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:22 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 161b2987-5bd5-42fd-b2c4-b299f554cca2 (Updating mgr deployment (+2 -> 3))
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 161b2987-5bd5-42fd-b2c4-b299f554cca2 (Updating mgr deployment (+2 -> 3)) in 9 seconds
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 929c1db7-0d7b-4945-8178-d18711230934 (Updating crash deployment (+1 -> 3))
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct 02 11:46:23 compute-0 python3[86849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:23 compute-0 podman[86850]: 2025-10-02 11:46:23.186191388 +0000 UTC m=+0.059508244 container create 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:23 compute-0 systemd[1]: Started libpod-conmon-81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187.scope.
Oct 02 11:46:23 compute-0 podman[86850]: 2025-10-02 11:46:23.149732163 +0000 UTC m=+0.023049039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd45b77238925e9388e9bca7bdefc0ac7cbf3cd5f105bcfed6ddff8eb4ccd888/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd45b77238925e9388e9bca7bdefc0ac7cbf3cd5f105bcfed6ddff8eb4ccd888/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:23 compute-0 podman[86850]: 2025-10-02 11:46:23.266436689 +0000 UTC m=+0.139753545 container init 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:23 compute-0 podman[86850]: 2025-10-02 11:46:23.272670311 +0000 UTC m=+0.145987167 container start 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:23 compute-0 podman[86850]: 2025-10-02 11:46:23.276784218 +0000 UTC m=+0.150101084 container attach 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:46:23 compute-0 ceph-mon[73668]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:46:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct 02 11:46:23 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:24 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 4 completed events
Oct 02 11:46:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:46:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:24 compute-0 ceph-mgr[73961]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Oct 02 11:46:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 02 11:46:24 compute-0 ceph-mon[73668]: Deploying daemon crash.compute-2 on compute-2
Oct 02 11:46:24 compute-0 ceph-mon[73668]: osdmap e16: 2 total, 2 up, 2 in
Oct 02 11:46:24 compute-0 ceph-mon[73668]: pgmap v67: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:24 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct 02 11:46:24 compute-0 zealous_williamson[86865]: pool 'backups' created
Oct 02 11:46:24 compute-0 systemd[1]: libpod-81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187.scope: Deactivated successfully.
Oct 02 11:46:25 compute-0 podman[86850]: 2025-10-02 11:46:25.000021163 +0000 UTC m=+1.873338029 container died 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct 02 11:46:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd45b77238925e9388e9bca7bdefc0ac7cbf3cd5f105bcfed6ddff8eb4ccd888-merged.mount: Deactivated successfully.
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:25 compute-0 podman[86850]: 2025-10-02 11:46:25.0658191 +0000 UTC m=+1.939135956 container remove 81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187 (image=quay.io/ceph/ceph:v18, name=zealous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:46:25 compute-0 systemd[1]: libpod-conmon-81ae7e881e4c499313af481e762a620bc264804ce706cb7870c275f7b0faa187.scope: Deactivated successfully.
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 929c1db7-0d7b-4945-8178-d18711230934 (Updating crash deployment (+1 -> 3))
Oct 02 11:46:25 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 929c1db7-0d7b-4945-8178-d18711230934 (Updating crash deployment (+1 -> 3)) in 2 seconds
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-0 sudo[86847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-0 sudo[86908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:25 compute-0 sudo[86908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:25 compute-0 sudo[86908]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:25 compute-0 sudo[86960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfpyvrnvbatqgfvarkvtlmxqcecymiqp ; /usr/bin/python3'
Oct 02 11:46:25 compute-0 sudo[86960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:25 compute-0 sudo[86955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:25 compute-0 sudo[86955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:25 compute-0 sudo[86955]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:25 compute-0 sudo[86984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:25 compute-0 sudo[86984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:25 compute-0 sudo[86984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:25 compute-0 python3[86975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:25 compute-0 sudo[87009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:46:25 compute-0 sudo[87009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:25 compute-0 podman[87027]: 2025-10-02 11:46:25.425952862 +0000 UTC m=+0.046557489 container create 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:46:25 compute-0 systemd[1]: Started libpod-conmon-91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9.scope.
Oct 02 11:46:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54050fe778bfa757f46e4851cfbf2e98ed45c7e141902f2f6f273b2f2e6371b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54050fe778bfa757f46e4851cfbf2e98ed45c7e141902f2f6f273b2f2e6371b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:25 compute-0 podman[87027]: 2025-10-02 11:46:25.501329238 +0000 UTC m=+0.121933885 container init 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:46:25 compute-0 podman[87027]: 2025-10-02 11:46:25.407406891 +0000 UTC m=+0.028011558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:25 compute-0 podman[87027]: 2025-10-02 11:46:25.512095957 +0000 UTC m=+0.132700594 container start 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:25 compute-0 podman[87027]: 2025-10-02 11:46:25.515100805 +0000 UTC m=+0.135705452 container attach 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.696358387 +0000 UTC m=+0.039772943 container create 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:46:25 compute-0 systemd[1]: Started libpod-conmon-7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24.scope.
Oct 02 11:46:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.679651844 +0000 UTC m=+0.023066430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.775629584 +0000 UTC m=+0.119044190 container init 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.783326033 +0000 UTC m=+0.126740589 container start 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:46:25 compute-0 dazzling_elion[87109]: 167 167
Oct 02 11:46:25 compute-0 systemd[1]: libpod-7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24.scope: Deactivated successfully.
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.789878953 +0000 UTC m=+0.133293529 container attach 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.790342955 +0000 UTC m=+0.133757511 container died 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9541c8033e9e120b359b96ecbbebcd4b62c140668f425d529c4c9db37e1bac12-merged.mount: Deactivated successfully.
Oct 02 11:46:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:25 compute-0 podman[87092]: 2025-10-02 11:46:25.836248546 +0000 UTC m=+0.179663102 container remove 7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elion, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:46:25 compute-0 systemd[1]: libpod-conmon-7fbb8bc9ae67c34e745fcd4010c6c635d66e003de2304e551d5ebc74d1b25b24.scope: Deactivated successfully.
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:25 compute-0 ceph-mon[73668]: osdmap e17: 2 total, 2 up, 2 in
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-0 ceph-mon[73668]: pgmap v69: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Oct 02 11:46:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:26.006403551 +0000 UTC m=+0.046420326 container create 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:46:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:26 compute-0 systemd[1]: Started libpod-conmon-1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181.scope.
Oct 02 11:46:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:25.986991007 +0000 UTC m=+0.027007802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:26.084943208 +0000 UTC m=+0.124959983 container init 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:26.096163149 +0000 UTC m=+0.136179924 container start 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:26.102245367 +0000 UTC m=+0.142262142 container attach 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:46:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:26 compute-0 busy_mestorf[87167]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:46:26 compute-0 busy_mestorf[87167]: --> relative data size: 1.0
Oct 02 11:46:26 compute-0 busy_mestorf[87167]: --> All data devices are unavailable
Oct 02 11:46:26 compute-0 systemd[1]: libpod-1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181.scope: Deactivated successfully.
Oct 02 11:46:26 compute-0 podman[87150]: 2025-10-02 11:46:26.974290545 +0000 UTC m=+1.014307320 container died 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eef1c85a789984256c6b4e61fdbb353d4f291c78f3756e024b49ae63a27b0be-merged.mount: Deactivated successfully.
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 02 11:46:27 compute-0 ceph-mon[73668]: osdmap e18: 2 total, 2 up, 2 in
Oct 02 11:46:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:27 compute-0 podman[87150]: 2025-10-02 11:46:27.028841523 +0000 UTC m=+1.068858298 container remove 1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:27 compute-0 systemd[1]: libpod-conmon-1b76cb0a880afdf421f3968333aadf5804f62a4eae0dcb144a0a0a3647cdc181.scope: Deactivated successfully.
Oct 02 11:46:27 compute-0 sudo[87009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Oct 02 11:46:27 compute-0 upbeat_euclid[87049]: pool 'images' created
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Oct 02 11:46:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:27 compute-0 systemd[1]: libpod-91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9.scope: Deactivated successfully.
Oct 02 11:46:27 compute-0 podman[87027]: 2025-10-02 11:46:27.102657303 +0000 UTC m=+1.723261940 container died 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:46:27 compute-0 sudo[87196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:27 compute-0 sudo[87196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-54050fe778bfa757f46e4851cfbf2e98ed45c7e141902f2f6f273b2f2e6371b3-merged.mount: Deactivated successfully.
Oct 02 11:46:27 compute-0 sudo[87196]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:27 compute-0 podman[87027]: 2025-10-02 11:46:27.147639932 +0000 UTC m=+1.768244569 container remove 91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9 (image=quay.io/ceph/ceph:v18, name=upbeat_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:46:27 compute-0 systemd[1]: libpod-conmon-91cbbb402a532fcae94f52dfbce4418b5174c74c5c309bcd801345a76fae70a9.scope: Deactivated successfully.
Oct 02 11:46:27 compute-0 sudo[86960]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:27 compute-0 sudo[87235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:27 compute-0 sudo[87235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:27 compute-0 sudo[87235]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:27 compute-0 sudo[87260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:27 compute-0 sudo[87260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:27 compute-0 sudo[87260]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:27 compute-0 sudo[87307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxhrrirmmljgzsfzfuexkkoawzvvusjd ; /usr/bin/python3'
Oct 02 11:46:27 compute-0 sudo[87307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:27 compute-0 sudo[87310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:46:27 compute-0 sudo[87310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"} v 0) v1
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]': finished
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct 02 11:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:27 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:27 compute-0 python3[87311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:27 compute-0 podman[87336]: 2025-10-02 11:46:27.492750934 +0000 UTC m=+0.038805170 container create 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:27 compute-0 systemd[1]: Started libpod-conmon-8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34.scope.
Oct 02 11:46:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fcf42ab710449c542d8e1ff13ead981a6bd6138f192ea2dc52c26a36671da6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fcf42ab710449c542d8e1ff13ead981a6bd6138f192ea2dc52c26a36671da6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:27 compute-0 podman[87336]: 2025-10-02 11:46:27.475934557 +0000 UTC m=+0.021988793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:27 compute-0 podman[87336]: 2025-10-02 11:46:27.580823994 +0000 UTC m=+0.126878250 container init 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:46:27 compute-0 podman[87336]: 2025-10-02 11:46:27.588018191 +0000 UTC m=+0.134072427 container start 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:27 compute-0 podman[87336]: 2025-10-02 11:46:27.592403155 +0000 UTC m=+0.138457421 container attach 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.70762004 +0000 UTC m=+0.043360648 container create 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:46:27 compute-0 systemd[1]: Started libpod-conmon-08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e.scope.
Oct 02 11:46:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.782650331 +0000 UTC m=+0.118390989 container init 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.688315088 +0000 UTC m=+0.024055726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.789679454 +0000 UTC m=+0.125420082 container start 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:46:27 compute-0 funny_galois[87411]: 167 167
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.793060822 +0000 UTC m=+0.128801480 container attach 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:46:27 compute-0 systemd[1]: libpod-08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e.scope: Deactivated successfully.
Oct 02 11:46:27 compute-0 conmon[87411]: conmon 08a3c950796c7cded1de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e.scope/container/memory.events
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.794564341 +0000 UTC m=+0.130304959 container died 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c38cc1242d0480c3355f3d5f8b7f81e3129c8b3535f25b413640919b0369f1-merged.mount: Deactivated successfully.
Oct 02 11:46:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v73: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:27 compute-0 podman[87394]: 2025-10-02 11:46:27.83338931 +0000 UTC m=+0.169129918 container remove 08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:46:27 compute-0 systemd[1]: libpod-conmon-08a3c950796c7cded1dea9705636d0e71398eadf0a651e0531d5daa41f9a668e.scope: Deactivated successfully.
Oct 02 11:46:27 compute-0 podman[87452]: 2025-10-02 11:46:27.988546354 +0000 UTC m=+0.037688831 container create 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:28 compute-0 systemd[1]: Started libpod-conmon-299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99.scope.
Oct 02 11:46:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bed7c3d7d21e928324226e35840697da13d51304abfa5b132c8ebdb2b29d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bed7c3d7d21e928324226e35840697da13d51304abfa5b132c8ebdb2b29d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bed7c3d7d21e928324226e35840697da13d51304abfa5b132c8ebdb2b29d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bed7c3d7d21e928324226e35840697da13d51304abfa5b132c8ebdb2b29d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:28 compute-0 podman[87452]: 2025-10-02 11:46:28.067684241 +0000 UTC m=+0.116826718 container init 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:46:28 compute-0 podman[87452]: 2025-10-02 11:46:27.971121161 +0000 UTC m=+0.020263658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:28 compute-0 podman[87452]: 2025-10-02 11:46:28.07726104 +0000 UTC m=+0.126403527 container start 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:28 compute-0 podman[87452]: 2025-10-02 11:46:28.082706552 +0000 UTC m=+0.131849059 container attach 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:28 compute-0 ceph-mon[73668]: osdmap e19: 2 total, 2 up, 2 in
Oct 02 11:46:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3177347678' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mon[73668]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mon[73668]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]': finished
Oct 02 11:46:28 compute-0 ceph-mon[73668]: osdmap e20: 3 total, 2 up, 3 in
Oct 02 11:46:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mon[73668]: pgmap v73: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:46:28
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct 02 11:46:28 compute-0 focused_jackson[87376]: pool 'cephfs.cephfs.meta' created
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:28 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev dfdfe753-ecce-4dd4-afa8-ce54c425842a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 11:46:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:28 compute-0 systemd[1]: libpod-8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34.scope: Deactivated successfully.
Oct 02 11:46:28 compute-0 podman[87336]: 2025-10-02 11:46:28.561497589 +0000 UTC m=+1.107551825 container died 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fcf42ab710449c542d8e1ff13ead981a6bd6138f192ea2dc52c26a36671da6d-merged.mount: Deactivated successfully.
Oct 02 11:46:28 compute-0 podman[87336]: 2025-10-02 11:46:28.610316469 +0000 UTC m=+1.156370705 container remove 8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34 (image=quay.io/ceph/ceph:v18, name=focused_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:46:28 compute-0 systemd[1]: libpod-conmon-8852b1edf9a2f71bcd9144f3ecafc39bfa5f55c5e56b5d058d4f5f15edec7c34.scope: Deactivated successfully.
Oct 02 11:46:28 compute-0 sudo[87307]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:28 compute-0 sudo[87514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twbunzkcxztpivisxheasogumcrqxlbm ; /usr/bin/python3'
Oct 02 11:46:28 compute-0 sudo[87514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:28 compute-0 python3[87516]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:28 compute-0 laughing_nobel[87468]: {
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:     "1": [
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:         {
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "devices": [
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "/dev/loop3"
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             ],
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "lv_name": "ceph_lv0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "lv_size": "7511998464",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "name": "ceph_lv0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "tags": {
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.cluster_name": "ceph",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.crush_device_class": "",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.encrypted": "0",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.osd_id": "1",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.type": "block",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:                 "ceph.vdo": "0"
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             },
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "type": "block",
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:             "vg_name": "ceph_vg0"
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:         }
Oct 02 11:46:28 compute-0 laughing_nobel[87468]:     ]
Oct 02 11:46:28 compute-0 laughing_nobel[87468]: }
Oct 02 11:46:28 compute-0 systemd[1]: libpod-299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99.scope: Deactivated successfully.
Oct 02 11:46:28 compute-0 podman[87452]: 2025-10-02 11:46:28.960859042 +0000 UTC m=+1.010001519 container died 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:28 compute-0 podman[87521]: 2025-10-02 11:46:28.972147855 +0000 UTC m=+0.048072981 container create 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:29 compute-0 systemd[1]: Started libpod-conmon-221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8.scope.
Oct 02 11:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-587bed7c3d7d21e928324226e35840697da13d51304abfa5b132c8ebdb2b29d2-merged.mount: Deactivated successfully.
Oct 02 11:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1bd227c5005d70677f87028baf9ac708e4e6f25ce85e6f9d36c158d2f6f0f3e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1bd227c5005d70677f87028baf9ac708e4e6f25ce85e6f9d36c158d2f6f0f3e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:29 compute-0 podman[87521]: 2025-10-02 11:46:28.949889517 +0000 UTC m=+0.025814673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:29 compute-0 podman[87521]: 2025-10-02 11:46:29.046595871 +0000 UTC m=+0.122521017 container init 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:46:29 compute-0 podman[87452]: 2025-10-02 11:46:29.050461991 +0000 UTC m=+1.099604498 container remove 299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:29 compute-0 podman[87521]: 2025-10-02 11:46:29.054584908 +0000 UTC m=+0.130510044 container start 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:29 compute-0 podman[87521]: 2025-10-02 11:46:29.057567876 +0000 UTC m=+0.133493012 container attach 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:46:29 compute-0 systemd[1]: libpod-conmon-299d82fa6a47c7562c9ed9272269a2b9e0a93ef938580b91bb6a4f8670dc6e99.scope: Deactivated successfully.
Oct 02 11:46:29 compute-0 sudo[87310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:29 compute-0 sudo[87550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:29 compute-0 sudo[87550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:29 compute-0 sudo[87550]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:29 compute-0 sudo[87575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:29 compute-0 sudo[87575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:29 compute-0 sudo[87575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:29 compute-0 sudo[87600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:29 compute-0 sudo[87600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:29 compute-0 sudo[87600]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:29 compute-0 sudo[87625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:46:29 compute-0 sudo[87625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3491711820' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:29 compute-0 ceph-mon[73668]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:29 compute-0 ceph-mon[73668]: osdmap e21: 3 total, 2 up, 3 in
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 11:46:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.696439345 +0000 UTC m=+0.056981922 container create d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:29 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 5 completed events
Oct 02 11:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:46:29 compute-0 systemd[1]: Started libpod-conmon-d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09.scope.
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.66201192 +0000 UTC m=+0.022554507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.777445871 +0000 UTC m=+0.137988458 container init d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.78318629 +0000 UTC m=+0.143728887 container start d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.786716742 +0000 UTC m=+0.147259329 container attach d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:46:29 compute-0 priceless_elgamal[87724]: 167 167
Oct 02 11:46:29 compute-0 systemd[1]: libpod-d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09.scope: Deactivated successfully.
Oct 02 11:46:29 compute-0 conmon[87724]: conmon d5b52be5e18226093620 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09.scope/container/memory.events
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.789968357 +0000 UTC m=+0.150510924 container died d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b19b69e60b7e808ba2f332302827b9b2d8e81ef300d5f88780d6b197823d051-merged.mount: Deactivated successfully.
Oct 02 11:46:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v75: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:29 compute-0 podman[87707]: 2025-10-02 11:46:29.863151269 +0000 UTC m=+0.223693846 container remove d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:29 compute-0 systemd[1]: libpod-conmon-d5b52be5e18226093620139afd747567f7879679e3543b719cd82724eb6fff09.scope: Deactivated successfully.
Oct 02 11:46:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct 02 11:46:30 compute-0 podman[87746]: 2025-10-02 11:46:30.088016555 +0000 UTC m=+0.104169629 container create aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:46:30 compute-0 podman[87746]: 2025-10-02 11:46:30.008143289 +0000 UTC m=+0.024296393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct 02 11:46:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:30 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:30 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 391e059e-0062-436c-bb15-776fa96c7552 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 11:46:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:46:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:30 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:30 compute-0 systemd[1]: Started libpod-conmon-aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4.scope.
Oct 02 11:46:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02217ca9315d1f57a03a6b5eef9c90d509009947612d73d4a46ad1627f19c06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02217ca9315d1f57a03a6b5eef9c90d509009947612d73d4a46ad1627f19c06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02217ca9315d1f57a03a6b5eef9c90d509009947612d73d4a46ad1627f19c06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02217ca9315d1f57a03a6b5eef9c90d509009947612d73d4a46ad1627f19c06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:30 compute-0 podman[87746]: 2025-10-02 11:46:30.271519876 +0000 UTC m=+0.287672960 container init aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:46:30 compute-0 podman[87746]: 2025-10-02 11:46:30.280871739 +0000 UTC m=+0.297024803 container start aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:46:30 compute-0 podman[87746]: 2025-10-02 11:46:30.284021211 +0000 UTC m=+0.300174295 container attach aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:46:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 02 11:46:31 compute-0 hungry_goodall[87765]: {
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:         "osd_id": 1,
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:         "type": "bluestore"
Oct 02 11:46:31 compute-0 hungry_goodall[87765]:     }
Oct 02 11:46:31 compute-0 hungry_goodall[87765]: }
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:31 compute-0 ceph-mon[73668]: pgmap v75: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:31 compute-0 ceph-mon[73668]: osdmap e22: 3 total, 2 up, 3 in
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:31 compute-0 systemd[1]: libpod-aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4.scope: Deactivated successfully.
Oct 02 11:46:31 compute-0 podman[87746]: 2025-10-02 11:46:31.153377722 +0000 UTC m=+1.169530796 container died aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a02217ca9315d1f57a03a6b5eef9c90d509009947612d73d4a46ad1627f19c06-merged.mount: Deactivated successfully.
Oct 02 11:46:31 compute-0 podman[87746]: 2025-10-02 11:46:31.204772908 +0000 UTC m=+1.220925972 container remove aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:31 compute-0 systemd[1]: libpod-conmon-aadeff087184eed409b6a1c94bfe4a473598148318b61fc7918955d11bfb18e4.scope: Deactivated successfully.
Oct 02 11:46:31 compute-0 sudo[87625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:46:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v77: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct 02 11:46:32 compute-0 magical_lovelace[87546]: pool 'cephfs.cephfs.data' created
Oct 02 11:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:32 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:32 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 896ef9b6-12cb-4f34-8e71-c51230e5ebb9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 11:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:32 compute-0 systemd[1]: libpod-221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8.scope: Deactivated successfully.
Oct 02 11:46:32 compute-0 podman[87521]: 2025-10-02 11:46:32.164756305 +0000 UTC m=+3.240681441 container died 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1bd227c5005d70677f87028baf9ac708e4e6f25ce85e6f9d36c158d2f6f0f3e-merged.mount: Deactivated successfully.
Oct 02 11:46:32 compute-0 podman[87521]: 2025-10-02 11:46:32.321384837 +0000 UTC m=+3.397309973 container remove 221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8 (image=quay.io/ceph/ceph:v18, name=magical_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:32 compute-0 systemd[1]: libpod-conmon-221c5f338c174abbf1d3ab6ec998b9237f8b1c848ec8b25706653688c1a457a8.scope: Deactivated successfully.
Oct 02 11:46:32 compute-0 sudo[87514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:32 compute-0 sudo[87834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmmirsgtlzgdkbsqeasdxekheewtzik ; /usr/bin/python3'
Oct 02 11:46:32 compute-0 sudo[87834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:32 compute-0 ceph-mon[73668]: pgmap v77: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:32 compute-0 python3[87836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:32 compute-0 podman[87837]: 2025-10-02 11:46:32.70383613 +0000 UTC m=+0.025320120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:32 compute-0 podman[87837]: 2025-10-02 11:46:32.877574497 +0000 UTC m=+0.199058487 container create e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:32 compute-0 systemd[1]: Started libpod-conmon-e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c.scope.
Oct 02 11:46:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365071fa93c55cd6bc18b0d1c10c977ec368c4d18aff2e023a3ce266fd50bc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f365071fa93c55cd6bc18b0d1c10c977ec368c4d18aff2e023a3ce266fd50bc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:32 compute-0 podman[87837]: 2025-10-02 11:46:32.965123623 +0000 UTC m=+0.286607613 container init e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:32 compute-0 podman[87837]: 2025-10-02 11:46:32.9715713 +0000 UTC m=+0.293055270 container start e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:46:32 compute-0 podman[87837]: 2025-10-02 11:46:32.975809901 +0000 UTC m=+0.297293891 container attach e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 74f79c3d-8fc8-41d5-8782-1671e921f331 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev dfdfe753-ecce-4dd4-afa8-ce54c425842a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event dfdfe753-ecce-4dd4-afa8-ce54c425842a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 391e059e-0062-436c-bb15-776fa96c7552 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 391e059e-0062-436c-bb15-776fa96c7552 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 896ef9b6-12cb-4f34-8e71-c51230e5ebb9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 896ef9b6-12cb-4f34-8e71-c51230e5ebb9 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 74f79c3d-8fc8-41d5-8782-1671e921f331 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 74f79c3d-8fc8-41d5-8782-1671e921f331 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: osdmap e23: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:33 compute-0 ceph-mon[73668]: osdmap e24: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v80: 69 pgs: 63 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct 02 11:46:34 compute-0 affectionate_moore[87852]: enabled application 'rbd' on pool 'vms'
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct 02 11:46:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:34 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:34 compute-0 systemd[1]: libpod-e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c.scope: Deactivated successfully.
Oct 02 11:46:34 compute-0 podman[87837]: 2025-10-02 11:46:34.205876909 +0000 UTC m=+1.527360889 container died e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f365071fa93c55cd6bc18b0d1c10c977ec368c4d18aff2e023a3ce266fd50bc2-merged.mount: Deactivated successfully.
Oct 02 11:46:34 compute-0 systemd[75332]: Starting Mark boot as successful...
Oct 02 11:46:34 compute-0 systemd[75332]: Finished Mark boot as successful.
Oct 02 11:46:34 compute-0 podman[87837]: 2025-10-02 11:46:34.261051913 +0000 UTC m=+1.582535883 container remove e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c (image=quay.io/ceph/ceph:v18, name=affectionate_moore, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:34 compute-0 systemd[1]: libpod-conmon-e5581531a57027941b63f2ce229b5b02031556e8d69c25ce0667835c8fa7270c.scope: Deactivated successfully.
Oct 02 11:46:34 compute-0 sudo[87834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:34 compute-0 sudo[87914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykocmovxtmddxdsmdvgfxatcvfhbcrqu ; /usr/bin/python3'
Oct 02 11:46:34 compute-0 sudo[87914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:34 compute-0 python3[87916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:34 compute-0 ceph-mon[73668]: pgmap v80: 69 pgs: 63 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-0 ceph-mon[73668]: osdmap e25: 3 total, 2 up, 3 in
Oct 02 11:46:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:34 compute-0 ceph-mon[73668]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:34 compute-0 podman[87917]: 2025-10-02 11:46:34.619541333 +0000 UTC m=+0.049119318 container create 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:46:34 compute-0 systemd[1]: Started libpod-conmon-014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39.scope.
Oct 02 11:46:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0614c7c9ff1be572b4031674e207cd11a7f18184fd163c74cc52434c25e9c086/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0614c7c9ff1be572b4031674e207cd11a7f18184fd163c74cc52434c25e9c086/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:34 compute-0 podman[87917]: 2025-10-02 11:46:34.600638612 +0000 UTC m=+0.030216617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:34 compute-0 podman[87917]: 2025-10-02 11:46:34.696453243 +0000 UTC m=+0.126031248 container init 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:46:34 compute-0 podman[87917]: 2025-10-02 11:46:34.704124742 +0000 UTC m=+0.133702717 container start 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:34 compute-0 podman[87917]: 2025-10-02 11:46:34.707150471 +0000 UTC m=+0.136728476 container attach 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 02 11:46:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24 pruub=12.524834633s) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active pruub 68.384582520s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=8.147358894s) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active pruub 64.007125854s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=14.751409531s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active pruub 70.611312866s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=8.147358894s) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown pruub 64.007125854s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=14.751409531s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown pruub 70.611312866s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24 pruub=12.524834633s) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown pruub 68.384582520s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.b( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.c( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.a( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.9( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.2( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.11( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.12( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.13( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.14( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.15( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.16( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1d( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1e( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.17( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.18( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.d( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.e( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.f( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.3( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.4( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.10( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.5( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.6( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.7( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.8( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1b( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1c( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1f( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.19( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 25 pg[3.1a( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 9 completed events
Oct 02 11:46:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:46:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 02 11:46:35 compute-0 ceph-mon[73668]: 2.1 scrub starts
Oct 02 11:46:35 compute-0 ceph-mon[73668]: 2.1 scrub ok
Oct 02 11:46:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 11:46:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 11:46:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct 02 11:46:35 compute-0 upbeat_kowalevski[87932]: enabled application 'rbd' on pool 'volumes'
Oct 02 11:46:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct 02 11:46:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:35 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.11( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.10( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.10( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.11( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.13( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.12( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.12( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.15( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.14( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.14( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.13( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.16( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.15( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.16( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.17( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.9( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.17( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.8( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.8( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.9( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.6( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.7( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.3( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.7( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.2( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.6( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.4( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.5( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.5( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.4( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.2( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.3( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.18( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.19( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.19( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.18( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.18( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.17( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.19( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.16( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.15( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.13( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.11( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.14( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.10( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.12( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.17( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.f( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.e( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.16( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.17( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.6( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.0( empty local-lis/les=24/26 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.6( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.7( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.5( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.2( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.3( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.4( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.8( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.9( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1e( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[3.1f( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [1] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 26 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:35 compute-0 systemd[1]: libpod-014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39.scope: Deactivated successfully.
Oct 02 11:46:35 compute-0 podman[87917]: 2025-10-02 11:46:35.647414835 +0000 UTC m=+1.076992830 container died 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0614c7c9ff1be572b4031674e207cd11a7f18184fd163c74cc52434c25e9c086-merged.mount: Deactivated successfully.
Oct 02 11:46:35 compute-0 podman[87917]: 2025-10-02 11:46:35.685185477 +0000 UTC m=+1.114763452 container remove 014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39 (image=quay.io/ceph/ceph:v18, name=upbeat_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:35 compute-0 systemd[1]: libpod-conmon-014e258953e78f2cf57abc51e32ca2d877f2522e0b27708e16ef75932e887b39.scope: Deactivated successfully.
Oct 02 11:46:35 compute-0 sudo[87914]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v83: 131 pgs: 34 peering, 94 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:35 compute-0 sudo[87991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftunmdbjcfripvdklllajpuzkplplwv ; /usr/bin/python3'
Oct 02 11:46:35 compute-0 sudo[87991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:36 compute-0 python3[87993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:36 compute-0 podman[87994]: 2025-10-02 11:46:36.073725659 +0000 UTC m=+0.045371251 container create 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:36 compute-0 systemd[1]: Started libpod-conmon-6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec.scope.
Oct 02 11:46:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13da32aabeea2304b65eacea0ac19d755820d99cabe8f67454903d6a219b46b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13da32aabeea2304b65eacea0ac19d755820d99cabe8f67454903d6a219b46b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:36 compute-0 podman[87994]: 2025-10-02 11:46:36.053444801 +0000 UTC m=+0.025090423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:36 compute-0 podman[87994]: 2025-10-02 11:46:36.15839792 +0000 UTC m=+0.130043532 container init 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:36 compute-0 podman[87994]: 2025-10-02 11:46:36.164589691 +0000 UTC m=+0.136235283 container start 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:36 compute-0 podman[87994]: 2025-10-02 11:46:36.167688881 +0000 UTC m=+0.139334473 container attach 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:46:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 02 11:46:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 02 11:46:36 compute-0 ceph-mon[73668]: 2.2 scrub starts
Oct 02 11:46:36 compute-0 ceph-mon[73668]: 2.2 scrub ok
Oct 02 11:46:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 11:46:36 compute-0 ceph-mon[73668]: osdmap e26: 3 total, 2 up, 3 in
Oct 02 11:46:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:36 compute-0 ceph-mon[73668]: pgmap v83: 131 pgs: 34 peering, 94 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 02 11:46:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 11:46:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 02 11:46:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 11:46:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:36 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct 02 11:46:36 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct 02 11:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 02 11:46:37 compute-0 ceph-mon[73668]: 5.1 scrub starts
Oct 02 11:46:37 compute-0 ceph-mon[73668]: 5.1 scrub ok
Oct 02 11:46:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 11:46:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 11:46:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:37 compute-0 ceph-mon[73668]: Deploying daemon osd.2 on compute-2
Oct 02 11:46:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 11:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct 02 11:46:37 compute-0 brave_haslett[88009]: enabled application 'rbd' on pool 'backups'
Oct 02 11:46:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct 02 11:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:37 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:37 compute-0 systemd[1]: libpod-6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec.scope: Deactivated successfully.
Oct 02 11:46:37 compute-0 podman[87994]: 2025-10-02 11:46:37.691734092 +0000 UTC m=+1.663379694 container died 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a13da32aabeea2304b65eacea0ac19d755820d99cabe8f67454903d6a219b46b-merged.mount: Deactivated successfully.
Oct 02 11:46:37 compute-0 podman[87994]: 2025-10-02 11:46:37.745755927 +0000 UTC m=+1.717401519 container remove 6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:46:37 compute-0 systemd[1]: libpod-conmon-6928c157b6e75181aab78ea4317233e75f133fab5c2dd59f660ebc8efa99f8ec.scope: Deactivated successfully.
Oct 02 11:46:37 compute-0 sudo[87991]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:37 compute-0 sudo[88071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjyiwruzcbnnauqjrdlhgiltjcphrxq ; /usr/bin/python3'
Oct 02 11:46:37 compute-0 sudo[88071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:38 compute-0 python3[88073]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:38 compute-0 podman[88074]: 2025-10-02 11:46:38.10590203 +0000 UTC m=+0.047149767 container create 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:46:38 compute-0 systemd[1]: Started libpod-conmon-57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e.scope.
Oct 02 11:46:38 compute-0 podman[88074]: 2025-10-02 11:46:38.08398672 +0000 UTC m=+0.025234467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd72b2856595f3ffb707581f5b4ef5953f485f656f5c8e2161fbfef244b652f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd72b2856595f3ffb707581f5b4ef5953f485f656f5c8e2161fbfef244b652f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:38 compute-0 podman[88074]: 2025-10-02 11:46:38.195632413 +0000 UTC m=+0.136880170 container init 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:46:38 compute-0 podman[88074]: 2025-10-02 11:46:38.202986094 +0000 UTC m=+0.144233831 container start 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:46:38 compute-0 podman[88074]: 2025-10-02 11:46:38.206133476 +0000 UTC m=+0.147381233 container attach 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:38 compute-0 ceph-mon[73668]: 2.3 scrub starts
Oct 02 11:46:38 compute-0 ceph-mon[73668]: 2.3 scrub ok
Oct 02 11:46:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 11:46:38 compute-0 ceph-mon[73668]: osdmap e27: 3 total, 2 up, 3 in
Oct 02 11:46:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:38 compute-0 ceph-mon[73668]: pgmap v85: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 02 11:46:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 11:46:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 02 11:46:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 02 11:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 02 11:46:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 11:46:39 compute-0 ceph-mon[73668]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 11:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct 02 11:46:39 compute-0 priceless_jennings[88089]: enabled application 'rbd' on pool 'images'
Oct 02 11:46:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct 02 11:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:39 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:39 compute-0 systemd[1]: libpod-57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e.scope: Deactivated successfully.
Oct 02 11:46:39 compute-0 podman[88074]: 2025-10-02 11:46:39.746245695 +0000 UTC m=+1.687493432 container died 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd72b2856595f3ffb707581f5b4ef5953f485f656f5c8e2161fbfef244b652f-merged.mount: Deactivated successfully.
Oct 02 11:46:39 compute-0 podman[88074]: 2025-10-02 11:46:39.79413115 +0000 UTC m=+1.735378887 container remove 57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e (image=quay.io/ceph/ceph:v18, name=priceless_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:39 compute-0 systemd[1]: libpod-conmon-57b5f5689a6e248cf18ab4e91e8d5eb5bec554091268dfb91dbbd056d1e8b18e.scope: Deactivated successfully.
Oct 02 11:46:39 compute-0 sudo[88071]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:39 compute-0 sudo[88152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcfqbgfrrxqvvnmffyadadiczjjcbuxb ; /usr/bin/python3'
Oct 02 11:46:39 compute-0 sudo[88152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:40 compute-0 python3[88154]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.152285921 +0000 UTC m=+0.043859611 container create 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:46:40 compute-0 systemd[1]: Started libpod-conmon-4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315.scope.
Oct 02 11:46:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be00d97c9fcb224d74fd54a5ff16275270da391bb2b0daf176bc312098cd2a64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be00d97c9fcb224d74fd54a5ff16275270da391bb2b0daf176bc312098cd2a64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.131124471 +0000 UTC m=+0.022698191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.228086332 +0000 UTC m=+0.119660022 container init 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.235118785 +0000 UTC m=+0.126692475 container start 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.238525304 +0000 UTC m=+0.130099084 container attach 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:46:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 02 11:46:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 02 11:46:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 02 11:46:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 11:46:40 compute-0 ceph-mon[73668]: 2.4 scrub starts
Oct 02 11:46:40 compute-0 ceph-mon[73668]: 2.4 scrub ok
Oct 02 11:46:40 compute-0 ceph-mon[73668]: 3.1 scrub starts
Oct 02 11:46:40 compute-0 ceph-mon[73668]: 3.1 scrub ok
Oct 02 11:46:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 11:46:40 compute-0 ceph-mon[73668]: osdmap e28: 3 total, 2 up, 3 in
Oct 02 11:46:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:40 compute-0 ceph-mon[73668]: pgmap v87: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 02 11:46:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 11:46:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct 02 11:46:40 compute-0 suspicious_wescoff[88171]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 02 11:46:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct 02 11:46:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:40 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:40 compute-0 systemd[1]: libpod-4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315.scope: Deactivated successfully.
Oct 02 11:46:40 compute-0 conmon[88171]: conmon 4b50116e9c7cd4083601 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315.scope/container/memory.events
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.894135719 +0000 UTC m=+0.785709409 container died 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-be00d97c9fcb224d74fd54a5ff16275270da391bb2b0daf176bc312098cd2a64-merged.mount: Deactivated successfully.
Oct 02 11:46:40 compute-0 podman[88155]: 2025-10-02 11:46:40.937857545 +0000 UTC m=+0.829431225 container remove 4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315 (image=quay.io/ceph/ceph:v18, name=suspicious_wescoff, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:46:40 compute-0 systemd[1]: libpod-conmon-4b50116e9c7cd4083601249b0a1ed36671b36fb6e626830d027c894816bcb315.scope: Deactivated successfully.
Oct 02 11:46:40 compute-0 sudo[88152]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:41 compute-0 sudo[88231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-optpwrgidjlnildzbtawzeqdbyzelmzb ; /usr/bin/python3'
Oct 02 11:46:41 compute-0 sudo[88231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:41 compute-0 python3[88233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:41 compute-0 podman[88234]: 2025-10-02 11:46:41.256703003 +0000 UTC m=+0.020407531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:41 compute-0 podman[88234]: 2025-10-02 11:46:41.42964375 +0000 UTC m=+0.193348248 container create 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:41 compute-0 systemd[1]: Started libpod-conmon-7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240.scope.
Oct 02 11:46:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0547eba7533f181b602a9b68e8eba4917960794e29dc947d77f1937d585c905c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0547eba7533f181b602a9b68e8eba4917960794e29dc947d77f1937d585c905c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:41 compute-0 podman[88234]: 2025-10-02 11:46:41.552710899 +0000 UTC m=+0.316415407 container init 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:41 compute-0 podman[88234]: 2025-10-02 11:46:41.558425548 +0000 UTC m=+0.322130046 container start 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:41 compute-0 podman[88234]: 2025-10-02 11:46:41.561757154 +0000 UTC m=+0.325461652 container attach 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v89: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:46:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: 5.2 scrub starts
Oct 02 11:46:41 compute-0 ceph-mon[73668]: 5.2 scrub ok
Oct 02 11:46:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 11:46:41 compute-0 ceph-mon[73668]: osdmap e29: 3 total, 2 up, 3 in
Oct 02 11:46:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct 02 11:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571560860s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.229339600s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.576065063s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233856201s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571714401s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.229522705s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.571759224s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.229614258s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.576013565s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233856201s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571668625s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.229522705s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571484566s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.229339600s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.571733475s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.229614258s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571722984s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.229629517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.571672440s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.229629517s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575676918s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233741760s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575844765s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233909607s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.571585655s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.229660034s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575737953s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233840942s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575816154s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233909607s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.571561813s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.229660034s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575588226s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233741760s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575616837s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233863831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575595856s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233863831s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575694084s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233978271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575584412s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233909607s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575672150s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233978271s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575566292s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233909607s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575549126s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233940125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575500488s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.233917236s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575590134s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234016418s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575511932s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233940125s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575608253s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234054565s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575576782s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234016418s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575473785s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233917236s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575589180s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234054565s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575500488s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234062195s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575480461s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234062195s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575441360s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234062195s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575448990s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234100342s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575417519s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234062195s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575427055s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234100342s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575422287s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234130859s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575399399s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234130859s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575480461s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234260559s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575459480s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234260559s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575300217s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234085083s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575534821s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234413147s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575344086s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234237671s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575510979s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234413147s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575209618s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234085083s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575379372s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234313965s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575300217s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234237671s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575357437s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234313965s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.574999809s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.233840942s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575296402s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234382629s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575416565s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234519958s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575339317s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234466553s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575395584s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234519958s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575934410s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.235061646s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575248718s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234382629s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575317383s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234466553s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575896263s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.235061646s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575362206s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234611511s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575362206s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234642029s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575341225s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234611511s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575400352s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234687805s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575425148s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234718323s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575343132s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234642029s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575650215s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.234992981s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575384140s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234718323s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575627327s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234992981s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575358391s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.234687805s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.579926491s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239364624s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575585365s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.235023499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575695992s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.235145569s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.579878807s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239364624s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.575675011s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.235145569s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580554008s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.240051270s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.575541496s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.235023499s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580533028s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.240051270s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.579787254s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239425659s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580031395s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239700317s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.579732895s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239425659s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580212593s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239868164s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580000877s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239700317s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.579936028s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239669800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580128670s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239868164s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.579912186s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239669800s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580106735s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239921570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.579920769s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239768982s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580063820s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239921570s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.579897881s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239768982s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580101967s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239982605s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580080032s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239982605s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580157280s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 72.240135193s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30 pruub=9.580128670s) [0] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.240135193s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.580166817s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 72.239822388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30 pruub=9.579339981s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.239822388s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kvxdhw started
Oct 02 11:46:42 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mgr.compute-2.kvxdhw 192.168.122.102:0/1572661950; not ready for session (expect reconnect)
Oct 02 11:46:42 compute-0 ceph-mon[73668]: pgmap v89: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: 2.5 scrub starts
Oct 02 11:46:42 compute-0 ceph-mon[73668]: 2.5 scrub ok
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-0 ceph-mon[73668]: osdmap e30: 3 total, 2 up, 3 in
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:46:42 compute-0 ceph-mon[73668]: Standby manager daemon compute-2.kvxdhw started
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct 02 11:46:43 compute-0 boring_gagarin[88249]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:43 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.19( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.e( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.1( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.6( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.4( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.9( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.1f( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.1e( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30) [1] r=0 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:43 compute-0 systemd[1]: libpod-7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240.scope: Deactivated successfully.
Oct 02 11:46:43 compute-0 podman[88234]: 2025-10-02 11:46:43.12131538 +0000 UTC m=+1.885019878 container died 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0547eba7533f181b602a9b68e8eba4917960794e29dc947d77f1937d585c905c-merged.mount: Deactivated successfully.
Oct 02 11:46:43 compute-0 podman[88234]: 2025-10-02 11:46:43.173299961 +0000 UTC m=+1.937004459 container remove 7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240 (image=quay.io/ceph/ceph:v18, name=boring_gagarin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kvxdhw", "id": "compute-2.kvxdhw"} v 0) v1
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kvxdhw", "id": "compute-2.kvxdhw"}]: dispatch
Oct 02 11:46:43 compute-0 systemd[1]: libpod-conmon-7fb15cfdbcefa3b318bb2434a6613e4076454864cbf31dc21d3c723c25bd7240.scope: Deactivated successfully.
Oct 02 11:46:43 compute-0 sudo[88231]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:43 compute-0 sudo[88285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-0 sudo[88285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-0 sudo[88285]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-0 sudo[88310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:46:43 compute-0 sudo[88310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-0 sudo[88310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-0 sudo[88335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-0 sudo[88335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-0 sudo[88335]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:43 compute-0 sudo[88360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:43 compute-0 sudo[88360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-0 sudo[88360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-0 sudo[88411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-0 sudo[88411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-0 sudo[88411]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-0 sudo[88462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:46:44 compute-0 sudo[88462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.wtokkj started
Oct 02 11:46:44 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from mgr.compute-1.wtokkj 192.168.122.101:0/783344217; not ready for session (expect reconnect)
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 02 11:46:44 compute-0 python3[88508]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mon[73668]: osdmap e31: 3 total, 2 up, 3 in
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mgrmap e10: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kvxdhw", "id": "compute-2.kvxdhw"}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:44 compute-0 ceph-mon[73668]: pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:44 compute-0 ceph-mon[73668]: Standby manager daemon compute-1.wtokkj started
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:44 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/804192295; not ready for session (expect reconnect)
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:44 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw, compute-1.wtokkj
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.wtokkj", "id": "compute-1.wtokkj"} v 0) v1
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.wtokkj", "id": "compute-1.wtokkj"}]: dispatch
Oct 02 11:46:44 compute-0 sudo[88462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-0 python3[88597]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405603.8892927-33814-137340737240003/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:46:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 02 11:46:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:46:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 sudo[88709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzsbvrkophbftjapzhadponynodwwiya ; /usr/bin/python3'
Oct 02 11:46:45 compute-0 sudo[88709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:45 compute-0 python3[88711]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:46:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:45 compute-0 sudo[88709]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/804192295; not ready for session (expect reconnect)
Oct 02 11:46:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:45 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:45 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event da2100e0-4d07-4178-b193-c1620738a9b2 (Global Recovery Event) in 21 seconds
Oct 02 11:46:45 compute-0 sudo[88784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqbaoaxhdhwonvpiljgpkrhwkcjnvwv ; /usr/bin/python3'
Oct 02 11:46:45 compute-0 sudo[88784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:45 compute-0 ceph-mon[73668]: 2.7 scrub starts
Oct 02 11:46:45 compute-0 ceph-mon[73668]: 2.7 scrub ok
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct 02 11:46:45 compute-0 ceph-mon[73668]: osdmap e32: 3 total, 2 up, 3 in
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:45 compute-0 ceph-mon[73668]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:46:45 compute-0 ceph-mon[73668]: Cluster is now healthy
Oct 02 11:46:45 compute-0 ceph-mon[73668]: mgrmap e11: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw, compute-1.wtokkj
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.wtokkj", "id": "compute-1.wtokkj"}]: dispatch
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-0 python3[88786]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405604.850976-33828-102200723999879/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d515765f329163f16c287ac6efa9a381ffd45ddc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:46:45 compute-0 sudo[88784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:45 compute-0 sudo[88834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rupifkyzrjlrakyrzvkdhshbkavlmzns ; /usr/bin/python3'
Oct 02 11:46:45 compute-0 sudo[88834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:46 compute-0 python3[88836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:46 compute-0 podman[88837]: 2025-10-02 11:46:46.081794355 +0000 UTC m=+0.024488868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:46 compute-0 podman[88837]: 2025-10-02 11:46:46.25545906 +0000 UTC m=+0.198153553 container create d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:46 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/804192295; not ready for session (expect reconnect)
Oct 02 11:46:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:46 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:46 compute-0 systemd[1]: Started libpod-conmon-d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784.scope.
Oct 02 11:46:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085d420981ae62985a8c92cb54a80526b2755a7d6dce168801e349f639d6e501/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085d420981ae62985a8c92cb54a80526b2755a7d6dce168801e349f639d6e501/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085d420981ae62985a8c92cb54a80526b2755a7d6dce168801e349f639d6e501/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:47 compute-0 podman[88837]: 2025-10-02 11:46:47.003584869 +0000 UTC m=+0.946279362 container init d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:46:47 compute-0 podman[88837]: 2025-10-02 11:46:47.009949535 +0000 UTC m=+0.952644038 container start d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:46:47 compute-0 ceph-mon[73668]: purged_snaps scrub starts
Oct 02 11:46:47 compute-0 ceph-mon[73668]: purged_snaps scrub ok
Oct 02 11:46:47 compute-0 ceph-mon[73668]: 3.2 scrub starts
Oct 02 11:46:47 compute-0 ceph-mon[73668]: 3.2 scrub ok
Oct 02 11:46:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:47 compute-0 ceph-mon[73668]: pgmap v94: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:47 compute-0 podman[88837]: 2025-10-02 11:46:47.237457449 +0000 UTC m=+1.180151952 container attach d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.846186638s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702461243s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377553940s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.233856201s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.846186638s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702461243s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377582550s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.233871460s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377536774s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.233993530s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.846028328s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702499390s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377536774s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233993530s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377582550s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233871460s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.846028328s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702499390s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843848228s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702507019s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375788689s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234580994s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843848228s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702507019s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375618935s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234413147s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375788689s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234580994s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375618935s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234413147s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843653679s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702590942s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.377553940s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233856201s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843653679s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702590942s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843715668s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702720642s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843955994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702697754s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843715668s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702720642s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843955994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702697754s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=24/26 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.375552177s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234687805s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=24/26 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.375552177s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234687805s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375603676s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234809875s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375603676s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234809875s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375626564s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234909058s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375624657s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234924316s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375624657s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234924316s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375815392s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.235214233s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375815392s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235214233s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843441010s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.702903748s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.375570297s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 80.235038757s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375626564s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234909058s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=11.843441010s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702903748s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375694275s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.235206604s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375694275s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235206604s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.380111694s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.239723206s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.380536079s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.240150452s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375434875s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 80.234603882s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.380536079s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.240150452s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.379982948s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 80.239593506s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.375570297s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235038757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=12.379982948s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239593506s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.375434875s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234603882s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=12.380111694s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239723206s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:47 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/804192295; not ready for session (expect reconnect)
Oct 02 11:46:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:47 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 11:46:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:46:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:46:47 compute-0 friendly_heisenberg[88852]: 
Oct 02 11:46:47 compute-0 friendly_heisenberg[88852]: [global]
Oct 02 11:46:47 compute-0 friendly_heisenberg[88852]:         fsid = 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:47 compute-0 friendly_heisenberg[88852]:         mon_host = 192.168.122.100
Oct 02 11:46:47 compute-0 systemd[1]: libpod-d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784.scope: Deactivated successfully.
Oct 02 11:46:47 compute-0 podman[88837]: 2025-10-02 11:46:47.623744122 +0000 UTC m=+1.566438615 container died d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-085d420981ae62985a8c92cb54a80526b2755a7d6dce168801e349f639d6e501-merged.mount: Deactivated successfully.
Oct 02 11:46:47 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 02 11:46:47 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 02 11:46:47 compute-0 podman[88837]: 2025-10-02 11:46:47.741044912 +0000 UTC m=+1.683739405 container remove d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784 (image=quay.io/ceph/ceph:v18, name=friendly_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:47 compute-0 systemd[1]: libpod-conmon-d0a2cae06571e95cda0b42af510adefed86eb5a7aab60aa8d32689a61c563784.scope: Deactivated successfully.
Oct 02 11:46:47 compute-0 sudo[88834]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:47 compute-0 sudo[88912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llppegxroxkfotsbztdbamtvbwqacxcj ; /usr/bin/python3'
Oct 02 11:46:47 compute-0 sudo[88912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:48 compute-0 python3[88914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:46:48 compute-0 ceph-mon[73668]: pgmap v95: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:48 compute-0 podman[88915]: 2025-10-02 11:46:48.140660341 +0000 UTC m=+0.065856463 container create bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:48 compute-0 systemd[1]: Started libpod-conmon-bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345.scope.
Oct 02 11:46:48 compute-0 podman[88915]: 2025-10-02 11:46:48.109942972 +0000 UTC m=+0.035139184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23a27c12fa81f96f76508d42ed228cb79914da77e766d874dddc1753d8876ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23a27c12fa81f96f76508d42ed228cb79914da77e766d874dddc1753d8876ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23a27c12fa81f96f76508d42ed228cb79914da77e766d874dddc1753d8876ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:48 compute-0 podman[88915]: 2025-10-02 11:46:48.230610759 +0000 UTC m=+0.155806901 container init bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:48 compute-0 podman[88915]: 2025-10-02 11:46:48.237199911 +0000 UTC m=+0.162396033 container start bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:48 compute-0 podman[88915]: 2025-10-02 11:46:48.2487441 +0000 UTC m=+0.173940222 container attach bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/804192295; not ready for session (expect reconnect)
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295] boot
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:48 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:48 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:48 compute-0 sudo[88954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:48 compute-0 sudo[88954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-0 sudo[88954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-0 sudo[88979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:46:48 compute-0 sudo[88979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-0 sudo[88979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-0 sudo[89004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:48 compute-0 sudo[89004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-0 sudo[89004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-0 sudo[89029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:46:48 compute-0 sudo[89029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-0 sudo[89029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 02 11:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1687359328' entity='client.admin' 
Oct 02 11:46:48 compute-0 sweet_haslett[88931]: set ssl_option
Oct 02 11:46:49 compute-0 systemd[1]: libpod-bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345.scope: Deactivated successfully.
Oct 02 11:46:49 compute-0 podman[88915]: 2025-10-02 11:46:49.000875083 +0000 UTC m=+0.926071275 container died bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:46:49 compute-0 sudo[89054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d23a27c12fa81f96f76508d42ed228cb79914da77e766d874dddc1753d8876ba-merged.mount: Deactivated successfully.
Oct 02 11:46:49 compute-0 podman[88915]: 2025-10-02 11:46:49.062196268 +0000 UTC m=+0.987392390 container remove bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345 (image=quay.io/ceph/ceph:v18, name=sweet_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:46:49 compute-0 systemd[1]: libpod-conmon-bedd4399135eebe20a74b11456295b07982b89d02936cb45b342318c4e10f345.scope: Deactivated successfully.
Oct 02 11:46:49 compute-0 sudo[88912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-0 sudo[89093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89093]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 ceph-mon[73668]: 5.3 scrub starts
Oct 02 11:46:49 compute-0 ceph-mon[73668]: 5.3 scrub ok
Oct 02 11:46:49 compute-0 ceph-mon[73668]: OSD bench result of 8243.165808 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:49 compute-0 ceph-mon[73668]: osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295] boot
Oct 02 11:46:49 compute-0 ceph-mon[73668]: osdmap e33: 3 total, 3 up, 3 in
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:49 compute-0 ceph-mon[73668]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:46:49 compute-0 ceph-mon[73668]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:49 compute-0 ceph-mon[73668]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mon[73668]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mon[73668]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1687359328' entity='client.admin' 
Oct 02 11:46:49 compute-0 sudo[89120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89120]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatusndafpotlnlxkfyixyzdikgbxhxx ; /usr/bin/python3'
Oct 02 11:46:49 compute-0 sudo[89176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:49 compute-0 sudo[89161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:49 compute-0 sudo[89161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89161]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89196]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:49 compute-0 sudo[89221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-0 sudo[89221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89221]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 python3[89193]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:49 compute-0 podman[89246]: 2025-10-02 11:46:49.417266309 +0000 UTC m=+0.041884460 container create 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:49 compute-0 systemd[1]: Started libpod-conmon-821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81.scope.
Oct 02 11:46:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:49 compute-0 sudo[89282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 podman[89246]: 2025-10-02 11:46:49.397858264 +0000 UTC m=+0.022476435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7232e66b722cbacff44b8289ae6cc186dc7c6b325d531a3a42018950c579e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:49 compute-0 sudo[89282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7232e66b722cbacff44b8289ae6cc186dc7c6b325d531a3a42018950c579e9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7232e66b722cbacff44b8289ae6cc186dc7c6b325d531a3a42018950c579e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:49 compute-0 podman[89246]: 2025-10-02 11:46:49.517300569 +0000 UTC m=+0.141918740 container init 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:49 compute-0 podman[89246]: 2025-10-02 11:46:49.527832423 +0000 UTC m=+0.152450614 container start 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:49 compute-0 podman[89246]: 2025-10-02 11:46:49.532262328 +0000 UTC m=+0.156880479 container attach 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:49 compute-0 sudo[89312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-0 sudo[89312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89312]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 sudo[89339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 02 11:46:49 compute-0 sudo[89339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89339]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 02 11:46:49 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 02 11:46:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 02 11:46:49 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 02 11:46:49 compute-0 sudo[89364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-0 sudo[89364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89364]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339970589s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702461243s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339926720s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702461243s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871445656s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233993530s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871313095s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233856201s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871292114s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233871460s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339781761s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702499390s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.13( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339755058s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702499390s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871385574s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233993530s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339657784s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702507019s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871006012s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233856201s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871660233s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234580994s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871640205s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234580994s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.10( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339586258s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702507019s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339750290s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702697754s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.c( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339722633s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702697754s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871417999s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234413147s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871401787s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234413147s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339534760s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702590942s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339632988s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702720642s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871488571s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234603882s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339610100s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702720642s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=24/26 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.871524811s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234687805s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871452332s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234603882s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=24/26 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.871507645s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234687805s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871563911s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234809875s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871521950s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234809875s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871582985s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234924316s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871560097s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234924316s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871797562s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235214233s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871778488s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235214233s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.871570587s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235038757s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339372635s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702903748s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.339355469s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702903748s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871553421s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235206604s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871523857s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235206604s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.870193481s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.233871460s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871538162s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234909058s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.871158600s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.234909058s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.875931740s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239723206s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.875777245s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239593506s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.875909805s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239723206s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.875713348s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.239593506s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 33 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.876250267s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.240150452s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=30/31 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=9.338982582s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.702590942s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=9.876231194s) [2] r=-1 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.240150452s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=24/26 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=9.871551514s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.235038757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:49 compute-0 sudo[89389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:46:49 compute-0 sudo[89414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:49 compute-0 sudo[89414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:49 compute-0 sudo[89458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-0 sudo[89458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-0 sudo[89483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:46:49 compute-0 sudo[89483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-0 sudo[89483]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:46:50 compute-0 sudo[89533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:46:50 compute-0 ceph-mon[73668]: 5.5 scrub starts
Oct 02 11:46:50 compute-0 ceph-mon[73668]: 5.5 scrub ok
Oct 02 11:46:50 compute-0 ceph-mon[73668]: 2.8 scrub starts
Oct 02 11:46:50 compute-0 ceph-mon[73668]: 2.8 scrub ok
Oct 02 11:46:50 compute-0 ceph-mon[73668]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-0 ceph-mon[73668]: osdmap e34: 3 total, 3 up, 3 in
Oct 02 11:46:50 compute-0 ceph-mon[73668]: pgmap v98: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:50 compute-0 ceph-mon[73668]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-0 ceph-mon[73668]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 boring_keldysh[89307]: Scheduled rgw.rgw update...
Oct 02 11:46:50 compute-0 boring_keldysh[89307]: Scheduled ingress.rgw.default update...
Oct 02 11:46:50 compute-0 sudo[89559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89559]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 systemd[1]: libpod-821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81.scope: Deactivated successfully.
Oct 02 11:46:50 compute-0 podman[89585]: 2025-10-02 11:46:50.241376694 +0000 UTC m=+0.029104048 container died 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:50 compute-0 sudo[89586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-0 sudo[89586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89586]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7232e66b722cbacff44b8289ae6cc186dc7c6b325d531a3a42018950c579e9-merged.mount: Deactivated successfully.
Oct 02 11:46:50 compute-0 podman[89585]: 2025-10-02 11:46:50.289246948 +0000 UTC m=+0.076974272 container remove 821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81 (image=quay.io/ceph/ceph:v18, name=boring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:50 compute-0 systemd[1]: libpod-conmon-821a281aff2ea2bfd81ba6eab2f9884cffe47ff6266424e48bd0853ec0087c81.scope: Deactivated successfully.
Oct 02 11:46:50 compute-0 sudo[89176]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:50 compute-0 sudo[89650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 10 completed events
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 sudo[89675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89675]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-0 sudo[89700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89700]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:46:50 compute-0 sudo[89748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89748]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 sudo[89773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-0 sudo[89773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89773]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89798]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 sudo[89823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-0 sudo[89823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:46:50 compute-0 sudo[89848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-0 sudo[89848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89848]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:46:50 compute-0 sudo[89873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-0 sudo[89873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-0 sudo[89873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:46:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b5872cfa-572b-45c5-a5ba-e1a7441adce2 does not exist
Oct 02 11:46:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 52748124-35b9-410d-a384-040cfb084e09 does not exist
Oct 02 11:46:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 81f444eb-ef40-4fd2-aaa8-9103ab92a955 does not exist
Oct 02 11:46:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:51 compute-0 sudo[89974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:51 compute-0 sudo[89974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:51 compute-0 sudo[89974]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:51 compute-0 ceph-mon[73668]: 5.6 scrub starts
Oct 02 11:46:51 compute-0 ceph-mon[73668]: 5.6 scrub ok
Oct 02 11:46:51 compute-0 ceph-mon[73668]: 2.11 scrub starts
Oct 02 11:46:51 compute-0 ceph-mon[73668]: 2.11 scrub ok
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:51 compute-0 ceph-mon[73668]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-0 sudo[89999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:51 compute-0 sudo[89999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:51 compute-0 sudo[89999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:51 compute-0 sudo[90024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:51 compute-0 sudo[90024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:51 compute-0 python3[89973]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:46:51 compute-0 sudo[90024]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:51 compute-0 sudo[90049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:46:51 compute-0 sudo[90049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:51 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 02 11:46:51 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 02 11:46:51 compute-0 python3[90144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405611.09764-33869-126150117784643/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.805937749 +0000 UTC m=+0.044378885 container create 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:51 compute-0 systemd[1]: Started libpod-conmon-42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f.scope.
Oct 02 11:46:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.78751436 +0000 UTC m=+0.025955526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.896170414 +0000 UTC m=+0.134611570 container init 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.90218844 +0000 UTC m=+0.140629576 container start 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.906189194 +0000 UTC m=+0.144630340 container attach 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:51 compute-0 jovial_ritchie[90226]: 167 167
Oct 02 11:46:51 compute-0 systemd[1]: libpod-42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f.scope: Deactivated successfully.
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.907303143 +0000 UTC m=+0.145744279 container died 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5929784b2b09d33eb03d0704be6a7317ef79465de917bcf5b459beb4c73de9db-merged.mount: Deactivated successfully.
Oct 02 11:46:51 compute-0 podman[90186]: 2025-10-02 11:46:51.947068237 +0000 UTC m=+0.185509373 container remove 42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:46:51 compute-0 systemd[1]: libpod-conmon-42852882bba9f7dcd3582280af9f1a6ff21e516b08aa5a9e299e87fe3d9f5d0f.scope: Deactivated successfully.
Oct 02 11:46:52 compute-0 sudo[90275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwfcwprojzwyzraeoaablgwsfgsygxe ; /usr/bin/python3'
Oct 02 11:46:52 compute-0 sudo[90275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:52 compute-0 podman[90270]: 2025-10-02 11:46:52.118821592 +0000 UTC m=+0.046427178 container create f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:46:52 compute-0 systemd[1]: Started libpod-conmon-f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe.scope.
Oct 02 11:46:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 podman[90270]: 2025-10-02 11:46:52.101344258 +0000 UTC m=+0.028949884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:52 compute-0 python3[90282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:52 compute-0 podman[90270]: 2025-10-02 11:46:52.235353912 +0000 UTC m=+0.162959518 container init f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 11:46:52 compute-0 podman[90270]: 2025-10-02 11:46:52.24257332 +0000 UTC m=+0.170178916 container start f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:52 compute-0 podman[90270]: 2025-10-02 11:46:52.246579954 +0000 UTC m=+0.174185570 container attach f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:52 compute-0 podman[90295]: 2025-10-02 11:46:52.300081685 +0000 UTC m=+0.052747753 container create 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 02 11:46:52 compute-0 systemd[1]: Started libpod-conmon-354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2.scope.
Oct 02 11:46:52 compute-0 podman[90295]: 2025-10-02 11:46:52.269657194 +0000 UTC m=+0.022323282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e094b11e251207993be788d51055e7574fa3e46e205e1357209a6cc640caaf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e094b11e251207993be788d51055e7574fa3e46e205e1357209a6cc640caaf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e094b11e251207993be788d51055e7574fa3e46e205e1357209a6cc640caaf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:52 compute-0 podman[90295]: 2025-10-02 11:46:52.397576199 +0000 UTC m=+0.150242297 container init 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:52 compute-0 ceph-mon[73668]: pgmap v99: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:52 compute-0 podman[90295]: 2025-10-02 11:46:52.404283224 +0000 UTC m=+0.156949282 container start 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:52 compute-0 podman[90295]: 2025-10-02 11:46:52.408419961 +0000 UTC m=+0.161086059 container attach 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 11:46:53 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0[73664]: 2025-10-02T11:46:53.021+0000 7fe2bbfee640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e2 new map
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:46:53.022725+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:46:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 11:46:53 compute-0 awesome_bouman[90292]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:46:53 compute-0 awesome_bouman[90292]: --> relative data size: 1.0
Oct 02 11:46:53 compute-0 awesome_bouman[90292]: --> All data devices are unavailable
Oct 02 11:46:53 compute-0 systemd[1]: libpod-354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2.scope: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90295]: 2025-10-02 11:46:53.076082949 +0000 UTC m=+0.828749027 container died 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 11:46:53 compute-0 systemd[1]: libpod-f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe.scope: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90270]: 2025-10-02 11:46:53.094620221 +0000 UTC m=+1.022225837 container died f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-26e094b11e251207993be788d51055e7574fa3e46e205e1357209a6cc640caaf-merged.mount: Deactivated successfully.
Oct 02 11:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48b86c0f46c67bc6c8a2bbdbe6a7d3d1608ef8277f6d516d8e5780a4bcb51b89-merged.mount: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90295]: 2025-10-02 11:46:53.137891886 +0000 UTC m=+0.890557954 container remove 354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2 (image=quay.io/ceph/ceph:v18, name=admiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:53 compute-0 systemd[1]: libpod-conmon-354c5957a573e30d04f35cfeaa34ec69e4bc0888bc0f174186ffd689b8c0d7e2.scope: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90270]: 2025-10-02 11:46:53.149374724 +0000 UTC m=+1.076980340 container remove f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:46:53 compute-0 sudo[90275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 systemd[1]: libpod-conmon-f36f01123cbc739108357f087db4afabca50ac6ee4ab87f8268f97c75a7f8cfe.scope: Deactivated successfully.
Oct 02 11:46:53 compute-0 sudo[90049]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 sudo[90369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:53 compute-0 sudo[90369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:53 compute-0 sudo[90369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 sudo[90438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzaayxefmkvuvzypjnabpotdiiirzgpr ; /usr/bin/python3'
Oct 02 11:46:53 compute-0 sudo[90438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:53 compute-0 sudo[90399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:53 compute-0 sudo[90399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:53 compute-0 sudo[90399]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 sudo[90445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:53 compute-0 sudo[90445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:53 compute-0 sudo[90445]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:53 compute-0 ceph-mon[73668]: 3.4 scrub starts
Oct 02 11:46:53 compute-0 ceph-mon[73668]: 3.4 scrub ok
Oct 02 11:46:53 compute-0 ceph-mon[73668]: 4.15 scrub starts
Oct 02 11:46:53 compute-0 ceph-mon[73668]: 4.15 scrub ok
Oct 02 11:46:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-0 ceph-mon[73668]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:46:53 compute-0 ceph-mon[73668]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 11:46:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 11:46:53 compute-0 ceph-mon[73668]: osdmap e35: 3 total, 3 up, 3 in
Oct 02 11:46:53 compute-0 ceph-mon[73668]: fsmap cephfs:0
Oct 02 11:46:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:53 compute-0 sudo[90470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:46:53 compute-0 sudo[90470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:53 compute-0 python3[90443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:53 compute-0 podman[90495]: 2025-10-02 11:46:53.530808581 +0000 UTC m=+0.048084281 container create ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:53 compute-0 systemd[1]: Started libpod-conmon-ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b.scope.
Oct 02 11:46:53 compute-0 podman[90495]: 2025-10-02 11:46:53.506618702 +0000 UTC m=+0.023894432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a99680dc9820aa191dc970a58bf272a06829acb9d4be78885ea14a0f41a590/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a99680dc9820aa191dc970a58bf272a06829acb9d4be78885ea14a0f41a590/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a99680dc9820aa191dc970a58bf272a06829acb9d4be78885ea14a0f41a590/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:53 compute-0 podman[90495]: 2025-10-02 11:46:53.621035407 +0000 UTC m=+0.138311127 container init ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:46:53 compute-0 podman[90495]: 2025-10-02 11:46:53.629907177 +0000 UTC m=+0.147182877 container start ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:46:53 compute-0 podman[90495]: 2025-10-02 11:46:53.634158078 +0000 UTC m=+0.151433798 container attach ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.774879226 +0000 UTC m=+0.040133034 container create 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:53 compute-0 systemd[1]: Started libpod-conmon-9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470.scope.
Oct 02 11:46:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.833717536 +0000 UTC m=+0.098971364 container init 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v101: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.840118042 +0000 UTC m=+0.105371850 container start 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.843800388 +0000 UTC m=+0.109054216 container attach 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:53 compute-0 adoring_chaplygin[90570]: 167 167
Oct 02 11:46:53 compute-0 systemd[1]: libpod-9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470.scope: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.845904003 +0000 UTC m=+0.111157811 container died 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.756020386 +0000 UTC m=+0.021274214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0bdb4ebd19179b256d71bf23ef16f49a73950f999fc46335d056f7ab8d7774-merged.mount: Deactivated successfully.
Oct 02 11:46:53 compute-0 podman[90554]: 2025-10-02 11:46:53.887438753 +0000 UTC m=+0.152692561 container remove 9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:46:53 compute-0 systemd[1]: libpod-conmon-9d27be46633e20eafbf60d606eec0450b838a2b4a85a48f01d4a8885b4cd4470.scope: Deactivated successfully.
Oct 02 11:46:54 compute-0 podman[90594]: 2025-10-02 11:46:54.055886532 +0000 UTC m=+0.051017087 container create caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:46:54 compute-0 systemd[1]: Started libpod-conmon-caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609.scope.
Oct 02 11:46:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ffe782938b00af5ae4cc0f697b8507c0488026556a1388960edcabcdba1398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ffe782938b00af5ae4cc0f697b8507c0488026556a1388960edcabcdba1398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ffe782938b00af5ae4cc0f697b8507c0488026556a1388960edcabcdba1398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ffe782938b00af5ae4cc0f697b8507c0488026556a1388960edcabcdba1398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:54 compute-0 podman[90594]: 2025-10-02 11:46:54.129805184 +0000 UTC m=+0.124935769 container init caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:54 compute-0 podman[90594]: 2025-10-02 11:46:54.035073671 +0000 UTC m=+0.030204246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:54 compute-0 podman[90594]: 2025-10-02 11:46:54.137089413 +0000 UTC m=+0.132219968 container start caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:46:54 compute-0 podman[90594]: 2025-10-02 11:46:54.140918093 +0000 UTC m=+0.136048648 container attach caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:54 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:54 compute-0 ceph-mgr[73961]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:54 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:46:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:54 compute-0 wonderful_cannon[90512]: Scheduled mds.cephfs update...
Oct 02 11:46:54 compute-0 systemd[1]: libpod-ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b.scope: Deactivated successfully.
Oct 02 11:46:54 compute-0 podman[90495]: 2025-10-02 11:46:54.299548627 +0000 UTC m=+0.816824337 container died ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a99680dc9820aa191dc970a58bf272a06829acb9d4be78885ea14a0f41a590-merged.mount: Deactivated successfully.
Oct 02 11:46:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:54 compute-0 podman[90495]: 2025-10-02 11:46:54.347854692 +0000 UTC m=+0.865130392 container remove ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b (image=quay.io/ceph/ceph:v18, name=wonderful_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:54 compute-0 systemd[1]: libpod-conmon-ae0154d06755e6a06fc78dc56b14a02e740cec023a7136f564c2cff0832b477b.scope: Deactivated successfully.
Oct 02 11:46:54 compute-0 sudo[90438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:54 compute-0 ceph-mon[73668]: 2.14 deep-scrub starts
Oct 02 11:46:54 compute-0 ceph-mon[73668]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:54 compute-0 ceph-mon[73668]: 2.14 deep-scrub ok
Oct 02 11:46:54 compute-0 ceph-mon[73668]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:54 compute-0 ceph-mon[73668]: 2.12 scrub starts
Oct 02 11:46:54 compute-0 ceph-mon[73668]: 2.12 scrub ok
Oct 02 11:46:54 compute-0 ceph-mon[73668]: pgmap v101: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:54 compute-0 sudo[90726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauotynytkzgbbdeyfrdlqnmcomokref ; /usr/bin/python3'
Oct 02 11:46:54 compute-0 sudo[90726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]: {
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:     "1": [
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:         {
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "devices": [
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "/dev/loop3"
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             ],
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "lv_name": "ceph_lv0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "lv_size": "7511998464",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "name": "ceph_lv0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "tags": {
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.cluster_name": "ceph",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.crush_device_class": "",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.encrypted": "0",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.osd_id": "1",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.type": "block",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:                 "ceph.vdo": "0"
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             },
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "type": "block",
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:             "vg_name": "ceph_vg0"
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:         }
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]:     ]
Oct 02 11:46:54 compute-0 nervous_vaughan[90630]: }
Oct 02 11:46:55 compute-0 systemd[1]: libpod-caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609.scope: Deactivated successfully.
Oct 02 11:46:55 compute-0 podman[90594]: 2025-10-02 11:46:55.008033056 +0000 UTC m=+1.003163611 container died caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-54ffe782938b00af5ae4cc0f697b8507c0488026556a1388960edcabcdba1398-merged.mount: Deactivated successfully.
Oct 02 11:46:55 compute-0 python3[90730]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:46:55 compute-0 podman[90594]: 2025-10-02 11:46:55.069326639 +0000 UTC m=+1.064457194 container remove caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:46:55 compute-0 systemd[1]: libpod-conmon-caca62ded31cbe1b07dc066349d3c1d56cd6179f3ea41b706a4b2dd5aba37609.scope: Deactivated successfully.
Oct 02 11:46:55 compute-0 sudo[90726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 sudo[90470]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 sudo[90746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:55 compute-0 sudo[90746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:55 compute-0 sudo[90746]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 sudo[90800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:55 compute-0 sudo[90800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:55 compute-0 sudo[90800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 sudo[90872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoiqaioxxyprmhoarksryvbjzxjyubzt ; /usr/bin/python3'
Oct 02 11:46:55 compute-0 sudo[90872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:55 compute-0 sudo[90851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:55 compute-0 sudo[90851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:55 compute-0 sudo[90851]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 sudo[90890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:46:55 compute-0 sudo[90890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:55 compute-0 python3[90887]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405614.7801747-33899-70477284868198/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=75f34a13e5eafe465b3328865c9fc53d2eab5578 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:46:55 compute-0 sudo[90872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:55 compute-0 ceph-mon[73668]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:55 compute-0 ceph-mon[73668]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.709346307 +0000 UTC m=+0.045481563 container create 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:46:55 compute-0 systemd[1]: Started libpod-conmon-55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6.scope.
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.689800209 +0000 UTC m=+0.025935485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:55 compute-0 sudo[91020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzswmmrsbeplrvlipphvkksmkldsrxyg ; /usr/bin/python3'
Oct 02 11:46:55 compute-0 sudo[91020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.812750586 +0000 UTC m=+0.148885872 container init 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.822489609 +0000 UTC m=+0.158624865 container start 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:55 compute-0 eloquent_yalow[91012]: 167 167
Oct 02 11:46:55 compute-0 systemd[1]: libpod-55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6.scope: Deactivated successfully.
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.832178581 +0000 UTC m=+0.168313867 container attach 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.832617872 +0000 UTC m=+0.168753128 container died 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:46:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b15e7398fc88f4847a8d6288190e8e57f7c2d5ada9bd55c0b5df953d0079e8-merged.mount: Deactivated successfully.
Oct 02 11:46:55 compute-0 podman[90978]: 2025-10-02 11:46:55.885282561 +0000 UTC m=+0.221417817 container remove 55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yalow, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:55 compute-0 systemd[1]: libpod-conmon-55fab60951aa9773acb5f82b14bbcb2e7736812e189021549892c980141871f6.scope: Deactivated successfully.
Oct 02 11:46:55 compute-0 python3[91022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:56 compute-0 podman[91039]: 2025-10-02 11:46:56.024322746 +0000 UTC m=+0.041141641 container create 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:46:56 compute-0 podman[91054]: 2025-10-02 11:46:56.058363891 +0000 UTC m=+0.041883010 container create 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:56 compute-0 systemd[1]: Started libpod-conmon-2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15.scope.
Oct 02 11:46:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08dcc8f1e8a9478c50c50f37656a7ceb5b665dbbcafc799fb721bfeed263318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08dcc8f1e8a9478c50c50f37656a7ceb5b665dbbcafc799fb721bfeed263318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 systemd[1]: Started libpod-conmon-61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf.scope.
Oct 02 11:46:56 compute-0 podman[91039]: 2025-10-02 11:46:56.007190171 +0000 UTC m=+0.024009086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:56 compute-0 podman[91039]: 2025-10-02 11:46:56.113960106 +0000 UTC m=+0.130779021 container init 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:56 compute-0 podman[91039]: 2025-10-02 11:46:56.121351659 +0000 UTC m=+0.138170544 container start 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ae1490b82539cc42044891138a2b5b3684af8aa6412986c0f25d98d965483a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ae1490b82539cc42044891138a2b5b3684af8aa6412986c0f25d98d965483a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ae1490b82539cc42044891138a2b5b3684af8aa6412986c0f25d98d965483a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ae1490b82539cc42044891138a2b5b3684af8aa6412986c0f25d98d965483a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:56 compute-0 podman[91039]: 2025-10-02 11:46:56.125903397 +0000 UTC m=+0.142722302 container attach 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:46:56 compute-0 podman[91054]: 2025-10-02 11:46:56.133136965 +0000 UTC m=+0.116656084 container init 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:46:56 compute-0 podman[91054]: 2025-10-02 11:46:56.041510883 +0000 UTC m=+0.025030022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:56 compute-0 podman[91054]: 2025-10-02 11:46:56.140747783 +0000 UTC m=+0.124266892 container start 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:46:56 compute-0 podman[91054]: 2025-10-02 11:46:56.144341747 +0000 UTC m=+0.127860886 container attach 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:46:56 compute-0 ceph-mon[73668]: pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 02 11:46:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 11:46:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 11:46:56 compute-0 systemd[1]: libpod-2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15.scope: Deactivated successfully.
Oct 02 11:46:56 compute-0 podman[91112]: 2025-10-02 11:46:56.964297153 +0000 UTC m=+0.025686439 container died 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08dcc8f1e8a9478c50c50f37656a7ceb5b665dbbcafc799fb721bfeed263318-merged.mount: Deactivated successfully.
Oct 02 11:46:57 compute-0 podman[91112]: 2025-10-02 11:46:57.015287319 +0000 UTC m=+0.076676575 container remove 2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15 (image=quay.io/ceph/ceph:v18, name=epic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:46:57 compute-0 systemd[1]: libpod-conmon-2a6c1403b418c87cb1e604307a0a4e782595419c9eb14596597b8bb1e8fd2e15.scope: Deactivated successfully.
Oct 02 11:46:57 compute-0 sudo[91020]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]: {
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:         "osd_id": 1,
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:         "type": "bluestore"
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]:     }
Oct 02 11:46:57 compute-0 heuristic_swanson[91080]: }
Oct 02 11:46:57 compute-0 systemd[1]: libpod-61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf.scope: Deactivated successfully.
Oct 02 11:46:57 compute-0 podman[91054]: 2025-10-02 11:46:57.097156897 +0000 UTC m=+1.080676016 container died 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-28ae1490b82539cc42044891138a2b5b3684af8aa6412986c0f25d98d965483a-merged.mount: Deactivated successfully.
Oct 02 11:46:57 compute-0 podman[91054]: 2025-10-02 11:46:57.156566622 +0000 UTC m=+1.140085731 container remove 61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:46:57 compute-0 systemd[1]: libpod-conmon-61a3af3aa68835f05612fce33ab4b714ae7382846da28001b8d4a74140d45eaf.scope: Deactivated successfully.
Oct 02 11:46:57 compute-0 sudo[90890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev ff88074c-f7a5-4450-af18-9fc00cd3bcc9 (Updating rgw.rgw deployment (+3 -> 3))
Oct 02 11:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:57 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct 02 11:46:57 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct 02 11:46:57 compute-0 sudo[91175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tduliixikojzwnoicbnnvqohdjgagtlk ; /usr/bin/python3'
Oct 02 11:46:57 compute-0 sudo[91175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:57 compute-0 ceph-mon[73668]: 2.16 scrub starts
Oct 02 11:46:57 compute-0 ceph-mon[73668]: 2.16 scrub ok
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mon[73668]: 3.11 scrub starts
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mon[73668]: 3.11 scrub ok
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:57 compute-0 python3[91177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:57 compute-0 podman[91179]: 2025-10-02 11:46:57.78076083 +0000 UTC m=+0.047518757 container create 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:46:57 compute-0 systemd[1]: Started libpod-conmon-2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00.scope.
Oct 02 11:46:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e52b1f580936f3583e4a0cde4f2296639ae3a23f122b61aaadf94f710522b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e52b1f580936f3583e4a0cde4f2296639ae3a23f122b61aaadf94f710522b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:57 compute-0 podman[91179]: 2025-10-02 11:46:57.855068692 +0000 UTC m=+0.121826649 container init 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:57 compute-0 podman[91179]: 2025-10-02 11:46:57.760290658 +0000 UTC m=+0.027048605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:57 compute-0 podman[91179]: 2025-10-02 11:46:57.862175256 +0000 UTC m=+0.128933173 container start 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:57 compute-0 podman[91179]: 2025-10-02 11:46:57.865628996 +0000 UTC m=+0.132386933 container attach 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:46:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:46:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363194234' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:46:58 compute-0 frosty_wiles[91195]: 
Oct 02 11:46:58 compute-0 frosty_wiles[91195]: {"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":39,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1759405608,"num_in_osds":3,"osd_in_since":1759405587,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":131}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83939328,"bytes_avail":22452056064,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-02T11:46:47.836005+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.wtokkj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.kvxdhw":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 02 11:46:58 compute-0 systemd[1]: libpod-2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00.scope: Deactivated successfully.
Oct 02 11:46:58 compute-0 podman[91179]: 2025-10-02 11:46:58.595745918 +0000 UTC m=+0.862503865 container died 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e52b1f580936f3583e4a0cde4f2296639ae3a23f122b61aaadf94f710522b7-merged.mount: Deactivated successfully.
Oct 02 11:46:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 02 11:46:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 02 11:46:58 compute-0 podman[91179]: 2025-10-02 11:46:58.719762532 +0000 UTC m=+0.986520459 container remove 2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00 (image=quay.io/ceph/ceph:v18, name=frosty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:46:58 compute-0 systemd[1]: libpod-conmon-2132e5a6026287f42d710609f8a294189576a8b51fb8aaf4d25a79faf3dedf00.scope: Deactivated successfully.
Oct 02 11:46:58 compute-0 sudo[91175]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:58 compute-0 sudo[91258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vizygycwdsflngltcoltqzwyljctylhm ; /usr/bin/python3'
Oct 02 11:46:58 compute-0 sudo[91258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:46:58 compute-0 ceph-mon[73668]: Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct 02 11:46:58 compute-0 ceph-mon[73668]: pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2363194234' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:46:59 compute-0 python3[91260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.094869123 +0000 UTC m=+0.049616691 container create 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:46:59 compute-0 systemd[1]: Started libpod-conmon-200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c.scope.
Oct 02 11:46:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87185df123a0289c3b8c7fdf5cdc14f57ad9c2e3787ab4c42ecd3f0af8de00cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87185df123a0289c3b8c7fdf5cdc14f57ad9c2e3787ab4c42ecd3f0af8de00cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.160499649 +0000 UTC m=+0.115247237 container init 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.071877225 +0000 UTC m=+0.026624843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.167278655 +0000 UTC m=+0.122026223 container start 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.171767932 +0000 UTC m=+0.126515530 container attach 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:46:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 11:46:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991034300' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:46:59 compute-0 sweet_mahavira[91276]: 
Oct 02 11:46:59 compute-0 sweet_mahavira[91276]: {"epoch":3,"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","modified":"2025-10-02T11:46:14.519652Z","created":"2025-10-02T11:43:34.350998Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct 02 11:46:59 compute-0 sweet_mahavira[91276]: dumped monmap epoch 3
Oct 02 11:46:59 compute-0 systemd[1]: libpod-200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c.scope: Deactivated successfully.
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.780843416 +0000 UTC m=+0.735590984 container died 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-87185df123a0289c3b8c7fdf5cdc14f57ad9c2e3787ab4c42ecd3f0af8de00cc-merged.mount: Deactivated successfully.
Oct 02 11:46:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:59 compute-0 podman[91261]: 2025-10-02 11:46:59.892644703 +0000 UTC m=+0.847392271 container remove 200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c (image=quay.io/ceph/ceph:v18, name=sweet_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:46:59 compute-0 systemd[1]: libpod-conmon-200f79fdbe267338191bdfd7d2bd2382ab19a284f10620c657503b38f4ee101c.scope: Deactivated successfully.
Oct 02 11:46:59 compute-0 sudo[91258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:00 compute-0 ceph-mon[73668]: 2.17 deep-scrub starts
Oct 02 11:47:00 compute-0 ceph-mon[73668]: 2.17 deep-scrub ok
Oct 02 11:47:00 compute-0 ceph-mon[73668]: 4.4 scrub starts
Oct 02 11:47:00 compute-0 ceph-mon[73668]: 4.4 scrub ok
Oct 02 11:47:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/991034300' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:47:00 compute-0 sudo[91336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxoewpmvbumxlinlxvvizlgeuxfebmbh ; /usr/bin/python3'
Oct 02 11:47:00 compute-0 sudo[91336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:00 compute-0 python3[91338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:00 compute-0 podman[91339]: 2025-10-02 11:47:00.564631513 +0000 UTC m=+0.040137404 container create 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:47:00 compute-0 systemd[1]: Started libpod-conmon-5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42.scope.
Oct 02 11:47:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5e1fe4b9bef5dcb4d599ea11627807540a71e56a4dacb4e9c2c16ecbbf0178/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5e1fe4b9bef5dcb4d599ea11627807540a71e56a4dacb4e9c2c16ecbbf0178/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:00 compute-0 podman[91339]: 2025-10-02 11:47:00.626228855 +0000 UTC m=+0.101734746 container init 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:00 compute-0 podman[91339]: 2025-10-02 11:47:00.636540273 +0000 UTC m=+0.112046174 container start 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:00 compute-0 podman[91339]: 2025-10-02 11:47:00.640134326 +0000 UTC m=+0.115640207 container attach 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:47:00 compute-0 podman[91339]: 2025-10-02 11:47:00.546774559 +0000 UTC m=+0.022280470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 02 11:47:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 02 11:47:01 compute-0 ceph-mon[73668]: pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 02 11:47:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2424414405' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 11:47:01 compute-0 determined_northcutt[91354]: [client.openstack]
Oct 02 11:47:01 compute-0 determined_northcutt[91354]:         key = AQBLZd5oAAAAABAAIzZhCjE1jBJ2OFSOmUV6ug==
Oct 02 11:47:01 compute-0 determined_northcutt[91354]:         caps mgr = "allow *"
Oct 02 11:47:01 compute-0 determined_northcutt[91354]:         caps mon = "profile rbd"
Oct 02 11:47:01 compute-0 determined_northcutt[91354]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 02 11:47:01 compute-0 systemd[1]: libpod-5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42.scope: Deactivated successfully.
Oct 02 11:47:01 compute-0 podman[91339]: 2025-10-02 11:47:01.349393536 +0000 UTC m=+0.824899427 container died 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd5e1fe4b9bef5dcb4d599ea11627807540a71e56a4dacb4e9c2c16ecbbf0178-merged.mount: Deactivated successfully.
Oct 02 11:47:01 compute-0 podman[91339]: 2025-10-02 11:47:01.394060047 +0000 UTC m=+0.869565938 container remove 5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42 (image=quay.io/ceph/ceph:v18, name=determined_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:47:01 compute-0 systemd[1]: libpod-conmon-5e27841f6847093d70806fb9fea076ecf974e8ca5366d1ed830eb39958dadf42.scope: Deactivated successfully.
Oct 02 11:47:01 compute-0 sudo[91336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:01 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 02 11:47:01 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 02 11:47:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 02 11:47:02 compute-0 ceph-mon[73668]: 3.6 scrub starts
Oct 02 11:47:02 compute-0 ceph-mon[73668]: 3.6 scrub ok
Oct 02 11:47:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2424414405' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 11:47:02 compute-0 ceph-mon[73668]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 02 11:47:02 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:02 compute-0 sudo[91539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsweioiiugneafurzakaevmbidrqbmx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405622.3344991-33971-241771907184570/async_wrapper.py j872531278568 30 /home/zuul/.ansible/tmp/ansible-tmp-1759405622.3344991-33971-241771907184570/AnsiballZ_command.py _'
Oct 02 11:47:02 compute-0 sudo[91539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:02 compute-0 ansible-async_wrapper.py[91541]: Invoked with j872531278568 30 /home/zuul/.ansible/tmp/ansible-tmp-1759405622.3344991-33971-241771907184570/AnsiballZ_command.py _
Oct 02 11:47:02 compute-0 ansible-async_wrapper.py[91544]: Starting module and watcher
Oct 02 11:47:02 compute-0 ansible-async_wrapper.py[91544]: Start watching 91545 (30)
Oct 02 11:47:02 compute-0 ansible-async_wrapper.py[91545]: Start module (91545)
Oct 02 11:47:02 compute-0 ansible-async_wrapper.py[91541]: Return async_wrapper task started.
Oct 02 11:47:02 compute-0 sudo[91539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 02 11:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:47:02 compute-0 python3[91546]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.059209206 +0000 UTC m=+0.045629987 container create 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:03 compute-0 systemd[1]: Started libpod-conmon-0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f.scope.
Oct 02 11:47:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:03 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct 02 11:47:03 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct 02 11:47:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de48bd4034d629c8b8981460f233620b4ca20583ce96a6862e1c5703e28ef98a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de48bd4034d629c8b8981460f233620b4ca20583ce96a6862e1c5703e28ef98a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.03859618 +0000 UTC m=+0.025016991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.144964176 +0000 UTC m=+0.131384957 container init 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.152746248 +0000 UTC m=+0.139167029 container start 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.158319343 +0000 UTC m=+0.144740124 container attach 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:47:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 02 11:47:03 compute-0 ceph-mon[73668]: 5.a scrub starts
Oct 02 11:47:03 compute-0 ceph-mon[73668]: 5.a scrub ok
Oct 02 11:47:03 compute-0 ceph-mon[73668]: 2.1a scrub starts
Oct 02 11:47:03 compute-0 ceph-mon[73668]: 2.1a scrub ok
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:03 compute-0 ceph-mon[73668]: osdmap e36: 3 total, 3 up, 3 in
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:03 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:03 compute-0 naughty_dubinsky[91562]: 
Oct 02 11:47:03 compute-0 naughty_dubinsky[91562]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:47:03 compute-0 systemd[1]: libpod-0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f.scope: Deactivated successfully.
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.763944358 +0000 UTC m=+0.750365149 container died 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-de48bd4034d629c8b8981460f233620b4ca20583ce96a6862e1c5703e28ef98a-merged.mount: Deactivated successfully.
Oct 02 11:47:03 compute-0 podman[91547]: 2025-10-02 11:47:03.81596358 +0000 UTC m=+0.802384371 container remove 0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:03 compute-0 systemd[1]: libpod-conmon-0f7d3b92b04804490e12ce7c0a5e5f23e984b339921cd5217a441bafedd8177f.scope: Deactivated successfully.
Oct 02 11:47:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 11:47:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 02 11:47:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v107: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:03 compute-0 ansible-async_wrapper.py[91545]: Module complete (91545)
Oct 02 11:47:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 02 11:47:03 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 37 pg[8.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:04 compute-0 sudo[91648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhwibwgfooqqgxgjzkxnolsqilajkgif ; /usr/bin/python3'
Oct 02 11:47:04 compute-0 sudo[91648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:04 compute-0 python3[91650]: ansible-ansible.legacy.async_status Invoked with jid=j872531278568.91541 mode=status _async_dir=/root/.ansible_async
Oct 02 11:47:04 compute-0 sudo[91648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:04 compute-0 sudo[91697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxaebhmqsuiajbackltgjwiprdghuoex ; /usr/bin/python3'
Oct 02 11:47:04 compute-0 sudo[91697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:04 compute-0 python3[91699]: ansible-ansible.legacy.async_status Invoked with jid=j872531278568.91541 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:47:04 compute-0 sudo[91697]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Oct 02 11:47:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 02 11:47:04 compute-0 ceph-mon[73668]: Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: from='client.14343 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:04 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 11:47:04 compute-0 ceph-mon[73668]: pgmap v107: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:04 compute-0 ceph-mon[73668]: osdmap e37: 3 total, 3 up, 3 in
Oct 02 11:47:04 compute-0 sudo[91725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxboblqvnqmbtxtaexgtcywfcpkhirra ; /usr/bin/python3'
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 02 11:47:04 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:04 compute-0 sudo[91725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 11:47:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct 02 11:47:05 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct 02 11:47:05 compute-0 python3[91727]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:05 compute-0 sudo[91728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:05 compute-0 sudo[91728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:05 compute-0 sudo[91728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.11606412 +0000 UTC m=+0.044031496 container create 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 11:47:05 compute-0 systemd[1]: Started libpod-conmon-6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036.scope.
Oct 02 11:47:05 compute-0 sudo[91766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:05 compute-0 sudo[91766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:05 compute-0 sudo[91766]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a367870c58912506c54be6befd000492784ebaed73aac5df4788e881ced80a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a367870c58912506c54be6befd000492784ebaed73aac5df4788e881ced80a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.192742323 +0000 UTC m=+0.120709719 container init 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.098040501 +0000 UTC m=+0.026007897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.199850748 +0000 UTC m=+0.127818124 container start 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.203269287 +0000 UTC m=+0.131236683 container attach 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:47:05 compute-0 sudo[91797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:05 compute-0 sudo[91797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:05 compute-0 sudo[91797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:05 compute-0 sudo[91823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:05 compute-0 sudo[91823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:05 compute-0 ceph-mgr[73961]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.600823773 +0000 UTC m=+0.040356401 container create 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:47:05 compute-0 systemd[1]: Started libpod-conmon-6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232.scope.
Oct 02 11:47:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.669594341 +0000 UTC m=+0.109126999 container init 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.675488024 +0000 UTC m=+0.115020652 container start 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:47:05 compute-0 sharp_spence[91923]: 167 167
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.583838001 +0000 UTC m=+0.023370649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:05 compute-0 systemd[1]: libpod-6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232.scope: Deactivated successfully.
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.679863687 +0000 UTC m=+0.119396355 container attach 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.680269658 +0000 UTC m=+0.119802296 container died 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-347efc6368d674c1cea792a3d29df8aa818fca440dad7a0d8bade57b22de9eee-merged.mount: Deactivated successfully.
Oct 02 11:47:05 compute-0 podman[91906]: 2025-10-02 11:47:05.715625117 +0000 UTC m=+0.155157745 container remove 6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:47:05 compute-0 systemd[1]: libpod-conmon-6b81dfa41a5c9a2efccbc12ad70a43471a337356f17be6cc31883e93d06eb232.scope: Deactivated successfully.
Oct 02 11:47:05 compute-0 systemd[1]: Reloading.
Oct 02 11:47:05 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:05 compute-0 recursing_shannon[91793]: 
Oct 02 11:47:05 compute-0 recursing_shannon[91793]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 11:47:05 compute-0 podman[91745]: 2025-10-02 11:47:05.790663048 +0000 UTC m=+0.718630424 container died 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:05 compute-0 systemd-rc-local-generator[91971]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:05 compute-0 systemd-sysv-generator[91976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v110: 133 pgs: 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 1 op/s
Oct 02 11:47:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 02 11:47:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 02 11:47:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 02 11:47:05 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 39 pg[9.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:05 compute-0 ceph-mon[73668]: 3.7 deep-scrub starts
Oct 02 11:47:05 compute-0 ceph-mon[73668]: 3.7 deep-scrub ok
Oct 02 11:47:05 compute-0 ceph-mon[73668]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-0 ceph-mon[73668]: osdmap e38: 3 total, 3 up, 3 in
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:05 compute-0 systemd[1]: libpod-6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036.scope: Deactivated successfully.
Oct 02 11:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a367870c58912506c54be6befd000492784ebaed73aac5df4788e881ced80a-merged.mount: Deactivated successfully.
Oct 02 11:47:06 compute-0 podman[91745]: 2025-10-02 11:47:06.01689926 +0000 UTC m=+0.944866636 container remove 6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036 (image=quay.io/ceph/ceph:v18, name=recursing_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:47:06 compute-0 systemd[1]: libpod-conmon-6e21e6971b6c71e1d292a12bcc7451af9b473a9835e6007bfed9f3065677c036.scope: Deactivated successfully.
Oct 02 11:47:06 compute-0 sudo[91725]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:06 compute-0 systemd[1]: Reloading.
Oct 02 11:47:06 compute-0 systemd-sysv-generator[92031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:06 compute-0 systemd-rc-local-generator[92027]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:06 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.hlkvzi for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:06 compute-0 podman[92089]: 2025-10-02 11:47:06.518716085 +0000 UTC m=+0.041550241 container create 2ab3afb27d41646f62dba7446d9af2708a90910d2ec1dd74adab3068f6752e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-0-hlkvzi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13516e8b3107339455902a1337ebe716d69750efc8cda58d8beba5416d0fc24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13516e8b3107339455902a1337ebe716d69750efc8cda58d8beba5416d0fc24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13516e8b3107339455902a1337ebe716d69750efc8cda58d8beba5416d0fc24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13516e8b3107339455902a1337ebe716d69750efc8cda58d8beba5416d0fc24/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.hlkvzi supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:06 compute-0 podman[92089]: 2025-10-02 11:47:06.572703608 +0000 UTC m=+0.095537774 container init 2ab3afb27d41646f62dba7446d9af2708a90910d2ec1dd74adab3068f6752e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-0-hlkvzi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:47:06 compute-0 podman[92089]: 2025-10-02 11:47:06.577616196 +0000 UTC m=+0.100450352 container start 2ab3afb27d41646f62dba7446d9af2708a90910d2ec1dd74adab3068f6752e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-0-hlkvzi, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:47:06 compute-0 bash[92089]: 2ab3afb27d41646f62dba7446d9af2708a90910d2ec1dd74adab3068f6752e05
Oct 02 11:47:06 compute-0 podman[92089]: 2025-10-02 11:47:06.50159909 +0000 UTC m=+0.024433276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:06 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.hlkvzi for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:06 compute-0 sudo[91823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:06 compute-0 radosgw[92108]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:06 compute-0 radosgw[92108]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 02 11:47:06 compute-0 radosgw[92108]: framework: beast
Oct 02 11:47:06 compute-0 radosgw[92108]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 02 11:47:06 compute-0 radosgw[92108]: init_numa not setting numa affinity
Oct 02 11:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:06 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev ff88074c-f7a5-4450-af18-9fc00cd3bcc9 (Updating rgw.rgw deployment (+3 -> 3))
Oct 02 11:47:06 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event ff88074c-f7a5-4450-af18-9fc00cd3bcc9 (Updating rgw.rgw deployment (+3 -> 3)) in 9 seconds
Oct 02 11:47:06 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:47:06 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:47:06 compute-0 sudo[92193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjaxircdxndbvckuheayqfhvajtblro ; /usr/bin/python3'
Oct 02 11:47:06 compute-0 sudo[92193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 11:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:06 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev e9bff29b-3ccd-41be-8371-4d66fbcef181 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Oct 02 11:47:06 compute-0 python3[92195]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 02 11:47:06 compute-0 podman[92196]: 2025-10-02 11:47:06.987643356 +0000 UTC m=+0.067896316 container create e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct 02 11:47:07 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct 02 11:47:07 compute-0 systemd[1]: Started libpod-conmon-e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436.scope.
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:06.946431605 +0000 UTC m=+0.026684595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52840553844b1fd203ffd96e6df5edd53aa2aacab6f507e612695f23820d82f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52840553844b1fd203ffd96e6df5edd53aa2aacab6f507e612695f23820d82f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:07 compute-0 sudo[92211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:07 compute-0 sudo[92211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:07 compute-0 sudo[92211]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:07.087684317 +0000 UTC m=+0.167937297 container init e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:07.096955088 +0000 UTC m=+0.177208048 container start e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:07.100365986 +0000 UTC m=+0.180618956 container attach e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 02 11:47:07 compute-0 ceph-mon[73668]: Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:07 compute-0 ceph-mon[73668]: pgmap v110: 133 pgs: 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 1 op/s
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:07 compute-0 ceph-mon[73668]: osdmap e39: 3 total, 3 up, 3 in
Oct 02 11:47:07 compute-0 ceph-mon[73668]: 2.f scrub starts
Oct 02 11:47:07 compute-0 ceph-mon[73668]: 2.f scrub ok
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 02 11:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:07 compute-0 sudo[92240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:07 compute-0 sudo[92240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:07 compute-0 sudo[92240]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:07 compute-0 sudo[92265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:07 compute-0 sudo[92265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:07 compute-0 sudo[92265]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:07 compute-0 sudo[92290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:07 compute-0 sudo[92290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:07 compute-0 dazzling_mestorf[92218]: 
Oct 02 11:47:07 compute-0 dazzling_mestorf[92218]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 02 11:47:07 compute-0 systemd[1]: libpod-e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436.scope: Deactivated successfully.
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:07.67055914 +0000 UTC m=+0.750812100 container died e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-52840553844b1fd203ffd96e6df5edd53aa2aacab6f507e612695f23820d82f0-merged.mount: Deactivated successfully.
Oct 02 11:47:07 compute-0 podman[92196]: 2025-10-02 11:47:07.715674263 +0000 UTC m=+0.795927243 container remove e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436 (image=quay.io/ceph/ceph:v18, name=dazzling_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 02 11:47:07 compute-0 systemd[1]: libpod-conmon-e028182bfe273c74bc37590028c56c24f40ee4a5abfa08afa29c0804ec283436.scope: Deactivated successfully.
Oct 02 11:47:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 02 11:47:07 compute-0 sudo[92193]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v113: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 11:47:07 compute-0 ansible-async_wrapper.py[91544]: Done in kid B.
Oct 02 11:47:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 02 11:47:08 compute-0 ceph-mon[73668]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:47:08 compute-0 ceph-mon[73668]: 5.18 scrub starts
Oct 02 11:47:08 compute-0 ceph-mon[73668]: 5.18 scrub ok
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:08 compute-0 ceph-mon[73668]: Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct 02 11:47:08 compute-0 ceph-mon[73668]: osdmap e40: 3 total, 3 up, 3 in
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: from='client.14364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:08 compute-0 ceph-mon[73668]: pgmap v113: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 11:47:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 02 11:47:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 02 11:47:08 compute-0 sudo[92447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxpumyaqejpmravyoxvhlejvglajjzj ; /usr/bin/python3'
Oct 02 11:47:08 compute-0 sudo[92447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:08 compute-0 python3[92449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:08 compute-0 podman[92450]: 2025-10-02 11:47:08.766740599 +0000 UTC m=+0.047385903 container create 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:08 compute-0 systemd[1]: Started libpod-conmon-223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a.scope.
Oct 02 11:47:08 compute-0 podman[92450]: 2025-10-02 11:47:08.747148449 +0000 UTC m=+0.027793773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a1ed18d912fb80cf33a18d290e8242582fdeee273845fb12c1c2d5077e1b56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a1ed18d912fb80cf33a18d290e8242582fdeee273845fb12c1c2d5077e1b56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:08 compute-0 podman[92450]: 2025-10-02 11:47:08.872985151 +0000 UTC m=+0.153630465 container init 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:47:08 compute-0 podman[92450]: 2025-10-02 11:47:08.883853363 +0000 UTC m=+0.164498667 container start 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:08 compute-0 podman[92450]: 2025-10-02 11:47:08.887623171 +0000 UTC m=+0.168268475 container attach 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 02 11:47:09 compute-0 ceph-mon[73668]: 4.7 scrub starts
Oct 02 11:47:09 compute-0 ceph-mon[73668]: 4.7 scrub ok
Oct 02 11:47:09 compute-0 ceph-mon[73668]: 4.18 scrub starts
Oct 02 11:47:09 compute-0 ceph-mon[73668]: 4.18 scrub ok
Oct 02 11:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-0 ceph-mon[73668]: osdmap e41: 3 total, 3 up, 3 in
Oct 02 11:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.14379 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:09 compute-0 vibrant_saha[92477]: 
Oct 02 11:47:09 compute-0 vibrant_saha[92477]: [{"container_id": "1344746b2b67", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.97%", "created": "2025-10-02T11:44:51.876286Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-02T11:44:51.921005Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:45:41.901456Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-10-02T11:44:51.782211Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@crash.compute-0", "version": "18.2.7"}, {"container_id": "f746e1325e76", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.76%", "created": "2025-10-02T11:45:25.889842Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-10-02T11:45:25.947848Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:46:44.717050Z", "memory_usage": 11733565, "ports": [], "service_name": "crash", 
"started": "2025-10-02T11:45:25.470009Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@crash.compute-1", "version": "18.2.7"}, {"container_id": "e2069e4312a7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.26%", "created": "2025-10-02T11:46:24.991349Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-10-02T11:46:25.068668Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:46:45.209215Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2025-10-02T11:46:24.669625Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@crash.compute-2", "version": "18.2.7"}, {"container_id": "3c1cbf2aec50", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "44.97%", "created": "2025-10-02T11:43:41.352321Z", "daemon_id": "compute-0.unmtoh", "daemon_name": "mgr.compute-0.unmtoh", "daemon_type": "mgr", "events": ["2025-10-02T11:44:54.489548Z daemon:mgr.compute-0.unmtoh [INFO] \"Reconfigured mgr.compute-0.unmtoh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-10-02T11:45:41.901391Z", "memory_usage": 545678950, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-02T11:43:41.261059Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mgr.compute-0.unmtoh", "version": "18.2.7"}, {"container_id": "7339018a450e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "99.44%", "created": "2025-10-02T11:46:22.986280Z", "daemon_id": "compute-1.wtokkj", "daemon_name": "mgr.compute-1.wtokkj", "daemon_type": "mgr", "events": ["2025-10-02T11:46:23.039571Z daemon:mgr.compute-1.wtokkj [INFO] \"Deployed mgr.compute-1.wtokkj on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:46:44.717339Z", "memory_usage": 511390515, "ports": [8765], "service_name": "mgr", "started": "2025-10-02T11:46:22.906600Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mgr.compute-1.wtokkj", "version": "18.2.7"}, {"container_id": "53e32e8f9cb9", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "91.97%", "created": "2025-10-02T11:46:21.393252Z", "daemon_id": "compute-2.kvxdhw", "daemon_name": "mgr.compute-2.kvxdhw", "daemon_type": 
"mgr", "events": ["2025-10-02T11:46:21.467530Z daemon:mgr.compute-2.kvxdhw [INFO] \"Deployed mgr.compute-2.kvxdhw on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:46:45.209133Z", "memory_usage": 513697382, "ports": [8765], "service_name": "mgr", "started": "2025-10-02T11:46:21.286953Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mgr.compute-2.kvxdhw", "version": "18.2.7"}, {"container_id": "59b10e0ac165", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.50%", "created": "2025-10-02T11:43:36.194172Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-02T11:44:53.839092Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:45:41.901291Z", "memory_request": 2147483648, "memory_usage": 31855738, "ports": [], "service_name": "mon", "started": "2025-10-02T11:43:39.161486Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mon.compute-0", "version": "18.2.7"}, {"container_id": "1e591a1b9413", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.27%", "created": 
"2025-10-02T11:46:10.418403Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-10-02T11:46:14.456290Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:46:44.717277Z", "memory_request": 2147483648, "memory_usage": 31205621, "ports": [], "service_name": "mon", "started": "2025-10-02T11:46:10.329653Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mon.compute-1", "version": "18.2.7"}, {"container_id": "b4dc2d85fe29", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.67%", "created": "2025-10-02T11:46:08.534904Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-10-02T11:46:08.585541Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:46:45.209004Z", "memory_request": 2147483648, "memory_usage": 28154265, "ports": [], "service_name": "mon", "started": "2025-10-02T11:46:08.440863Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@mon.compute-2", "version": "18.2.7"}, {"container_id": "c8b3723bdbc9", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.75%", "created": "2025-10-02T11:45:37.997512Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-02T11:45:38.082014Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T11:45:41.901519Z", "memory_request": 4294967296, "memory_usage": 26487029, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:45:37.877051Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@osd.1", "version": "18.2.7"}, {"container_id": "284fc2c9a45d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.53%", "created": "2025-10-02T11:45:38.254720Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-02T11:45:38.341718Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-02T11:46:44.717210Z", "memory_request": 5502772019, "memory_usage": 63533219, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:45:38.119911Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@osd.0", "version": "18.2.7"}, {"container_id": "920bca769216", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.88%", "created": "2025-10-02T11:46:40.968804Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-02T11:46:41.022896Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-02T11:46:45.209292Z", "memory_request": 4294967296, "memory_usage": 26833059, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T11:46:40.854429Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.hlkvzi", "daemon_name": "rgw.rgw.compute-0.hlkvzi", "daemon_type": "rgw", "events": ["2025-10-02T11:47:06.684884Z daemon:rgw.rgw.compute-0.hlkvzi [INFO] \"Deployed rgw.rgw.compute-0.hlkvzi on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-1.vuotmz", "daemon_name": "rgw.rgw.compute-1.vuotmz", "daemon_type": "rgw", "events": ["2025-10-02T11:47:04.927616Z daemon:rgw.rgw.compute-1.vuotmz [INFO] \"Deployed rgw.rgw.compute-1.vuotmz on host 'compute-1'\""], "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-2.tsbazp", "daemon_name": "rgw.rgw.compute-2.tsbazp", "daemon_type": "rgw", "events": ["2025-10-02T11:47:02.169901Z daemon:rgw.rgw.compute-2.tsbazp [INFO] \"Deployed 
rgw.rgw.compute-2.tsbazp on host 'compute-2'\""], "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 02 11:47:09 compute-0 systemd[1]: libpod-223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a.scope: Deactivated successfully.
Oct 02 11:47:09 compute-0 podman[92450]: 2025-10-02 11:47:09.486505881 +0000 UTC m=+0.767151175 container died 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:47:09 compute-0 rsyslogd[1006]: message too long (13999) with configured size 8096, begin of message is: [{"container_id": "1344746b2b67", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 11:47:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v115: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 828 B/s wr, 2 op/s
Oct 02 11:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 02 11:47:09 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 02 11:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a1ed18d912fb80cf33a18d290e8242582fdeee273845fb12c1c2d5077e1b56-merged.mount: Deactivated successfully.
Oct 02 11:47:10 compute-0 podman[92450]: 2025-10-02 11:47:10.232064243 +0000 UTC m=+1.512709547 container remove 223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a (image=quay.io/ceph/ceph:v18, name=vibrant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:47:10 compute-0 systemd[1]: libpod-conmon-223e1d17b20f92c48f8969a6faf809fe742c246e6859b9f8852e365e1c25451a.scope: Deactivated successfully.
Oct 02 11:47:10 compute-0 sudo[92447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.293707235 +0000 UTC m=+2.777801176 container create 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 systemd[1]: Started libpod-conmon-9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be.scope.
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.273628023 +0000 UTC m=+2.757721964 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 02 11:47:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.366143159 +0000 UTC m=+2.850237100 container init 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.372056912 +0000 UTC m=+2.856150843 container start 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.375373419 +0000 UTC m=+2.859467340 container attach 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 infallible_blackwell[92599]: 0 0
Oct 02 11:47:10 compute-0 systemd[1]: libpod-9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be.scope: Deactivated successfully.
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.376670962 +0000 UTC m=+2.860764893 container died 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7370cd73b7bc3d0581952408586ffc940a9d1fc972ed3388ecfd0299b3453f36-merged.mount: Deactivated successfully.
Oct 02 11:47:10 compute-0 podman[92374]: 2025-10-02 11:47:10.41272901 +0000 UTC m=+2.896822931 container remove 9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be (image=quay.io/ceph/haproxy:2.3, name=infallible_blackwell)
Oct 02 11:47:10 compute-0 systemd[1]: libpod-conmon-9b60bed72affff7a629476a973fff4d7b29fa7e35cf7c26be63162f05d3c35be.scope: Deactivated successfully.
Oct 02 11:47:10 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 11 completed events
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:47:10 compute-0 systemd[1]: Reloading.
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:10 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 2e0ab098-3949-4faa-8e73-169c7662688e (Global Recovery Event) in 5 seconds
Oct 02 11:47:10 compute-0 systemd-sysv-generator[92647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:10 compute-0 systemd-rc-local-generator[92644]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:10 compute-0 ceph-mon[73668]: 4.9 deep-scrub starts
Oct 02 11:47:10 compute-0 ceph-mon[73668]: 4.9 deep-scrub ok
Oct 02 11:47:10 compute-0 ceph-mon[73668]: 4.1b deep-scrub starts
Oct 02 11:47:10 compute-0 ceph-mon[73668]: 4.1b deep-scrub ok
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: pgmap v115: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 828 B/s wr, 2 op/s
Oct 02 11:47:10 compute-0 ceph-mon[73668]: osdmap e42: 3 total, 3 up, 3 in
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-0 systemd[1]: Reloading.
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:10 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 43 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:10 compute-0 systemd-sysv-generator[92690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:10 compute-0 systemd-rc-local-generator[92686]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:11 compute-0 sudo[92717]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcmsrmxuowoejyasoxrrocrxcntotdz ; /usr/bin/python3'
Oct 02 11:47:11 compute-0 sudo[92717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:11 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.zhecum for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:11 compute-0 python3[92721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:11 compute-0 podman[92759]: 2025-10-02 11:47:11.354777461 +0000 UTC m=+0.046158611 container create 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:11 compute-0 podman[92783]: 2025-10-02 11:47:11.387684696 +0000 UTC m=+0.046294484 container create 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:11 compute-0 systemd[1]: Started libpod-conmon-110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837.scope.
Oct 02 11:47:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5b5d0b97b7bee711c276a1008e89e0bcac22813b6d420dde3363765bdacef8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5b5d0b97b7bee711c276a1008e89e0bcac22813b6d420dde3363765bdacef8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:11 compute-0 podman[92759]: 2025-10-02 11:47:11.334899324 +0000 UTC m=+0.026280484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:11 compute-0 podman[92759]: 2025-10-02 11:47:11.434310059 +0000 UTC m=+0.125691229 container init 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:47:11 compute-0 podman[92759]: 2025-10-02 11:47:11.442932233 +0000 UTC m=+0.134313373 container start 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820671b5a2a178c4399f25c971b7fa0dfde6d22ec1c911a9420264cf37a63f82/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:11 compute-0 podman[92759]: 2025-10-02 11:47:11.447336657 +0000 UTC m=+0.138717867 container attach 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:47:11 compute-0 podman[92783]: 2025-10-02 11:47:11.454031321 +0000 UTC m=+0.112641139 container init 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:11 compute-0 podman[92783]: 2025-10-02 11:47:11.458933829 +0000 UTC m=+0.117543617 container start 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:11 compute-0 podman[92783]: 2025-10-02 11:47:11.366442544 +0000 UTC m=+0.025052352 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 02 11:47:11 compute-0 bash[92783]: 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791
Oct 02 11:47:11 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.zhecum for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:11 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum[92805]: [NOTICE] 274/114711 (2) : New worker #1 (4) forked
Oct 02 11:47:11 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum[92805]: [WARNING] 274/114711 (4) : Server backend/rgw.rgw.compute-0.hlkvzi is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 02 11:47:11 compute-0 sudo[92290]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:11 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 02 11:47:11 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 02 11:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:47:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v118: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 433 B/s wr, 4 op/s
Oct 02 11:47:11 compute-0 ceph-mon[73668]: 2.b scrub starts
Oct 02 11:47:11 compute-0 ceph-mon[73668]: 2.b scrub ok
Oct 02 11:47:11 compute-0 ceph-mon[73668]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:11 compute-0 ceph-mon[73668]: 4.1f scrub starts
Oct 02 11:47:11 compute-0 ceph-mon[73668]: 4.1f scrub ok
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-0 ceph-mon[73668]: osdmap e43: 3 total, 3 up, 3 in
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 02 11:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331194290' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:47:12 compute-0 tender_keldysh[92797]: 
Oct 02 11:47:12 compute-0 tender_keldysh[92797]: {"fsid":"20fdc58c-b037-5094-a8ef-d490aa7c36f3","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":52,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1759405608,"num_in_osds":3,"osd_in_since":1759405587,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":134,"num_pools":10,"num_objects":6,"data_bytes":460666,"bytes_used":84123648,"bytes_avail":22451871744,"bytes_total":22535995392,"unknown_pgs_ratio":0.0074626863934099674,"inactive_pgs_ratio":0.0074626863934099674,"read_bytes_sec":1656,"write_bytes_sec":828,"read_op_per_sec":1,"write_op_per_sec":0},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-02T11:46:47.836005+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.wtokkj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.kvxdhw":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"2e0ab098-3949-4faa-8e73-169c7662688e":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"e9bff29b-3ccd-41be-8371-4d66fbcef181":{"message":"Updating ingress.rgw.default deployment (+4 -> 4) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 02 11:47:12 compute-0 systemd[1]: libpod-110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837.scope: Deactivated successfully.
Oct 02 11:47:12 compute-0 podman[92759]: 2025-10-02 11:47:12.077196182 +0000 UTC m=+0.768577352 container died 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:12 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct 02 11:47:12 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct 02 11:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff5b5d0b97b7bee711c276a1008e89e0bcac22813b6d420dde3363765bdacef8-merged.mount: Deactivated successfully.
Oct 02 11:47:12 compute-0 podman[92759]: 2025-10-02 11:47:12.12748161 +0000 UTC m=+0.818862750 container remove 110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837 (image=quay.io/ceph/ceph:v18, name=tender_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:47:12 compute-0 systemd[1]: libpod-conmon-110d21a44b49585d0388d08f097ac685746ba3af3f7b7313ae73725714649837.scope: Deactivated successfully.
Oct 02 11:47:12 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum[92805]: [WARNING] 274/114712 (4) : Server backend/rgw.rgw.compute-1.vuotmz is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 02 11:47:12 compute-0 sudo[92717]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 02 11:47:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 02 11:47:12 compute-0 radosgw[92108]: LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:47:12 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-0-hlkvzi[92104]: 2025-10-02T11:47:12.574+0000 7fc68382f940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:47:12 compute-0 radosgw[92108]: framework: beast
Oct 02 11:47:12 compute-0 radosgw[92108]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 02 11:47:12 compute-0 radosgw[92108]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: starting handler: beast
Oct 02 11:47:12 compute-0 radosgw[92108]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:12 compute-0 radosgw[92108]: mgrc service_daemon_register rgw.14373 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.hlkvzi,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864108,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=16ba9875-e611-4c67-897a-e19079014af6,zone_name=default,zonegroup_id=407d395c-624c-4136-be08-de285eb61d42,zonegroup_name=default}
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 02 11:47:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 02 11:47:12 compute-0 sudo[93418]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzaaoxoilulssinccswpovwttardqwgu ; /usr/bin/python3'
Oct 02 11:47:12 compute-0 sudo[93418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:13 compute-0 ceph-mon[73668]: 5.c scrub starts
Oct 02 11:47:13 compute-0 ceph-mon[73668]: 5.c scrub ok
Oct 02 11:47:13 compute-0 ceph-mon[73668]: 5.1b deep-scrub starts
Oct 02 11:47:13 compute-0 ceph-mon[73668]: 5.1b deep-scrub ok
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:13 compute-0 ceph-mon[73668]: pgmap v118: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 433 B/s wr, 4 op/s
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1331194290' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-0 ceph-mon[73668]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-0 ceph-mon[73668]: osdmap e44: 3 total, 3 up, 3 in
Oct 02 11:47:13 compute-0 python3[93420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:13 compute-0 podman[93421]: 2025-10-02 11:47:13.195374102 +0000 UTC m=+0.040367360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:13 compute-0 podman[93421]: 2025-10-02 11:47:13.388721898 +0000 UTC m=+0.233715116 container create 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:47:13 compute-0 systemd[1]: Started libpod-conmon-2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde.scope.
Oct 02 11:47:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 11:47:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 11:47:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e389aae9bb352e1da04ec3cad28acf5f34bbef66985f2d744aba53e7fb14004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e389aae9bb352e1da04ec3cad28acf5f34bbef66985f2d744aba53e7fb14004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:13 compute-0 podman[93421]: 2025-10-02 11:47:13.583489532 +0000 UTC m=+0.428482770 container init 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:47:13 compute-0 podman[93421]: 2025-10-02 11:47:13.592462345 +0000 UTC m=+0.437455563 container start 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:47:13 compute-0 podman[93421]: 2025-10-02 11:47:13.677527506 +0000 UTC m=+0.522520754 container attach 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:47:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v120: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 374 B/s wr, 3 op/s
Oct 02 11:47:14 compute-0 ceph-mon[73668]: Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct 02 11:47:14 compute-0 ceph-mon[73668]: 3.15 scrub starts
Oct 02 11:47:14 compute-0 ceph-mon[73668]: 3.15 scrub ok
Oct 02 11:47:14 compute-0 ceph-mon[73668]: 4.b scrub starts
Oct 02 11:47:14 compute-0 ceph-mon[73668]: 4.b scrub ok
Oct 02 11:47:14 compute-0 ceph-mon[73668]: pgmap v120: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 374 B/s wr, 3 op/s
Oct 02 11:47:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 11:47:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169703598' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:47:14 compute-0 serene_cori[93436]: 
Oct 02 11:47:14 compute-0 systemd[1]: libpod-2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde.scope: Deactivated successfully.
Oct 02 11:47:14 compute-0 serene_cori[93436]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502772019","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.hlkvzi","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.vuotmz","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.tsbazp","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 02 11:47:14 compute-0 podman[93421]: 2025-10-02 11:47:14.218438989 +0000 UTC m=+1.063432217 container died 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e389aae9bb352e1da04ec3cad28acf5f34bbef66985f2d744aba53e7fb14004-merged.mount: Deactivated successfully.
Oct 02 11:47:14 compute-0 podman[93421]: 2025-10-02 11:47:14.279928007 +0000 UTC m=+1.124921215 container remove 2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:47:14 compute-0 systemd[1]: libpod-conmon-2b3545807253a73fc49010ba30009e4a4435adc08c0f0e2c7b3391b2381c3fde.scope: Deactivated successfully.
Oct 02 11:47:14 compute-0 sudo[93418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:15 compute-0 sudo[93496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsmhbiemneyvmetjigqieyeepeehjcpz ; /usr/bin/python3'
Oct 02 11:47:15 compute-0 sudo[93496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:15 compute-0 python3[93498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3169703598' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:47:15 compute-0 ceph-mon[73668]: 3.1c scrub starts
Oct 02 11:47:15 compute-0 ceph-mon[73668]: 3.1c scrub ok
Oct 02 11:47:15 compute-0 podman[93499]: 2025-10-02 11:47:15.324624837 +0000 UTC m=+0.078383069 container create 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:47:15 compute-0 podman[93499]: 2025-10-02 11:47:15.267988245 +0000 UTC m=+0.021746497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:15 compute-0 systemd[1]: Started libpod-conmon-9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2.scope.
Oct 02 11:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd437b846d27e399d8a7260aa1607db933851eb047f1cf37dadf18897c4b83f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd437b846d27e399d8a7260aa1607db933851eb047f1cf37dadf18897c4b83f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:15 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum[92805]: [WARNING] 274/114715 (4) : Server backend/rgw.rgw.compute-0.hlkvzi is UP, reason: Layer7 check passed, code: 200, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 02 11:47:15 compute-0 podman[93499]: 2025-10-02 11:47:15.498151689 +0000 UTC m=+0.251909941 container init 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:47:15 compute-0 podman[93499]: 2025-10-02 11:47:15.509439622 +0000 UTC m=+0.263197854 container start 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:47:15 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 12 completed events
Oct 02 11:47:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:15 compute-0 podman[93499]: 2025-10-02 11:47:15.527387499 +0000 UTC m=+0.281145731 container attach 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:15 compute-0 ceph-mgr[73961]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Oct 02 11:47:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v121: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 210 KiB/s rd, 5.7 KiB/s wr, 388 op/s
Oct 02 11:47:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 02 11:47:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380703263' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 11:47:16 compute-0 gifted_carson[93515]: mimic
Oct 02 11:47:16 compute-0 systemd[1]: libpod-9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2.scope: Deactivated successfully.
Oct 02 11:47:16 compute-0 conmon[93515]: conmon 9a96336c799611e0e350 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2.scope/container/memory.events
Oct 02 11:47:16 compute-0 podman[93499]: 2025-10-02 11:47:16.134553674 +0000 UTC m=+0.888311936 container died 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:47:16 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum[92805]: [WARNING] 274/114716 (4) : Server backend/rgw.rgw.compute-1.vuotmz is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 02 11:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd437b846d27e399d8a7260aa1607db933851eb047f1cf37dadf18897c4b83f0-merged.mount: Deactivated successfully.
Oct 02 11:47:16 compute-0 podman[93499]: 2025-10-02 11:47:16.281531185 +0000 UTC m=+1.035289457 container remove 9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2 (image=quay.io/ceph/ceph:v18, name=gifted_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:16 compute-0 systemd[1]: libpod-conmon-9a96336c799611e0e35036b1dfbedd8804861fea2e47ece28c6d76649a7e13d2.scope: Deactivated successfully.
Oct 02 11:47:16 compute-0 sudo[93496]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:16 compute-0 ceph-mon[73668]: 4.1 deep-scrub starts
Oct 02 11:47:16 compute-0 ceph-mon[73668]: 4.1 deep-scrub ok
Oct 02 11:47:16 compute-0 ceph-mon[73668]: pgmap v121: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 210 KiB/s rd, 5.7 KiB/s wr, 388 op/s
Oct 02 11:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1380703263' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 11:47:16 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Oct 02 11:47:16 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Oct 02 11:47:17 compute-0 sudo[93575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chyinvfwobzdzrplmolyoxnypqpngrvl ; /usr/bin/python3'
Oct 02 11:47:17 compute-0 sudo[93575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:47:17 compute-0 python3[93577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:47:17 compute-0 podman[93578]: 2025-10-02 11:47:17.30685032 +0000 UTC m=+0.041322686 container create b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:17 compute-0 systemd[1]: Started libpod-conmon-b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66.scope.
Oct 02 11:47:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851644d52f338adceb41fa9d9e287da684925e132ab686423f4984a515d3d312/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851644d52f338adceb41fa9d9e287da684925e132ab686423f4984a515d3d312/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:17 compute-0 podman[93578]: 2025-10-02 11:47:17.374992821 +0000 UTC m=+0.109465197 container init b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:47:17 compute-0 podman[93578]: 2025-10-02 11:47:17.381018108 +0000 UTC m=+0.115490474 container start b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:17 compute-0 podman[93578]: 2025-10-02 11:47:17.288746699 +0000 UTC m=+0.023219085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:47:17 compute-0 podman[93578]: 2025-10-02 11:47:17.386182542 +0000 UTC m=+0.120654908 container attach b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:17 compute-0 ceph-mon[73668]: 3.b deep-scrub starts
Oct 02 11:47:17 compute-0 ceph-mon[73668]: 3.b deep-scrub ok
Oct 02 11:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:17.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v122: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 159 KiB/s rd, 4.3 KiB/s wr, 294 op/s
Oct 02 11:47:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 02 11:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145980468' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 11:47:18 compute-0 zen_matsumoto[93593]: 
Oct 02 11:47:18 compute-0 systemd[1]: libpod-b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 zen_matsumoto[93593]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":12}}
Oct 02 11:47:18 compute-0 podman[93578]: 2025-10-02 11:47:18.063899782 +0000 UTC m=+0.798372158 container died b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-851644d52f338adceb41fa9d9e287da684925e132ab686423f4984a515d3d312-merged.mount: Deactivated successfully.
Oct 02 11:47:18 compute-0 podman[93578]: 2025-10-02 11:47:18.466037416 +0000 UTC m=+1.200509782 container remove b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66 (image=quay.io/ceph/ceph:v18, name=zen_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:18 compute-0 sudo[93575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:18 compute-0 systemd[1]: libpod-conmon-b92ab802fb197d1aa9b05cf20188b9ae18ba2c5161f4d90458bcf34684848c66.scope: Deactivated successfully.
Oct 02 11:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Oct 02 11:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct 02 11:47:18 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct 02 11:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:19.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:19.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v123: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 3.6 KiB/s wr, 259 op/s
Oct 02 11:47:19 compute-0 ceph-mon[73668]: pgmap v122: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 159 KiB/s rd, 4.3 KiB/s wr, 294 op/s
Oct 02 11:47:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3145980468' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 11:47:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:20 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 6ff149e4-ec54-4bc0-bb20-9279cef33599 (Global Recovery Event) in 5 seconds
Oct 02 11:47:20 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 02 11:47:20 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 5.4 deep-scrub starts
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 5.4 deep-scrub ok
Oct 02 11:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:21 compute-0 ceph-mon[73668]: Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 4.1a scrub starts
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 4.1a scrub ok
Oct 02 11:47:21 compute-0 ceph-mon[73668]: pgmap v123: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 3.6 KiB/s wr, 259 op/s
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 4.c deep-scrub starts
Oct 02 11:47:21 compute-0 ceph-mon[73668]: 4.c deep-scrub ok
Oct 02 11:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:21.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v124: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.2 KiB/s wr, 233 op/s
Oct 02 11:47:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts
Oct 02 11:47:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok
Oct 02 11:47:22 compute-0 ceph-mon[73668]: 5.14 scrub starts
Oct 02 11:47:22 compute-0 ceph-mon[73668]: 5.14 scrub ok
Oct 02 11:47:22 compute-0 ceph-mon[73668]: pgmap v124: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.2 KiB/s wr, 233 op/s
Oct 02 11:47:23 compute-0 ceph-mon[73668]: 4.f deep-scrub starts
Oct 02 11:47:23 compute-0 ceph-mon[73668]: 4.f deep-scrub ok
Oct 02 11:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:23.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v125: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.8 KiB/s wr, 202 op/s
Oct 02 11:47:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:24 compute-0 ceph-mon[73668]: 3.1d scrub starts
Oct 02 11:47:24 compute-0 ceph-mon[73668]: 3.1d scrub ok
Oct 02 11:47:24 compute-0 ceph-mon[73668]: 5.1c scrub starts
Oct 02 11:47:24 compute-0 ceph-mon[73668]: 5.1c scrub ok
Oct 02 11:47:24 compute-0 ceph-mon[73668]: pgmap v125: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.8 KiB/s wr, 202 op/s
Oct 02 11:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:25.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:25 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 13 completed events
Oct 02 11:47:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:25.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v126: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 2.7 KiB/s wr, 194 op/s
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 3.9 scrub starts
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 3.9 scrub ok
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 5.f scrub starts
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 5.f scrub ok
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 3.e scrub starts
Oct 02 11:47:25 compute-0 ceph-mon[73668]: 3.e scrub ok
Oct 02 11:47:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:25 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 02 11:47:25 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 3.1a scrub starts
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 3.1a scrub ok
Oct 02 11:47:26 compute-0 ceph-mon[73668]: pgmap v126: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 2.7 KiB/s wr, 194 op/s
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 4.10 scrub starts
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 4.10 scrub ok
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 4.e scrub starts
Oct 02 11:47:26 compute-0 ceph-mon[73668]: 4.e scrub ok
Oct 02 11:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:27.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:27.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v127: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct 02 11:47:27 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct 02 11:47:28 compute-0 sudo[93630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:28 compute-0 sudo[93630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:28 compute-0 sudo[93630]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:28 compute-0 ceph-mon[73668]: 2.1d scrub starts
Oct 02 11:47:28 compute-0 ceph-mon[73668]: 2.1d scrub ok
Oct 02 11:47:28 compute-0 ceph-mon[73668]: 3.5 scrub starts
Oct 02 11:47:28 compute-0 ceph-mon[73668]: 3.5 scrub ok
Oct 02 11:47:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-0 ceph-mon[73668]: pgmap v127: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-0 sudo[93655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:28 compute-0 sudo[93655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:28 compute-0 sudo[93655]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:28 compute-0 sudo[93680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:28 compute-0 sudo[93680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:28 compute-0 sudo[93680]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:28 compute-0 sudo[93705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:28 compute-0 sudo[93705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:47:28
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:47:29 compute-0 ceph-mon[73668]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:29 compute-0 ceph-mon[73668]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:29 compute-0 ceph-mon[73668]: Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct 02 11:47:29 compute-0 ceph-mon[73668]: 3.3 scrub starts
Oct 02 11:47:29 compute-0 ceph-mon[73668]: 3.3 scrub ok
Oct 02 11:47:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:29.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v128: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:30 compute-0 ceph-mon[73668]: 2.18 deep-scrub starts
Oct 02 11:47:30 compute-0 ceph-mon[73668]: 2.18 deep-scrub ok
Oct 02 11:47:30 compute-0 ceph-mon[73668]: pgmap v128: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.218686375 +0000 UTC m=+2.710430036 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.238154271 +0000 UTC m=+2.729897902 container create fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Oct 02 11:47:31 compute-0 systemd[1]: Started libpod-conmon-fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7.scope.
Oct 02 11:47:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.300018389 +0000 UTC m=+2.791762050 container init fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.310121642 +0000 UTC m=+2.801865273 container start fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.313931311 +0000 UTC m=+2.805674942 container attach fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 02 11:47:31 compute-0 gifted_dijkstra[93864]: 0 0
Oct 02 11:47:31 compute-0 systemd[1]: libpod-fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7.scope: Deactivated successfully.
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.316424816 +0000 UTC m=+2.808168447 container died fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, version=2.2.4)
Oct 02 11:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4be0d9e7637c3a7160a0c43990d4bacaf5a3638e3a238f37f94507335fd7eab-merged.mount: Deactivated successfully.
Oct 02 11:47:31 compute-0 podman[93767]: 2025-10-02 11:47:31.362096083 +0000 UTC m=+2.853839714 container remove fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7 (image=quay.io/ceph/keepalived:2.2.4, name=gifted_dijkstra, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Oct 02 11:47:31 compute-0 systemd[1]: libpod-conmon-fbf55d0d695cdce10063169e500ca4908e4c82a96037cf519248442e2d490ad7.scope: Deactivated successfully.
Oct 02 11:47:31 compute-0 systemd[1]: Reloading.
Oct 02 11:47:31 compute-0 systemd-rc-local-generator[93910]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:31 compute-0 systemd-sysv-generator[93914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:31.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:31 compute-0 systemd[1]: Reloading.
Oct 02 11:47:31 compute-0 systemd-rc-local-generator[93954]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:31 compute-0 systemd-sysv-generator[93958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v129: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:31 compute-0 ceph-mon[73668]: 5.7 scrub starts
Oct 02 11:47:31 compute-0 ceph-mon[73668]: 5.7 scrub ok
Oct 02 11:47:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct 02 11:47:31 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.nghmbz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct 02 11:47:32 compute-0 podman[94009]: 2025-10-02 11:47:32.182482461 +0000 UTC m=+0.043037337 container create 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 02 11:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eac059e9b50a4d0946829345f9bf96d0312b81d2551ecb96c1056b9440e9/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:32 compute-0 podman[94009]: 2025-10-02 11:47:32.24298452 +0000 UTC m=+0.103539426 container init 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 02 11:47:32 compute-0 podman[94009]: 2025-10-02 11:47:32.249330655 +0000 UTC m=+0.109885541 container start 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=keepalived for Ceph)
Oct 02 11:47:32 compute-0 bash[94009]: 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47
Oct 02 11:47:32 compute-0 podman[94009]: 2025-10-02 11:47:32.164638308 +0000 UTC m=+0.025193214 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 02 11:47:32 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.nghmbz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Starting VRRP child process, pid=4
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: Startup complete
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: (VI_0) Entering BACKUP STATE (init)
Oct 02 11:47:32 compute-0 sudo[93705]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:32 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:32 2025: VRRP_Script(check_backend) succeeded
Oct 02 11:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:47:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev e9bff29b-3ccd-41be-8371-4d66fbcef181 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 02 11:47:32 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event e9bff29b-3ccd-41be-8371-4d66fbcef181 (Updating ingress.rgw.default deployment (+4 -> 4)) in 26 seconds
Oct 02 11:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Oct 02 11:47:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 sudo[94032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:32 compute-0 sudo[94033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:32 compute-0 sudo[94032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:32 compute-0 sudo[94033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:32 compute-0 sudo[94032]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:32 compute-0 sudo[94033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:32 compute-0 sudo[94083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:32 compute-0 sudo[94082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:32 compute-0 sudo[94083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:32 compute-0 sudo[94082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:32 compute-0 sudo[94083]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:32 compute-0 sudo[94082]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:32 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 02 11:47:32 compute-0 ceph-mon[73668]: pgmap v129: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:32 compute-0 ceph-mon[73668]: 5.17 deep-scrub starts
Oct 02 11:47:32 compute-0 ceph-mon[73668]: 5.17 deep-scrub ok
Oct 02 11:47:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:32 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 11:47:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 02 11:47:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 02 11:47:33 compute-0 sudo[94132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:33 compute-0 sudo[94132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-0 sudo[94132]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-0 sudo[94157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:33 compute-0 sudo[94157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-0 sudo[94157]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-0 sudo[94182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:33 compute-0 sudo[94182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-0 sudo[94182]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-0 sudo[94207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:47:33 compute-0 sudo[94207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:33.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:33.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v130: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:33 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Oct 02 11:47:33 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Oct 02 11:47:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 02 11:47:34 compute-0 podman[94298]: 2025-10-02 11:47:34.032820103 +0000 UTC m=+0.103911866 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:47:34 compute-0 podman[94298]: 2025-10-02 11:47:34.136520142 +0000 UTC m=+0.207611875 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:47:34 compute-0 ceph-mon[73668]: 4.11 scrub starts
Oct 02 11:47:34 compute-0 ceph-mon[73668]: 4.11 scrub ok
Oct 02 11:47:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 02 11:47:34 compute-0 ceph-mon[73668]: pgmap v130: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 02 11:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 02 11:47:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 02 11:47:34 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev c5a39e79-d772-4d1b-94f6-c9a3ea948a68 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 02 11:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:47:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:47:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:47:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:34 compute-0 podman[94435]: 2025-10-02 11:47:34.733155577 +0000 UTC m=+0.067565533 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:34 compute-0 podman[94435]: 2025-10-02 11:47:34.750574129 +0000 UTC m=+0.084983995 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:34 compute-0 podman[94502]: 2025-10-02 11:47:34.957801483 +0000 UTC m=+0.055116571 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Oct 02 11:47:34 compute-0 podman[94502]: 2025-10-02 11:47:34.994140465 +0000 UTC m=+0.091455553 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.buildah.version=1.28.2, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9)
Oct 02 11:47:35 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 02 11:47:35 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 02 11:47:35 compute-0 sudo[94207]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 698a5ae6-fe4c-4d7a-b3a3-fb53201f2126 does not exist
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8d82d1e2-2954-4579-8a62-64cbec8006c5 does not exist
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f642d61c-dbb0-426f-af5f-16134b1c47f6 does not exist
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-0 sudo[94533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:35 compute-0 sudo[94533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:35 compute-0 sudo[94533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 105a1776-c673-49bb-a1d2-2e0c6caddef3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:35 compute-0 sudo[94558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:35 compute-0 sudo[94558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:35 compute-0 sudo[94558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.8 scrub starts
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.8 scrub ok
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.12 deep-scrub starts
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.12 deep-scrub ok
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 02 11:47:35 compute-0 ceph-mon[73668]: osdmap e45: 3 total, 3 up, 3 in
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.5 scrub starts
Oct 02 11:47:35 compute-0 ceph-mon[73668]: 4.5 scrub ok
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-0 sudo[94583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:35 compute-0 sudo[94583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:35 compute-0 sudo[94583]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:35 compute-0 sudo[94608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:47:35 compute-0 sudo[94608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:35.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 14 completed events
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.813879466 +0000 UTC m=+0.044536906 container create 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:35.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v133: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 02 11:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 02 11:47:35 compute-0 systemd[1]: Started libpod-conmon-1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24.scope.
Oct 02 11:47:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.793811286 +0000 UTC m=+0.024468746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:35 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz[94024]: Thu Oct  2 11:47:35 2025: (VI_0) Entering MASTER STATE
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.899771644 +0000 UTC m=+0.130429104 container init 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.905595855 +0000 UTC m=+0.136253295 container start 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:47:35 compute-0 elated_brattain[94688]: 167 167
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.912025102 +0000 UTC m=+0.142682562 container attach 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:47:35 compute-0 systemd[1]: libpod-1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24.scope: Deactivated successfully.
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.923743506 +0000 UTC m=+0.154400956 container died 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d999c8a64f3a221fe1ec254499f767986ec8b9da5085ad62f067e02577fd38-merged.mount: Deactivated successfully.
Oct 02 11:47:35 compute-0 podman[94672]: 2025-10-02 11:47:35.963338913 +0000 UTC m=+0.193996353 container remove 1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:47:35 compute-0 systemd[1]: libpod-conmon-1e143e49b8c3c75455b09052dc2dadbca30105b80fd677f5fed8457cf0de0c24.scope: Deactivated successfully.
Oct 02 11:47:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct 02 11:47:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct 02 11:47:36 compute-0 podman[94711]: 2025-10-02 11:47:36.118966819 +0000 UTC m=+0.046069836 container create 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:36 compute-0 systemd[1]: Started libpod-conmon-4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad.scope.
Oct 02 11:47:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:36 compute-0 podman[94711]: 2025-10-02 11:47:36.10085704 +0000 UTC m=+0.027960077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:36 compute-0 podman[94711]: 2025-10-02 11:47:36.20806277 +0000 UTC m=+0.135165837 container init 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:47:36 compute-0 podman[94711]: 2025-10-02 11:47:36.217089704 +0000 UTC m=+0.144192721 container start 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:47:36 compute-0 podman[94711]: 2025-10-02 11:47:36.229565898 +0000 UTC m=+0.156668905 container attach 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:47:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 02 11:47:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 4f330f2e-9b6a-4e94-8518-49644f81c28d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 11:47:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:47:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-0 ceph-mon[73668]: 5.19 scrub starts
Oct 02 11:47:36 compute-0 ceph-mon[73668]: 5.19 scrub ok
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: osdmap e46: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:36 compute-0 ceph-mon[73668]: pgmap v133: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 02 11:47:36 compute-0 ceph-mon[73668]: osdmap e47: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 47 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=13.568790436s) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active pruub 130.814163208s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 47 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=13.568790436s) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown pruub 130.814163208s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 vigilant_wilson[94727]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:47:37 compute-0 vigilant_wilson[94727]: --> relative data size: 1.0
Oct 02 11:47:37 compute-0 vigilant_wilson[94727]: --> All data devices are unavailable
Oct 02 11:47:37 compute-0 systemd[1]: libpod-4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad.scope: Deactivated successfully.
Oct 02 11:47:37 compute-0 podman[94743]: 2025-10-02 11:47:37.114327135 +0000 UTC m=+0.026584660 container died 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae2e77334ece9af7f0cf5a5a697592f2308035c9c27dcd42dde8c6a43b50476-merged.mount: Deactivated successfully.
Oct 02 11:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 02 11:47:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 02 11:47:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:37 compute-0 podman[94743]: 2025-10-02 11:47:37.540390216 +0000 UTC m=+0.452647721 container remove 4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:47:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 02 11:47:37 compute-0 systemd[1]: libpod-conmon-4a9ad1eec3d3450e068fd0cd520378957454db2525288b05b2002870a33983ad.scope: Deactivated successfully.
Oct 02 11:47:37 compute-0 sudo[94608]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:37 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 0c37b9e5-166d-4fc9-8b20-d7428d9cc6d1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 11:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:47:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.8( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.b( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.a( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.9( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.f( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.e( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.5( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.2( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.3( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.7( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.6( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-0 ceph-mon[73668]: 5.1d deep-scrub starts
Oct 02 11:47:37 compute-0 ceph-mon[73668]: 5.1d deep-scrub ok
Oct 02 11:47:37 compute-0 ceph-mon[73668]: 5.1a scrub starts
Oct 02 11:47:37 compute-0 ceph-mon[73668]: 5.1a scrub ok
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.8( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.b( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.9( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.a( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.f( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.e( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.0( empty local-lis/les=47/48 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.3( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.5( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.6( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.7( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 48 pg[6.2( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [1] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-0 sudo[94757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:37 compute-0 sudo[94757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:37 compute-0 sudo[94757]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:37 compute-0 sudo[94782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:37 compute-0 sudo[94782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:37 compute-0 sudo[94782]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:37 compute-0 sudo[94807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:37 compute-0 sudo[94807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:37 compute-0 sudo[94807]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v136: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:37.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:37 compute-0 sudo[94832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:47:37 compute-0 sudo[94832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.212888548 +0000 UTC m=+0.047053131 container create f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:38 compute-0 systemd[1]: Started libpod-conmon-f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b.scope.
Oct 02 11:47:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.192642123 +0000 UTC m=+0.026806726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.300073499 +0000 UTC m=+0.134238112 container init f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.310685084 +0000 UTC m=+0.144849667 container start f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.314474713 +0000 UTC m=+0.148639326 container attach f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:38 compute-0 mystifying_ellis[94915]: 167 167
Oct 02 11:47:38 compute-0 systemd[1]: libpod-f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b.scope: Deactivated successfully.
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.316935707 +0000 UTC m=+0.151100300 container died f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b68b6e8b944ba837cab09d5aa26396ed25b308c6e53ad8c732a26da081ba36c-merged.mount: Deactivated successfully.
Oct 02 11:47:38 compute-0 podman[94898]: 2025-10-02 11:47:38.35990355 +0000 UTC m=+0.194068133 container remove f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:47:38 compute-0 systemd[1]: libpod-conmon-f9ee5ede4daa02e4e36837ccf6753da2e90b13d73a66c606fd9e6237d16d232b.scope: Deactivated successfully.
Oct 02 11:47:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 02 11:47:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 9e75fc29-7398-4e56-96cd-771f823b515d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 11:47:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 11:47:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 49 pg[8.0( v 37'4 (0'0,37'4] local-lis/les=36/37 n=4 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=13.323875427s) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 37'3 mlcod 37'3 active pruub 132.449645996s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 49 pg[9.0( v 44'1012 (0'0,44'1012] local-lis/les=38/39 n=177 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49 pruub=15.395671844s) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 44'1011 mlcod 44'1011 active pruub 134.521804810s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 49 pg[8.0( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=13.323875427s) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 37'3 mlcod 0'0 unknown pruub 132.449645996s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 49 pg[9.0( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49 pruub=15.395671844s) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 44'1011 mlcod 0'0 unknown pruub 134.521804810s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:38 compute-0 podman[94938]: 2025-10-02 11:47:38.558674665 +0000 UTC m=+0.056849715 container create fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:38 compute-0 systemd[1]: Started libpod-conmon-fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c.scope.
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: osdmap e48: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-0 ceph-mon[73668]: pgmap v136: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-0 ceph-mon[73668]: 2.1c scrub starts
Oct 02 11:47:38 compute-0 ceph-mon[73668]: 2.1c scrub ok
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-0 ceph-mon[73668]: osdmap e49: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:38 compute-0 podman[94938]: 2025-10-02 11:47:38.536616283 +0000 UTC m=+0.034791353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ee8f683a618d471b2c52b772f7bc8e98261fde70eddf155d56286b564d6083/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ee8f683a618d471b2c52b772f7bc8e98261fde70eddf155d56286b564d6083/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ee8f683a618d471b2c52b772f7bc8e98261fde70eddf155d56286b564d6083/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ee8f683a618d471b2c52b772f7bc8e98261fde70eddf155d56286b564d6083/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:38 compute-0 podman[94938]: 2025-10-02 11:47:38.66255183 +0000 UTC m=+0.160726910 container init fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:38 compute-0 podman[94938]: 2025-10-02 11:47:38.67142523 +0000 UTC m=+0.169600280 container start fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:47:38 compute-0 podman[94938]: 2025-10-02 11:47:38.67488736 +0000 UTC m=+0.173062410 container attach fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 02 11:47:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 02 11:47:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]: {
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:     "1": [
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:         {
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "devices": [
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "/dev/loop3"
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             ],
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "lv_name": "ceph_lv0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "lv_size": "7511998464",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "name": "ceph_lv0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "tags": {
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.cluster_name": "ceph",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.crush_device_class": "",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.encrypted": "0",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.osd_id": "1",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.type": "block",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:                 "ceph.vdo": "0"
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             },
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "type": "block",
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:             "vg_name": "ceph_vg0"
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:         }
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]:     ]
Oct 02 11:47:39 compute-0 nostalgic_golick[94955]: }
Oct 02 11:47:39 compute-0 systemd[1]: libpod-fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c.scope: Deactivated successfully.
Oct 02 11:47:39 compute-0 podman[94938]: 2025-10-02 11:47:39.48687537 +0000 UTC m=+0.985050420 container died fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ee8f683a618d471b2c52b772f7bc8e98261fde70eddf155d56286b564d6083-merged.mount: Deactivated successfully.
Oct 02 11:47:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 02 11:47:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 02 11:47:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 24cc2892-de7d-4b0e-83e0-3170ad47ea82 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev c5a39e79-d772-4d1b-94f6-c9a3ea948a68 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event c5a39e79-d772-4d1b-94f6-c9a3ea948a68 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 105a1776-c673-49bb-a1d2-2e0c6caddef3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 105a1776-c673-49bb-a1d2-2e0c6caddef3 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 4f330f2e-9b6a-4e94-8518-49644f81c28d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 4f330f2e-9b6a-4e94-8518-49644f81c28d (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 0c37b9e5-166d-4fc9-8b20-d7428d9cc6d1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 0c37b9e5-166d-4fc9-8b20-d7428d9cc6d1 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 9e75fc29-7398-4e56-96cd-771f823b515d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 9e75fc29-7398-4e56-96cd-771f823b515d (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 24cc2892-de7d-4b0e-83e0-3170ad47ea82 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 24cc2892-de7d-4b0e-83e0-3170ad47ea82 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1e( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.19( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1f( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.18( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.16( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.17( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.17( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.16( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.10( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.11( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.2( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.5( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.4( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.7( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.6( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.12( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.12( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.13( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.13( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1d( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1c( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1d( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1c( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1e( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.18( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.19( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1a( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1f( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.3( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1b( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1a( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.5( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1b( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.4( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.6( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.7( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1( v 37'4 (0'0,37'4] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.b( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.a( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.c( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.d( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.c( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.d( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.f( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.e( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.b( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.a( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.8( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.9( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.9( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.8( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.e( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.f( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.3( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.2( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.10( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.14( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.15( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.15( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.11( v 44'1012 lc 0'0 (0'0,44'1012] local-lis/les=38/39 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.14( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 podman[94938]: 2025-10-02 11:47:39.559658607 +0000 UTC m=+1.057833657 container remove fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:39 compute-0 systemd[1]: libpod-conmon-fa430dc9b6733baf32018ac83afd84b0cdef8297b995c676cb940da94270d34c.scope: Deactivated successfully.
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.17( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.16( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.11( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.18( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.2( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.5( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.6( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.12( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.13( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.4( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1d( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1c( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1a( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1e( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.19( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.4( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.7( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.0( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 37'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.0( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 44'1011 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.1( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.c( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.d( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.e( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.a( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.9( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.8( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.3( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.2( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.10( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.14( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.15( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [1] r=0 lpr=49 pi=[38,49)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 50 pg[8.14( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:39 compute-0 sudo[94832]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 sudo[94975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:39 compute-0 sudo[94975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:39 compute-0 sudo[94975]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 ceph-mon[73668]: 4.16 scrub starts
Oct 02 11:47:39 compute-0 ceph-mon[73668]: 4.16 scrub ok
Oct 02 11:47:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:39 compute-0 ceph-mon[73668]: osdmap e50: 3 total, 3 up, 3 in
Oct 02 11:47:39 compute-0 sudo[95000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:39 compute-0 sudo[95000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:39 compute-0 sudo[95000]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 sudo[95025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:39 compute-0 sudo[95025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:39 compute-0 sudo[95025]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:39 compute-0 sudo[95050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:47:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v139: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:39 compute-0 sudo[95050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:39.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct 02 11:47:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.243317839 +0000 UTC m=+0.117282913 container create ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.151950479 +0000 UTC m=+0.025915573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:40 compute-0 systemd[1]: Started libpod-conmon-ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279.scope.
Oct 02 11:47:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.323501789 +0000 UTC m=+0.197466863 container init ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.330978503 +0000 UTC m=+0.204943577 container start ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.334825933 +0000 UTC m=+0.208791017 container attach ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:40 compute-0 nice_cerf[95128]: 167 167
Oct 02 11:47:40 compute-0 systemd[1]: libpod-ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279.scope: Deactivated successfully.
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.337586514 +0000 UTC m=+0.211551578 container died ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea75c09c8bf4d242909278933c785f05d67ae29030e3b3a69544adf775cd7963-merged.mount: Deactivated successfully.
Oct 02 11:47:40 compute-0 podman[95112]: 2025-10-02 11:47:40.551070221 +0000 UTC m=+0.425035295 container remove ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:47:40 compute-0 systemd[1]: libpod-conmon-ae823da73854c8f85b1488bd0cd0e98d9198d836bd98bb4b25652668d6316279.scope: Deactivated successfully.
Oct 02 11:47:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 02 11:47:40 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 20 completed events
Oct 02 11:47:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:40 compute-0 ceph-mon[73668]: pgmap v139: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:40 compute-0 ceph-mon[73668]: 3.12 deep-scrub starts
Oct 02 11:47:40 compute-0 ceph-mon[73668]: 3.12 deep-scrub ok
Oct 02 11:47:40 compute-0 podman[95153]: 2025-10-02 11:47:40.723620067 +0000 UTC m=+0.048549161 container create d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:47:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 02 11:47:40 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51 pruub=10.203452110s) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active pruub 131.546981812s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 02 11:47:40 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51 pruub=10.203452110s) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown pruub 131.546981812s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:40 compute-0 systemd[1]: Started libpod-conmon-d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c.scope.
Oct 02 11:47:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7d320d092de81fcb21458b019e846da65de8a0cc7eb066d081711741637294/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:40 compute-0 podman[95153]: 2025-10-02 11:47:40.701895373 +0000 UTC m=+0.026824497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7d320d092de81fcb21458b019e846da65de8a0cc7eb066d081711741637294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7d320d092de81fcb21458b019e846da65de8a0cc7eb066d081711741637294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7d320d092de81fcb21458b019e846da65de8a0cc7eb066d081711741637294/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:40 compute-0 podman[95153]: 2025-10-02 11:47:40.815589032 +0000 UTC m=+0.140518146 container init d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:47:40 compute-0 podman[95153]: 2025-10-02 11:47:40.823838576 +0000 UTC m=+0.148767670 container start d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:47:40 compute-0 podman[95153]: 2025-10-02 11:47:40.827266435 +0000 UTC m=+0.152195549 container attach d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:47:40 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 02 11:47:41 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 02 11:47:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 02 11:47:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]: {
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:         "osd_id": 1,
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:         "type": "bluestore"
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]:     }
Oct 02 11:47:41 compute-0 keen_goldwasser[95169]: }
Oct 02 11:47:41 compute-0 ceph-mon[73668]: 5.e scrub starts
Oct 02 11:47:41 compute-0 ceph-mon[73668]: 5.e scrub ok
Oct 02 11:47:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:41 compute-0 ceph-mon[73668]: osdmap e51: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-0 ceph-mon[73668]: 5.1e scrub starts
Oct 02 11:47:41 compute-0 ceph-mon[73668]: 5.1e scrub ok
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [1] r=0 lpr=51 pi=[42,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-0 systemd[1]: libpod-d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c.scope: Deactivated successfully.
Oct 02 11:47:41 compute-0 podman[95153]: 2025-10-02 11:47:41.806929074 +0000 UTC m=+1.131858168 container died d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b7d320d092de81fcb21458b019e846da65de8a0cc7eb066d081711741637294-merged.mount: Deactivated successfully.
Oct 02 11:47:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:41.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:41 compute-0 podman[95153]: 2025-10-02 11:47:41.864889737 +0000 UTC m=+1.189818831 container remove d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:47:41 compute-0 systemd[1]: libpod-conmon-d983fd05dfb3e25d0d16b935a0e5d969498bdca5e32022f13314f853f21ee77c.scope: Deactivated successfully.
Oct 02 11:47:41 compute-0 sudo[95050]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:41 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 02 11:47:41 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 02 11:47:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 782b9443-27a9-4796-82d3-fba9f93737ff does not exist
Oct 02 11:47:42 compute-0 ceph-mgr[73961]: [progress INFO root] update: starting ev 810ee815-d733-42bd-bde3-80267cbc9537 (Updating mds.cephfs deployment (+3 -> 3))
Oct 02 11:47:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:47:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:42 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct 02 11:47:42 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct 02 11:47:42 compute-0 ceph-mon[73668]: osdmap e52: 3 total, 3 up, 3 in
Oct 02 11:47:42 compute-0 ceph-mon[73668]: pgmap v142: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:42 compute-0 ceph-mon[73668]: 4.17 scrub starts
Oct 02 11:47:42 compute-0 ceph-mon[73668]: 4.17 scrub ok
Oct 02 11:47:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:43.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:43 compute-0 ceph-mon[73668]: Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct 02 11:47:43 compute-0 ceph-mon[73668]: 4.d scrub starts
Oct 02 11:47:43 compute-0 ceph-mon[73668]: 4.d scrub ok
Oct 02 11:47:43 compute-0 ceph-mon[73668]: 4.14 scrub starts
Oct 02 11:47:43 compute-0 ceph-mon[73668]: 4.14 scrub ok
Oct 02 11:47:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 02 11:47:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] as mds.0
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.dtavud assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e3 new map
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:44.341938+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:boot
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:creating}
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dtavud"} v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dtavud"}]: dispatch
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e3 all = 0
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.dtavud is now active in filesystem cephfs as rank 0
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:44 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct 02 11:47:44 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct 02 11:47:44 compute-0 sudo[95203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:44 compute-0 sudo[95203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:44 compute-0 sudo[95203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:44 compute-0 sudo[95228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:44 compute-0 sudo[95228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:44 compute-0 sudo[95228]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:44 compute-0 sudo[95253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:44 compute-0 sudo[95253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:44 compute-0 sudo[95253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:44 compute-0 sudo[95278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:44 compute-0 sudo[95278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: pgmap v143: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:44 compute-0 ceph-mon[73668]: 3.17 scrub starts
Oct 02 11:47:44 compute-0 ceph-mon[73668]: 3.17 scrub ok
Oct 02 11:47:44 compute-0 ceph-mon[73668]: daemon mds.cephfs.compute-2.dtavud assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 11:47:44 compute-0 ceph-mon[73668]: Cluster is now healthy
Oct 02 11:47:44 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:boot
Oct 02 11:47:44 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:creating}
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dtavud"}]: dispatch
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: daemon mds.cephfs.compute-2.dtavud is now active in filesystem cephfs as rank 0
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.103322381 +0000 UTC m=+0.043667234 container create 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:45 compute-0 systemd[1]: Started libpod-conmon-2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c.scope.
Oct 02 11:47:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.085714444 +0000 UTC m=+0.026059327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.187310249 +0000 UTC m=+0.127655112 container init 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.194879675 +0000 UTC m=+0.135224528 container start 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.199666999 +0000 UTC m=+0.140011852 container attach 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:47:45 compute-0 clever_elbakyan[95359]: 167 167
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.201387514 +0000 UTC m=+0.141732357 container died 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 02 11:47:45 compute-0 systemd[1]: libpod-2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c.scope: Deactivated successfully.
Oct 02 11:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dad46cad7d33715f5f587bad8211db50f28aa160e00aab775ca5e1c69792574-merged.mount: Deactivated successfully.
Oct 02 11:47:45 compute-0 podman[95342]: 2025-10-02 11:47:45.244996735 +0000 UTC m=+0.185341578 container remove 2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:45 compute-0 systemd[1]: libpod-conmon-2ffa587ce9bf3f21bbe34a31094d674160d468383ced9d3c71dcd4f2d7b1684c.scope: Deactivated successfully.
Oct 02 11:47:45 compute-0 systemd[1]: Reloading.
Oct 02 11:47:45 compute-0 systemd-sysv-generator[95409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:45 compute-0 systemd-rc-local-generator[95406]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e4 new map
Oct 02 11:47:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:47:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active}
Oct 02 11:47:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:45 compute-0 systemd[1]: Reloading.
Oct 02 11:47:45 compute-0 systemd-rc-local-generator[95446]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:45 compute-0 systemd-sysv-generator[95450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:45 compute-0 ceph-mgr[73961]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Oct 02 11:47:45 compute-0 ceph-mon[73668]: Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct 02 11:47:45 compute-0 ceph-mon[73668]: 3.a scrub starts
Oct 02 11:47:45 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:45 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active}
Oct 02 11:47:45 compute-0 ceph-mon[73668]: 3.a scrub ok
Oct 02 11:47:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s wr, 6 op/s
Oct 02 11:47:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:45 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.yqiqns for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:46 compute-0 podman[95504]: 2025-10-02 11:47:46.131651771 +0000 UTC m=+0.046017795 container create 3a3c1bf8dda7e00dad46f0935a8964271cf672b44d04082cea96cf17965fbda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-0-yqiqns, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a42c4679a9432273f1b46c5e935531f5824457448e463ffd9139204118ceb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a42c4679a9432273f1b46c5e935531f5824457448e463ffd9139204118ceb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a42c4679a9432273f1b46c5e935531f5824457448e463ffd9139204118ceb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a42c4679a9432273f1b46c5e935531f5824457448e463ffd9139204118ceb0/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.yqiqns supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:46 compute-0 podman[95504]: 2025-10-02 11:47:46.195403244 +0000 UTC m=+0.109769288 container init 3a3c1bf8dda7e00dad46f0935a8964271cf672b44d04082cea96cf17965fbda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-0-yqiqns, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:47:46 compute-0 podman[95504]: 2025-10-02 11:47:46.20179197 +0000 UTC m=+0.116157994 container start 3a3c1bf8dda7e00dad46f0935a8964271cf672b44d04082cea96cf17965fbda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-0-yqiqns, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:46 compute-0 bash[95504]: 3a3c1bf8dda7e00dad46f0935a8964271cf672b44d04082cea96cf17965fbda8
Oct 02 11:47:46 compute-0 podman[95504]: 2025-10-02 11:47:46.111544949 +0000 UTC m=+0.025910993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:46 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.yqiqns for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:46 compute-0 ceph-mds[95523]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:46 compute-0 ceph-mds[95523]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 02 11:47:46 compute-0 ceph-mds[95523]: main not setting numa affinity
Oct 02 11:47:46 compute-0 ceph-mds[95523]: pidfile_write: ignore empty --pid-file
Oct 02 11:47:46 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-0-yqiqns[95519]: starting mds.cephfs.compute-0.yqiqns at 
Oct 02 11:47:46 compute-0 sudo[95278]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Updating MDS map to version 4 from mon.2
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:46 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct 02 11:47:46 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e5 new map
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:boot
Oct 02 11:47:46 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Updating MDS map to version 5 from mon.2
Oct 02 11:47:46 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Monitors have assigned me to become a standby.
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.yqiqns"} v 0) v1
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.yqiqns"}]: dispatch
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e5 all = 0
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e6 new map
Oct 02 11:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:46 compute-0 ceph-mon[73668]: pgmap v144: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s wr, 6 op/s
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: 2.15 scrub starts
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: 2.15 scrub ok
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:47:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 02 11:47:48 compute-0 ceph-mon[73668]: Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:boot
Oct 02 11:47:48 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.yqiqns"}]: dispatch
Oct 02 11:47:48 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.554290771s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369186401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.554213524s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369186401s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.14( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.357163429s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172317505s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551455498s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.366638184s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.15( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.357075691s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172286987s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.15( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.357056618s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172286987s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.14( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.357052803s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172317505s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551361084s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.366638184s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551239967s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.366653442s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551220894s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.366653442s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.10( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356728554s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172241211s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.10( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356706619s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172241211s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.d( v 52'3 (0'0,52'3] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.393189430s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'3 lcod 52'2 mlcod 52'2 active pruub 142.208847046s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.3( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356547356s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172225952s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.d( v 52'3 (0'0,52'3] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.393158913s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'3 lcod 52'2 mlcod 0'0 unknown NOTIFY pruub 142.208847046s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392943382s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 142.208755493s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356362343s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172180176s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356326103s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172180176s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.3( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356513977s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172225952s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392905235s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 142.208755493s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.553153992s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369216919s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.8( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356122017s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172210693s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.553127289s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369216919s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.7( v 52'2 (0'0,52'2] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392623901s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'2 lcod 52'1 mlcod 52'1 active pruub 142.208724976s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.8( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.356095314s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172210693s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.7( v 52'2 (0'0,52'2] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392584801s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'2 lcod 52'1 mlcod 0'0 unknown NOTIFY pruub 142.208724976s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.9( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355875015s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172073364s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.9( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355854034s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172073364s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.a( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355607033s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.172027588s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.a( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355584145s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.172027588s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.553005219s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369461060s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552982330s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369461060s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.3( v 52'2 (0'0,52'2] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392077446s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'2 lcod 52'1 mlcod 52'1 active pruub 142.208587646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.3( v 52'2 (0'0,52'2] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.392040253s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'2 lcod 52'1 mlcod 0'0 unknown NOTIFY pruub 142.208587646s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.d( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355327606s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171890259s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552901268s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369491577s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552885056s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369491577s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.d( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355307579s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171890259s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552873611s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369613647s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355158806s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171905518s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552852631s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369613647s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.355129242s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171905518s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.5( v 52'3 (0'0,52'3] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.391757011s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'3 lcod 52'2 mlcod 52'2 active pruub 142.208541870s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.5( v 52'3 (0'0,52'3] local-lis/les=47/48 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.391686440s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'3 lcod 52'2 mlcod 0'0 unknown NOTIFY pruub 142.208541870s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.354915619s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171829224s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.354895592s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171829224s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=47/48 n=3 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.391288757s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'4 lcod 52'4 mlcod 52'4 active pruub 142.208282471s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=47/48 n=3 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.391256332s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'4 lcod 52'4 mlcod 0'0 unknown NOTIFY pruub 142.208282471s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552357674s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369567871s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552334785s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369567871s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552574158s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369873047s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552532196s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369873047s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.4( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.354088783s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171493530s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.390814781s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 142.208236694s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.4( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.354066849s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171493530s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.390791893s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 142.208236694s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552282333s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369552612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353754044s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171356201s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1b( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353734970s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171356201s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551955223s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369613647s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551929474s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369613647s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551914215s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369613647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551860809s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369613647s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.19( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353457451s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171249390s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.19( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353437424s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171249390s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.552034378s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369918823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551909447s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.369903564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551889420s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369903564s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551848412s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369918823s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353054047s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.171157837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551919937s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.369552612s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1c( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.353021622s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.171157837s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.12( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352509499s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170883179s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.6( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352476120s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170883179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.12( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352483749s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170883179s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.389907837s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'1 lcod 52'2 mlcod 52'2 active pruub 142.208404541s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551506042s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370056152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.5( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352231026s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170791626s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.389868736s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=52'1 lcod 52'2 mlcod 0'0 unknown NOTIFY pruub 142.208404541s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551481247s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370056152s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.5( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352205276s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170791626s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.2( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352102280s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170806885s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.6( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352442741s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170883179s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.2( v 37'4 (0'0,37'4] local-lis/les=49/50 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.352076530s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170806885s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551185608s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370010376s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551264763s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370101929s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551161766s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370010376s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.551239967s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370101929s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.16( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351494789s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170440674s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.11( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351653099s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170623779s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.16( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351470947s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170440674s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550981522s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370040894s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550958633s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370040894s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.17( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351292610s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170394897s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550893784s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370040894s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.17( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351252556s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170394897s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550870895s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370040894s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.18( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351576805s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170791626s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.18( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351555824s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170791626s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.11( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.351624489s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170623779s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550890923s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 138.370269775s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.550870895s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.370269775s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.350955009s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active pruub 144.170440674s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[8.1f( v 37'4 (0'0,37'4] local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.350917816s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.170440674s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.5( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.18( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[10.1b( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:48 compute-0 ceph-mgr[73961]: [progress INFO root] complete: finished ev 810ee815-d733-42bd-bde3-80267cbc9537 (Updating mds.cephfs deployment (+3 -> 3))
Oct 02 11:47:48 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event 810ee815-d733-42bd-bde3-80267cbc9537 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 11:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9bce2dff-55f9-467e-b4de-397bee82941d does not exist
Oct 02 11:47:48 compute-0 sudo[95545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:48 compute-0 sudo[95545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-0 sudo[95545]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-0 sudo[95570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:48 compute-0 sudo[95570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-0 sudo[95570]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-0 sudo[95595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:48 compute-0 sudo[95595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-0 sudo[95595]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-0 sudo[95620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:48 compute-0 sudo[95620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-0 sudo[95620]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:49 compute-0 sudo[95645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:49 compute-0 sudo[95645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:49 compute-0 sudo[95645]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:49 compute-0 sudo[95670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:47:49 compute-0 sudo[95670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 02 11:47:49 compute-0 ceph-mon[73668]: pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-0 ceph-mon[73668]: osdmap e53: 3 total, 3 up, 3 in
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:49.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e7 new map
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 02 11:47:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 02 11:47:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:boot
Oct 02 11:47:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bhscyq"} v 0) v1
Oct 02 11:47:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bhscyq"}]: dispatch
Oct 02 11:47:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e7 all = 0
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.1e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.15( v 52'51 lc 41'30 (0'0,52'51] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=52'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[10.14( v 52'51 lc 41'45 (0'0,52'51] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=52'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 54 pg[7.10( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-0 podman[95767]: 2025-10-02 11:47:49.602867971 +0000 UTC m=+0.083164498 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:49 compute-0 podman[95767]: 2025-10-02 11:47:49.707597547 +0000 UTC m=+0.187894064 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 podman[95900]: 2025-10-02 11:47:50.316044828 +0000 UTC m=+0.057153063 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:50 compute-0 podman[95900]: 2025-10-02 11:47:50.322802203 +0000 UTC m=+0.063910388 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:47:50 compute-0 podman[95969]: 2025-10-02 11:47:50.542281786 +0000 UTC m=+0.057210815 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, vendor=Red Hat, Inc.)
Oct 02 11:47:50 compute-0 podman[95969]: 2025-10-02 11:47:50.556603597 +0000 UTC m=+0.071532576 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, name=keepalived)
Oct 02 11:47:50 compute-0 ceph-mon[73668]: 5.12 scrub starts
Oct 02 11:47:50 compute-0 ceph-mon[73668]: 5.12 scrub ok
Oct 02 11:47:50 compute-0 ceph-mon[73668]: 5.8 scrub starts
Oct 02 11:47:50 compute-0 ceph-mon[73668]: 5.8 scrub ok
Oct 02 11:47:50 compute-0 ceph-mon[73668]: osdmap e54: 3 total, 3 up, 3 in
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:boot
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:50 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bhscyq"}]: dispatch
Oct 02 11:47:50 compute-0 ceph-mon[73668]: pgmap v148: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 sudo[95670]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 21 completed events
Oct 02 11:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-0 ceph-mgr[73961]: [progress INFO root] Completed event f78306c8-552c-4ad5-bce4-9a7deedfb2a6 (Global Recovery Event) in 5 seconds
Oct 02 11:47:50 compute-0 sudo[96019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:50 compute-0 sudo[96019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-0 sudo[96019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-0 sudo[96044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:50 compute-0 sudo[96044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-0 sudo[96044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-0 sudo[96069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:50 compute-0 sudo[96069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-0 sudo[96069]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-0 sudo[96094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:47:51 compute-0 sudo[96094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e8 new map
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:standby
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:51 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Updating MDS map to version 8 from mon.2
Oct 02 11:47:51 compute-0 sudo[96094]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:47:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b055a7f1-5cc4-4a9c-839e-9b55223f197e does not exist
Oct 02 11:47:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 902503bd-e307-41ba-9fd1-6790c66cf605 does not exist
Oct 02 11:47:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7d834a6e-9bab-4fd5-b47e-80879467d840 does not exist
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:51 compute-0 sudo[96150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:51 compute-0 sudo[96150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:51 compute-0 sudo[96150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-0 sudo[96175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:51 compute-0 sudo[96175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:51 compute-0 sudo[96175]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-0 sudo[96200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:51 compute-0 sudo[96200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:51 compute-0 sudo[96200]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:standby
Oct 02 11:47:51 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:51 compute-0 sudo[96225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:47:51 compute-0 sudo[96225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 02 11:47:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.124837352 +0000 UTC m=+0.062486492 container create b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.08542139 +0000 UTC m=+0.023070550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:52 compute-0 systemd[1]: Started libpod-conmon-b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26.scope.
Oct 02 11:47:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.377503085 +0000 UTC m=+0.315152235 container init b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.38617194 +0000 UTC m=+0.323821080 container start b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.39077875 +0000 UTC m=+0.328427920 container attach b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:47:52 compute-0 trusting_gould[96308]: 167 167
Oct 02 11:47:52 compute-0 systemd[1]: libpod-b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26.scope: Deactivated successfully.
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.39308866 +0000 UTC m=+0.330737830 container died b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b69938683edb8d0f684a3af053fa0551e250543b303c3a230194938e5dd5dd57-merged.mount: Deactivated successfully.
Oct 02 11:47:52 compute-0 podman[96290]: 2025-10-02 11:47:52.434026601 +0000 UTC m=+0.371675741 container remove b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:47:52 compute-0 systemd[1]: libpod-conmon-b61718a9af7e077ea93c5b37812774c40ceda7e7ca47854553c0ad16460d4b26.scope: Deactivated successfully.
Oct 02 11:47:52 compute-0 podman[96331]: 2025-10-02 11:47:52.613855916 +0000 UTC m=+0.050185643 container create b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:47:52 compute-0 systemd[1]: Started libpod-conmon-b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc.scope.
Oct 02 11:47:52 compute-0 podman[96331]: 2025-10-02 11:47:52.590655754 +0000 UTC m=+0.026985511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:52 compute-0 podman[96331]: 2025-10-02 11:47:52.711249091 +0000 UTC m=+0.147578848 container init b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:47:52 compute-0 podman[96331]: 2025-10-02 11:47:52.71894626 +0000 UTC m=+0.155275997 container start b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:47:52 compute-0 podman[96331]: 2025-10-02 11:47:52.722455481 +0000 UTC m=+0.158785218 container attach b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 11:47:52 compute-0 ceph-mon[73668]: pgmap v149: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:52 compute-0 ceph-mon[73668]: 4.1e scrub starts
Oct 02 11:47:52 compute-0 ceph-mon[73668]: 4.1e scrub ok
Oct 02 11:47:52 compute-0 ceph-mon[73668]: 2.10 deep-scrub starts
Oct 02 11:47:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e9 new map
Oct 02 11:47:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:standby
Oct 02 11:47:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:53 compute-0 sudo[96353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:53 compute-0 sudo[96353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:53 compute-0 sudo[96353]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 sudo[96378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:53 compute-0 sudo[96378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:53 compute-0 sudo[96378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:53.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:53 compute-0 optimistic_goldwasser[96348]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:47:53 compute-0 optimistic_goldwasser[96348]: --> relative data size: 1.0
Oct 02 11:47:53 compute-0 optimistic_goldwasser[96348]: --> All data devices are unavailable
Oct 02 11:47:53 compute-0 systemd[1]: libpod-b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc.scope: Deactivated successfully.
Oct 02 11:47:53 compute-0 podman[96331]: 2025-10-02 11:47:53.635611945 +0000 UTC m=+1.071941792 container died b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0a5a847eba0ca46d4e86f56bfd6dd6b80a9661634dc39272cfe94b33d6ff50-merged.mount: Deactivated successfully.
Oct 02 11:47:53 compute-0 podman[96331]: 2025-10-02 11:47:53.700881568 +0000 UTC m=+1.137211315 container remove b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:47:53 compute-0 systemd[1]: libpod-conmon-b82c39af34c3d4b156736604ada75d0f3c328ee920f4152abdc99df14a2ba1cc.scope: Deactivated successfully.
Oct 02 11:47:53 compute-0 sudo[96225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 sudo[96425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:53 compute-0 sudo[96425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:53 compute-0 sudo[96425]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 ceph-mon[73668]: 3.d scrub starts
Oct 02 11:47:53 compute-0 ceph-mon[73668]: 3.d scrub ok
Oct 02 11:47:53 compute-0 ceph-mon[73668]: 2.10 deep-scrub ok
Oct 02 11:47:53 compute-0 ceph-mon[73668]: mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:standby
Oct 02 11:47:53 compute-0 ceph-mon[73668]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:53 compute-0 sudo[96450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:53 compute-0 sudo[96450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:53 compute-0 sudo[96450]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:53 compute-0 sudo[96475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:53 compute-0 sudo[96475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:53 compute-0 sudo[96475]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:54 compute-0 sudo[96500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:47:54 compute-0 sudo[96500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.383815751 +0000 UTC m=+0.049742621 container create f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:54 compute-0 systemd[1]: Started libpod-conmon-f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf.scope.
Oct 02 11:47:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.362224851 +0000 UTC m=+0.028151751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.463326073 +0000 UTC m=+0.129252973 container init f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.47167966 +0000 UTC m=+0.137606530 container start f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.474968425 +0000 UTC m=+0.140895315 container attach f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:54 compute-0 optimistic_elbakyan[96582]: 167 167
Oct 02 11:47:54 compute-0 systemd[1]: libpod-f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf.scope: Deactivated successfully.
Oct 02 11:47:54 compute-0 conmon[96582]: conmon f7f22be0c5c805083de7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf.scope/container/memory.events
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.478330563 +0000 UTC m=+0.144257433 container died f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2f1c547d43e09d3e272223fa87ac1294920a9460cb565dfa2bb852cd50cea6c-merged.mount: Deactivated successfully.
Oct 02 11:47:54 compute-0 podman[96566]: 2025-10-02 11:47:54.514056529 +0000 UTC m=+0.179983399 container remove f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:47:54 compute-0 systemd[1]: libpod-conmon-f7f22be0c5c805083de77d88ba9d90c10477f53e36a93f0f0cabbdd4b854b7bf.scope: Deactivated successfully.
Oct 02 11:47:54 compute-0 podman[96607]: 2025-10-02 11:47:54.674419278 +0000 UTC m=+0.045814939 container create b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:47:54 compute-0 systemd[1]: Started libpod-conmon-b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840.scope.
Oct 02 11:47:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4699a13cddd820427bb8222e3ba1ffaf07cd649e0e80dca8ddc1eedd80bce042/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:54 compute-0 podman[96607]: 2025-10-02 11:47:54.656065292 +0000 UTC m=+0.027460953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4699a13cddd820427bb8222e3ba1ffaf07cd649e0e80dca8ddc1eedd80bce042/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4699a13cddd820427bb8222e3ba1ffaf07cd649e0e80dca8ddc1eedd80bce042/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4699a13cddd820427bb8222e3ba1ffaf07cd649e0e80dca8ddc1eedd80bce042/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:54 compute-0 podman[96607]: 2025-10-02 11:47:54.765659065 +0000 UTC m=+0.137054726 container init b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:47:54 compute-0 podman[96607]: 2025-10-02 11:47:54.775860849 +0000 UTC m=+0.147256520 container start b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:54 compute-0 podman[96607]: 2025-10-02 11:47:54.780210342 +0000 UTC m=+0.151605983 container attach b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:47:54 compute-0 ceph-mon[73668]: 3.c deep-scrub starts
Oct 02 11:47:54 compute-0 ceph-mon[73668]: 3.c deep-scrub ok
Oct 02 11:47:54 compute-0 ceph-mon[73668]: pgmap v150: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 02 11:47:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 02 11:47:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000077s ======
Oct 02 11:47:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:55.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Oct 02 11:47:55 compute-0 eager_robinson[96624]: {
Oct 02 11:47:55 compute-0 eager_robinson[96624]:     "1": [
Oct 02 11:47:55 compute-0 eager_robinson[96624]:         {
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "devices": [
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "/dev/loop3"
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             ],
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "lv_name": "ceph_lv0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "lv_size": "7511998464",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "name": "ceph_lv0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "tags": {
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.cluster_name": "ceph",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.crush_device_class": "",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.encrypted": "0",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.osd_id": "1",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.type": "block",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:                 "ceph.vdo": "0"
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             },
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "type": "block",
Oct 02 11:47:55 compute-0 eager_robinson[96624]:             "vg_name": "ceph_vg0"
Oct 02 11:47:55 compute-0 eager_robinson[96624]:         }
Oct 02 11:47:55 compute-0 eager_robinson[96624]:     ]
Oct 02 11:47:55 compute-0 eager_robinson[96624]: }
Oct 02 11:47:55 compute-0 systemd[1]: libpod-b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840.scope: Deactivated successfully.
Oct 02 11:47:55 compute-0 conmon[96624]: conmon b67d5854364630fc2b70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840.scope/container/memory.events
Oct 02 11:47:55 compute-0 podman[96607]: 2025-10-02 11:47:55.701003384 +0000 UTC m=+1.072399035 container died b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4699a13cddd820427bb8222e3ba1ffaf07cd649e0e80dca8ddc1eedd80bce042-merged.mount: Deactivated successfully.
Oct 02 11:47:55 compute-0 podman[96607]: 2025-10-02 11:47:55.767773036 +0000 UTC m=+1.139168677 container remove b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:55 compute-0 systemd[1]: libpod-conmon-b67d5854364630fc2b702c4517d4fef3f472ae034d42d9f77146760c78432840.scope: Deactivated successfully.
Oct 02 11:47:55 compute-0 ceph-mgr[73961]: [progress INFO root] Writing back 22 completed events
Oct 02 11:47:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 11:47:55 compute-0 sudo[96500]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 220 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 02 11:47:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 02 11:47:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:55.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:55 compute-0 sudo[96644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:55 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:47:55 compute-0 sudo[96644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:55 compute-0 sudo[96644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:55 compute-0 ceph-mon[73668]: 2.c scrub starts
Oct 02 11:47:55 compute-0 ceph-mon[73668]: 2.c scrub ok
Oct 02 11:47:55 compute-0 ceph-mon[73668]: 3.18 scrub starts
Oct 02 11:47:55 compute-0 ceph-mon[73668]: 3.18 scrub ok
Oct 02 11:47:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:55 compute-0 sudo[96670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:55 compute-0 sudo[96670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:55 compute-0 sudo[96670]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:56 compute-0 sudo[96695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:56 compute-0 sudo[96695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:56 compute-0 sudo[96695]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:56 compute-0 sudo[96720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:47:56 compute-0 sudo[96720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.39216549 +0000 UTC m=+0.037159005 container create 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:56 compute-0 systemd[1]: Started libpod-conmon-6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60.scope.
Oct 02 11:47:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.375376124 +0000 UTC m=+0.020369619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.479160266 +0000 UTC m=+0.124153771 container init 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.486133667 +0000 UTC m=+0.131127152 container start 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.489095074 +0000 UTC m=+0.134088579 container attach 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:56 compute-0 fervent_margulis[96802]: 167 167
Oct 02 11:47:56 compute-0 systemd[1]: libpod-6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60.scope: Deactivated successfully.
Oct 02 11:47:56 compute-0 conmon[96802]: conmon 6f192ecfef340041aefb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60.scope/container/memory.events
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.492966804 +0000 UTC m=+0.137960289 container died 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7b1ac17f8cd967bcd25411c027a5ad448db591fc8ccd2f10f3506baa905c16-merged.mount: Deactivated successfully.
Oct 02 11:47:56 compute-0 podman[96786]: 2025-10-02 11:47:56.530134508 +0000 UTC m=+0.175127993 container remove 6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:56 compute-0 systemd[1]: libpod-conmon-6f192ecfef340041aefb5116277c9376b40b83731bf57366a0c1531a28893d60.scope: Deactivated successfully.
Oct 02 11:47:56 compute-0 podman[96827]: 2025-10-02 11:47:56.683462115 +0000 UTC m=+0.045562313 container create adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:47:56 compute-0 systemd[1]: Started libpod-conmon-adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2.scope.
Oct 02 11:47:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2078c0512a1990a46b6572ba250cadffda9b7a76694ea5f8c702361594dbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2078c0512a1990a46b6572ba250cadffda9b7a76694ea5f8c702361594dbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2078c0512a1990a46b6572ba250cadffda9b7a76694ea5f8c702361594dbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2078c0512a1990a46b6572ba250cadffda9b7a76694ea5f8c702361594dbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:56 compute-0 podman[96827]: 2025-10-02 11:47:56.664451102 +0000 UTC m=+0.026551310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:56 compute-0 podman[96827]: 2025-10-02 11:47:56.764014264 +0000 UTC m=+0.126114472 container init adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:56 compute-0 podman[96827]: 2025-10-02 11:47:56.774488286 +0000 UTC m=+0.136588494 container start adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:56 compute-0 podman[96827]: 2025-10-02 11:47:56.778093729 +0000 UTC m=+0.140193937 container attach adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:47:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 02 11:47:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 02 11:47:56 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 02 11:47:56 compute-0 ceph-mon[73668]: 5.b scrub starts
Oct 02 11:47:56 compute-0 ceph-mon[73668]: 5.b scrub ok
Oct 02 11:47:56 compute-0 ceph-mon[73668]: pgmap v151: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 220 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:56 compute-0 ceph-mon[73668]: 4.a scrub starts
Oct 02 11:47:56 compute-0 ceph-mon[73668]: 4.a scrub ok
Oct 02 11:47:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-0 ceph-mon[73668]: osdmap e55: 3 total, 3 up, 3 in
Oct 02 11:47:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:57.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068844795s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'1 lcod 0'0 mlcod 0'0 active pruub 150.208648682s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068798065s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 active pruub 150.208969116s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068745613s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.208969116s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068206787s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=52'2 lcod 52'2 mlcod 52'2 active pruub 150.208480835s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068638802s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.208648682s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=47/48 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.068153381s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=52'2 lcod 52'2 mlcod 0'0 unknown NOTIFY pruub 150.208480835s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.a( v 52'1 (0'0,52'1] local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.067891121s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=0'0 lcod 0'0 mlcod 0'0 active pruub 150.208419800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:57 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 55 pg[6.a( v 52'1 (0'0,52'1] local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.067855835s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=0'0 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.208419800s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:57 compute-0 admiring_poincare[96844]: {
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:         "osd_id": 1,
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:         "type": "bluestore"
Oct 02 11:47:57 compute-0 admiring_poincare[96844]:     }
Oct 02 11:47:57 compute-0 admiring_poincare[96844]: }
Oct 02 11:47:57 compute-0 systemd[1]: libpod-adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2.scope: Deactivated successfully.
Oct 02 11:47:57 compute-0 podman[96827]: 2025-10-02 11:47:57.742716428 +0000 UTC m=+1.104816616 container died adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:47:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 212 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 02 11:47:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 02 11:47:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:47:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 02 11:47:58 compute-0 ceph-mon[73668]: 5.9 deep-scrub starts
Oct 02 11:47:58 compute-0 ceph-mon[73668]: 5.9 deep-scrub ok
Oct 02 11:47:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 02 11:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f2078c0512a1990a46b6572ba250cadffda9b7a76694ea5f8c702361594dbf-merged.mount: Deactivated successfully.
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.259764671s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.172378540s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.259688377s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.172378540s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.258845329s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.172256470s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.258816719s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.172256470s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.257383347s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.171478271s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.258041382s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.172180176s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.258010864s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.172180176s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.257305145s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.171478271s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256429672s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.170959473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256984711s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.171539307s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256963730s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.171539307s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256402016s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.170959473s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256711006s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.171203613s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256099701s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 152.170898438s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256347656s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.171203613s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 56 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=13.256032944s) [2] r=-1 lpr=56 pi=[49,56)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.170898438s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:58 compute-0 podman[96827]: 2025-10-02 11:47:58.348187401 +0000 UTC m=+1.710287589 container remove adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:58 compute-0 systemd[1]: libpod-conmon-adc7bc9ec4e2967512366f4f9b895d3841c12be68b9e41ca14a4d4deb30ae5e2.scope: Deactivated successfully.
Oct 02 11:47:58 compute-0 sudo[96720]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4805c99a-3004-490e-bf2e-08f44646ee4c does not exist
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f13a2e24-8970-4b5c-9e93-f8f5adfa574a does not exist
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cba37474-5644-4726-abb5-00a4dba180fd does not exist
Oct 02 11:47:58 compute-0 sudo[96882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:58 compute-0 sudo[96882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-0 sudo[96882]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 sudo[96907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:58 compute-0 sudo[96907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-0 sudo[96907]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:47:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:47:58 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:47:58 compute-0 sudo[96932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:58 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 sudo[96957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:58 compute-0 sudo[96957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-0 sudo[96957]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-0 sudo[96982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:58 compute-0 sudo[96982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-0 sudo[96982]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 sudo[97007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:59 compute-0 sudo[97007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:59 compute-0 ceph-mon[73668]: pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 212 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:59 compute-0 ceph-mon[73668]: osdmap e56: 3 total, 3 up, 3 in
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-0 ceph-mon[73668]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mon[73668]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.297313348 +0000 UTC m=+0.045485530 container create 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 02 11:47:59 compute-0 systemd[1]: Started libpod-conmon-73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78.scope.
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.277118224 +0000 UTC m=+0.025290436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.385489615 +0000 UTC m=+0.133661817 container init 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.393901563 +0000 UTC m=+0.142073745 container start 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.397949788 +0000 UTC m=+0.146121990 container attach 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:47:59 compute-0 heuristic_dhawan[97064]: 167 167
Oct 02 11:47:59 compute-0 systemd[1]: libpod-73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78.scope: Deactivated successfully.
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.402906997 +0000 UTC m=+0.151079199 container died 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-312f77f3b7c14b01d5d64816feb1b61693c9bb1958373c1e48b2f1b2b0f868f9-merged.mount: Deactivated successfully.
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:59 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 57 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:59 compute-0 podman[97048]: 2025-10-02 11:47:59.490740925 +0000 UTC m=+0.238913107 container remove 73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:59 compute-0 systemd[1]: libpod-conmon-73d94cff0d30de53ec41dfb9aab8a6978e66cea2168a96b0cd97c99a7c1cfe78.scope: Deactivated successfully.
Oct 02 11:47:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:59.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:59 compute-0 sudo[97007]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct 02 11:47:59 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:47:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:59 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:47:59 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:47:59 compute-0 sudo[97082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:59 compute-0 sudo[97082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:59 compute-0 sudo[97082]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 sudo[97107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:59 compute-0 sudo[97107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:59 compute-0 sudo[97107]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 sudo[97132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:59 compute-0 sudo[97132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:59 compute-0 sudo[97132]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-0 sudo[97157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:59 compute-0 sudo[97157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 148 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:47:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.06620502 +0000 UTC m=+0.042757610 container create c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:48:00 compute-0 systemd[1]: Started libpod-conmon-c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4.scope.
Oct 02 11:48:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.141154174 +0000 UTC m=+0.117706804 container init c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.04962407 +0000 UTC m=+0.026176680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.14679554 +0000 UTC m=+0.123348130 container start c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.15141552 +0000 UTC m=+0.127968110 container attach c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:00 compute-0 serene_cartwright[97216]: 167 167
Oct 02 11:48:00 compute-0 systemd[1]: libpod-c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4.scope: Deactivated successfully.
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.15257339 +0000 UTC m=+0.129125980 container died c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2126e0d73594c9d38c21f612306d7b669d2c287f8226b8fcb1b88040956f0f73-merged.mount: Deactivated successfully.
Oct 02 11:48:00 compute-0 podman[97199]: 2025-10-02 11:48:00.188178173 +0000 UTC m=+0.164730753 container remove c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:00 compute-0 systemd[1]: libpod-conmon-c6f8216ff478656793d5cb92f12862e5b885a7969f41899d502daf43da87e4f4.scope: Deactivated successfully.
Oct 02 11:48:00 compute-0 sudo[97157]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:48:00 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:48:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:48:00 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:48:00 compute-0 sudo[97237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 02 11:48:00 compute-0 sudo[97237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:00 compute-0 sudo[97237]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 ceph-mon[73668]: 3.f scrub starts
Oct 02 11:48:00 compute-0 ceph-mon[73668]: 3.f scrub ok
Oct 02 11:48:00 compute-0 ceph-mon[73668]: osdmap e57: 3 total, 3 up, 3 in
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mon[73668]: Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:48:00 compute-0 ceph-mon[73668]: pgmap v156: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 148 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 02 11:48:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 58 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[49,57)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-0 sudo[97262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:00 compute-0 sudo[97262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:00 compute-0 sudo[97262]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 sudo[97287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:00 compute-0 sudo[97287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:00 compute-0 sudo[97287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 sudo[97312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:48:00 compute-0 sudo[97312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.801292715 +0000 UTC m=+0.041269741 container create e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:48:00 compute-0 systemd[1]: Started libpod-conmon-e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858.scope.
Oct 02 11:48:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.874428802 +0000 UTC m=+0.114405848 container init e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.783686529 +0000 UTC m=+0.023663585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.883477457 +0000 UTC m=+0.123454493 container start e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.8870874 +0000 UTC m=+0.127064456 container attach e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:00 compute-0 jovial_shirley[97370]: 167 167
Oct 02 11:48:00 compute-0 systemd[1]: libpod-e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858.scope: Deactivated successfully.
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.891014782 +0000 UTC m=+0.130991818 container died e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99285a77872741ce8435fe6b9f8059239d6ae703816d2bb4d8986325401c2a1-merged.mount: Deactivated successfully.
Oct 02 11:48:00 compute-0 podman[97353]: 2025-10-02 11:48:00.934030398 +0000 UTC m=+0.174007434 container remove e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:48:00 compute-0 systemd[1]: libpod-conmon-e410324892d564edfd13a43623b968991fc05532a372dd738fa31ad6c1345858.scope: Deactivated successfully.
Oct 02 11:48:00 compute-0 sudo[97312]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Oct 02 11:48:01 compute-0 sudo[97389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:01 compute-0 sudo[97389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:01 compute-0 sudo[97389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:01 compute-0 sudo[97414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:01 compute-0 sudo[97414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:01 compute-0 sudo[97414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:01 compute-0 sudo[97439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:01 compute-0 sudo[97439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:01 compute-0 sudo[97439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:01 compute-0 sudo[97464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:48:01 compute-0 sudo[97464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 02 11:48:01 compute-0 ceph-mon[73668]: 3.0 scrub starts
Oct 02 11:48:01 compute-0 ceph-mon[73668]: 3.0 scrub ok
Oct 02 11:48:01 compute-0 ceph-mon[73668]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:48:01 compute-0 ceph-mon[73668]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:48:01 compute-0 ceph-mon[73668]: osdmap e58: 3 total, 3 up, 3 in
Oct 02 11:48:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.971585274s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.990127563s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.971487999s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.990127563s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973883629s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.992767334s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973762512s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.992767334s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973806381s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.992889404s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973770142s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.992889404s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973662376s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.993087769s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973599434s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.993087769s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973581314s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.993179321s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973533630s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.993179321s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973263741s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.992935181s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973206520s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.992935181s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973561287s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.993408203s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973580360s) [2] async=[2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 156.993423462s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973522186s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.993408203s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 59 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=57/58 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59 pruub=14.973496437s) [2] r=-1 lpr=59 pi=[49,59)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.993423462s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.617449434 +0000 UTC m=+0.043881400 container create 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:01 compute-0 systemd[1]: Started libpod-conmon-65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1.scope.
Oct 02 11:48:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.596595503 +0000 UTC m=+0.023027489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.696385381 +0000 UTC m=+0.122817347 container init 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.701968626 +0000 UTC m=+0.128400592 container start 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:01 compute-0 focused_jang[97523]: 167 167
Oct 02 11:48:01 compute-0 systemd[1]: libpod-65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1.scope: Deactivated successfully.
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.70713262 +0000 UTC m=+0.133564586 container attach 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.708411833 +0000 UTC m=+0.134843799 container died 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-cac1b9cb4fef01b018555acb8b72acaa066b3887e49bb1328bcd770211497d85-merged.mount: Deactivated successfully.
Oct 02 11:48:01 compute-0 podman[97505]: 2025-10-02 11:48:01.753293337 +0000 UTC m=+0.179725303 container remove 65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:48:01 compute-0 systemd[1]: libpod-conmon-65d11f2b72425ae68f355b479582cf8e30ceec8df1cd59b6982cf6097c6f77b1.scope: Deactivated successfully.
Oct 02 11:48:01 compute-0 sudo[97566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upnnavyjyrhoznslnipczijjnzloahbm ; /usr/bin/python3'
Oct 02 11:48:01 compute-0 sudo[97566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38/213 objects misplaced (17.840%); 186 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:01 compute-0 sudo[97464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:48:01 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:48:02 compute-0 python3[97569]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.083963903 +0000 UTC m=+0.066590478 container create c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:48:02 compute-0 systemd[1]: Started libpod-conmon-c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0.scope.
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.048683908 +0000 UTC m=+0.031310583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:48:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4fa5b82100d21270e6e6952bff6a7645beb643016531888cdbb03933307a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4fa5b82100d21270e6e6952bff6a7645beb643016531888cdbb03933307a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.174716367 +0000 UTC m=+0.157342942 container init c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.183649249 +0000 UTC m=+0.166275824 container start c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.189309016 +0000 UTC m=+0.171935591 container attach c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-0 ceph-mon[73668]: Reconfiguring osd.1 (monmap changed)...
Oct 02 11:48:02 compute-0 ceph-mon[73668]: Reconfiguring daemon osd.1 on compute-0
Oct 02 11:48:02 compute-0 ceph-mon[73668]: osdmap e59: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-0 ceph-mon[73668]: pgmap v159: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38/213 objects misplaced (17.840%); 186 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-0 ceph-mon[73668]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:48:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:02 compute-0 ceph-mon[73668]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:48:02 compute-0 modest_lewin[97588]: could not fetch user info: no user info saved
Oct 02 11:48:02 compute-0 systemd[1]: libpod-c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0.scope: Deactivated successfully.
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.593032907 +0000 UTC m=+0.575659482 container died c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c4fa5b82100d21270e6e6952bff6a7645beb643016531888cdbb03933307a5-merged.mount: Deactivated successfully.
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:48:02 compute-0 podman[97574]: 2025-10-02 11:48:02.634936304 +0000 UTC m=+0.617562909 container remove c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:48:02 compute-0 systemd[1]: libpod-conmon-c2b22a51529520f304941b0a00b38e162f168bd9944fad62e97c4b05333fa3b0.scope: Deactivated successfully.
Oct 02 11:48:02 compute-0 sudo[97566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 02 11:48:02 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 02 11:48:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:02 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Oct 02 11:48:02 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Oct 02 11:48:02 compute-0 sudo[97710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpnccdalbuddohkugyvkxhhkukpohel ; /usr/bin/python3'
Oct 02 11:48:02 compute-0 sudo[97710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:02 compute-0 python3[97712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.04192832 +0000 UTC m=+0.048078448 container create a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:48:03 compute-0 systemd[1]: Started libpod-conmon-a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7.scope.
Oct 02 11:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5a04258909940bc33f46f7a5471788936e12fdf413ca8aef1913c1aa0931b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5a04258909940bc33f46f7a5471788936e12fdf413ca8aef1913c1aa0931b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.02113701 +0000 UTC m=+0.027287148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.122825998 +0000 UTC m=+0.128976146 container init a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.129891921 +0000 UTC m=+0.136042259 container start a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.133583087 +0000 UTC m=+0.139733235 container attach a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]: {
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "user_id": "openstack",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "display_name": "openstack",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "email": "",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "suspended": 0,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "max_buckets": 1000,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "subusers": [],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "keys": [
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         {
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:             "user": "openstack",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:             "access_key": "CQEMBENB3VPADIV9TFC7",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:             "secret_key": "04it50FPuRgkCMkGVTyKykC7ETQ6WVnI284PBRIm"
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         }
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     ],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "swift_keys": [],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "caps": [],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "op_mask": "read, write, delete",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "default_placement": "",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "default_storage_class": "",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "placement_tags": [],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "bucket_quota": {
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "enabled": false,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "check_on_raw": false,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_size": -1,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_size_kb": 0,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_objects": -1
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     },
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "user_quota": {
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "enabled": false,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "check_on_raw": false,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_size": -1,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_size_kb": 0,
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:         "max_objects": -1
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     },
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "temp_url_keys": [],
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "type": "rgw",
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]:     "mfa_ids": []
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]: }
Oct 02 11:48:03 compute-0 jovial_lehmann[97728]: 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:48:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:48:03 compute-0 systemd[1]: libpod-a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7.scope: Deactivated successfully.
Oct 02 11:48:03 compute-0 conmon[97728]: conmon a1f3547a3a3f39b7eebd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7.scope/container/memory.events
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.394370411 +0000 UTC m=+0.400520539 container died a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:48:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:48:03 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:48:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:48:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:48:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:48:03 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d5a04258909940bc33f46f7a5471788936e12fdf413ca8aef1913c1aa0931b3-merged.mount: Deactivated successfully.
Oct 02 11:48:03 compute-0 ceph-mon[73668]: osdmap e60: 3 total, 3 up, 3 in
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: Reconfiguring osd.0 (monmap changed)...
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: Reconfiguring daemon osd.0 on compute-1
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:03 compute-0 podman[97713]: 2025-10-02 11:48:03.453448062 +0000 UTC m=+0.459598190 container remove a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7 (image=quay.io/ceph/ceph:v18, name=jovial_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:48:03 compute-0 systemd[1]: libpod-conmon-a1f3547a3a3f39b7eebde6f5845f75955d3a5a169054a01d768c70fbd53733e7.scope: Deactivated successfully.
Oct 02 11:48:03 compute-0 sudo[97710]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 KiB/s wr, 130 op/s; 38/213 objects misplaced (17.840%); 165 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:48:04 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mgr[73961]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:48:04 compute-0 ceph-mgr[73961]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:04 compute-0 ceph-mon[73668]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:48:04 compute-0 ceph-mon[73668]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: pgmap v161: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 KiB/s wr, 130 op/s; 38/213 objects misplaced (17.840%); 165 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:48:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-0 sudo[97827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:04 compute-0 sudo[97827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-0 sudo[97827]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-0 sudo[97852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:04 compute-0 sudo[97852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-0 sudo[97852]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-0 sudo[97877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:04 compute-0 sudo[97877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-0 sudo[97877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:05 compute-0 sudo[97902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:48:05 compute-0 sudo[97902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:05 compute-0 ceph-mon[73668]: 2.a scrub starts
Oct 02 11:48:05 compute-0 ceph-mon[73668]: 2.a scrub ok
Oct 02 11:48:05 compute-0 ceph-mon[73668]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:48:05 compute-0 ceph-mon[73668]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:48:05 compute-0 ceph-mon[73668]: 5.16 scrub starts
Oct 02 11:48:05 compute-0 ceph-mon[73668]: 5.16 scrub ok
Oct 02 11:48:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:05.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:05 compute-0 podman[97997]: 2025-10-02 11:48:05.580132632 +0000 UTC m=+0.087245574 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 11:48:05 compute-0 podman[97997]: 2025-10-02 11:48:05.684639732 +0000 UTC m=+0.191752654 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:48:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 1.7 KiB/s wr, 100 op/s; 307 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 02 11:48:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 02 11:48:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:05.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:48:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:48:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 podman[98132]: 2025-10-02 11:48:06.294634773 +0000 UTC m=+0.061060164 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:48:06 compute-0 podman[98132]: 2025-10-02 11:48:06.331582262 +0000 UTC m=+0.098007653 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:48:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:48:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:48:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 02 11:48:06 compute-0 ceph-mon[73668]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 1.7 KiB/s wr, 100 op/s; 307 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-0 podman[98195]: 2025-10-02 11:48:06.945911235 +0000 UTC m=+0.446062790 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public)
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 02 11:48:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 02 11:48:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 02 11:48:07 compute-0 podman[98215]: 2025-10-02 11:48:07.108358077 +0000 UTC m=+0.134899508 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 11:48:07 compute-0 podman[98195]: 2025-10-02 11:48:07.170592371 +0000 UTC m=+0.670743906 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=keepalived, distribution-scope=public, vcs-type=git)
Oct 02 11:48:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:07.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:07 compute-0 sudo[97902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev af31204e-da73-4571-a758-4ba3e53835ae does not exist
Oct 02 11:48:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 24965930-85fd-4598-b522-7b3f7dd02130 does not exist
Oct 02 11:48:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a1959f61-49d1-4d95-9cf5-f1c721a1cf77 does not exist
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:07 compute-0 sudo[98247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:07 compute-0 sudo[98247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:07 compute-0 sudo[98247]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 sudo[98272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:07 compute-0 sudo[98272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:07 compute-0 sudo[98272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 sudo[98297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:07 compute-0 sudo[98297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:07 compute-0 sudo[98297]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 317 B/s wr, 2 op/s; 269 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 02 11:48:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:07 compute-0 sudo[98322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:48:07 compute-0 sudo[98322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:08 compute-0 ceph-mon[73668]: 5.d deep-scrub starts
Oct 02 11:48:08 compute-0 ceph-mon[73668]: 5.d deep-scrub ok
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:08 compute-0 ceph-mon[73668]: 3.19 scrub starts
Oct 02 11:48:08 compute-0 ceph-mon[73668]: osdmap e61: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-0 ceph-mon[73668]: 3.19 scrub ok
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 317 B/s wr, 2 op/s; 269 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.249046573 +0000 UTC m=+0.041281282 container create 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:48:08 compute-0 systemd[1]: Started libpod-conmon-71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9.scope.
Oct 02 11:48:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.228758917 +0000 UTC m=+0.020993646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.330159737 +0000 UTC m=+0.122394446 container init 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.337331813 +0000 UTC m=+0.129566522 container start 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.341204583 +0000 UTC m=+0.133439302 container attach 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:08 compute-0 lucid_hermann[98401]: 167 167
Oct 02 11:48:08 compute-0 systemd[1]: libpod-71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9.scope: Deactivated successfully.
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.343433571 +0000 UTC m=+0.135668280 container died 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41fd0b754e295c15834e61210b0c38f51e37fa204af99cd8081530d2210ad84-merged.mount: Deactivated successfully.
Oct 02 11:48:08 compute-0 podman[98384]: 2025-10-02 11:48:08.383372547 +0000 UTC m=+0.175607266 container remove 71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hermann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:08 compute-0 systemd[1]: libpod-conmon-71a393ee2a4ef6dab534ca52319a35eccaeef511e1e7bdcf40e53f3a039b44e9.scope: Deactivated successfully.
Oct 02 11:48:08 compute-0 podman[98423]: 2025-10-02 11:48:08.550404249 +0000 UTC m=+0.048365615 container create 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:48:08 compute-0 systemd[1]: Started libpod-conmon-1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4.scope.
Oct 02 11:48:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:08 compute-0 podman[98423]: 2025-10-02 11:48:08.52885762 +0000 UTC m=+0.026818996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 02 11:48:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-0 podman[98423]: 2025-10-02 11:48:08.638769361 +0000 UTC m=+0.136730727 container init 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-0 podman[98423]: 2025-10-02 11:48:08.645128526 +0000 UTC m=+0.143089892 container start 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:08 compute-0 podman[98423]: 2025-10-02 11:48:08.648619556 +0000 UTC m=+0.146580952 container attach 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:48:09 compute-0 ceph-mon[73668]: 2.13 scrub starts
Oct 02 11:48:09 compute-0 ceph-mon[73668]: 2.13 scrub ok
Oct 02 11:48:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:09 compute-0 ceph-mon[73668]: osdmap e62: 3 total, 3 up, 3 in
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.322387695s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 160.172943115s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.322319984s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.172943115s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.321143150s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 160.172286987s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.321082115s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.172286987s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.320450783s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 160.172195435s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.319664955s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 160.171569824s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.320338249s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.172195435s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 62 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62 pruub=10.319639206s) [2] r=-1 lpr=62 pi=[49,62)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.171569824s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 02 11:48:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 02 11:48:09 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:09 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 63 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:09 compute-0 xenodochial_bardeen[98439]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:48:09 compute-0 xenodochial_bardeen[98439]: --> relative data size: 1.0
Oct 02 11:48:09 compute-0 xenodochial_bardeen[98439]: --> All data devices are unavailable
Oct 02 11:48:09 compute-0 systemd[1]: libpod-1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4.scope: Deactivated successfully.
Oct 02 11:48:09 compute-0 podman[98423]: 2025-10-02 11:48:09.610543705 +0000 UTC m=+1.108505071 container died 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba1b4eb2bcd641d45ee30138780597f382743600d1c35c9f4f5c6b84b96128e6-merged.mount: Deactivated successfully.
Oct 02 11:48:09 compute-0 podman[98423]: 2025-10-02 11:48:09.667417161 +0000 UTC m=+1.165378527 container remove 1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:09 compute-0 systemd[1]: libpod-conmon-1b214adfc79b179b3f0897a6a67d15a8f604fce73f718f5afa0fe2515c93d3a4.scope: Deactivated successfully.
Oct 02 11:48:09 compute-0 sudo[98322]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 sudo[98468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:09 compute-0 sudo[98468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:09 compute-0 sudo[98468]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 sudo[98493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:09 compute-0 sudo[98493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:09 compute-0 sudo[98493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 183 B/s, 6 objects/s recovering
Oct 02 11:48:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:09.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:09 compute-0 sudo[98518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:09 compute-0 sudo[98518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:09 compute-0 sudo[98518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:09 compute-0 sudo[98543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:48:09 compute-0 sudo[98543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.328732173 +0000 UTC m=+0.045277966 container create 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:48:10 compute-0 systemd[1]: Started libpod-conmon-9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af.scope.
Oct 02 11:48:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.307534103 +0000 UTC m=+0.024079916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.413734587 +0000 UTC m=+0.130280400 container init 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:48:10 compute-0 ceph-mon[73668]: osdmap e63: 3 total, 3 up, 3 in
Oct 02 11:48:10 compute-0 ceph-mon[73668]: pgmap v167: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 183 B/s, 6 objects/s recovering
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.422373901 +0000 UTC m=+0.138919694 container start 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:48:10 compute-0 keen_brahmagupta[98628]: 167 167
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.427145075 +0000 UTC m=+0.143690868 container attach 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:10 compute-0 systemd[1]: libpod-9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af.scope: Deactivated successfully.
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.427880994 +0000 UTC m=+0.144426787 container died 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 02 11:48:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 02 11:48:10 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 64 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-be755ec51852a9386559c612e66fd01a83697509af15f1590b4ad06422b6dc50-merged.mount: Deactivated successfully.
Oct 02 11:48:10 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 64 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:10 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 64 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:10 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 64 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[49,63)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:10 compute-0 podman[98611]: 2025-10-02 11:48:10.48980101 +0000 UTC m=+0.206346803 container remove 9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:10 compute-0 systemd[1]: libpod-conmon-9c7ab746624c66ce995d7b385eb9c38488f02663dc9cabb37188631da13636af.scope: Deactivated successfully.
Oct 02 11:48:10 compute-0 podman[98651]: 2025-10-02 11:48:10.663369691 +0000 UTC m=+0.048515899 container create c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:48:10 compute-0 systemd[1]: Started libpod-conmon-c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8.scope.
Oct 02 11:48:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5f985440f46e0447b762970a2bbf7fc3398830c3a55eb90f4c8b3d268d1752/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:10 compute-0 podman[98651]: 2025-10-02 11:48:10.640467117 +0000 UTC m=+0.025613355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5f985440f46e0447b762970a2bbf7fc3398830c3a55eb90f4c8b3d268d1752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5f985440f46e0447b762970a2bbf7fc3398830c3a55eb90f4c8b3d268d1752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5f985440f46e0447b762970a2bbf7fc3398830c3a55eb90f4c8b3d268d1752/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:10 compute-0 podman[98651]: 2025-10-02 11:48:10.745995934 +0000 UTC m=+0.131142162 container init c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:10 compute-0 podman[98651]: 2025-10-02 11:48:10.753017056 +0000 UTC m=+0.138163264 container start c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:48:10 compute-0 podman[98651]: 2025-10-02 11:48:10.756899337 +0000 UTC m=+0.142045565 container attach c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 02 11:48:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 02 11:48:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=14.997713089s) [2] async=[2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 167.058288574s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=14.997610092s) [2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.058288574s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.001796722s) [2] async=[2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 167.063156128s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.001672745s) [2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.063156128s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.001554489s) [2] async=[2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 167.063140869s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.001510620s) [2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.063140869s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.000932693s) [2] async=[2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 167.063125610s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:11 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 65 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=63/64 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65 pruub=15.000439644s) [2] r=-1 lpr=65 pi=[49,65)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.063125610s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:11 compute-0 ceph-mon[73668]: osdmap e64: 3 total, 3 up, 3 in
Oct 02 11:48:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:11 compute-0 charming_germain[98668]: {
Oct 02 11:48:11 compute-0 charming_germain[98668]:     "1": [
Oct 02 11:48:11 compute-0 charming_germain[98668]:         {
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "devices": [
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "/dev/loop3"
Oct 02 11:48:11 compute-0 charming_germain[98668]:             ],
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "lv_name": "ceph_lv0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "lv_size": "7511998464",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "name": "ceph_lv0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "tags": {
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.cluster_name": "ceph",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.crush_device_class": "",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.encrypted": "0",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.osd_id": "1",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.type": "block",
Oct 02 11:48:11 compute-0 charming_germain[98668]:                 "ceph.vdo": "0"
Oct 02 11:48:11 compute-0 charming_germain[98668]:             },
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "type": "block",
Oct 02 11:48:11 compute-0 charming_germain[98668]:             "vg_name": "ceph_vg0"
Oct 02 11:48:11 compute-0 charming_germain[98668]:         }
Oct 02 11:48:11 compute-0 charming_germain[98668]:     ]
Oct 02 11:48:11 compute-0 charming_germain[98668]: }
Oct 02 11:48:11 compute-0 systemd[1]: libpod-c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8.scope: Deactivated successfully.
Oct 02 11:48:11 compute-0 podman[98651]: 2025-10-02 11:48:11.680149023 +0000 UTC m=+1.065295231 container died c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a5f985440f46e0447b762970a2bbf7fc3398830c3a55eb90f4c8b3d268d1752-merged.mount: Deactivated successfully.
Oct 02 11:48:11 compute-0 podman[98651]: 2025-10-02 11:48:11.746018841 +0000 UTC m=+1.131165059 container remove c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:11 compute-0 systemd[1]: libpod-conmon-c371a755f4c485dc889e20fc3b1f78a265004b202e1f20fd1bd4047cb0005dc8.scope: Deactivated successfully.
Oct 02 11:48:11 compute-0 sudo[98543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:11 compute-0 sudo[98691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:11 compute-0 sudo[98691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:11 compute-0 sudo[98691]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 0 objects/s recovering
Oct 02 11:48:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:11.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:11 compute-0 sudo[98716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:11 compute-0 sudo[98716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:11 compute-0 sudo[98716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:11 compute-0 sudo[98741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:12 compute-0 sudo[98741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:12 compute-0 sudo[98741]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:12 compute-0 sudo[98766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:48:12 compute-0 sudo[98766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 02 11:48:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.425237328 +0000 UTC m=+0.046669292 container create f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:48:12 compute-0 systemd[1]: Started libpod-conmon-f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77.scope.
Oct 02 11:48:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 02 11:48:12 compute-0 ceph-mon[73668]: osdmap e65: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-0 ceph-mon[73668]: pgmap v170: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 0 objects/s recovering
Oct 02 11:48:12 compute-0 ceph-mon[73668]: 3.1e scrub starts
Oct 02 11:48:12 compute-0 ceph-mon[73668]: 3.1e scrub ok
Oct 02 11:48:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.407515958 +0000 UTC m=+0.028947942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.509329639 +0000 UTC m=+0.130761623 container init f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.518021464 +0000 UTC m=+0.139453428 container start f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.5217001 +0000 UTC m=+0.143132094 container attach f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:48:12 compute-0 happy_goldberg[98849]: 167 167
Oct 02 11:48:12 compute-0 systemd[1]: libpod-f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77.scope: Deactivated successfully.
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.52597496 +0000 UTC m=+0.147406924 container died f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f5d6c4c001952f8b69e05a5fd5a4a17562a0fecb75d4989ede374aa2568ed5b-merged.mount: Deactivated successfully.
Oct 02 11:48:12 compute-0 podman[98832]: 2025-10-02 11:48:12.564558421 +0000 UTC m=+0.185990385 container remove f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:12 compute-0 systemd[1]: libpod-conmon-f3e7b593876ab75e235727f9e7fe8b1097c7e6a3f471635857b583c309acea77.scope: Deactivated successfully.
Oct 02 11:48:12 compute-0 podman[98873]: 2025-10-02 11:48:12.741547352 +0000 UTC m=+0.045634295 container create 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:48:12 compute-0 systemd[1]: Started libpod-conmon-230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3.scope.
Oct 02 11:48:12 compute-0 podman[98873]: 2025-10-02 11:48:12.72181043 +0000 UTC m=+0.025897373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993444d889d023858ec0e0c342e3b006c66d9e94c7b0a2b088765fd0f23c3d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993444d889d023858ec0e0c342e3b006c66d9e94c7b0a2b088765fd0f23c3d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993444d889d023858ec0e0c342e3b006c66d9e94c7b0a2b088765fd0f23c3d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993444d889d023858ec0e0c342e3b006c66d9e94c7b0a2b088765fd0f23c3d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:48:12 compute-0 podman[98873]: 2025-10-02 11:48:12.871323008 +0000 UTC m=+0.175409951 container init 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:48:12 compute-0 podman[98873]: 2025-10-02 11:48:12.878654448 +0000 UTC m=+0.182741371 container start 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:12 compute-0 podman[98873]: 2025-10-02 11:48:12.886226654 +0000 UTC m=+0.190313577 container attach 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 11:48:13 compute-0 sudo[98894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:13 compute-0 sudo[98894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:13 compute-0 sudo[98894]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:13 compute-0 sudo[98919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:13 compute-0 sudo[98919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:13 compute-0 sudo[98919]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:13 compute-0 ceph-mon[73668]: osdmap e66: 3 total, 3 up, 3 in
Oct 02 11:48:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]: {
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:         "osd_id": 1,
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:         "type": "bluestore"
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]:     }
Oct 02 11:48:13 compute-0 upbeat_mendel[98889]: }
Oct 02 11:48:13 compute-0 systemd[1]: libpod-230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3.scope: Deactivated successfully.
Oct 02 11:48:13 compute-0 conmon[98889]: conmon 230fb42db5bda3f474d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3.scope/container/memory.events
Oct 02 11:48:13 compute-0 podman[98873]: 2025-10-02 11:48:13.789403969 +0000 UTC m=+1.093490912 container died 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-993444d889d023858ec0e0c342e3b006c66d9e94c7b0a2b088765fd0f23c3d30-merged.mount: Deactivated successfully.
Oct 02 11:48:13 compute-0 podman[98873]: 2025-10-02 11:48:13.852796053 +0000 UTC m=+1.156882976 container remove 230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:48:13 compute-0 systemd[1]: libpod-conmon-230fb42db5bda3f474d8ea8a40fe61dcb68fa4239b49005fec2d4d713a0d78f3.scope: Deactivated successfully.
Oct 02 11:48:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Oct 02 11:48:13 compute-0 sudo[98766]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:48:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:13.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:48:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a6768083-f642-4e1d-b4af-07aad776ce6c does not exist
Oct 02 11:48:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 410984c1-8083-4c6f-9d20-c20da6316106 does not exist
Oct 02 11:48:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 71504cec-dd14-4e49-a307-d5a4541aebaa does not exist
Oct 02 11:48:13 compute-0 sudo[98970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:13 compute-0 sudo[98970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:13 compute-0 sudo[98970]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-0 sudo[98995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:48:14 compute-0 sudo[98995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:14 compute-0 sudo[98995]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:14 compute-0 ceph-mon[73668]: pgmap v172: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Oct 02 11:48:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:15.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 682 B/s wr, 48 op/s; 125 B/s, 4 objects/s recovering
Oct 02 11:48:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 02 11:48:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 02 11:48:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:15.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 02 11:48:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 02 11:48:16 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.555806160s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 168.172958374s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.555717468s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.172958374s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.554639816s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 168.172149658s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.554580688s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.172149658s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.553364754s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 168.171539307s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.542740822s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 168.160919189s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.553328514s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.171539307s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=11.542695999s) [0] r=-1 lpr=67 pi=[49,67)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.160919189s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67) [1] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 67 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67) [1] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-0 PackageKit[31653]: daemon quit
Oct 02 11:48:16 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:48:16 compute-0 ceph-mon[73668]: 5.0 scrub starts
Oct 02 11:48:16 compute-0 ceph-mon[73668]: 5.0 scrub ok
Oct 02 11:48:16 compute-0 ceph-mon[73668]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 682 B/s wr, 48 op/s; 125 B/s, 4 objects/s recovering
Oct 02 11:48:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-0 ceph-mon[73668]: osdmap e67: 3 total, 3 up, 3 in
Oct 02 11:48:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 02 11:48:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 02 11:48:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[6.e( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=67/68 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67) [1] r=0 lpr=67 pi=[55,67)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 68 pg[6.6( v 53'1 lc 0'0 (0'0,53'1] local-lis/les=67/68 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67) [1] r=0 lpr=67 pi=[55,67)/1 crt=53'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:17.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 0 op/s; 98 B/s, 3 objects/s recovering
Oct 02 11:48:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:17.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 02 11:48:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 02 11:48:18 compute-0 ceph-mon[73668]: 4.3 scrub starts
Oct 02 11:48:18 compute-0 ceph-mon[73668]: 4.3 scrub ok
Oct 02 11:48:18 compute-0 ceph-mon[73668]: osdmap e68: 3 total, 3 up, 3 in
Oct 02 11:48:18 compute-0 ceph-mon[73668]: pgmap v176: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 0 op/s; 98 B/s, 3 objects/s recovering
Oct 02 11:48:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 02 11:48:18 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 69 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:18 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 69 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:18 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 69 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:18 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 69 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[49,68)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 02 11:48:19 compute-0 ceph-mon[73668]: osdmap e69: 3 total, 3 up, 3 in
Oct 02 11:48:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 02 11:48:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.210776329s) [0] async=[0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 174.976669312s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.210844994s) [0] async=[0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 174.976745605s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.216663361s) [0] async=[0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 174.982574463s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.216662407s) [0] async=[0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 174.982604980s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.216572762s) [0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.982574463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.210727692s) [0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.976745605s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.210643768s) [0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.976669312s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=68/69 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=15.216552734s) [0] r=-1 lpr=70 pi=[49,70)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.982604980s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:19.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 02 11:48:20 compute-0 ceph-mon[73668]: 4.6 scrub starts
Oct 02 11:48:20 compute-0 ceph-mon[73668]: 4.6 scrub ok
Oct 02 11:48:20 compute-0 ceph-mon[73668]: osdmap e70: 3 total, 3 up, 3 in
Oct 02 11:48:20 compute-0 ceph-mon[73668]: pgmap v179: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 02 11:48:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 02 11:48:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 02 11:48:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 02 11:48:21 compute-0 ceph-mon[73668]: osdmap e71: 3 total, 3 up, 3 in
Oct 02 11:48:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:21.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 848 B/s wr, 60 op/s; 45 B/s, 4 objects/s recovering
Oct 02 11:48:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 02 11:48:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:22 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct 02 11:48:22 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct 02 11:48:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 02 11:48:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 02 11:48:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 2.1b scrub starts
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 2.1b scrub ok
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 3.1f scrub starts
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 3.1f scrub ok
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 3.13 scrub starts
Oct 02 11:48:22 compute-0 ceph-mon[73668]: 3.13 scrub ok
Oct 02 11:48:22 compute-0 ceph-mon[73668]: pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 848 B/s wr, 60 op/s; 45 B/s, 4 objects/s recovering
Oct 02 11:48:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:23 compute-0 ceph-mon[73668]: 2.19 deep-scrub starts
Oct 02 11:48:23 compute-0 ceph-mon[73668]: 2.19 deep-scrub ok
Oct 02 11:48:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:23 compute-0 ceph-mon[73668]: osdmap e72: 3 total, 3 up, 3 in
Oct 02 11:48:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:23.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 708 B/s wr, 50 op/s; 38 B/s, 3 objects/s recovering
Oct 02 11:48:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 02 11:48:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 02 11:48:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 02 11:48:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 02 11:48:24 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=11.218158722s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 176.173004150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=11.218087196s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.173004150s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=11.216378212s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 176.172012329s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=11.216349602s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.172012329s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[6.8( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=9.245861053s) [0] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 active pruub 174.201950073s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 73 pg[6.8( empty local-lis/les=47/48 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=9.245740891s) [0] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 174.201950073s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:24 compute-0 ceph-mon[73668]: 5.13 scrub starts
Oct 02 11:48:24 compute-0 ceph-mon[73668]: 5.13 scrub ok
Oct 02 11:48:24 compute-0 ceph-mon[73668]: 3.10 scrub starts
Oct 02 11:48:24 compute-0 ceph-mon[73668]: 3.10 scrub ok
Oct 02 11:48:24 compute-0 ceph-mon[73668]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 708 B/s wr, 50 op/s; 38 B/s, 3 objects/s recovering
Oct 02 11:48:24 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:24 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:25 compute-0 sshd-session[99026]: Accepted publickey for zuul from 192.168.122.30 port 52700 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:48:25 compute-0 systemd-logind[820]: New session 35 of user zuul.
Oct 02 11:48:25 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 02 11:48:25 compute-0 sshd-session[99026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:48:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 02 11:48:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:25 compute-0 ceph-mon[73668]: osdmap e73: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-0 ceph-mon[73668]: osdmap e74: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 74 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 74 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 74 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 74 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:25.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 732 B/s wr, 52 op/s; 39 B/s, 3 objects/s recovering
Oct 02 11:48:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 02 11:48:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 02 11:48:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 02 11:48:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 02 11:48:26 compute-0 python3.9[99179]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:48:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 02 11:48:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 02 11:48:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75 pruub=9.079144478s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 176.173019409s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75 pruub=9.079079628s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.173019409s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75 pruub=9.065492630s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 176.160995483s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75 pruub=9.065446854s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.160995483s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=75) [1] r=0 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:26 compute-0 ceph-mon[73668]: 3.14 deep-scrub starts
Oct 02 11:48:26 compute-0 ceph-mon[73668]: 3.14 deep-scrub ok
Oct 02 11:48:26 compute-0 ceph-mon[73668]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 732 B/s wr, 52 op/s; 39 B/s, 3 objects/s recovering
Oct 02 11:48:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:26 compute-0 ceph-mon[73668]: 2.e scrub starts
Oct 02 11:48:26 compute-0 ceph-mon[73668]: 2.e scrub ok
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:26 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 75 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[49,74)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 02 11:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=6 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.043690681s) [2] async=[2] r=-1 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 183.146514893s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=6 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.043565750s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.146514893s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=5 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.042869568s) [2] async=[2] r=-1 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 183.146484375s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=74/75 n=5 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.042716980s) [2] r=-1 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.146484375s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:27 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=75/76 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=75) [1] r=0 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:27 compute-0 ceph-mon[73668]: 4.1c scrub starts
Oct 02 11:48:27 compute-0 ceph-mon[73668]: 4.1c scrub ok
Oct 02 11:48:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:27 compute-0 ceph-mon[73668]: osdmap e75: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-0 ceph-mon[73668]: osdmap e76: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:27 compute-0 sudo[99392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcrtrmpmkxswowfaetiewykvkwgyzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405707.3097143-61-193963196580619/AnsiballZ_command.py'
Oct 02 11:48:27 compute-0 sudo[99392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 02 11:48:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 02 11:48:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:27.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:27 compute-0 python3.9[99394]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:28 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 02 11:48:28 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:48:28
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'vms']
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 2/10 changes
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Executing plan auto_2025-10-02_11:48:28
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] ceph osd pg-upmap-items 9.f mappings [{'from': 2, 'to': 0}]
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] ceph osd pg-upmap-items 9.1f mappings [{'from': 2, 'to': 0}]
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]} v 0) v1
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]: dispatch
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]} v 0) v1
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]: dispatch
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:48:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]': finished
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]': finished
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 crush map has features 3314933000854323200, adjusting msgr requires
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 77 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 77 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 77 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=14.978174210s) [0] r=-1 lpr=77 pi=[49,77)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 184.172668457s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=14.978096962s) [0] r=-1 lpr=77 pi=[49,77)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.172668457s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=14.977162361s) [0] r=-1 lpr=77 pi=[49,77)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 184.172576904s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=14.977022171s) [0] r=-1 lpr=77 pi=[49,77)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.172576904s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:28 compute-0 ceph-mon[73668]: 5.15 scrub starts
Oct 02 11:48:28 compute-0 ceph-mon[73668]: 5.15 scrub ok
Oct 02 11:48:28 compute-0 ceph-mon[73668]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:28 compute-0 ceph-mon[73668]: 2.6 scrub starts
Oct 02 11:48:28 compute-0 ceph-mon[73668]: 2.6 scrub ok
Oct 02 11:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]: dispatch
Oct 02 11:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]: dispatch
Oct 02 11:48:28 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 77 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 02 11:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=6 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78 pruub=15.230990410s) [2] async=[2] r=-1 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 185.238891602s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=6 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78 pruub=15.230671883s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.238891602s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=5 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78 pruub=15.246085167s) [2] async=[2] r=-1 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 185.254364014s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 78 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=5 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78 pruub=15.245867729s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.254364014s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:29.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]': finished
Oct 02 11:48:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]': finished
Oct 02 11:48:29 compute-0 ceph-mon[73668]: osdmap e77: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-0 ceph-mon[73668]: osdmap e78: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 02 11:48:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 02 11:48:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:29.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 02 11:48:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 02 11:48:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 02 11:48:30 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 79 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79) [1] r=0 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:30 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 79 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:30 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 79 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:30 compute-0 ceph-mon[73668]: 5.11 scrub starts
Oct 02 11:48:30 compute-0 ceph-mon[73668]: 5.11 scrub ok
Oct 02 11:48:30 compute-0 ceph-mon[73668]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-0 ceph-mon[73668]: osdmap e79: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 02 11:48:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 02 11:48:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 02 11:48:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.117868423s) [0] async=[0] r=-1 lpr=80 pi=[49,80)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 187.191116333s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.117607117s) [0] r=-1 lpr=80 pi=[49,80)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.191116333s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:31 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.117424011s) [0] async=[0] r=-1 lpr=80 pi=[49,80)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 187.190963745s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.117321968s) [0] r=-1 lpr=80 pi=[49,80)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.190963745s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:31 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 80 pg[6.b( v 52'3 lc 0'0 (0'0,52'3] local-lis/les=79/80 n=1 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79) [1] r=0 lpr=79 pi=[56,79)/1 crt=52'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:31 compute-0 ceph-mon[73668]: 4.2 scrub starts
Oct 02 11:48:31 compute-0 ceph-mon[73668]: 2.1e scrub starts
Oct 02 11:48:31 compute-0 ceph-mon[73668]: 4.2 scrub ok
Oct 02 11:48:31 compute-0 ceph-mon[73668]: 2.1e scrub ok
Oct 02 11:48:31 compute-0 ceph-mon[73668]: osdmap e80: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:31.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 02 11:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 02 11:48:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 02 11:48:32 compute-0 ceph-mon[73668]: pgmap v195: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:32 compute-0 ceph-mon[73668]: osdmap e81: 3 total, 3 up, 3 in
Oct 02 11:48:33 compute-0 sudo[99421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:33 compute-0 sudo[99421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:33 compute-0 sudo[99421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:33 compute-0 sudo[99446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:33 compute-0 sudo[99446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:33 compute-0 sudo[99446]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:34 compute-0 ceph-mon[73668]: 3.1b scrub starts
Oct 02 11:48:34 compute-0 ceph-mon[73668]: 3.1b scrub ok
Oct 02 11:48:34 compute-0 ceph-mon[73668]: pgmap v197: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:35 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 02 11:48:35 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 02 11:48:35 compute-0 sudo[99392]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:35 compute-0 ceph-mon[73668]: 2.9 scrub starts
Oct 02 11:48:35 compute-0 ceph-mon[73668]: 2.9 scrub ok
Oct 02 11:48:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:35.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:35 compute-0 sshd-session[99029]: Connection closed by 192.168.122.30 port 52700
Oct 02 11:48:35 compute-0 sshd-session[99026]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:48:35 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 02 11:48:35 compute-0 systemd[1]: session-35.scope: Consumed 8.764s CPU time.
Oct 02 11:48:35 compute-0 systemd-logind[820]: Session 35 logged out. Waiting for processes to exit.
Oct 02 11:48:35 compute-0 systemd-logind[820]: Removed session 35.
Oct 02 11:48:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:36 compute-0 ceph-mon[73668]: pgmap v198: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:37.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 5 objects/s recovering
Oct 02 11:48:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 02 11:48:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 02 11:48:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 02 11:48:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:37.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 02 11:48:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 02 11:48:38 compute-0 ceph-mon[73668]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 5 objects/s recovering
Oct 02 11:48:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:38 compute-0 ceph-mon[73668]: osdmap e82: 3 total, 3 up, 3 in
Oct 02 11:48:38 compute-0 ceph-mon[73668]: 4.19 scrub starts
Oct 02 11:48:38 compute-0 ceph-mon[73668]: 4.19 scrub ok
Oct 02 11:48:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 02 11:48:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 02 11:48:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:48:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:39.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Oct 02 11:48:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 02 11:48:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 02 11:48:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 02 11:48:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 02 11:48:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 02 11:48:40 compute-0 ceph-mon[73668]: 3.8 deep-scrub starts
Oct 02 11:48:40 compute-0 ceph-mon[73668]: 3.8 deep-scrub ok
Oct 02 11:48:40 compute-0 ceph-mon[73668]: 2.1f scrub starts
Oct 02 11:48:40 compute-0 ceph-mon[73668]: 2.1f scrub ok
Oct 02 11:48:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 02 11:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-0 ceph-mon[73668]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Oct 02 11:48:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:41 compute-0 ceph-mon[73668]: osdmap e83: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-0 ceph-mon[73668]: osdmap e84: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:48:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:41.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:48:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 4 objects/s recovering
Oct 02 11:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 02 11:48:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 02 11:48:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 02 11:48:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 02 11:48:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 02 11:48:42 compute-0 ceph-mon[73668]: 4.13 deep-scrub starts
Oct 02 11:48:42 compute-0 ceph-mon[73668]: 4.13 deep-scrub ok
Oct 02 11:48:42 compute-0 ceph-mon[73668]: 4.1d scrub starts
Oct 02 11:48:42 compute-0 ceph-mon[73668]: 4.1d scrub ok
Oct 02 11:48:42 compute-0 ceph-mon[73668]: pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 4 objects/s recovering
Oct 02 11:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-0 ceph-mon[73668]: osdmap e85: 3 total, 3 up, 3 in
Oct 02 11:48:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 85 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=67/68 n=1 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85 pruub=13.877998352s) [0] r=-1 lpr=85 pi=[67,85)/1 crt=52'3 mlcod 52'3 active pruub 197.685775757s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:43 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 85 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=67/68 n=1 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85 pruub=13.877917290s) [0] r=-1 lpr=85 pi=[67,85)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 197.685775757s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 02 11:48:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 02 11:48:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 02 11:48:43 compute-0 ceph-mon[73668]: 2.d scrub starts
Oct 02 11:48:43 compute-0 ceph-mon[73668]: 2.d scrub ok
Oct 02 11:48:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:43.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 02 11:48:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 02 11:48:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:48:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:43.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 02 11:48:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 02 11:48:44 compute-0 ceph-osd[84115]: osd.1 87 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 11:48:44 compute-0 ceph-osd[84115]: osd.1 87 crush map has features 288514051259236352 was 432629239337198081, adjusting msgr requires for mons
Oct 02 11:48:44 compute-0 ceph-osd[84115]: osd.1 87 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 11:48:44 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 87 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87) [1] r=0 lpr=87 pi=[56,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:44 compute-0 ceph-mon[73668]: 5.10 deep-scrub starts
Oct 02 11:48:44 compute-0 ceph-mon[73668]: 5.10 deep-scrub ok
Oct 02 11:48:44 compute-0 ceph-mon[73668]: osdmap e86: 3 total, 3 up, 3 in
Oct 02 11:48:44 compute-0 ceph-mon[73668]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 02 11:48:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 88 pg[6.f( v 52'5 lc 52'1 (0'0,52'5] local-lis/les=87/88 n=3 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87) [1] r=0 lpr=87 pi=[56,87)/1 crt=52'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:45 compute-0 ceph-mon[73668]: 5.1f deep-scrub starts
Oct 02 11:48:45 compute-0 ceph-mon[73668]: 5.1f deep-scrub ok
Oct 02 11:48:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:45 compute-0 ceph-mon[73668]: osdmap e87: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-0 ceph-mon[73668]: osdmap e88: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Oct 02 11:48:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 02 11:48:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 11:48:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:45.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 02 11:48:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 11:48:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 02 11:48:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 02 11:48:46 compute-0 ceph-mon[73668]: 3.16 deep-scrub starts
Oct 02 11:48:46 compute-0 ceph-mon[73668]: 3.16 deep-scrub ok
Oct 02 11:48:46 compute-0 ceph-mon[73668]: pgmap v210: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Oct 02 11:48:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 11:48:47 compute-0 ceph-mon[73668]: 7.1 scrub starts
Oct 02 11:48:47 compute-0 ceph-mon[73668]: 7.1 scrub ok
Oct 02 11:48:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 11:48:47 compute-0 ceph-mon[73668]: osdmap e89: 3 total, 3 up, 3 in
Oct 02 11:48:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:47.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 71 B/s, 3 objects/s recovering
Oct 02 11:48:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 02 11:48:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 11:48:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 89 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=89 pruub=11.664540291s) [0] r=-1 lpr=89 pi=[49,89)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 200.172958374s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:47 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 89 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=89 pruub=11.664468765s) [0] r=-1 lpr=89 pi=[49,89)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.172958374s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:47.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 02 11:48:48 compute-0 ceph-mon[73668]: pgmap v212: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 71 B/s, 3 objects/s recovering
Oct 02 11:48:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 11:48:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 11:48:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 02 11:48:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 02 11:48:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 90 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=0 lpr=90 pi=[49,90)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 90 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=0 lpr=90 pi=[49,90)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 90 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90 pruub=11.023226738s) [0] r=-1 lpr=90 pi=[49,90)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 200.173629761s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:48 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 90 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90 pruub=11.023095131s) [0] r=-1 lpr=90 pi=[49,90)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.173629761s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 02 11:48:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 91 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=0 lpr=91 pi=[49,91)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 91 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=0 lpr=91 pi=[49,91)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:49 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 91 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=90/91 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] async=[0] r=0 lpr=90 pi=[49,90)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 11:48:49 compute-0 ceph-mon[73668]: osdmap e90: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-0 ceph-mon[73668]: osdmap e91: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 72 B/s, 3 objects/s recovering
Oct 02 11:48:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 02 11:48:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 11:48:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:49.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:50 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 02 11:48:50 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 02 11:48:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 02 11:48:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 11:48:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 02 11:48:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 02 11:48:50 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=90/91 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92 pruub=15.003633499s) [0] async=[0] r=-1 lpr=92 pi=[49,92)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 206.024765015s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:50 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=90/91 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92 pruub=15.003546715s) [0] r=-1 lpr=92 pi=[49,92)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.024765015s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:50 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 92 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=92 pruub=9.150441170s) [0] r=-1 lpr=92 pi=[49,92)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 200.172943115s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:50 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 92 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=92 pruub=9.150412560s) [0] r=-1 lpr=92 pi=[49,92)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.172943115s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:50 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 92 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=91/92 n=6 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] async=[0] r=0 lpr=91 pi=[49,91)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:50 compute-0 ceph-mon[73668]: pgmap v215: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 72 B/s, 3 objects/s recovering
Oct 02 11:48:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 11:48:50 compute-0 ceph-mon[73668]: 6.4 scrub starts
Oct 02 11:48:50 compute-0 ceph-mon[73668]: 6.4 scrub ok
Oct 02 11:48:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 11:48:50 compute-0 ceph-mon[73668]: osdmap e92: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 02 11:48:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 93 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:51 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=91/92 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93 pruub=14.993165016s) [0] async=[0] r=-1 lpr=93 pi=[49,93)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 207.026046753s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:51 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=91/92 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93 pruub=14.993021965s) [0] r=-1 lpr=93 pi=[49,93)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.026046753s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:51 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 93 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=49/50 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:51 compute-0 ceph-mon[73668]: 7.7 deep-scrub starts
Oct 02 11:48:51 compute-0 ceph-mon[73668]: 7.7 deep-scrub ok
Oct 02 11:48:51 compute-0 ceph-mon[73668]: 7.1f scrub starts
Oct 02 11:48:51 compute-0 ceph-mon[73668]: 7.1f scrub ok
Oct 02 11:48:51 compute-0 ceph-mon[73668]: osdmap e93: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:48:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:51.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 02 11:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 02 11:48:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 02 11:48:52 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 94 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=93/94 n=5 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] async=[0] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:52 compute-0 sshd-session[99514]: Accepted publickey for zuul from 192.168.122.30 port 43128 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:48:52 compute-0 systemd-logind[820]: New session 36 of user zuul.
Oct 02 11:48:52 compute-0 ceph-mon[73668]: pgmap v218: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:52 compute-0 ceph-mon[73668]: osdmap e94: 3 total, 3 up, 3 in
Oct 02 11:48:52 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 02 11:48:52 compute-0 sshd-session[99514]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:48:53 compute-0 python3.9[99667]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 11:48:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 02 11:48:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 02 11:48:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 02 11:48:53 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=93/94 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95 pruub=14.997019768s) [0] async=[0] r=-1 lpr=95 pi=[49,95)/1 crt=44'1012 lcod 0'0 mlcod 0'0 active pruub 209.046142578s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:53 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=93/94 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95 pruub=14.996909142s) [0] r=-1 lpr=95 pi=[49,95)/1 crt=44'1012 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.046142578s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:53 compute-0 sudo[99692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:53 compute-0 sudo[99692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:53 compute-0 sudo[99692]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:53 compute-0 ceph-mon[73668]: 7.c scrub starts
Oct 02 11:48:53 compute-0 ceph-mon[73668]: 7.c scrub ok
Oct 02 11:48:53 compute-0 ceph-mon[73668]: 11.17 scrub starts
Oct 02 11:48:53 compute-0 ceph-mon[73668]: 11.17 scrub ok
Oct 02 11:48:53 compute-0 ceph-mon[73668]: osdmap e95: 3 total, 3 up, 3 in
Oct 02 11:48:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:53 compute-0 sudo[99740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:53 compute-0 sudo[99740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:53 compute-0 sudo[99740]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:53.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 02 11:48:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 02 11:48:54 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 02 11:48:54 compute-0 ceph-mon[73668]: pgmap v221: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:54 compute-0 ceph-mon[73668]: osdmap e96: 3 total, 3 up, 3 in
Oct 02 11:48:54 compute-0 python3.9[99892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:48:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 02 11:48:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 02 11:48:55 compute-0 sudo[100046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnhmfepkqrnesxkirhorwcjwdywekruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405735.1747515-98-13840501559146/AnsiballZ_command.py'
Oct 02 11:48:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:55 compute-0 sudo[100046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:55.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:55 compute-0 python3.9[100048]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:55 compute-0 sudo[100046]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:55 compute-0 ceph-mon[73668]: 6.c scrub starts
Oct 02 11:48:55 compute-0 ceph-mon[73668]: 6.c scrub ok
Oct 02 11:48:55 compute-0 ceph-mon[73668]: 8.15 scrub starts
Oct 02 11:48:55 compute-0 ceph-mon[73668]: 8.15 scrub ok
Oct 02 11:48:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:56 compute-0 sudo[100201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gklaqcrkrbhiaufvenyihjeonpdaineb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405736.2325714-134-10513173454430/AnsiballZ_stat.py'
Oct 02 11:48:56 compute-0 sudo[100201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:56 compute-0 python3.9[100203]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:48:56 compute-0 sudo[100201]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:57 compute-0 ceph-mon[73668]: pgmap v223: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:57.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:57 compute-0 sudo[100355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdeymgruakigvrigypydlarfmgglswiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405737.160997-167-5818600990867/AnsiballZ_file.py'
Oct 02 11:48:57 compute-0 sudo[100355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:57 compute-0 python3.9[100357]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:48:57 compute-0 sudo[100355]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Oct 02 11:48:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 02 11:48:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 11:48:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:48:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:48:57 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 02 11:48:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 02 11:48:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 02 11:48:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 11:48:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 02 11:48:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 02 11:48:58 compute-0 ceph-mon[73668]: 7.d scrub starts
Oct 02 11:48:58 compute-0 ceph-mon[73668]: 7.d scrub ok
Oct 02 11:48:58 compute-0 ceph-mon[73668]: 11.16 scrub starts
Oct 02 11:48:58 compute-0 ceph-mon[73668]: 11.16 scrub ok
Oct 02 11:48:58 compute-0 ceph-mon[73668]: pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Oct 02 11:48:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:48:58 compute-0 python3.9[100508]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:48:58 compute-0 network[100525]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:48:58 compute-0 network[100526]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:48:58 compute-0 network[100527]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:48:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 02 11:48:59 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 02 11:48:59 compute-0 ceph-mon[73668]: 8.1 scrub starts
Oct 02 11:48:59 compute-0 ceph-mon[73668]: 8.1 scrub ok
Oct 02 11:48:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 11:48:59 compute-0 ceph-mon[73668]: osdmap e97: 3 total, 3 up, 3 in
Oct 02 11:48:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:48:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:48:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 158 B/s wr, 10 op/s; 34 B/s, 1 objects/s recovering
Oct 02 11:48:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 02 11:48:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 11:48:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:48:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:48:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:59.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:48:59 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Oct 02 11:49:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Oct 02 11:49:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 02 11:49:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 11:49:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 02 11:49:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 7.12 scrub starts
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 7.12 scrub ok
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 8.7 scrub starts
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 8.7 scrub ok
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 10.12 scrub starts
Oct 02 11:49:00 compute-0 ceph-mon[73668]: 10.12 scrub ok
Oct 02 11:49:00 compute-0 ceph-mon[73668]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 158 B/s wr, 10 op/s; 34 B/s, 1 objects/s recovering
Oct 02 11:49:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 11:49:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:01 compute-0 ceph-mon[73668]: 8.e deep-scrub starts
Oct 02 11:49:01 compute-0 ceph-mon[73668]: 8.e deep-scrub ok
Oct 02 11:49:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 11:49:01 compute-0 ceph-mon[73668]: osdmap e98: 3 total, 3 up, 3 in
Oct 02 11:49:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 5.1 KiB/s rd, 0 B/s wr, 9 op/s; 29 B/s, 1 objects/s recovering
Oct 02 11:49:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 02 11:49:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 11:49:01 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 02 11:49:01 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 02 11:49:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:02 compute-0 python3.9[100792]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 02 11:49:02 compute-0 ceph-mon[73668]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 5.1 KiB/s rd, 0 B/s wr, 9 op/s; 29 B/s, 1 objects/s recovering
Oct 02 11:49:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 11:49:02 compute-0 ceph-mon[73668]: 8.13 scrub starts
Oct 02 11:49:02 compute-0 ceph-mon[73668]: 8.13 scrub ok
Oct 02 11:49:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 11:49:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 02 11:49:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-0 python3.9[100942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:49:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:49:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:49:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 02 11:49:03 compute-0 ceph-mon[73668]: 7.15 scrub starts
Oct 02 11:49:03 compute-0 ceph-mon[73668]: 7.15 scrub ok
Oct 02 11:49:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 11:49:03 compute-0 ceph-mon[73668]: osdmap e99: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 02 11:49:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 11:49:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:03.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:04 compute-0 python3.9[101097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:49:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 02 11:49:04 compute-0 ceph-mon[73668]: osdmap e100: 3 total, 3 up, 3 in
Oct 02 11:49:04 compute-0 ceph-mon[73668]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 11:49:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 11:49:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 02 11:49:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 02 11:49:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 02 11:49:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 02 11:49:05 compute-0 sudo[101253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntssbtvghyvlewqpxgiiyjaowhvngdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405745.0882082-311-262801798951854/AnsiballZ_setup.py'
Oct 02 11:49:05 compute-0 sudo[101253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:05 compute-0 python3.9[101255]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:49:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 02 11:49:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 02 11:49:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 02 11:49:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 11:49:05 compute-0 ceph-mon[73668]: osdmap e101: 3 total, 3 up, 3 in
Oct 02 11:49:05 compute-0 ceph-mon[73668]: 8.1a scrub starts
Oct 02 11:49:05 compute-0 ceph-mon[73668]: 8.1a scrub ok
Oct 02 11:49:05 compute-0 ceph-mon[73668]: 7.a scrub starts
Oct 02 11:49:05 compute-0 ceph-mon[73668]: 7.a scrub ok
Oct 02 11:49:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:05 compute-0 sudo[101253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 02 11:49:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 02 11:49:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:06 compute-0 sudo[101338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lamijtxarilklqznjaqvnebfzznnxqxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405745.0882082-311-262801798951854/AnsiballZ_dnf.py'
Oct 02 11:49:06 compute-0 sudo[101338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:06 compute-0 python3.9[101340]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:49:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 02 11:49:07 compute-0 ceph-mon[73668]: 8.1d scrub starts
Oct 02 11:49:07 compute-0 ceph-mon[73668]: 8.1d scrub ok
Oct 02 11:49:07 compute-0 ceph-mon[73668]: pgmap v233: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:07 compute-0 ceph-mon[73668]: osdmap e102: 3 total, 3 up, 3 in
Oct 02 11:49:07 compute-0 ceph-mon[73668]: 7.5 deep-scrub starts
Oct 02 11:49:07 compute-0 ceph-mon[73668]: 7.5 deep-scrub ok
Oct 02 11:49:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 02 11:49:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 02 11:49:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Oct 02 11:49:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:07.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 02 11:49:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 02 11:49:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 02 11:49:08 compute-0 ceph-mon[73668]: osdmap e103: 3 total, 3 up, 3 in
Oct 02 11:49:08 compute-0 ceph-mon[73668]: pgmap v236: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Oct 02 11:49:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 02 11:49:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:09 compute-0 ceph-mon[73668]: 7.17 scrub starts
Oct 02 11:49:09 compute-0 ceph-mon[73668]: 7.17 scrub ok
Oct 02 11:49:09 compute-0 ceph-mon[73668]: osdmap e104: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-0 ceph-mon[73668]: osdmap e105: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:09.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:10 compute-0 ceph-mon[73668]: pgmap v239: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 37 B/s, 1 objects/s recovering
Oct 02 11:49:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 02 11:49:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 11:49:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 02 11:49:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:11.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 11:49:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 02 11:49:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 11:49:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 02 11:49:13 compute-0 ceph-mon[73668]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 37 B/s, 1 objects/s recovering
Oct 02 11:49:13 compute-0 ceph-mon[73668]: 7.19 scrub starts
Oct 02 11:49:13 compute-0 ceph-mon[73668]: 7.19 scrub ok
Oct 02 11:49:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 11:49:13 compute-0 ceph-mon[73668]: osdmap e106: 3 total, 3 up, 3 in
Oct 02 11:49:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:13 compute-0 sudo[101413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:13 compute-0 sudo[101413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:13 compute-0 sudo[101413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:13 compute-0 sudo[101438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:13 compute-0 sudo[101438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:13 compute-0 sudo[101438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:49:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 02 11:49:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 11:49:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:14.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 02 11:49:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 11:49:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 02 11:49:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 02 11:49:14 compute-0 ceph-mon[73668]: 7.1a scrub starts
Oct 02 11:49:14 compute-0 ceph-mon[73668]: 7.1a scrub ok
Oct 02 11:49:14 compute-0 ceph-mon[73668]: 7.16 scrub starts
Oct 02 11:49:14 compute-0 ceph-mon[73668]: 7.16 scrub ok
Oct 02 11:49:14 compute-0 ceph-mon[73668]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:49:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 11:49:14 compute-0 sudo[101464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:14 compute-0 sudo[101464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-0 sudo[101464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:14 compute-0 sudo[101489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:14 compute-0 sudo[101489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-0 sudo[101489]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-0 sudo[101514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:14 compute-0 sudo[101514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-0 sudo[101514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-0 sudo[101539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:49:14 compute-0 sudo[101539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:15 compute-0 podman[101635]: 2025-10-02 11:49:15.095307946 +0000 UTC m=+0.069316353 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:49:15 compute-0 podman[101635]: 2025-10-02 11:49:15.208484183 +0000 UTC m=+0.182492560 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 11:49:15 compute-0 ceph-mon[73668]: osdmap e107: 3 total, 3 up, 3 in
Oct 02 11:49:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:49:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:49:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:15 compute-0 podman[101775]: 2025-10-02 11:49:15.751431049 +0000 UTC m=+0.056675437 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:49:15 compute-0 podman[101775]: 2025-10-02 11:49:15.760545011 +0000 UTC m=+0.065789219 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:49:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Oct 02 11:49:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 02 11:49:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 11:49:15 compute-0 podman[101836]: 2025-10-02 11:49:15.96263571 +0000 UTC m=+0.046941118 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, version=2.2.4)
Oct 02 11:49:15 compute-0 podman[101836]: 2025-10-02 11:49:15.975601705 +0000 UTC m=+0.059907113 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, distribution-scope=public)
Oct 02 11:49:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:16.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:16 compute-0 sudo[101539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 sudo[101887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:16 compute-0 sudo[101887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:16 compute-0 sudo[101887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:16 compute-0 sudo[101913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:16 compute-0 sudo[101913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:16 compute-0 sudo[101913]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:16 compute-0 sudo[101938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:16 compute-0 sudo[101938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:16 compute-0 sudo[101938]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:16 compute-0 sudo[101963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:49:16 compute-0 sudo[101963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:16 compute-0 ceph-mon[73668]: 7.1c scrub starts
Oct 02 11:49:16 compute-0 ceph-mon[73668]: 7.1c scrub ok
Oct 02 11:49:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Oct 02 11:49:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 11:49:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: 8.a scrub starts
Oct 02 11:49:16 compute-0 ceph-mon[73668]: 8.a scrub ok
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 11:49:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 02 11:49:16 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 02 11:49:16 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=108) [1] r=0 lpr=108 pi=[78,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:16 compute-0 sudo[101963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev eb16bd7c-d227-4407-a8b5-78cb764135fb does not exist
Oct 02 11:49:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b9074d2c-591a-4772-8984-bd2802a8ce53 does not exist
Oct 02 11:49:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b3c29a6e-2d93-492a-bc9f-9608cbee2b39 does not exist
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-0 sudo[102019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:17 compute-0 sudo[102019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:17 compute-0 sudo[102019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:17 compute-0 sudo[102044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:17 compute-0 sudo[102044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:17 compute-0 sudo[102044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:17 compute-0 sudo[102069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:17 compute-0 sudo[102069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:17 compute-0 sudo[102069]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:17 compute-0 sudo[102094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:49:17 compute-0 sudo[102094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:17.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 11:49:17 compute-0 ceph-mon[73668]: osdmap e108: 3 total, 3 up, 3 in
Oct 02 11:49:17 compute-0 ceph-mon[73668]: 10.6 scrub starts
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 02 11:49:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[78,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:17 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[78,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 02 11:49:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 11:49:17 compute-0 podman[102160]: 2025-10-02 11:49:17.948384341 +0000 UTC m=+0.043788835 container create 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:49:17 compute-0 systemd[1]: Started libpod-conmon-1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4.scope.
Oct 02 11:49:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:18.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:17.930054314 +0000 UTC m=+0.025458838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:18.026386223 +0000 UTC m=+0.121790737 container init 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:18.035480415 +0000 UTC m=+0.130884909 container start 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:18.038853325 +0000 UTC m=+0.134257849 container attach 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:49:18 compute-0 elastic_sammet[102177]: 167 167
Oct 02 11:49:18 compute-0 systemd[1]: libpod-1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4.scope: Deactivated successfully.
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:18.041792733 +0000 UTC m=+0.137197227 container died 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d72e94ef55cac552c94ae1d7e9fad9cf29df0111d1d8a51b413dd6db8335cd71-merged.mount: Deactivated successfully.
Oct 02 11:49:18 compute-0 podman[102160]: 2025-10-02 11:49:18.079206047 +0000 UTC m=+0.174610541 container remove 1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sammet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:49:18 compute-0 systemd[1]: libpod-conmon-1be35939fbccf7d0d36445c43b13e8cf76b11be38f006232b73671a631e8e7a4.scope: Deactivated successfully.
Oct 02 11:49:18 compute-0 podman[102201]: 2025-10-02 11:49:18.252284525 +0000 UTC m=+0.058065443 container create 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:49:18 compute-0 systemd[1]: Started libpod-conmon-42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1.scope.
Oct 02 11:49:18 compute-0 podman[102201]: 2025-10-02 11:49:18.217862231 +0000 UTC m=+0.023643169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:18 compute-0 podman[102201]: 2025-10-02 11:49:18.36009047 +0000 UTC m=+0.165871388 container init 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:49:18 compute-0 podman[102201]: 2025-10-02 11:49:18.368155354 +0000 UTC m=+0.173936272 container start 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:49:18 compute-0 podman[102201]: 2025-10-02 11:49:18.373762173 +0000 UTC m=+0.179543361 container attach 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:49:18 compute-0 ceph-mon[73668]: 10.6 scrub ok
Oct 02 11:49:18 compute-0 ceph-mon[73668]: osdmap e109: 3 total, 3 up, 3 in
Oct 02 11:49:18 compute-0 ceph-mon[73668]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 11:49:18 compute-0 ceph-mon[73668]: 11.13 scrub starts
Oct 02 11:49:18 compute-0 ceph-mon[73668]: 11.13 scrub ok
Oct 02 11:49:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 02 11:49:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 11:49:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 02 11:49:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 110 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=110) [1] r=0 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:19 compute-0 youthful_swanson[102219]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:49:19 compute-0 youthful_swanson[102219]: --> relative data size: 1.0
Oct 02 11:49:19 compute-0 youthful_swanson[102219]: --> All data devices are unavailable
Oct 02 11:49:19 compute-0 systemd[1]: libpod-42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1.scope: Deactivated successfully.
Oct 02 11:49:19 compute-0 podman[102201]: 2025-10-02 11:49:19.294379163 +0000 UTC m=+1.100160091 container died 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:49:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c87ac1460e603888463f3985c653c504f08b0624835c1c838b81b2251e5f5ea-merged.mount: Deactivated successfully.
Oct 02 11:49:19 compute-0 podman[102201]: 2025-10-02 11:49:19.351237203 +0000 UTC m=+1.157018111 container remove 42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:19 compute-0 systemd[1]: libpod-conmon-42c7f294a848510a46ddeba2ebd0755f6c2082e057f4dfa9c4a4924ed671e9d1.scope: Deactivated successfully.
Oct 02 11:49:19 compute-0 sudo[102094]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 02 11:49:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 111 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=109/78 les/c/f=110/79/0 sis=111) [1] r=0 lpr=111 pi=[78,111)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 111 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=109/78 les/c/f=110/79/0 sis=111) [1] r=0 lpr=111 pi=[78,111)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 111 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[80,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:19 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 111 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[80,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:19 compute-0 sudo[102253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:19 compute-0 sudo[102253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:19 compute-0 sudo[102253]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:19 compute-0 sudo[102278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:19 compute-0 sudo[102278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:19 compute-0 sudo[102278]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:19 compute-0 sudo[102303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:19 compute-0 sudo[102303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:19 compute-0 sudo[102303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:19.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:19 compute-0 sudo[102328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:49:19 compute-0 sudo[102328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:19 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 02 11:49:19 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 02 11:49:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Oct 02 11:49:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 02 11:49:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 11:49:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 11:49:19 compute-0 ceph-mon[73668]: osdmap e110: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-0 ceph-mon[73668]: osdmap e111: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-0 ceph-mon[73668]: 11.8 scrub starts
Oct 02 11:49:19 compute-0 ceph-mon[73668]: 11.8 scrub ok
Oct 02 11:49:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:49:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:20.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.021976195 +0000 UTC m=+0.057606212 container create 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:20 compute-0 systemd[1]: Started libpod-conmon-7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156.scope.
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.000844023 +0000 UTC m=+0.036474040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.105943265 +0000 UTC m=+0.141573302 container init 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.113879706 +0000 UTC m=+0.149509723 container start 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.116744562 +0000 UTC m=+0.152374599 container attach 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:49:20 compute-0 gallant_curie[102405]: 167 167
Oct 02 11:49:20 compute-0 systemd[1]: libpod-7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156.scope: Deactivated successfully.
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.120460031 +0000 UTC m=+0.156090048 container died 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2de4c2a30343a804ec4e360d3f46dc905a3eb0b9520cad1a2fc8f78a319dc899-merged.mount: Deactivated successfully.
Oct 02 11:49:20 compute-0 podman[102389]: 2025-10-02 11:49:20.16747064 +0000 UTC m=+0.203100657 container remove 7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:49:20 compute-0 systemd[1]: libpod-conmon-7431dcef1d1e9fafdb053a8f303b316f43e39062d1c528a617986b54fbaaa156.scope: Deactivated successfully.
Oct 02 11:49:20 compute-0 podman[102431]: 2025-10-02 11:49:20.330467741 +0000 UTC m=+0.051988262 container create 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:49:20 compute-0 systemd[1]: Started libpod-conmon-84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60.scope.
Oct 02 11:49:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:20 compute-0 podman[102431]: 2025-10-02 11:49:20.30106301 +0000 UTC m=+0.022583581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a16d479d0de10ec775f7dd45ac34733f82c06ae086b9d98685323dcbaf79215/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a16d479d0de10ec775f7dd45ac34733f82c06ae086b9d98685323dcbaf79215/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a16d479d0de10ec775f7dd45ac34733f82c06ae086b9d98685323dcbaf79215/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a16d479d0de10ec775f7dd45ac34733f82c06ae086b9d98685323dcbaf79215/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:20 compute-0 podman[102431]: 2025-10-02 11:49:20.419592269 +0000 UTC m=+0.141112820 container init 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:49:20 compute-0 podman[102431]: 2025-10-02 11:49:20.43016986 +0000 UTC m=+0.151690391 container start 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:49:20 compute-0 podman[102431]: 2025-10-02 11:49:20.434385272 +0000 UTC m=+0.155905883 container attach 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:49:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 02 11:49:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 11:49:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 02 11:49:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 02 11:49:20 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 112 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=109/78 les/c/f=110/79/0 sis=111) [1] r=0 lpr=111 pi=[78,111)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:20 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 112 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=112) [1] r=0 lpr=112 pi=[59,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:21 compute-0 ceph-mon[73668]: 8.1e scrub starts
Oct 02 11:49:21 compute-0 ceph-mon[73668]: 8.1e scrub ok
Oct 02 11:49:21 compute-0 ceph-mon[73668]: pgmap v250: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Oct 02 11:49:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 11:49:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 11:49:21 compute-0 ceph-mon[73668]: osdmap e112: 3 total, 3 up, 3 in
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]: {
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:     "1": [
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:         {
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "devices": [
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "/dev/loop3"
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             ],
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "lv_name": "ceph_lv0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "lv_size": "7511998464",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "name": "ceph_lv0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "tags": {
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.cluster_name": "ceph",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.crush_device_class": "",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.encrypted": "0",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.osd_id": "1",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.type": "block",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:                 "ceph.vdo": "0"
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             },
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "type": "block",
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:             "vg_name": "ceph_vg0"
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:         }
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]:     ]
Oct 02 11:49:21 compute-0 amazing_ganguly[102449]: }
Oct 02 11:49:21 compute-0 systemd[1]: libpod-84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60.scope: Deactivated successfully.
Oct 02 11:49:21 compute-0 podman[102460]: 2025-10-02 11:49:21.371381988 +0000 UTC m=+0.026127745 container died 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a16d479d0de10ec775f7dd45ac34733f82c06ae086b9d98685323dcbaf79215-merged.mount: Deactivated successfully.
Oct 02 11:49:21 compute-0 podman[102460]: 2025-10-02 11:49:21.429853181 +0000 UTC m=+0.084598918 container remove 84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 02 11:49:21 compute-0 systemd[1]: libpod-conmon-84a0081a4ac14eb781b4c57fa9a4b93e67782bca5d478d94000cd3a13df03e60.scope: Deactivated successfully.
Oct 02 11:49:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 02 11:49:21 compute-0 sudo[102328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:21 compute-0 sudo[102477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:21 compute-0 sudo[102477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:21 compute-0 sudo[102477]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 02 11:49:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 02 11:49:21 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113) [1] r=0 lpr=113 pi=[80,113)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:21 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113) [1] r=0 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:21 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 113 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[59,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:21 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 113 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[59,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:21 compute-0 sudo[102502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:21 compute-0 sudo[102502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:21 compute-0 sudo[102502]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:21.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:21 compute-0 sudo[102527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:21 compute-0 sudo[102527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:21 compute-0 sudo[102527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:21 compute-0 sudo[102552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:49:21 compute-0 sudo[102552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 02 11:49:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 02 11:49:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:49:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.052947547 +0000 UTC m=+0.044906214 container create 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:49:22 compute-0 systemd[1]: Started libpod-conmon-96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b.scope.
Oct 02 11:49:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.122551846 +0000 UTC m=+0.114510543 container init 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.03347955 +0000 UTC m=+0.025438237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.130290362 +0000 UTC m=+0.122249039 container start 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:49:22 compute-0 vigorous_maxwell[102635]: 167 167
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.134869044 +0000 UTC m=+0.126827731 container attach 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 11:49:22 compute-0 systemd[1]: libpod-96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b.scope: Deactivated successfully.
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.138235283 +0000 UTC m=+0.130194000 container died 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb6314ef2bb1f28906a23f46630bc45774b316686f6714a8aa4625f88d8dc73-merged.mount: Deactivated successfully.
Oct 02 11:49:22 compute-0 podman[102619]: 2025-10-02 11:49:22.181380828 +0000 UTC m=+0.173339495 container remove 96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:49:22 compute-0 systemd[1]: libpod-conmon-96d4fc15158271ae01b6a51b370d93055e03ecf6d7df91f66767b1ed4877680b.scope: Deactivated successfully.
Oct 02 11:49:22 compute-0 podman[102659]: 2025-10-02 11:49:22.376667257 +0000 UTC m=+0.089144170 container create d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:49:22 compute-0 podman[102659]: 2025-10-02 11:49:22.312375149 +0000 UTC m=+0.024852082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:49:22 compute-0 systemd[1]: Started libpod-conmon-d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7.scope.
Oct 02 11:49:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead9b48d29c7e3a06f91d93f1c136069acffbc3de09ec647f0fe7e1aef087538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead9b48d29c7e3a06f91d93f1c136069acffbc3de09ec647f0fe7e1aef087538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead9b48d29c7e3a06f91d93f1c136069acffbc3de09ec647f0fe7e1aef087538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead9b48d29c7e3a06f91d93f1c136069acffbc3de09ec647f0fe7e1aef087538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:49:22 compute-0 podman[102659]: 2025-10-02 11:49:22.56345237 +0000 UTC m=+0.275929293 container init d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:49:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 02 11:49:22 compute-0 podman[102659]: 2025-10-02 11:49:22.572434798 +0000 UTC m=+0.284911721 container start d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:49:22 compute-0 podman[102659]: 2025-10-02 11:49:22.601712316 +0000 UTC m=+0.314189239 container attach d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:49:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 02 11:49:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 02 11:49:22 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 114 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=113/114 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113) [1] r=0 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:23 compute-0 ceph-mon[73668]: osdmap e113: 3 total, 3 up, 3 in
Oct 02 11:49:23 compute-0 ceph-mon[73668]: 10.7 scrub starts
Oct 02 11:49:23 compute-0 ceph-mon[73668]: 10.7 scrub ok
Oct 02 11:49:23 compute-0 ceph-mon[73668]: 9.1 scrub starts
Oct 02 11:49:23 compute-0 ceph-mon[73668]: 9.1 scrub ok
Oct 02 11:49:23 compute-0 ceph-mon[73668]: pgmap v253: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]: {
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:         "osd_id": 1,
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:         "type": "bluestore"
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]:     }
Oct 02 11:49:23 compute-0 inspiring_sutherland[102679]: }
Oct 02 11:49:23 compute-0 systemd[1]: libpod-d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7.scope: Deactivated successfully.
Oct 02 11:49:23 compute-0 podman[102659]: 2025-10-02 11:49:23.514495669 +0000 UTC m=+1.226972682 container died d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:49:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:23.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ead9b48d29c7e3a06f91d93f1c136069acffbc3de09ec647f0fe7e1aef087538-merged.mount: Deactivated successfully.
Oct 02 11:49:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 0 objects/s recovering
Oct 02 11:49:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:24.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:24 compute-0 podman[102659]: 2025-10-02 11:49:24.025238539 +0000 UTC m=+1.737715462 container remove d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:49:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 02 11:49:24 compute-0 sudo[102552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:49:24 compute-0 systemd[1]: libpod-conmon-d2a5ab74c80f807d85583b3dce2a0873d0d084ae1030d25c529ce69ab02da5f7.scope: Deactivated successfully.
Oct 02 11:49:24 compute-0 ceph-mon[73668]: osdmap e114: 3 total, 3 up, 3 in
Oct 02 11:49:24 compute-0 ceph-mon[73668]: pgmap v255: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 0 objects/s recovering
Oct 02 11:49:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 02 11:49:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:24 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 02 11:49:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:49:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 115 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=113/59 les/c/f=114/60/0 sis=115) [1] r=0 lpr=115 pi=[59,115)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:24 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 115 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=113/59 les/c/f=114/60/0 sis=115) [1] r=0 lpr=115 pi=[59,115)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 245bb9c4-1bcf-4f07-8076-1acad3f232f8 does not exist
Oct 02 11:49:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0bb9a485-84a2-40a3-9102-0f45ddf1893e does not exist
Oct 02 11:49:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8be8cff2-d866-47e1-a164-730ce46f62bf does not exist
Oct 02 11:49:24 compute-0 sudo[102738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:24 compute-0 sudo[102738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:24 compute-0 sudo[102738]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:24 compute-0 sudo[102771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:49:24 compute-0 sudo[102771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:24 compute-0 sudo[102771]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:24 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Oct 02 11:49:24 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Oct 02 11:49:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 02 11:49:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:25 compute-0 ceph-mon[73668]: osdmap e115: 3 total, 3 up, 3 in
Oct 02 11:49:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:25 compute-0 ceph-mon[73668]: 11.3 scrub starts
Oct 02 11:49:25 compute-0 ceph-mon[73668]: 11.3 scrub ok
Oct 02 11:49:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 02 11:49:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 02 11:49:25 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 116 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=115/116 n=5 ec=49/38 lis/c=113/59 les/c/f=114/60/0 sis=115) [1] r=0 lpr=115 pi=[59,115)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.332048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765332250, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7436, "num_deletes": 251, "total_data_size": 9855386, "memory_usage": 10044136, "flush_reason": "Manual Compaction"}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765386046, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7964883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7572, "table_properties": {"data_size": 7937367, "index_size": 18057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77752, "raw_average_key_size": 23, "raw_value_size": 7872721, "raw_average_value_size": 2362, "num_data_blocks": 799, "num_entries": 3333, "num_filter_entries": 3333, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405419, "oldest_key_time": 1759405419, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 54077 microseconds, and 17350 cpu microseconds.
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.386148) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7964883 bytes OK
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.386172) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.387933) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.387955) EVENT_LOG_v1 {"time_micros": 1759405765387950, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.387980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9822931, prev total WAL file size 9822931, number of live WAL files 2.
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.390147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7778KB) 13(53KB) 8(1944B)]
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765390273, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8021668, "oldest_snapshot_seqno": -1}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3145 keys, 7977038 bytes, temperature: kUnknown
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765433551, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7977038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7950027, "index_size": 18069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75657, "raw_average_key_size": 24, "raw_value_size": 7887140, "raw_average_value_size": 2507, "num_data_blocks": 802, "num_entries": 3145, "num_filter_entries": 3145, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.433833) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7977038 bytes
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.435085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 183.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3437, records dropped: 292 output_compression: NoCompression
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.435117) EVENT_LOG_v1 {"time_micros": 1759405765435093, "job": 4, "event": "compaction_finished", "compaction_time_micros": 43378, "compaction_time_cpu_micros": 18911, "output_level": 6, "num_output_files": 1, "total_output_size": 7977038, "num_input_records": 3437, "num_output_records": 3145, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765436629, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765436682, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765436716, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 02 11:49:25 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:49:25.389964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:25 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 02 11:49:25 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 02 11:49:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Oct 02 11:49:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 02 11:49:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 11:49:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:26.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:26 compute-0 ceph-mon[73668]: 9.2 deep-scrub starts
Oct 02 11:49:26 compute-0 ceph-mon[73668]: 9.2 deep-scrub ok
Oct 02 11:49:26 compute-0 ceph-mon[73668]: osdmap e116: 3 total, 3 up, 3 in
Oct 02 11:49:26 compute-0 ceph-mon[73668]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Oct 02 11:49:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 11:49:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 02 11:49:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 11:49:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 02 11:49:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 02 11:49:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 02 11:49:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 02 11:49:27 compute-0 ceph-mon[73668]: 9.4 scrub starts
Oct 02 11:49:27 compute-0 ceph-mon[73668]: 9.4 scrub ok
Oct 02 11:49:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 11:49:27 compute-0 ceph-mon[73668]: osdmap e117: 3 total, 3 up, 3 in
Oct 02 11:49:27 compute-0 ceph-mon[73668]: 10.9 scrub starts
Oct 02 11:49:27 compute-0 ceph-mon[73668]: 10.9 scrub ok
Oct 02 11:49:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:27 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Oct 02 11:49:27 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Oct 02 11:49:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Oct 02 11:49:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 02 11:49:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 11:49:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:28.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:49:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 11:49:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:49:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:49:28 compute-0 ceph-mon[73668]: 8.b scrub starts
Oct 02 11:49:28 compute-0 ceph-mon[73668]: 8.b scrub ok
Oct 02 11:49:28 compute-0 ceph-mon[73668]: 9.c scrub starts
Oct 02 11:49:28 compute-0 ceph-mon[73668]: 9.c scrub ok
Oct 02 11:49:28 compute-0 ceph-mon[73668]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Oct 02 11:49:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:49:28
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'images', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Oct 02 11:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:49:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 02 11:49:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-0 ceph-mon[73668]: 6.1 scrub starts
Oct 02 11:49:29 compute-0 ceph-mon[73668]: 6.1 scrub ok
Oct 02 11:49:29 compute-0 ceph-mon[73668]: 9.14 deep-scrub starts
Oct 02 11:49:29 compute-0 ceph-mon[73668]: 9.14 deep-scrub ok
Oct 02 11:49:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 11:49:29 compute-0 ceph-mon[73668]: osdmap e118: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-0 ceph-mon[73668]: osdmap e119: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 02 11:49:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 02 11:49:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 02 11:49:30 compute-0 ceph-mon[73668]: 10.a scrub starts
Oct 02 11:49:30 compute-0 ceph-mon[73668]: 10.a scrub ok
Oct 02 11:49:30 compute-0 ceph-mon[73668]: 10.b deep-scrub starts
Oct 02 11:49:30 compute-0 ceph-mon[73668]: 10.b deep-scrub ok
Oct 02 11:49:30 compute-0 ceph-mon[73668]: pgmap v263: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:30 compute-0 ceph-mon[73668]: osdmap e120: 3 total, 3 up, 3 in
Oct 02 11:49:30 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Oct 02 11:49:30 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Oct 02 11:49:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 02 11:49:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 02 11:49:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 02 11:49:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 02 11:49:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 11:49:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:32 compute-0 ceph-mon[73668]: 9.1c deep-scrub starts
Oct 02 11:49:32 compute-0 ceph-mon[73668]: 9.1c deep-scrub ok
Oct 02 11:49:32 compute-0 ceph-mon[73668]: 10.c scrub starts
Oct 02 11:49:32 compute-0 ceph-mon[73668]: 10.c scrub ok
Oct 02 11:49:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 02 11:49:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 11:49:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 02 11:49:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 02 11:49:32 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=122) [1] r=0 lpr=122 pi=[70,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:33 compute-0 ceph-mon[73668]: osdmap e121: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-0 ceph-mon[73668]: pgmap v266: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 11:49:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 11:49:33 compute-0 ceph-mon[73668]: osdmap e122: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:33.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:33 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 02 11:49:33 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 02 11:49:33 compute-0 sudo[102825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:33 compute-0 sudo[102825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[102825]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 sudo[102850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:33 compute-0 sudo[102850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:33 compute-0 sudo[102850]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 02 11:49:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[70,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:33 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[70,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 164 B/s, 3 objects/s recovering
Oct 02 11:49:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 11:49:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:49:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 02 11:49:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:49:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 02 11:49:35 compute-0 ceph-mon[73668]: 8.2 scrub starts
Oct 02 11:49:35 compute-0 ceph-mon[73668]: 8.2 scrub ok
Oct 02 11:49:35 compute-0 ceph-mon[73668]: 11.2 scrub starts
Oct 02 11:49:35 compute-0 ceph-mon[73668]: 11.2 scrub ok
Oct 02 11:49:35 compute-0 ceph-mon[73668]: osdmap e123: 3 total, 3 up, 3 in
Oct 02 11:49:35 compute-0 ceph-mon[73668]: pgmap v269: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 164 B/s, 3 objects/s recovering
Oct 02 11:49:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:49:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 02 11:49:35 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=124) [1] r=0 lpr=124 pi=[80,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:35.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 02 11:49:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:49:36 compute-0 ceph-mon[73668]: osdmap e124: 3 total, 3 up, 3 in
Oct 02 11:49:36 compute-0 ceph-mon[73668]: pgmap v271: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 02 11:49:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 02 11:49:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 125 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=-1 lpr=125 pi=[80,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 125 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=-1 lpr=125 pi=[80,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125) [1] r=0 lpr=125 pi=[70,125)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:36 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125) [1] r=0 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 02 11:49:36 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 02 11:49:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 02 11:49:37 compute-0 ceph-mon[73668]: 8.f scrub starts
Oct 02 11:49:37 compute-0 ceph-mon[73668]: 8.f scrub ok
Oct 02 11:49:37 compute-0 ceph-mon[73668]: osdmap e125: 3 total, 3 up, 3 in
Oct 02 11:49:37 compute-0 ceph-mon[73668]: 10.d scrub starts
Oct 02 11:49:37 compute-0 ceph-mon[73668]: 10.d scrub ok
Oct 02 11:49:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 02 11:49:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:49:37 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 126 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125) [1] r=0 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 11:49:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:37.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 11:49:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 02 11:49:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 02 11:49:38 compute-0 ceph-mon[73668]: 11.6 scrub starts
Oct 02 11:49:38 compute-0 ceph-mon[73668]: 11.6 scrub ok
Oct 02 11:49:38 compute-0 ceph-mon[73668]: osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:49:38 compute-0 ceph-mon[73668]: pgmap v274: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127) [1] r=0 lpr=127 pi=[80,127)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:38 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127) [1] r=0 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 02 11:49:38 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 02 11:49:38 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 02 11:49:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 02 11:49:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 02 11:49:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 02 11:49:39 compute-0 ceph-osd[84115]: osd.1 pg_epoch: 128 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=127/128 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127) [1] r=0 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:49:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:39.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:39 compute-0 ceph-mon[73668]: osdmap e127: 3 total, 3 up, 3 in
Oct 02 11:49:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 02 11:49:39 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 02 11:49:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:40.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 11.9 scrub starts
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 11.9 scrub ok
Oct 02 11:49:40 compute-0 ceph-mon[73668]: osdmap e128: 3 total, 3 up, 3 in
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 8.5 scrub starts
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 8.5 scrub ok
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 11.b scrub starts
Oct 02 11:49:40 compute-0 ceph-mon[73668]: 11.b scrub ok
Oct 02 11:49:40 compute-0 ceph-mon[73668]: pgmap v277: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:41.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 11 op/s; 41 B/s, 3 objects/s recovering
Oct 02 11:49:41 compute-0 ceph-mon[73668]: 10.e scrub starts
Oct 02 11:49:41 compute-0 ceph-mon[73668]: 10.e scrub ok
Oct 02 11:49:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:42.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:43 compute-0 ceph-mon[73668]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 11 op/s; 41 B/s, 3 objects/s recovering
Oct 02 11:49:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:43.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 10 op/s; 36 B/s, 2 objects/s recovering
Oct 02 11:49:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:44 compute-0 ceph-mon[73668]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 10 op/s; 36 B/s, 2 objects/s recovering
Oct 02 11:49:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:45 compute-0 ceph-mon[73668]: 10.f scrub starts
Oct 02 11:49:45 compute-0 ceph-mon[73668]: 10.f scrub ok
Oct 02 11:49:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:49:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:45.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:49:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 8 op/s; 29 B/s, 2 objects/s recovering
Oct 02 11:49:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:46 compute-0 ceph-mon[73668]: 10.3 deep-scrub starts
Oct 02 11:49:46 compute-0 ceph-mon[73668]: 10.3 deep-scrub ok
Oct 02 11:49:46 compute-0 ceph-mon[73668]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 8 op/s; 29 B/s, 2 objects/s recovering
Oct 02 11:49:46 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 02 11:49:46 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 02 11:49:47 compute-0 ceph-mon[73668]: 11.e deep-scrub starts
Oct 02 11:49:47 compute-0 ceph-mon[73668]: 11.e deep-scrub ok
Oct 02 11:49:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:47.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 6 op/s; 25 B/s, 1 objects/s recovering
Oct 02 11:49:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:49 compute-0 ceph-mon[73668]: 11.c scrub starts
Oct 02 11:49:49 compute-0 ceph-mon[73668]: 11.c scrub ok
Oct 02 11:49:49 compute-0 ceph-mon[73668]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 6 op/s; 25 B/s, 1 objects/s recovering
Oct 02 11:49:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:49.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Oct 02 11:49:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:50 compute-0 ceph-mon[73668]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Oct 02 11:49:50 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 02 11:49:50 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 02 11:49:51 compute-0 sudo[101338]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:51 compute-0 ceph-mon[73668]: 10.16 deep-scrub starts
Oct 02 11:49:51 compute-0 ceph-mon[73668]: 10.16 deep-scrub ok
Oct 02 11:49:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:51.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:51 compute-0 sudo[103038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzrstrzzxcwyqttvchufxqjzhmtnsgjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405791.422134-347-65870209084590/AnsiballZ_command.py'
Oct 02 11:49:51 compute-0 sudo[103038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Oct 02 11:49:51 compute-0 python3.9[103040]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:49:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:52 compute-0 ceph-mon[73668]: 11.d scrub starts
Oct 02 11:49:52 compute-0 ceph-mon[73668]: 11.d scrub ok
Oct 02 11:49:52 compute-0 ceph-mon[73668]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Oct 02 11:49:52 compute-0 sudo[103038]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 02 11:49:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 02 11:49:53 compute-0 sudo[103326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcransliuqqexhuobynocllzgiugmzgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405792.8454852-371-244027367646016/AnsiballZ_selinux.py'
Oct 02 11:49:53 compute-0 sudo[103326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:53 compute-0 ceph-mon[73668]: 11.10 scrub starts
Oct 02 11:49:53 compute-0 ceph-mon[73668]: 11.10 scrub ok
Oct 02 11:49:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:53.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:53 compute-0 python3.9[103328]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 11:49:53 compute-0 sudo[103326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:53 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 02 11:49:53 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 02 11:49:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:53 compute-0 sudo[103353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:53 compute-0 sudo[103353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:53 compute-0 sudo[103353]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-0 sudo[103378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:54 compute-0 sudo[103378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:54 compute-0 sudo[103378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:54.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:54 compute-0 sudo[103529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqhndtcatijipkhuncyrykmdfefkspxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.1312268-404-74210732416913/AnsiballZ_command.py'
Oct 02 11:49:54 compute-0 sudo[103529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 7.11 deep-scrub starts
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 7.11 deep-scrub ok
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 10.17 scrub starts
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 10.17 scrub ok
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 11.11 scrub starts
Oct 02 11:49:54 compute-0 ceph-mon[73668]: 11.11 scrub ok
Oct 02 11:49:54 compute-0 ceph-mon[73668]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:54 compute-0 python3.9[103531]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 11:49:54 compute-0 sudo[103529]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:55 compute-0 sudo[103681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpbjiqckxchrtonlidodyyejmcbvaqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.832844-428-127399214067874/AnsiballZ_file.py'
Oct 02 11:49:55 compute-0 sudo[103681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:55 compute-0 python3.9[103683]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:55 compute-0 sudo[103681]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:55.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 02 11:49:55 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 02 11:49:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:55 compute-0 sudo[103833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnonojyewrgdkvuvzxzcxxpcmlhlteut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405795.5426435-452-202338528843044/AnsiballZ_mount.py'
Oct 02 11:49:55 compute-0 sudo[103833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:49:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:49:56 compute-0 ceph-mon[73668]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:56 compute-0 python3.9[103835]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 11:49:56 compute-0 sudo[103833]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:57 compute-0 ceph-mon[73668]: 11.15 scrub starts
Oct 02 11:49:57 compute-0 ceph-mon[73668]: 11.15 scrub ok
Oct 02 11:49:57 compute-0 ceph-mon[73668]: 10.1a scrub starts
Oct 02 11:49:57 compute-0 ceph-mon[73668]: 10.1a scrub ok
Oct 02 11:49:57 compute-0 sudo[103986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpfwwxwtvbkhwxhwhkshtejrjmemhhfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.0966692-536-97643925500064/AnsiballZ_file.py'
Oct 02 11:49:57 compute-0 sudo[103986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:57 compute-0 python3.9[103988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:57 compute-0 sudo[103986]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:57.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:58 compute-0 sudo[104138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epakdpnvqjyydstcqqyewtgusexxgcbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.8466156-560-91391840052105/AnsiballZ_stat.py'
Oct 02 11:49:58 compute-0 sudo[104138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-0 python3.9[104140]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:49:58 compute-0 sudo[104138]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:58 compute-0 ceph-mon[73668]: pgmap v286: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:58 compute-0 sudo[104217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpsfzpikgtdauelxulioolnrynggdbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.8466156-560-91391840052105/AnsiballZ_file.py'
Oct 02 11:49:58 compute-0 sudo[104217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-0 python3.9[104219]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:58 compute-0 sudo[104217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:49:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:59.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:59 compute-0 ceph-mon[73668]: 8.d scrub starts
Oct 02 11:49:59 compute-0 ceph-mon[73668]: 8.d scrub ok
Oct 02 11:49:59 compute-0 ceph-mon[73668]: 10.11 scrub starts
Oct 02 11:49:59 compute-0 ceph-mon[73668]: 10.11 scrub ok
Oct 02 11:49:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 11:50:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:00 compute-0 sudo[104369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffjnlxmvcmwqitdehquskjqkyvceetgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405799.880442-632-167088826686843/AnsiballZ_getent.py'
Oct 02 11:50:00 compute-0 sudo[104369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:00 compute-0 python3.9[104371]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 11:50:00 compute-0 sudo[104369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 02 11:50:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 02 11:50:00 compute-0 ceph-mon[73668]: 11.19 scrub starts
Oct 02 11:50:00 compute-0 ceph-mon[73668]: 11.19 scrub ok
Oct 02 11:50:00 compute-0 ceph-mon[73668]: pgmap v287: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 11:50:01 compute-0 sudo[104523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajntygfogqzjmordfuwutghullahuair ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405800.8286376-662-196349184948822/AnsiballZ_getent.py'
Oct 02 11:50:01 compute-0 sudo[104523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:01 compute-0 python3.9[104525]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 11:50:01 compute-0 sudo[104523]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:01.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:02 compute-0 ceph-mon[73668]: 11.18 scrub starts
Oct 02 11:50:02 compute-0 ceph-mon[73668]: 11.18 scrub ok
Oct 02 11:50:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:02 compute-0 sudo[104676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhazmlvhhvrkbqorqvgnollfqznxcvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405801.6429884-686-189167605519676/AnsiballZ_group.py'
Oct 02 11:50:02 compute-0 sudo[104676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:02 compute-0 python3.9[104678]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:50:02 compute-0 sudo[104676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:02 compute-0 sudo[104829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqxpcfooihenshfihldlvdxgzdpazkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405802.585859-713-93220146930268/AnsiballZ_file.py'
Oct 02 11:50:02 compute-0 sudo[104829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:03 compute-0 ceph-mon[73668]: 10.1 scrub starts
Oct 02 11:50:03 compute-0 ceph-mon[73668]: 10.1 scrub ok
Oct 02 11:50:03 compute-0 ceph-mon[73668]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:03 compute-0 python3.9[104831]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 11:50:03 compute-0 sudo[104829]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:03 compute-0 sudo[104981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovjonxnantlctpozouqvgssqqejvjit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405803.6769023-746-73818643110429/AnsiballZ_dnf.py'
Oct 02 11:50:03 compute-0 sudo[104981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:04 compute-0 ceph-mon[73668]: 10.10 scrub starts
Oct 02 11:50:04 compute-0 ceph-mon[73668]: 10.10 scrub ok
Oct 02 11:50:04 compute-0 ceph-mon[73668]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:04 compute-0 python3.9[104983]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:05 compute-0 ceph-mon[73668]: 7.1d scrub starts
Oct 02 11:50:05 compute-0 ceph-mon[73668]: 7.1d scrub ok
Oct 02 11:50:05 compute-0 sudo[104981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:05.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 02 11:50:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 02 11:50:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:06.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:06 compute-0 sudo[105135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uehsxzhzvmvqefvkmhegnemlvqjixria ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405805.8587418-770-66786540149708/AnsiballZ_file.py'
Oct 02 11:50:06 compute-0 sudo[105135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:06 compute-0 ceph-mon[73668]: 8.9 scrub starts
Oct 02 11:50:06 compute-0 ceph-mon[73668]: 8.9 scrub ok
Oct 02 11:50:06 compute-0 ceph-mon[73668]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:06 compute-0 python3.9[105137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:06 compute-0 sudo[105135]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:06 compute-0 sudo[105288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowqyeyvwthwvgmhxynmkdjqrnjpebsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405806.5336866-794-70021121061421/AnsiballZ_stat.py'
Oct 02 11:50:06 compute-0 sudo[105288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:06 compute-0 python3.9[105290]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:07 compute-0 sudo[105288]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:07 compute-0 sudo[105366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vocfgozjftzemzpmaladyrmdltdiukym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405806.5336866-794-70021121061421/AnsiballZ_file.py'
Oct 02 11:50:07 compute-0 sudo[105366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:07 compute-0 ceph-mon[73668]: 11.1f scrub starts
Oct 02 11:50:07 compute-0 ceph-mon[73668]: 11.1f scrub ok
Oct 02 11:50:07 compute-0 python3.9[105368]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:07 compute-0 sudo[105366]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:50:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:50:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 02 11:50:07 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 02 11:50:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:08 compute-0 sudo[105518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonbbmutpmyzyybgfgluljsoonexcizu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405807.7550519-833-278341678851228/AnsiballZ_stat.py'
Oct 02 11:50:08 compute-0 sudo[105518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:08.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:08 compute-0 python3.9[105520]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:08 compute-0 sudo[105518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-0 ceph-mon[73668]: 10.4 scrub starts
Oct 02 11:50:08 compute-0 ceph-mon[73668]: 10.4 scrub ok
Oct 02 11:50:08 compute-0 ceph-mon[73668]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:08 compute-0 sudo[105597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwjwhmvtedvhalvemkknvpguhjzqdqzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405807.7550519-833-278341678851228/AnsiballZ_file.py'
Oct 02 11:50:08 compute-0 sudo[105597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:08 compute-0 python3.9[105599]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:08 compute-0 sudo[105597]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Oct 02 11:50:08 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Oct 02 11:50:09 compute-0 ceph-mon[73668]: 7.1b scrub starts
Oct 02 11:50:09 compute-0 ceph-mon[73668]: 7.1b scrub ok
Oct 02 11:50:09 compute-0 ceph-mon[73668]: 10.1c scrub starts
Oct 02 11:50:09 compute-0 ceph-mon[73668]: 10.1c scrub ok
Oct 02 11:50:09 compute-0 sudo[105749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfbxugbpgtmchthafacuyaekdnmbtcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405809.154841-878-93930522961575/AnsiballZ_dnf.py'
Oct 02 11:50:09 compute-0 sudo[105749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:09 compute-0 python3.9[105751]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:09.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:09 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 02 11:50:09 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 02 11:50:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:10.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:10 compute-0 ceph-mon[73668]: 7.14 scrub starts
Oct 02 11:50:10 compute-0 ceph-mon[73668]: 7.14 scrub ok
Oct 02 11:50:10 compute-0 ceph-mon[73668]: 10.14 deep-scrub starts
Oct 02 11:50:10 compute-0 ceph-mon[73668]: 10.14 deep-scrub ok
Oct 02 11:50:10 compute-0 ceph-mon[73668]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:11 compute-0 sudo[105749]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:50:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:11.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:50:11 compute-0 ceph-mon[73668]: 7.18 scrub starts
Oct 02 11:50:11 compute-0 ceph-mon[73668]: 7.18 scrub ok
Oct 02 11:50:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:12 compute-0 python3.9[105903]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:12.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:12 compute-0 ceph-mon[73668]: 10.1d scrub starts
Oct 02 11:50:12 compute-0 ceph-mon[73668]: 10.1d scrub ok
Oct 02 11:50:12 compute-0 ceph-mon[73668]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:12 compute-0 python3.9[106056]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 11:50:13 compute-0 python3.9[106206]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:13 compute-0 ceph-mon[73668]: 10.1f scrub starts
Oct 02 11:50:13 compute-0 ceph-mon[73668]: 10.1f scrub ok
Oct 02 11:50:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:50:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:14.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:50:14 compute-0 sudo[106271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:14 compute-0 sudo[106271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:14 compute-0 sudo[106271]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:14 compute-0 sudo[106308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:14 compute-0 sudo[106308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:14 compute-0 sudo[106308]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:14 compute-0 sudo[106407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfjqnwggveqbevbrzkajxftbgbujcst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405814.0576034-1001-46592252249323/AnsiballZ_systemd.py'
Oct 02 11:50:14 compute-0 sudo[106407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:15 compute-0 python3.9[106409]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:15 compute-0 ceph-mon[73668]: 11.a scrub starts
Oct 02 11:50:15 compute-0 ceph-mon[73668]: 11.a scrub ok
Oct 02 11:50:15 compute-0 ceph-mon[73668]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:15 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 11:50:15 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 11:50:15 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 11:50:15 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:50:15 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:50:15 compute-0 sudo[106407]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:15.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:15 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 02 11:50:15 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 02 11:50:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:16 compute-0 ceph-mon[73668]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:16 compute-0 python3.9[106571]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 11:50:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:16.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:16 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 02 11:50:16 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 02 11:50:17 compute-0 ceph-mon[73668]: 7.f scrub starts
Oct 02 11:50:17 compute-0 ceph-mon[73668]: 7.f scrub ok
Oct 02 11:50:17 compute-0 ceph-mon[73668]: 8.19 scrub starts
Oct 02 11:50:17 compute-0 ceph-mon[73668]: 8.19 scrub ok
Oct 02 11:50:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:17.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:18.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:18 compute-0 ceph-mon[73668]: 10.13 scrub starts
Oct 02 11:50:18 compute-0 ceph-mon[73668]: 10.13 scrub ok
Oct 02 11:50:18 compute-0 ceph-mon[73668]: pgmap v296: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:18 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 02 11:50:18 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 02 11:50:18 compute-0 sudo[106723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvacvpmyalybesiuuwfvtfjnqlfmfzfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405818.6128159-1172-231472455209233/AnsiballZ_systemd.py'
Oct 02 11:50:18 compute-0 sudo[106723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:19 compute-0 python3.9[106725]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:19 compute-0 sudo[106723]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:19 compute-0 ceph-mon[73668]: 10.15 scrub starts
Oct 02 11:50:19 compute-0 ceph-mon[73668]: 10.15 scrub ok
Oct 02 11:50:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:19.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:19 compute-0 sudo[106877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldygdaynsomkkewjvirqhgqnxfudyygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405819.4327593-1172-28627867153092/AnsiballZ_systemd.py'
Oct 02 11:50:19 compute-0 sudo[106877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:20 compute-0 python3.9[106879]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:20.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:20 compute-0 sudo[106877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:20 compute-0 sshd-session[99517]: Connection closed by 192.168.122.30 port 43128
Oct 02 11:50:20 compute-0 sshd-session[99514]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:20 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 02 11:50:20 compute-0 systemd[1]: session-36.scope: Consumed 1min 6.444s CPU time.
Oct 02 11:50:20 compute-0 systemd-logind[820]: Session 36 logged out. Waiting for processes to exit.
Oct 02 11:50:20 compute-0 ceph-mon[73668]: pgmap v297: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:20 compute-0 systemd-logind[820]: Removed session 36.
Oct 02 11:50:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:21.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 02 11:50:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 02 11:50:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:22.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:23 compute-0 ceph-mon[73668]: 7.2 scrub starts
Oct 02 11:50:23 compute-0 ceph-mon[73668]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:23 compute-0 ceph-mon[73668]: 7.2 scrub ok
Oct 02 11:50:23 compute-0 ceph-mon[73668]: 11.1a scrub starts
Oct 02 11:50:23 compute-0 ceph-mon[73668]: 11.1a scrub ok
Oct 02 11:50:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:24.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:24 compute-0 ceph-mon[73668]: 8.16 scrub starts
Oct 02 11:50:24 compute-0 ceph-mon[73668]: 8.16 scrub ok
Oct 02 11:50:24 compute-0 ceph-mon[73668]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:24 compute-0 sudo[106909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:24 compute-0 sudo[106909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-0 sudo[106909]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:24 compute-0 sudo[106934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:24 compute-0 sudo[106934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-0 sudo[106934]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-0 sudo[106959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:25 compute-0 sudo[106959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:25 compute-0 sudo[106959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-0 sudo[106984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:50:25 compute-0 sudo[106984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:25 compute-0 ceph-mon[73668]: 8.10 deep-scrub starts
Oct 02 11:50:25 compute-0 ceph-mon[73668]: 8.10 deep-scrub ok
Oct 02 11:50:25 compute-0 sudo[106984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-0 sshd-session[107040]: Accepted publickey for zuul from 192.168.122.30 port 47232 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:50:25 compute-0 systemd-logind[820]: New session 37 of user zuul.
Oct 02 11:50:25 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 02 11:50:25 compute-0 sshd-session[107040]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:50:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:26.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:26 compute-0 ceph-mon[73668]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:26 compute-0 python3.9[107194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:27 compute-0 systemd[75332]: Created slice User Background Tasks Slice.
Oct 02 11:50:27 compute-0 systemd[75332]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 11:50:27 compute-0 systemd[75332]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 11:50:27 compute-0 sudo[107349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqdgdjejrvlqwlajbamtqirfhldspmfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405827.147498-73-170625998513310/AnsiballZ_getent.py'
Oct 02 11:50:27 compute-0 sudo[107349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:27 compute-0 python3.9[107351]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 11:50:27 compute-0 sudo[107349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7bd835f0-4ce9-41dc-bee6-e69f2ad8b854 does not exist
Oct 02 11:50:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1425fa2d-e93e-4158-8c9d-ec68d19f8f1d does not exist
Oct 02 11:50:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev db1dcced-5ebc-43a8-a358-d592e09ced9a does not exist
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:50:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:27 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 02 11:50:27 compute-0 sudo[107377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:27 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 02 11:50:27 compute-0 sudo[107377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:27 compute-0 sudo[107377]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:28 compute-0 sudo[107402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:28 compute-0 sudo[107402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:28 compute-0 sudo[107402]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:28 compute-0 sudo[107427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:28 compute-0 sudo[107427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:28 compute-0 sudo[107427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:28.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:28 compute-0 sudo[107452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:50:28 compute-0 sudo[107452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:50:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:28 compute-0 ceph-mon[73668]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.489957384 +0000 UTC m=+0.042182277 container create 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:50:28 compute-0 sudo[107654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrclexrtoobrnuidbatfsmmkjcvfnat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405828.2369576-109-16352120038132/AnsiballZ_setup.py'
Oct 02 11:50:28 compute-0 sudo[107654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:28 compute-0 systemd[1]: Started libpod-conmon-2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e.scope.
Oct 02 11:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.473126609 +0000 UTC m=+0.025351532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.579401541 +0000 UTC m=+0.131626444 container init 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.586640723 +0000 UTC m=+0.138865616 container start 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.591197023 +0000 UTC m=+0.143421916 container attach 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:50:28 compute-0 quizzical_greider[107659]: 167 167
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:50:28
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Oct 02 11:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:50:28 compute-0 systemd[1]: libpod-2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e.scope: Deactivated successfully.
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.600301534 +0000 UTC m=+0.152526427 container died 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c09a0968638aa01cac67770e163008ed8cd25c374c3128c5bb5ee03b917499e2-merged.mount: Deactivated successfully.
Oct 02 11:50:28 compute-0 podman[107615]: 2025-10-02 11:50:28.649865886 +0000 UTC m=+0.202090779 container remove 2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:28 compute-0 systemd[1]: libpod-conmon-2e31a8ca12898d69d1ded00a1dd009f2bcddc89b667bc393e95cf9ec01180e8e.scope: Deactivated successfully.
Oct 02 11:50:28 compute-0 python3.9[107656]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:50:28 compute-0 podman[107682]: 2025-10-02 11:50:28.846828939 +0000 UTC m=+0.059739652 container create 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:50:28 compute-0 systemd[1]: Started libpod-conmon-9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4.scope.
Oct 02 11:50:28 compute-0 podman[107682]: 2025-10-02 11:50:28.816275811 +0000 UTC m=+0.029186584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:28 compute-0 podman[107682]: 2025-10-02 11:50:28.952380383 +0000 UTC m=+0.165291106 container init 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:28 compute-0 podman[107682]: 2025-10-02 11:50:28.966669361 +0000 UTC m=+0.179580054 container start 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:50:28 compute-0 podman[107682]: 2025-10-02 11:50:28.986401243 +0000 UTC m=+0.199311966 container attach 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:50:29 compute-0 sudo[107654]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:29 compute-0 ceph-mon[73668]: 7.3 scrub starts
Oct 02 11:50:29 compute-0 ceph-mon[73668]: 7.3 scrub ok
Oct 02 11:50:29 compute-0 ceph-mon[73668]: 8.12 deep-scrub starts
Oct 02 11:50:29 compute-0 ceph-mon[73668]: 8.12 deep-scrub ok
Oct 02 11:50:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:29 compute-0 sudo[107784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfoiigmgqtlactouzgrhiqlfiphtnwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405828.2369576-109-16352120038132/AnsiballZ_dnf.py'
Oct 02 11:50:29 compute-0 sudo[107784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:29 compute-0 python3.9[107786]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:50:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:29 compute-0 cranky_spence[107705]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:50:29 compute-0 cranky_spence[107705]: --> relative data size: 1.0
Oct 02 11:50:29 compute-0 cranky_spence[107705]: --> All data devices are unavailable
Oct 02 11:50:29 compute-0 systemd[1]: libpod-9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4.scope: Deactivated successfully.
Oct 02 11:50:29 compute-0 podman[107682]: 2025-10-02 11:50:29.875695421 +0000 UTC m=+1.088606104 container died 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-182e7c2a7f382c1641a83d21407aaf827ac005e33012b2a8cec50a28f369018e-merged.mount: Deactivated successfully.
Oct 02 11:50:29 compute-0 podman[107682]: 2025-10-02 11:50:29.932610147 +0000 UTC m=+1.145520830 container remove 9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:50:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:29 compute-0 systemd[1]: libpod-conmon-9269e3f0845b7ede88e3cd282305a5ce01cdf58959a4f1883235745a64f280d4.scope: Deactivated successfully.
Oct 02 11:50:29 compute-0 sudo[107452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 sudo[107812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:30 compute-0 sudo[107812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:30 compute-0 sudo[107812]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 sudo[107837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:30 compute-0 sudo[107837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:30 compute-0 sudo[107837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:30 compute-0 sudo[107862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:30 compute-0 sudo[107862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:30 compute-0 sudo[107862]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:30 compute-0 sudo[107887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:50:30 compute-0 sudo[107887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:30 compute-0 ceph-mon[73668]: 8.1f scrub starts
Oct 02 11:50:30 compute-0 ceph-mon[73668]: 8.1f scrub ok
Oct 02 11:50:30 compute-0 ceph-mon[73668]: 11.1b deep-scrub starts
Oct 02 11:50:30 compute-0 ceph-mon[73668]: 11.1b deep-scrub ok
Oct 02 11:50:30 compute-0 ceph-mon[73668]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.49722532 +0000 UTC m=+0.045566877 container create a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 11:50:30 compute-0 systemd[1]: Started libpod-conmon-a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db.scope.
Oct 02 11:50:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.47757333 +0000 UTC m=+0.025914907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.577588397 +0000 UTC m=+0.125929984 container init a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.588035483 +0000 UTC m=+0.136377050 container start a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.591475395 +0000 UTC m=+0.139816972 container attach a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:50:30 compute-0 optimistic_goldstine[107969]: 167 167
Oct 02 11:50:30 compute-0 systemd[1]: libpod-a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db.scope: Deactivated successfully.
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.594405432 +0000 UTC m=+0.142746989 container died a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fb463eaec7a6c751e922fe519e8bf2ac3a53cb4f3d1c4d5ab7b01788243aba-merged.mount: Deactivated successfully.
Oct 02 11:50:30 compute-0 podman[107953]: 2025-10-02 11:50:30.639644019 +0000 UTC m=+0.187985616 container remove a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:50:30 compute-0 systemd[1]: libpod-conmon-a058af0764f66997c0797f17fdd982c119d8ac0a4daf0b840bcfdd0e7d9c10db.scope: Deactivated successfully.
Oct 02 11:50:30 compute-0 podman[107993]: 2025-10-02 11:50:30.810848891 +0000 UTC m=+0.041614983 container create 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:30 compute-0 systemd[1]: Started libpod-conmon-4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc.scope.
Oct 02 11:50:30 compute-0 podman[107993]: 2025-10-02 11:50:30.793352298 +0000 UTC m=+0.024118420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6ee35183bdb5fdd48b61f0dcb53855292d77f49a64acad9126f2978e092e16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6ee35183bdb5fdd48b61f0dcb53855292d77f49a64acad9126f2978e092e16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6ee35183bdb5fdd48b61f0dcb53855292d77f49a64acad9126f2978e092e16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6ee35183bdb5fdd48b61f0dcb53855292d77f49a64acad9126f2978e092e16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:30 compute-0 podman[107993]: 2025-10-02 11:50:30.926254915 +0000 UTC m=+0.157021027 container init 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:50:30 compute-0 podman[107993]: 2025-10-02 11:50:30.935651624 +0000 UTC m=+0.166417726 container start 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:50:30 compute-0 podman[107993]: 2025-10-02 11:50:30.939861845 +0000 UTC m=+0.170627957 container attach 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:50:30 compute-0 sudo[107784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-0 ceph-mon[73668]: 11.1d scrub starts
Oct 02 11:50:31 compute-0 ceph-mon[73668]: 11.1d scrub ok
Oct 02 11:50:31 compute-0 sudo[108164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quacboahtaeotqicjduaixwzpbcypimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405831.2748897-151-106973296967637/AnsiballZ_dnf.py'
Oct 02 11:50:31 compute-0 sudo[108164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]: {
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:     "1": [
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:         {
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "devices": [
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "/dev/loop3"
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             ],
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "lv_name": "ceph_lv0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "lv_size": "7511998464",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "name": "ceph_lv0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "tags": {
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.cluster_name": "ceph",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.crush_device_class": "",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.encrypted": "0",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.osd_id": "1",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.type": "block",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:                 "ceph.vdo": "0"
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             },
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "type": "block",
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:             "vg_name": "ceph_vg0"
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:         }
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]:     ]
Oct 02 11:50:31 compute-0 wonderful_sanderson[108010]: }
Oct 02 11:50:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:50:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:50:31 compute-0 systemd[1]: libpod-4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc.scope: Deactivated successfully.
Oct 02 11:50:31 compute-0 podman[107993]: 2025-10-02 11:50:31.730893602 +0000 UTC m=+0.961659694 container died 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-de6ee35183bdb5fdd48b61f0dcb53855292d77f49a64acad9126f2978e092e16-merged.mount: Deactivated successfully.
Oct 02 11:50:31 compute-0 podman[107993]: 2025-10-02 11:50:31.783753521 +0000 UTC m=+1.014519613 container remove 4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:50:31 compute-0 systemd[1]: libpod-conmon-4df0c4f291644082f2a3a66ee773477dac6687cb137a17b3a1a079c1aa5589bc.scope: Deactivated successfully.
Oct 02 11:50:31 compute-0 python3.9[108166]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:31 compute-0 sudo[107887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-0 sudo[108184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:31 compute-0 sudo[108184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:31 compute-0 sudo[108184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-0 sudo[108209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:31 compute-0 sudo[108209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:31 compute-0 sudo[108209]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:31 compute-0 sudo[108234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:31 compute-0 sudo[108234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:31 compute-0 sudo[108234]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:32 compute-0 sudo[108259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:50:32 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 02 11:50:32 compute-0 sudo[108259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:32 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 02 11:50:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:32.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.374097256 +0000 UTC m=+0.040872453 container create e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:50:32 compute-0 systemd[1]: Started libpod-conmon-e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820.scope.
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.355162405 +0000 UTC m=+0.021937632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.479161207 +0000 UTC m=+0.145936414 container init e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.487709003 +0000 UTC m=+0.154484190 container start e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:50:32 compute-0 ceph-mon[73668]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:32 compute-0 ceph-mon[73668]: 10.2 scrub starts
Oct 02 11:50:32 compute-0 ceph-mon[73668]: 10.2 scrub ok
Oct 02 11:50:32 compute-0 elated_germain[108339]: 167 167
Oct 02 11:50:32 compute-0 systemd[1]: libpod-e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820.scope: Deactivated successfully.
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.496067464 +0000 UTC m=+0.162842671 container attach e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.496485155 +0000 UTC m=+0.163260342 container died e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b9e70ad9f065bef348ea2e9d9da8bec66b9e536d343836732e0cb27ccf1fb94-merged.mount: Deactivated successfully.
Oct 02 11:50:32 compute-0 podman[108323]: 2025-10-02 11:50:32.555387394 +0000 UTC m=+0.222162571 container remove e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 11:50:32 compute-0 systemd[1]: libpod-conmon-e9831749a23db9b6df876006a84fdeef639fea738432478714898e41322d1820.scope: Deactivated successfully.
Oct 02 11:50:32 compute-0 podman[108366]: 2025-10-02 11:50:32.689698639 +0000 UTC m=+0.035433869 container create a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:50:32 compute-0 systemd[1]: Started libpod-conmon-a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1.scope.
Oct 02 11:50:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b6a9a656c4b0542d7abb8a6469f20b4ffe7382598d8591efbf9be6af3a3bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b6a9a656c4b0542d7abb8a6469f20b4ffe7382598d8591efbf9be6af3a3bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b6a9a656c4b0542d7abb8a6469f20b4ffe7382598d8591efbf9be6af3a3bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326b6a9a656c4b0542d7abb8a6469f20b4ffe7382598d8591efbf9be6af3a3bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:50:32 compute-0 podman[108366]: 2025-10-02 11:50:32.768859194 +0000 UTC m=+0.114594444 container init a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:32 compute-0 podman[108366]: 2025-10-02 11:50:32.674773094 +0000 UTC m=+0.020508324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:50:32 compute-0 podman[108366]: 2025-10-02 11:50:32.774919565 +0000 UTC m=+0.120654795 container start a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:50:32 compute-0 podman[108366]: 2025-10-02 11:50:32.778228122 +0000 UTC m=+0.123963352 container attach a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 11:50:33 compute-0 sudo[108164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]: {
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:         "osd_id": 1,
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:         "type": "bluestore"
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]:     }
Oct 02 11:50:33 compute-0 pedantic_babbage[108383]: }
Oct 02 11:50:33 compute-0 systemd[1]: libpod-a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1.scope: Deactivated successfully.
Oct 02 11:50:33 compute-0 podman[108366]: 2025-10-02 11:50:33.60144138 +0000 UTC m=+0.947176600 container died a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-326b6a9a656c4b0542d7abb8a6469f20b4ffe7382598d8591efbf9be6af3a3bf-merged.mount: Deactivated successfully.
Oct 02 11:50:33 compute-0 podman[108366]: 2025-10-02 11:50:33.649303247 +0000 UTC m=+0.995038467 container remove a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:50:33 compute-0 systemd[1]: libpod-conmon-a44b31c0af362576e4fb4a6a682d51e776b86a7acdab5ecc2dda67656c60c8b1.scope: Deactivated successfully.
Oct 02 11:50:33 compute-0 sudo[108259]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:50:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:50:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6dbcd1d5-041f-40bd-8999-ea5cdee0f2d3 does not exist
Oct 02 11:50:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 63645f9d-0ad4-4db3-a0b1-fa779169683e does not exist
Oct 02 11:50:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1d928227-c772-4f1c-91c5-c240fc45cea5 does not exist
Oct 02 11:50:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:33.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:33 compute-0 sudo[108494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:33 compute-0 sudo[108494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:33 compute-0 sudo[108494]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-0 sudo[108519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:50:33 compute-0 sudo[108519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:33 compute-0 sudo[108519]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:34 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 02 11:50:34 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 02 11:50:34 compute-0 sudo[108617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjbcxsgheoigxtaomusakzhwvywokyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405833.5694065-175-245859520154846/AnsiballZ_systemd.py'
Oct 02 11:50:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:34 compute-0 sudo[108617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:34 compute-0 sudo[108620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:34 compute-0 sudo[108620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:34 compute-0 sudo[108620]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-0 sudo[108645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:34 compute-0 sudo[108645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:34 compute-0 sudo[108645]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-0 python3.9[108619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:50:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:34 compute-0 sudo[108617]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-0 ceph-mon[73668]: 8.18 scrub starts
Oct 02 11:50:34 compute-0 ceph-mon[73668]: 8.18 scrub ok
Oct 02 11:50:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:34 compute-0 ceph-mon[73668]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:34 compute-0 ceph-mon[73668]: 7.e scrub starts
Oct 02 11:50:34 compute-0 ceph-mon[73668]: 7.e scrub ok
Oct 02 11:50:35 compute-0 python3.9[108823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:35.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:35 compute-0 ceph-mon[73668]: 10.1e scrub starts
Oct 02 11:50:35 compute-0 ceph-mon[73668]: 10.1e scrub ok
Oct 02 11:50:35 compute-0 ceph-mon[73668]: 8.1b deep-scrub starts
Oct 02 11:50:35 compute-0 ceph-mon[73668]: 8.1b deep-scrub ok
Oct 02 11:50:36 compute-0 sudo[108973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kirmxuctplfjleygyfbcmvyxjggoxnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405835.5786414-229-153861780706733/AnsiballZ_sefcontext.py'
Oct 02 11:50:36 compute-0 sudo[108973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:50:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:50:36 compute-0 python3.9[108975]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 11:50:36 compute-0 sudo[108973]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:37 compute-0 ceph-mon[73668]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:37 compute-0 ceph-mon[73668]: 11.1e scrub starts
Oct 02 11:50:37 compute-0 ceph-mon[73668]: 11.1e scrub ok
Oct 02 11:50:37 compute-0 python3.9[109126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:38 compute-0 ceph-mon[73668]: 8.11 deep-scrub starts
Oct 02 11:50:38 compute-0 ceph-mon[73668]: 8.11 deep-scrub ok
Oct 02 11:50:38 compute-0 sudo[109282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdcysoywuwspahswcgufxqygnjvtbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405837.8626018-283-238142703246513/AnsiballZ_dnf.py'
Oct 02 11:50:38 compute-0 sudo[109282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:38 compute-0 python3.9[109284]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:39 compute-0 ceph-mon[73668]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:50:39 compute-0 sudo[109282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:39.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:40 compute-0 ceph-mon[73668]: 8.4 scrub starts
Oct 02 11:50:40 compute-0 ceph-mon[73668]: 8.4 scrub ok
Oct 02 11:50:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:40.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:40 compute-0 sudo[109436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvpstsskzhaklpvzefvievldahrgckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405839.8636127-307-154260777444111/AnsiballZ_command.py'
Oct 02 11:50:40 compute-0 sudo[109436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:40 compute-0 python3.9[109438]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:50:41 compute-0 ceph-mon[73668]: pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:41 compute-0 sudo[109436]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:41.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:41 compute-0 sudo[109724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjxgrirabtsbkryipxyidohyfdpfcrko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405841.3506541-331-38184530385460/AnsiballZ_file.py'
Oct 02 11:50:41 compute-0 sudo[109724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:41 compute-0 python3.9[109726]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:50:42 compute-0 sudo[109724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:42 compute-0 ceph-mon[73668]: 11.7 scrub starts
Oct 02 11:50:42 compute-0 ceph-mon[73668]: 11.7 scrub ok
Oct 02 11:50:42 compute-0 ceph-mon[73668]: pgmap v308: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:42 compute-0 python3.9[109877]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:43 compute-0 ceph-mon[73668]: 8.3 scrub starts
Oct 02 11:50:43 compute-0 ceph-mon[73668]: 8.3 scrub ok
Oct 02 11:50:43 compute-0 sudo[110029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugcperazvatzvmyifdkmpzvooofzoxmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405842.9805846-379-85732542875044/AnsiballZ_dnf.py'
Oct 02 11:50:43 compute-0 sudo[110029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:43 compute-0 python3.9[110031]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:43.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 02 11:50:44 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 02 11:50:44 compute-0 ceph-mon[73668]: pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:45 compute-0 sudo[110029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 8.6 deep-scrub starts
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 8.6 deep-scrub ok
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 7.8 scrub starts
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 7.8 scrub ok
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 11.5 scrub starts
Oct 02 11:50:45 compute-0 ceph-mon[73668]: 11.5 scrub ok
Oct 02 11:50:45 compute-0 sudo[110183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kruhajolmtlwoxrvqvtihxmlybfjxjjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405845.2456343-406-96217891654468/AnsiballZ_dnf.py'
Oct 02 11:50:45 compute-0 sudo[110183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:45 compute-0 python3.9[110185]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:46.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:46 compute-0 ceph-mon[73668]: 8.c scrub starts
Oct 02 11:50:46 compute-0 ceph-mon[73668]: 8.c scrub ok
Oct 02 11:50:46 compute-0 ceph-mon[73668]: 11.1c scrub starts
Oct 02 11:50:46 compute-0 ceph-mon[73668]: 11.1c scrub ok
Oct 02 11:50:46 compute-0 ceph-mon[73668]: pgmap v310: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:47 compute-0 sudo[110183]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:47 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 02 11:50:47 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 02 11:50:47 compute-0 ceph-mon[73668]: 10.8 scrub starts
Oct 02 11:50:47 compute-0 ceph-mon[73668]: 10.8 scrub ok
Oct 02 11:50:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:47.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:47 compute-0 sudo[110337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrbbbfmytamsltfdjipwuadfoanrvuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405847.4860399-442-216788820710331/AnsiballZ_stat.py'
Oct 02 11:50:47 compute-0 sudo[110337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:47 compute-0 python3.9[110339]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:48 compute-0 sudo[110337]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:48 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 02 11:50:48 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 02 11:50:48 compute-0 ceph-mon[73668]: 8.1c scrub starts
Oct 02 11:50:48 compute-0 ceph-mon[73668]: 8.1c scrub ok
Oct 02 11:50:48 compute-0 ceph-mon[73668]: pgmap v311: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:48 compute-0 ceph-mon[73668]: 7.1e scrub starts
Oct 02 11:50:48 compute-0 ceph-mon[73668]: 7.1e scrub ok
Oct 02 11:50:48 compute-0 sudo[110492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajlmpwsoxjymfvevaslamuafcwdkotj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405848.168767-466-80639393478876/AnsiballZ_slurp.py'
Oct 02 11:50:48 compute-0 sudo[110492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:48 compute-0 python3.9[110494]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 02 11:50:48 compute-0 sudo[110492]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:50:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:49.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:50:49 compute-0 sshd-session[107043]: Connection closed by 192.168.122.30 port 47232
Oct 02 11:50:49 compute-0 sshd-session[107040]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:49 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 02 11:50:49 compute-0 systemd[1]: session-37.scope: Consumed 18.418s CPU time.
Oct 02 11:50:49 compute-0 systemd-logind[820]: Session 37 logged out. Waiting for processes to exit.
Oct 02 11:50:49 compute-0 systemd-logind[820]: Removed session 37.
Oct 02 11:50:49 compute-0 ceph-mon[73668]: 8.8 scrub starts
Oct 02 11:50:49 compute-0 ceph-mon[73668]: 8.8 scrub ok
Oct 02 11:50:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:50:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:50.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:50:50 compute-0 ceph-mon[73668]: pgmap v312: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:51.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 02 11:50:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:52.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:52 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 02 11:50:52 compute-0 ceph-mon[73668]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:52 compute-0 ceph-mon[73668]: 10.5 scrub starts
Oct 02 11:50:52 compute-0 ceph-mon[73668]: 10.5 scrub ok
Oct 02 11:50:52 compute-0 ceph-mon[73668]: 9.13 scrub starts
Oct 02 11:50:52 compute-0 ceph-mon[73668]: 9.13 scrub ok
Oct 02 11:50:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:53.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:54 compute-0 sudo[110522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:54 compute-0 sudo[110522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:54 compute-0 sudo[110522]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:54 compute-0 sudo[110547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:54 compute-0 sudo[110547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:54 compute-0 sudo[110547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:55 compute-0 ceph-mon[73668]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:55 compute-0 ceph-mon[73668]: 11.f scrub starts
Oct 02 11:50:55 compute-0 ceph-mon[73668]: 11.f scrub ok
Oct 02 11:50:55 compute-0 ceph-mon[73668]: 9.b scrub starts
Oct 02 11:50:55 compute-0 ceph-mon[73668]: 9.b scrub ok
Oct 02 11:50:55 compute-0 sshd-session[110572]: Accepted publickey for zuul from 192.168.122.30 port 51028 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:50:55 compute-0 systemd-logind[820]: New session 38 of user zuul.
Oct 02 11:50:55 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 02 11:50:55 compute-0 sshd-session[110572]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:50:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:55.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:56 compute-0 ceph-mon[73668]: 11.4 deep-scrub starts
Oct 02 11:50:56 compute-0 ceph-mon[73668]: 11.4 deep-scrub ok
Oct 02 11:50:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:56.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:56 compute-0 python3.9[110725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:57 compute-0 ceph-mon[73668]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:57 compute-0 python3.9[110880]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:50:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:57.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:58 compute-0 ceph-mon[73668]: 9.17 scrub starts
Oct 02 11:50:58 compute-0 ceph-mon[73668]: 9.17 scrub ok
Oct 02 11:50:58 compute-0 ceph-mon[73668]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 02 11:50:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:50:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:58.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:50:58 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:50:58 compute-0 python3.9[111073]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:50:58 compute-0 sshd-session[110575]: Connection closed by 192.168.122.30 port 51028
Oct 02 11:50:58 compute-0 sshd-session[110572]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:58 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 02 11:50:58 compute-0 systemd[1]: session-38.scope: Consumed 2.379s CPU time.
Oct 02 11:50:58 compute-0 systemd-logind[820]: Session 38 logged out. Waiting for processes to exit.
Oct 02 11:50:58 compute-0 systemd-logind[820]: Removed session 38.
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 7.6 scrub starts
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 7.6 scrub ok
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 11.1 scrub starts
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 11.1 scrub ok
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 9.3 scrub starts
Oct 02 11:50:59 compute-0 ceph-mon[73668]: 9.3 scrub ok
Oct 02 11:50:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:50:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:00 compute-0 ceph-mon[73668]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Oct 02 11:51:00 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Oct 02 11:51:01 compute-0 ceph-mon[73668]: 10.19 deep-scrub starts
Oct 02 11:51:01 compute-0 ceph-mon[73668]: 10.19 deep-scrub ok
Oct 02 11:51:01 compute-0 ceph-mon[73668]: 8.17 scrub starts
Oct 02 11:51:01 compute-0 ceph-mon[73668]: 8.17 scrub ok
Oct 02 11:51:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:01.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:02.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:02 compute-0 ceph-mon[73668]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:03 compute-0 ceph-mon[73668]: 11.12 scrub starts
Oct 02 11:51:03 compute-0 ceph-mon[73668]: 11.12 scrub ok
Oct 02 11:51:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:03.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:04 compute-0 ceph-mon[73668]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Oct 02 11:51:04 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Oct 02 11:51:04 compute-0 sshd-session[111102]: Accepted publickey for zuul from 192.168.122.30 port 38114 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:04 compute-0 systemd-logind[820]: New session 39 of user zuul.
Oct 02 11:51:04 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 02 11:51:04 compute-0 sshd-session[111102]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 7.10 deep-scrub starts
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 7.10 deep-scrub ok
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 11.14 scrub starts
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 11.14 scrub ok
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 9.7 scrub starts
Oct 02 11:51:05 compute-0 ceph-mon[73668]: 9.7 scrub ok
Oct 02 11:51:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Oct 02 11:51:05 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Oct 02 11:51:05 compute-0 python3.9[111256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:05.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:06 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 02 11:51:06 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 02 11:51:06 compute-0 ceph-mon[73668]: 8.14 scrub starts
Oct 02 11:51:06 compute-0 ceph-mon[73668]: 8.14 scrub ok
Oct 02 11:51:06 compute-0 ceph-mon[73668]: 7.4 deep-scrub starts
Oct 02 11:51:06 compute-0 ceph-mon[73668]: 7.4 deep-scrub ok
Oct 02 11:51:06 compute-0 ceph-mon[73668]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:06 compute-0 python3.9[111410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:07 compute-0 sudo[111565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobrciwghiigrjxlyxuaxcbjkvzreyah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405866.785754-85-197201021203477/AnsiballZ_setup.py'
Oct 02 11:51:07 compute-0 sudo[111565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:07 compute-0 ceph-mon[73668]: 7.b scrub starts
Oct 02 11:51:07 compute-0 ceph-mon[73668]: 7.b scrub ok
Oct 02 11:51:07 compute-0 python3.9[111567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:07 compute-0 sudo[111565]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:07.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:08 compute-0 sudo[111649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmarrrpeojeuqbmkjmeznsoekfrojhzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405866.785754-85-197201021203477/AnsiballZ_dnf.py'
Oct 02 11:51:08 compute-0 sudo[111649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:08 compute-0 ceph-mon[73668]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:08 compute-0 python3.9[111651]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:09 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 02 11:51:09 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 02 11:51:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:09 compute-0 ceph-mon[73668]: 10.1b scrub starts
Oct 02 11:51:09 compute-0 ceph-mon[73668]: 10.1b scrub ok
Oct 02 11:51:09 compute-0 sudo[111649]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:09.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-0 sudo[111803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tohtgztlzqdzixznsqzouusegbtsztfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405869.7876875-121-160066255105530/AnsiballZ_setup.py'
Oct 02 11:51:10 compute-0 sudo[111803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:51:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:51:10 compute-0 python3.9[111805]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:10 compute-0 ceph-mon[73668]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-0 sudo[111803]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:11 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Oct 02 11:51:11 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Oct 02 11:51:11 compute-0 sudo[111999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdeglzwbgpbbnmkcjjmrxbdyktwbixpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405870.9878922-154-165848286541781/AnsiballZ_file.py'
Oct 02 11:51:11 compute-0 sudo[111999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:11 compute-0 ceph-mon[73668]: 10.18 deep-scrub starts
Oct 02 11:51:11 compute-0 ceph-mon[73668]: 10.18 deep-scrub ok
Oct 02 11:51:11 compute-0 python3.9[112001]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:11 compute-0 sudo[111999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:11.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:12.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 02 11:51:12 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 02 11:51:12 compute-0 sudo[112151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvkpwnkpsnjbwyagnwmoyrckzyvltpbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405871.8295329-178-246835186362015/AnsiballZ_command.py'
Oct 02 11:51:12 compute-0 sudo[112151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:12 compute-0 python3.9[112154]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:51:12 compute-0 ceph-mon[73668]: 6.2 scrub starts
Oct 02 11:51:12 compute-0 ceph-mon[73668]: 6.2 scrub ok
Oct 02 11:51:12 compute-0 ceph-mon[73668]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:12 compute-0 ceph-mon[73668]: 7.13 scrub starts
Oct 02 11:51:12 compute-0 ceph-mon[73668]: 7.13 scrub ok
Oct 02 11:51:12 compute-0 sudo[112151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:13 compute-0 sudo[112317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qunxoiqwohdwdhmnvtivyegscrqwvxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405872.8099403-202-75063054463743/AnsiballZ_stat.py'
Oct 02 11:51:13 compute-0 sudo[112317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:13 compute-0 python3.9[112319]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:13 compute-0 sudo[112317]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:13 compute-0 ceph-mon[73668]: 6.a deep-scrub starts
Oct 02 11:51:13 compute-0 ceph-mon[73668]: 6.a deep-scrub ok
Oct 02 11:51:13 compute-0 sudo[112395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpzwiateudqiyrqlimmpnxjmwjmriwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405872.8099403-202-75063054463743/AnsiballZ_file.py'
Oct 02 11:51:13 compute-0 sudo[112395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:13 compute-0 python3.9[112397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:13 compute-0 sudo[112395]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:14.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:14 compute-0 sudo[112548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbvontaogqabplibxmqcymoagfmwygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405874.142525-238-113653639089749/AnsiballZ_stat.py'
Oct 02 11:51:14 compute-0 sudo[112548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:14 compute-0 sudo[112551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:14 compute-0 sudo[112551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:14 compute-0 sudo[112551]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-0 ceph-mon[73668]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:14 compute-0 python3.9[112550]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:14 compute-0 sudo[112576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:14 compute-0 sudo[112576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:14 compute-0 sudo[112576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-0 sudo[112548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-0 sudo[112676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwkvjkfmuxnnvivqltihssfruyzkqzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405874.142525-238-113653639089749/AnsiballZ_file.py'
Oct 02 11:51:14 compute-0 sudo[112676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:15 compute-0 python3.9[112678]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:15 compute-0 sudo[112676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:15 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 02 11:51:15 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 02 11:51:15 compute-0 ceph-mon[73668]: 7.9 scrub starts
Oct 02 11:51:15 compute-0 ceph-mon[73668]: 7.9 scrub ok
Oct 02 11:51:15 compute-0 ceph-mon[73668]: 9.5 scrub starts
Oct 02 11:51:15 compute-0 ceph-mon[73668]: 9.5 scrub ok
Oct 02 11:51:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:15 compute-0 sudo[112828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evugwmyefkcnwgsizcphanqgoglmwloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405875.394842-277-103618620042310/AnsiballZ_ini_file.py'
Oct 02 11:51:15 compute-0 sudo[112828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:16 compute-0 python3.9[112830]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:16 compute-0 sudo[112828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:16.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:16 compute-0 sudo[112981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enzqysfnvwvvsktjafuwhheazclvbbux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405876.2085023-277-232845977408620/AnsiballZ_ini_file.py'
Oct 02 11:51:16 compute-0 sudo[112981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:16 compute-0 ceph-mon[73668]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:16 compute-0 ceph-mon[73668]: 9.18 scrub starts
Oct 02 11:51:16 compute-0 ceph-mon[73668]: 9.18 scrub ok
Oct 02 11:51:16 compute-0 python3.9[112983]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:16 compute-0 sudo[112981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:17 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 02 11:51:17 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 02 11:51:17 compute-0 sudo[113133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqopkpomuuskjxkanmomdrtwdhqgbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405876.9811876-277-165697653674024/AnsiballZ_ini_file.py'
Oct 02 11:51:17 compute-0 sudo[113133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:17 compute-0 python3.9[113135]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:17 compute-0 sudo[113133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:17 compute-0 ceph-mon[73668]: 6.6 scrub starts
Oct 02 11:51:17 compute-0 ceph-mon[73668]: 6.6 scrub ok
Oct 02 11:51:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:17 compute-0 sudo[113285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytlfigyohywttmbhrfaeqlxygzddbiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405877.6276307-277-86723993561147/AnsiballZ_ini_file.py'
Oct 02 11:51:17 compute-0 sudo[113285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:18 compute-0 python3.9[113287]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:18 compute-0 sudo[113285]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000076s ======
Oct 02 11:51:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:18.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Oct 02 11:51:18 compute-0 sudo[113438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpjbgtijxqvltiwjeqsasdlhcubgnmfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405878.4306924-370-8229054259932/AnsiballZ_dnf.py'
Oct 02 11:51:18 compute-0 sudo[113438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:18 compute-0 ceph-mon[73668]: 6.7 scrub starts
Oct 02 11:51:18 compute-0 ceph-mon[73668]: 6.7 scrub ok
Oct 02 11:51:18 compute-0 ceph-mon[73668]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:18 compute-0 python3.9[113440]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:19.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:20 compute-0 sudo[113438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:21 compute-0 ceph-mon[73668]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:21 compute-0 ceph-mon[73668]: 9.8 scrub starts
Oct 02 11:51:21 compute-0 ceph-mon[73668]: 9.8 scrub ok
Oct 02 11:51:21 compute-0 sudo[113592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wibpdngeyyrrocexesmqxzzrqyxisfne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405880.8010209-403-20184624182650/AnsiballZ_setup.py'
Oct 02 11:51:21 compute-0 sudo[113592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 02 11:51:21 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 02 11:51:21 compute-0 python3.9[113594]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:21 compute-0 sudo[113592]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:51:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:51:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:22 compute-0 sudo[113746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzdslotiiaabbbamxlatqacyzljzzafw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405881.745483-427-193664781161131/AnsiballZ_stat.py'
Oct 02 11:51:22 compute-0 sudo[113746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:22 compute-0 ceph-mon[73668]: 6.9 scrub starts
Oct 02 11:51:22 compute-0 ceph-mon[73668]: 6.9 scrub ok
Oct 02 11:51:22 compute-0 ceph-mon[73668]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:22 compute-0 python3.9[113748]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:22.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:22 compute-0 sudo[113746]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:22 compute-0 sudo[113899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgtdqbmepkdhhjqdfpnmycmzhvvnuvjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405882.463009-454-143019318529962/AnsiballZ_stat.py'
Oct 02 11:51:22 compute-0 sudo[113899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:22 compute-0 python3.9[113901]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:22 compute-0 sudo[113899]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:23 compute-0 ceph-mon[73668]: 9.9 scrub starts
Oct 02 11:51:23 compute-0 ceph-mon[73668]: 9.9 scrub ok
Oct 02 11:51:23 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct 02 11:51:23 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct 02 11:51:23 compute-0 sudo[114051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqdszywnfbgqevtuzveadmhmfcbspltt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405883.2385852-484-147010466787200/AnsiballZ_service_facts.py'
Oct 02 11:51:23 compute-0 sudo[114051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:23 compute-0 python3.9[114053]: ansible-service_facts Invoked
Oct 02 11:51:23 compute-0 network[114070]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:51:23 compute-0 network[114071]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:51:23 compute-0 network[114072]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:51:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:24 compute-0 ceph-mon[73668]: 6.b deep-scrub starts
Oct 02 11:51:24 compute-0 ceph-mon[73668]: 6.b deep-scrub ok
Oct 02 11:51:24 compute-0 ceph-mon[73668]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:25 compute-0 ceph-mon[73668]: 9.16 scrub starts
Oct 02 11:51:25 compute-0 ceph-mon[73668]: 9.16 scrub ok
Oct 02 11:51:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:26.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 02 11:51:26 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 02 11:51:26 compute-0 ceph-mon[73668]: 9.1d scrub starts
Oct 02 11:51:26 compute-0 ceph-mon[73668]: 6.3 deep-scrub starts
Oct 02 11:51:26 compute-0 ceph-mon[73668]: 9.1d scrub ok
Oct 02 11:51:26 compute-0 ceph-mon[73668]: 6.3 deep-scrub ok
Oct 02 11:51:26 compute-0 ceph-mon[73668]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:27 compute-0 sudo[114051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:27 compute-0 ceph-mon[73668]: 6.f scrub starts
Oct 02 11:51:27 compute-0 ceph-mon[73668]: 6.f scrub ok
Oct 02 11:51:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:27.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:28 compute-0 sudo[114360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnmgzhgzscwahgdoozltpsuzomcuzjio ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759405887.7514288-523-33019459521771/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759405887.7514288-523-33019459521771/args'
Oct 02 11:51:28 compute-0 sudo[114360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:28 compute-0 sudo[114360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:28 compute-0 ceph-mon[73668]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:51:28
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root']
Oct 02 11:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:51:28 compute-0 sudo[114528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvmbcxppjacgunradepwamlunywpcuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405888.4804769-556-210408334581135/AnsiballZ_dnf.py'
Oct 02 11:51:28 compute-0 sudo[114528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:29 compute-0 python3.9[114530]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:51:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:30.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:51:30 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 02 11:51:30 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 02 11:51:30 compute-0 sudo[114528]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:31 compute-0 ceph-mon[73668]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:31 compute-0 ceph-mon[73668]: 9.19 scrub starts
Oct 02 11:51:31 compute-0 ceph-mon[73668]: 9.19 scrub ok
Oct 02 11:51:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Oct 02 11:51:31 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Oct 02 11:51:31 compute-0 sudo[114682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pedgvblqlzmngjqlukreyvkcoxffpejx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405890.7747047-595-117286414000659/AnsiballZ_package_facts.py'
Oct 02 11:51:31 compute-0 sudo[114682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:31 compute-0 python3.9[114684]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 11:51:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:31.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:31 compute-0 sudo[114682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:32 compute-0 ceph-mon[73668]: 9.1a deep-scrub starts
Oct 02 11:51:32 compute-0 ceph-mon[73668]: 9.1a deep-scrub ok
Oct 02 11:51:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:33 compute-0 sudo[114835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwesfkxdkdotmixohvhxfzcquuwiqpah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405892.6842585-625-46763818378924/AnsiballZ_stat.py'
Oct 02 11:51:33 compute-0 sudo[114835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:33 compute-0 ceph-mon[73668]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:33 compute-0 ceph-mon[73668]: 6.d scrub starts
Oct 02 11:51:33 compute-0 ceph-mon[73668]: 6.d scrub ok
Oct 02 11:51:33 compute-0 python3.9[114837]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:33 compute-0 sudo[114835]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:33 compute-0 sudo[114913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurocfmrjfhloeezwqnhxskzcqmoqgru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405892.6842585-625-46763818378924/AnsiballZ_file.py'
Oct 02 11:51:33 compute-0 sudo[114913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:33 compute-0 python3.9[114915]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:33 compute-0 sudo[114913]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:34 compute-0 ceph-mon[73668]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:34 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 02 11:51:34 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 02 11:51:34 compute-0 sudo[115015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-0 sudo[115015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 sudo[115015]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:34 compute-0 sudo[115064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:34 compute-0 sudo[115064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 sudo[115064]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 sudo[115114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwvxpnijcuupfqceenwosynzwhwzocok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405893.9219553-661-68097722640436/AnsiballZ_stat.py'
Oct 02 11:51:34 compute-0 sudo[115114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:34 compute-0 sudo[115117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-0 sudo[115117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 sudo[115117]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 sudo[115144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:51:34 compute-0 sudo[115144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 python3.9[115120]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:34 compute-0 sudo[115114]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 sudo[115260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzsqrcqguoclisbthustfdvrowqokdoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405893.9219553-661-68097722640436/AnsiballZ_file.py'
Oct 02 11:51:34 compute-0 sudo[115260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:34 compute-0 sudo[115267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-0 sudo[115267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 sudo[115267]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 sudo[115144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 python3.9[115264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:34 compute-0 sudo[115260]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-0 sudo[115302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-0 sudo[115302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-0 sudo[115302]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:35 compute-0 ceph-mon[73668]: 9.1b scrub starts
Oct 02 11:51:35 compute-0 ceph-mon[73668]: 9.1b scrub ok
Oct 02 11:51:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:51:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:51:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:36 compute-0 sudo[115476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtnessnubjpkkkrbuqqnckmvqhzmltrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405895.7648017-715-21514505516106/AnsiballZ_lineinfile.py'
Oct 02 11:51:36 compute-0 sudo[115476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:36 compute-0 python3.9[115478]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:36 compute-0 sudo[115476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2ba2b8e7-681b-4f70-b7e0-035ffe42a094 does not exist
Oct 02 11:51:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 89aac7fb-51ae-420f-808f-c5c392bf3ddf does not exist
Oct 02 11:51:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d71ab02e-9d10-44ec-86fe-d8399cefc1ef does not exist
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:51:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:36 compute-0 sudo[115504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:36 compute-0 sudo[115504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:36 compute-0 sudo[115504]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:36 compute-0 sudo[115529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:36 compute-0 sudo[115529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:36 compute-0 sudo[115529]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-0 ceph-mon[73668]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:36 compute-0 ceph-mon[73668]: 6.5 scrub starts
Oct 02 11:51:36 compute-0 ceph-mon[73668]: 6.5 scrub ok
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:51:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:37 compute-0 sudo[115554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:37 compute-0 sudo[115554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:37 compute-0 sudo[115554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:37 compute-0 sudo[115579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:51:37 compute-0 sudo[115579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:37 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 02 11:51:37 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.479072678 +0000 UTC m=+0.045326629 container create 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:51:37 compute-0 systemd[1]: Started libpod-conmon-42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8.scope.
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.457931913 +0000 UTC m=+0.024185884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.698669301 +0000 UTC m=+0.264923292 container init 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.710252255 +0000 UTC m=+0.276506206 container start 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:37 compute-0 systemd[1]: libpod-42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8.scope: Deactivated successfully.
Oct 02 11:51:37 compute-0 stupefied_williamson[115721]: 167 167
Oct 02 11:51:37 compute-0 conmon[115721]: conmon 42c45c73f5497310480d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8.scope/container/memory.events
Oct 02 11:51:37 compute-0 sudo[115792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlxayoinmyybuqbnidrmdpbnaspjkmfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405897.4108512-760-178779289810437/AnsiballZ_setup.py'
Oct 02 11:51:37 compute-0 sudo[115792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:37.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.960953765 +0000 UTC m=+0.527207716 container attach 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:51:37 compute-0 podman[115667]: 2025-10-02 11:51:37.961507709 +0000 UTC m=+0.527761660 container died 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:51:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:37 compute-0 ceph-mon[73668]: 9.1e scrub starts
Oct 02 11:51:37 compute-0 ceph-mon[73668]: 9.1e scrub ok
Oct 02 11:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ffc16266f1293c7a30053d3df399099c52a84ef6b39bed0b5d33adac42fb665-merged.mount: Deactivated successfully.
Oct 02 11:51:38 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 02 11:51:38 compute-0 ceph-osd[84115]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 02 11:51:38 compute-0 podman[115667]: 2025-10-02 11:51:38.166788779 +0000 UTC m=+0.733042740 container remove 42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:38 compute-0 systemd[1]: libpod-conmon-42c45c73f5497310480dc338fe077197e939a094c581f73ac4f67dddd9b493d8.scope: Deactivated successfully.
Oct 02 11:51:38 compute-0 python3.9[115804]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:38 compute-0 podman[115817]: 2025-10-02 11:51:38.379924548 +0000 UTC m=+0.074776055 container create b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:38 compute-0 systemd[1]: Started libpod-conmon-b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3.scope.
Oct 02 11:51:38 compute-0 podman[115817]: 2025-10-02 11:51:38.343640429 +0000 UTC m=+0.038491996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:38 compute-0 sudo[115792]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:38 compute-0 podman[115817]: 2025-10-02 11:51:38.482093296 +0000 UTC m=+0.176944783 container init b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:51:38 compute-0 podman[115817]: 2025-10-02 11:51:38.489847233 +0000 UTC m=+0.184698690 container start b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:51:38 compute-0 podman[115817]: 2025-10-02 11:51:38.493411013 +0000 UTC m=+0.188262470 container attach b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:51:38 compute-0 sudo[115917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkxvphlmblkkcaasykltndjkxiajkkzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405897.4108512-760-178779289810437/AnsiballZ_systemd.py'
Oct 02 11:51:38 compute-0 sudo[115917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:38 compute-0 ceph-mon[73668]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:38 compute-0 ceph-mon[73668]: 9.1f scrub starts
Oct 02 11:51:38 compute-0 ceph-mon[73668]: 9.1f scrub ok
Oct 02 11:51:38 compute-0 ceph-mon[73668]: 9.6 scrub starts
Oct 02 11:51:38 compute-0 ceph-mon[73668]: 9.6 scrub ok
Oct 02 11:51:39 compute-0 python3.9[115919]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:39 compute-0 sudo[115917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 gifted_knuth[115838]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:51:39 compute-0 gifted_knuth[115838]: --> relative data size: 1.0
Oct 02 11:51:39 compute-0 gifted_knuth[115838]: --> All data devices are unavailable
Oct 02 11:51:39 compute-0 systemd[1]: libpod-b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3.scope: Deactivated successfully.
Oct 02 11:51:39 compute-0 podman[115817]: 2025-10-02 11:51:39.413370808 +0000 UTC m=+1.108222285 container died b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b177fb81ee9cec8fb3e7cbff2a76a955ee627ea40ac3d327634851d358119e42-merged.mount: Deactivated successfully.
Oct 02 11:51:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:39 compute-0 podman[115817]: 2025-10-02 11:51:39.482484759 +0000 UTC m=+1.177336216 container remove b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:51:39 compute-0 systemd[1]: libpod-conmon-b5716283d3fc316acc83c44a6a7b7e7ed405cb758c9a90ba68a1d150362897e3.scope: Deactivated successfully.
Oct 02 11:51:39 compute-0 sudo[115579]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:51:39 compute-0 sudo[115969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:39 compute-0 sudo[115969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:39 compute-0 sudo[115969]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 sudo[115994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:39 compute-0 sudo[115994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:39 compute-0 sudo[115994]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 sudo[116019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:39 compute-0 sudo[116019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:39 compute-0 sudo[116019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:39 compute-0 sudo[116044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:51:39 compute-0 sudo[116044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 11:51:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 11:51:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:40 compute-0 sshd-session[111106]: Connection closed by 192.168.122.30 port 38114
Oct 02 11:51:40 compute-0 sshd-session[111102]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:51:40 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 02 11:51:40 compute-0 systemd[1]: session-39.scope: Consumed 25.542s CPU time.
Oct 02 11:51:40 compute-0 systemd-logind[820]: Session 39 logged out. Waiting for processes to exit.
Oct 02 11:51:40 compute-0 systemd-logind[820]: Removed session 39.
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.15479043 +0000 UTC m=+0.043852152 container create 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:40 compute-0 systemd[1]: Started libpod-conmon-3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5.scope.
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.137783199 +0000 UTC m=+0.026844941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:40.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.246038592 +0000 UTC m=+0.135100394 container init 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.256220349 +0000 UTC m=+0.145282071 container start 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.259700328 +0000 UTC m=+0.148762070 container attach 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:51:40 compute-0 silly_swanson[116127]: 167 167
Oct 02 11:51:40 compute-0 systemd[1]: libpod-3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5.scope: Deactivated successfully.
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.26375969 +0000 UTC m=+0.152821412 container died 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-247f98c91c10b9b7e70e9387e2101dafcc13236dc1b46ddb9c74ac111c77bd21-merged.mount: Deactivated successfully.
Oct 02 11:51:40 compute-0 podman[116110]: 2025-10-02 11:51:40.30126015 +0000 UTC m=+0.190321862 container remove 3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:51:40 compute-0 systemd[1]: libpod-conmon-3938734e2b7ff495db87300947fdad7e29137f93027f16467ddb48c0783b16b5.scope: Deactivated successfully.
Oct 02 11:51:40 compute-0 podman[116153]: 2025-10-02 11:51:40.48205687 +0000 UTC m=+0.040604089 container create ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 11:51:40 compute-0 systemd[1]: Started libpod-conmon-ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647.scope.
Oct 02 11:51:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef03256ae062d729472cef01343bd6fb4f9b48c88e4b100d5b68187b6507b3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef03256ae062d729472cef01343bd6fb4f9b48c88e4b100d5b68187b6507b3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef03256ae062d729472cef01343bd6fb4f9b48c88e4b100d5b68187b6507b3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef03256ae062d729472cef01343bd6fb4f9b48c88e4b100d5b68187b6507b3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:40 compute-0 podman[116153]: 2025-10-02 11:51:40.465907811 +0000 UTC m=+0.024455030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:40 compute-0 podman[116153]: 2025-10-02 11:51:40.568272205 +0000 UTC m=+0.126819434 container init ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:51:40 compute-0 podman[116153]: 2025-10-02 11:51:40.575150049 +0000 UTC m=+0.133697258 container start ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:51:40 compute-0 podman[116153]: 2025-10-02 11:51:40.580949106 +0000 UTC m=+0.139496345 container attach ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:51:41 compute-0 ceph-mon[73668]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:41 compute-0 brave_germain[116169]: {
Oct 02 11:51:41 compute-0 brave_germain[116169]:     "1": [
Oct 02 11:51:41 compute-0 brave_germain[116169]:         {
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "devices": [
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "/dev/loop3"
Oct 02 11:51:41 compute-0 brave_germain[116169]:             ],
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "lv_name": "ceph_lv0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "lv_size": "7511998464",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "name": "ceph_lv0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "tags": {
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.cluster_name": "ceph",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.crush_device_class": "",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.encrypted": "0",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.osd_id": "1",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.type": "block",
Oct 02 11:51:41 compute-0 brave_germain[116169]:                 "ceph.vdo": "0"
Oct 02 11:51:41 compute-0 brave_germain[116169]:             },
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "type": "block",
Oct 02 11:51:41 compute-0 brave_germain[116169]:             "vg_name": "ceph_vg0"
Oct 02 11:51:41 compute-0 brave_germain[116169]:         }
Oct 02 11:51:41 compute-0 brave_germain[116169]:     ]
Oct 02 11:51:41 compute-0 brave_germain[116169]: }
Oct 02 11:51:41 compute-0 systemd[1]: libpod-ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647.scope: Deactivated successfully.
Oct 02 11:51:41 compute-0 podman[116153]: 2025-10-02 11:51:41.335411148 +0000 UTC m=+0.893958367 container died ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef03256ae062d729472cef01343bd6fb4f9b48c88e4b100d5b68187b6507b3a-merged.mount: Deactivated successfully.
Oct 02 11:51:41 compute-0 podman[116153]: 2025-10-02 11:51:41.397754657 +0000 UTC m=+0.956301876 container remove ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:51:41 compute-0 systemd[1]: libpod-conmon-ce76749ec5e9cd6f682883a2d275acbf5088998bc4b2e64bc718b59048bdb647.scope: Deactivated successfully.
Oct 02 11:51:41 compute-0 sudo[116044]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:41 compute-0 sudo[116189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:41 compute-0 sudo[116189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:41 compute-0 sudo[116189]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:41 compute-0 sudo[116214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:41 compute-0 sudo[116214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:41 compute-0 sudo[116214]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:41 compute-0 sudo[116239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:41 compute-0 sudo[116239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:41 compute-0 sudo[116239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:41 compute-0 sudo[116264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:51:41 compute-0 sudo[116264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:42 compute-0 ceph-mon[73668]: 9.e scrub starts
Oct 02 11:51:42 compute-0 ceph-mon[73668]: 9.e scrub ok
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.035341378 +0000 UTC m=+0.041479962 container create eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:51:42 compute-0 systemd[1]: Started libpod-conmon-eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d.scope.
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.017019284 +0000 UTC m=+0.023157858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.135722191 +0000 UTC m=+0.141860785 container init eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.144998296 +0000 UTC m=+0.151136860 container start eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.148517035 +0000 UTC m=+0.154655599 container attach eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:51:42 compute-0 laughing_fermat[116345]: 167 167
Oct 02 11:51:42 compute-0 systemd[1]: libpod-eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d.scope: Deactivated successfully.
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.153550573 +0000 UTC m=+0.159689137 container died eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0f7ef5609d6728232c88eb52f4c0d2292d60135ea26913159ef5cc5da984b50-merged.mount: Deactivated successfully.
Oct 02 11:51:42 compute-0 podman[116329]: 2025-10-02 11:51:42.194556262 +0000 UTC m=+0.200694826 container remove eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:51:42 compute-0 systemd[1]: libpod-conmon-eaf6991ecfd89614a0e1a61f2df4c6f24e7a3c8da9089a7cc6ec1d01fa59950d.scope: Deactivated successfully.
Oct 02 11:51:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:42.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:42 compute-0 podman[116370]: 2025-10-02 11:51:42.385386396 +0000 UTC m=+0.048513390 container create deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:51:42 compute-0 systemd[1]: Started libpod-conmon-deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65.scope.
Oct 02 11:51:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b492813b8e2bcdbf68d1cc20d03551bb5a18c9ce77bad44b3b0365c6ac93b897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b492813b8e2bcdbf68d1cc20d03551bb5a18c9ce77bad44b3b0365c6ac93b897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b492813b8e2bcdbf68d1cc20d03551bb5a18c9ce77bad44b3b0365c6ac93b897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:42 compute-0 podman[116370]: 2025-10-02 11:51:42.362933957 +0000 UTC m=+0.026061001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b492813b8e2bcdbf68d1cc20d03551bb5a18c9ce77bad44b3b0365c6ac93b897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:51:42 compute-0 podman[116370]: 2025-10-02 11:51:42.468870881 +0000 UTC m=+0.131997905 container init deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:51:42 compute-0 podman[116370]: 2025-10-02 11:51:42.479976742 +0000 UTC m=+0.143103736 container start deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:42 compute-0 podman[116370]: 2025-10-02 11:51:42.484081626 +0000 UTC m=+0.147208620 container attach deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:51:43 compute-0 ceph-mon[73668]: pgmap v338: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:43 compute-0 musing_robinson[116387]: {
Oct 02 11:51:43 compute-0 musing_robinson[116387]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:51:43 compute-0 musing_robinson[116387]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:51:43 compute-0 musing_robinson[116387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:51:43 compute-0 musing_robinson[116387]:         "osd_id": 1,
Oct 02 11:51:43 compute-0 musing_robinson[116387]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:51:43 compute-0 musing_robinson[116387]:         "type": "bluestore"
Oct 02 11:51:43 compute-0 musing_robinson[116387]:     }
Oct 02 11:51:43 compute-0 musing_robinson[116387]: }
Oct 02 11:51:43 compute-0 systemd[1]: libpod-deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65.scope: Deactivated successfully.
Oct 02 11:51:43 compute-0 conmon[116387]: conmon deef612e8ec920595e11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65.scope/container/memory.events
Oct 02 11:51:43 compute-0 podman[116409]: 2025-10-02 11:51:43.462856851 +0000 UTC m=+0.024638835 container died deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b492813b8e2bcdbf68d1cc20d03551bb5a18c9ce77bad44b3b0365c6ac93b897-merged.mount: Deactivated successfully.
Oct 02 11:51:43 compute-0 podman[116409]: 2025-10-02 11:51:43.532816833 +0000 UTC m=+0.094598797 container remove deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:51:43 compute-0 systemd[1]: libpod-conmon-deef612e8ec920595e110f66fe0a857e63a49c00520c699927c34060dd4d5c65.scope: Deactivated successfully.
Oct 02 11:51:43 compute-0 sudo[116264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:51:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:51:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e452fe23-bc10-485c-9daa-acdfbad6add4 does not exist
Oct 02 11:51:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 042ac678-ade5-4041-b37c-9d9bfb8438b5 does not exist
Oct 02 11:51:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cb3b983f-b39d-4546-9e3d-361912ccebb0 does not exist
Oct 02 11:51:43 compute-0 sudo[116424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:43 compute-0 sudo[116424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:43 compute-0 sudo[116424]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:43 compute-0 sudo[116449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:51:43 compute-0 sudo[116449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:43 compute-0 sudo[116449]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:44 compute-0 ceph-mon[73668]: 6.8 scrub starts
Oct 02 11:51:44 compute-0 ceph-mon[73668]: 6.8 scrub ok
Oct 02 11:51:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:44 compute-0 ceph-mon[73668]: pgmap v339: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:45 compute-0 sshd-session[116475]: Accepted publickey for zuul from 192.168.122.30 port 45624 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:45 compute-0 systemd-logind[820]: New session 40 of user zuul.
Oct 02 11:51:45 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 02 11:51:45 compute-0 sshd-session[116475]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:45.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:45 compute-0 ceph-mon[73668]: 9.a scrub starts
Oct 02 11:51:45 compute-0 ceph-mon[73668]: 9.a scrub ok
Oct 02 11:51:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:46 compute-0 sudo[116628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sskudkphojhhdmpymjvihpljhsasskyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405905.5822802-30-168510336335967/AnsiballZ_file.py'
Oct 02 11:51:46 compute-0 sudo[116628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:51:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:46.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:51:46 compute-0 python3.9[116630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:46 compute-0 sudo[116628]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-0 sudo[116781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcnpbfxdpnqznxnzjkxtrtxncvgdqvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405906.4986336-66-203151520205329/AnsiballZ_stat.py'
Oct 02 11:51:46 compute-0 sudo[116781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:46 compute-0 ceph-mon[73668]: pgmap v340: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:46 compute-0 ceph-mon[73668]: 9.f scrub starts
Oct 02 11:51:46 compute-0 ceph-mon[73668]: 9.f scrub ok
Oct 02 11:51:47 compute-0 python3.9[116783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:47 compute-0 sudo[116781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 sudo[116859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpfylwyrvaybdionygnrjtcpwyfuntlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405906.4986336-66-203151520205329/AnsiballZ_file.py'
Oct 02 11:51:47 compute-0 sudo[116859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:47 compute-0 python3.9[116861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:47 compute-0 sudo[116859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:47 compute-0 sshd-session[116478]: Connection closed by 192.168.122.30 port 45624
Oct 02 11:51:48 compute-0 sshd-session[116475]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:51:48 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 02 11:51:48 compute-0 systemd[1]: session-40.scope: Consumed 1.750s CPU time.
Oct 02 11:51:48 compute-0 systemd-logind[820]: Session 40 logged out. Waiting for processes to exit.
Oct 02 11:51:48 compute-0 systemd-logind[820]: Removed session 40.
Oct 02 11:51:48 compute-0 ceph-mon[73668]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:48.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:49 compute-0 ceph-mon[73668]: 6.e scrub starts
Oct 02 11:51:49 compute-0 ceph-mon[73668]: 6.e scrub ok
Oct 02 11:51:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:50.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:50 compute-0 ceph-mon[73668]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:50 compute-0 ceph-mon[73668]: 9.d scrub starts
Oct 02 11:51:50 compute-0 ceph-mon[73668]: 9.d scrub ok
Oct 02 11:51:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:51 compute-0 ceph-mon[73668]: 9.10 scrub starts
Oct 02 11:51:51 compute-0 ceph-mon[73668]: 9.10 scrub ok
Oct 02 11:51:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:52 compute-0 ceph-mon[73668]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:53 compute-0 sshd-session[116889]: Accepted publickey for zuul from 192.168.122.30 port 39296 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:53 compute-0 systemd-logind[820]: New session 41 of user zuul.
Oct 02 11:51:53 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 02 11:51:53 compute-0 sshd-session[116889]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:54 compute-0 ceph-mon[73668]: 9.11 deep-scrub starts
Oct 02 11:51:54 compute-0 ceph-mon[73668]: 9.11 deep-scrub ok
Oct 02 11:51:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:54.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:54 compute-0 python3.9[117043]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:55 compute-0 sudo[117048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:55 compute-0 sudo[117048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:55 compute-0 sudo[117048]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:55 compute-0 ceph-mon[73668]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:55 compute-0 sudo[117073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:55 compute-0 sudo[117073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:55 compute-0 sudo[117073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:55 compute-0 sudo[117247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjrnnnreunersvdfplsoytdfnmtddxju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405915.2761376-64-208837746633971/AnsiballZ_file.py'
Oct 02 11:51:55 compute-0 sudo[117247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:55.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:55 compute-0 python3.9[117249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:55 compute-0 sudo[117247]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:56 compute-0 sudo[117423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgqtwgypxcirqwicbhuxcwtjhsishsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405916.1607168-88-13788020363079/AnsiballZ_stat.py'
Oct 02 11:51:56 compute-0 sudo[117423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:56 compute-0 python3.9[117425]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:56 compute-0 sudo[117423]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:57 compute-0 ceph-mon[73668]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:57 compute-0 ceph-mon[73668]: 9.12 scrub starts
Oct 02 11:51:57 compute-0 ceph-mon[73668]: 9.12 scrub ok
Oct 02 11:51:57 compute-0 sudo[117501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuufxvexwducmnimznbvgjywfnnytfvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405916.1607168-88-13788020363079/AnsiballZ_file.py'
Oct 02 11:51:57 compute-0 sudo[117501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:57 compute-0 python3.9[117503]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.x5rp7qss recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:57 compute-0 sudo[117501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:57.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:58 compute-0 ceph-mon[73668]: 9.15 scrub starts
Oct 02 11:51:58 compute-0 ceph-mon[73668]: 9.15 scrub ok
Oct 02 11:51:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:58 compute-0 sudo[117654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxfxaxnvzjoxleyiqsighxbcevywxtbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405918.0081837-148-28140987678744/AnsiballZ_stat.py'
Oct 02 11:51:58 compute-0 sudo[117654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:51:58 compute-0 python3.9[117656]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:58 compute-0 sudo[117654]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:58 compute-0 sudo[117732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyulpnohqgisebeyvihqzpnhrsjxrgto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405918.0081837-148-28140987678744/AnsiballZ_file.py'
Oct 02 11:51:58 compute-0 sudo[117732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:59 compute-0 python3.9[117734]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yhmlfqph recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:59 compute-0 sudo[117732]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:59 compute-0 ceph-mon[73668]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:59 compute-0 sudo[117884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjelkqrpmuhuixdozyzikowckcnspuej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405919.259072-187-84721410810030/AnsiballZ_file.py'
Oct 02 11:51:59 compute-0 sudo[117884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:59 compute-0 python3.9[117886]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:51:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:51:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:59.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:51:59 compute-0 sudo[117884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:00 compute-0 ceph-mon[73668]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:00 compute-0 sudo[118036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfzpowfyysbbiciutucvtcvpdyknuwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405920.0101316-211-237880826027485/AnsiballZ_stat.py'
Oct 02 11:52:00 compute-0 sudo[118036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:00 compute-0 python3.9[118038]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:00 compute-0 sudo[118036]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:00 compute-0 sudo[118115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzqbwsegeyunkqtumhqffbcvutegkhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405920.0101316-211-237880826027485/AnsiballZ_file.py'
Oct 02 11:52:00 compute-0 sudo[118115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:00 compute-0 python3.9[118117]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:00 compute-0 sudo[118115]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:01 compute-0 sudo[118267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxwxiqjsrdkftjgugqflulonhphmbpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.097351-211-228560657879969/AnsiballZ_stat.py'
Oct 02 11:52:01 compute-0 sudo[118267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-0 python3.9[118269]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:01 compute-0 sudo[118267]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:01.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:01 compute-0 sudo[118345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdupnotcjdlkdehoehowexvhcbxwjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.097351-211-228560657879969/AnsiballZ_file.py'
Oct 02 11:52:01 compute-0 sudo[118345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:02 compute-0 python3.9[118347]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:02 compute-0 sudo[118345]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:02.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:02 compute-0 sudo[118498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kchofbohytrfviqgyjbbtriamkvhwspj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405922.3707957-280-132394113431252/AnsiballZ_file.py'
Oct 02 11:52:02 compute-0 sudo[118498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:02 compute-0 python3.9[118500]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:02 compute-0 sudo[118498]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:03 compute-0 ceph-mon[73668]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:03 compute-0 sudo[118650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkbtiprnzjuimntyxcrkehvtmtkyiuoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.0620103-304-184137289808537/AnsiballZ_stat.py'
Oct 02 11:52:03 compute-0 sudo[118650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-0 python3.9[118652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:03 compute-0 sudo[118650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:03 compute-0 sudo[118728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wufcbivchkgnylzjhpjcgfluddadiyzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.0620103-304-184137289808537/AnsiballZ_file.py'
Oct 02 11:52:03 compute-0 sudo[118728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:03.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:03 compute-0 python3.9[118730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:04 compute-0 sudo[118728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-0 ceph-mon[73668]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:04.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:04 compute-0 sudo[118881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehnfdsljiifgqtcipgyryuaqnhwwtun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405924.1777833-340-69639194716663/AnsiballZ_stat.py'
Oct 02 11:52:04 compute-0 sudo[118881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:04 compute-0 python3.9[118883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:04 compute-0 sudo[118881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-0 sudo[118959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pderseptjmhxksvuzkgkemvcxnpvqoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405924.1777833-340-69639194716663/AnsiballZ_file.py'
Oct 02 11:52:04 compute-0 sudo[118959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:05 compute-0 python3.9[118961]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:05 compute-0 sudo[118959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:05.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:05 compute-0 sudo[119111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dciuynomovwahrafwrlbvlsdoldjrqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405925.3349602-376-161080242812318/AnsiballZ_systemd.py'
Oct 02 11:52:05 compute-0 sudo[119111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:06 compute-0 python3.9[119113]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:52:06 compute-0 systemd[1]: Reloading.
Oct 02 11:52:06 compute-0 systemd-rc-local-generator[119141]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:52:06 compute-0 systemd-sysv-generator[119144]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:52:06 compute-0 sudo[119111]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:07 compute-0 ceph-mon[73668]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:07 compute-0 sudo[119302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihlrcunvoovfrmmsinywdbalmjlolcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405926.8605795-400-271755692341253/AnsiballZ_stat.py'
Oct 02 11:52:07 compute-0 sudo[119302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:07 compute-0 python3.9[119304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:07 compute-0 sudo[119302]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:07 compute-0 sudo[119380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkhbnkvpslboikopeatushpgbgbhwozg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405926.8605795-400-271755692341253/AnsiballZ_file.py'
Oct 02 11:52:07 compute-0 sudo[119380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:07 compute-0 python3.9[119382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:07 compute-0 sudo[119380]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:08 compute-0 ceph-mon[73668]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:08 compute-0 sudo[119533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnanvphztrxohnjsjttfeacqanibrljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405928.0745878-436-103583213719646/AnsiballZ_stat.py'
Oct 02 11:52:08 compute-0 sudo[119533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:08 compute-0 python3.9[119535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:08 compute-0 sudo[119533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:08 compute-0 sudo[119611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifapapvwlvgwucrbrvfbdfylgpgoofc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405928.0745878-436-103583213719646/AnsiballZ_file.py'
Oct 02 11:52:08 compute-0 sudo[119611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:09 compute-0 python3.9[119613]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:09 compute-0 sudo[119611]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:09 compute-0 sudo[119763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwdoigwpccixmeqnkoygzxsrdnowljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405929.3146677-472-274928160641822/AnsiballZ_systemd.py'
Oct 02 11:52:09 compute-0 sudo[119763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:09.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:09 compute-0 python3.9[119765]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:52:09 compute-0 systemd[1]: Reloading.
Oct 02 11:52:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:10 compute-0 systemd-rc-local-generator[119791]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:52:10 compute-0 systemd-sysv-generator[119795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:52:10 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:52:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:52:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:52:10 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:52:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:10 compute-0 sudo[119763]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:11 compute-0 ceph-mon[73668]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:11 compute-0 python3.9[119958]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:52:11 compute-0 network[119975]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:52:11 compute-0 network[119976]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:52:11 compute-0 network[119977]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:52:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:11.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:13 compute-0 ceph-mon[73668]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:13.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:14 compute-0 ceph-mon[73668]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:15 compute-0 sudo[120117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:15 compute-0 sudo[120117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:15 compute-0 sudo[120117]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:15 compute-0 sudo[120142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:15 compute-0 sudo[120142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:15 compute-0 sudo[120142]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:16.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:17 compute-0 ceph-mon[73668]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-0 sudo[120293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqexkozxeqtmjhsgpmjhukzpghpserkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405937.776864-550-49326150784008/AnsiballZ_stat.py'
Oct 02 11:52:18 compute-0 sudo[120293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:18 compute-0 ceph-mon[73668]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-0 python3.9[120295]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:18.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:18 compute-0 sudo[120293]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:18 compute-0 sudo[120372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvovjijwoaiizjasxcnipklknsonklxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405937.776864-550-49326150784008/AnsiballZ_file.py'
Oct 02 11:52:18 compute-0 sudo[120372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:18 compute-0 python3.9[120374]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:18 compute-0 sudo[120372]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:19 compute-0 sudo[120524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nayhmubridjthvbmqumxscldjzwvfkiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.0089395-589-72632918053296/AnsiballZ_file.py'
Oct 02 11:52:19 compute-0 sudo[120524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:19 compute-0 python3.9[120526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:19 compute-0 sudo[120524]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:20 compute-0 sudo[120676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lesqozkhrydmjgdpqizdcfjexlwfawlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.8073936-613-33623645978050/AnsiballZ_stat.py'
Oct 02 11:52:20 compute-0 sudo[120676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:20 compute-0 ceph-mon[73668]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:20 compute-0 python3.9[120678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:20 compute-0 sudo[120676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:20 compute-0 sudo[120755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjwhzjxevvtdalwbjawmmhaodrcxzuvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.8073936-613-33623645978050/AnsiballZ_file.py'
Oct 02 11:52:20 compute-0 sudo[120755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:20 compute-0 python3.9[120757]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:20 compute-0 sudo[120755]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:21 compute-0 sudo[120907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugpbuabdzuuwaecdtycqexzsmjtfbgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405941.1954834-658-215903042162226/AnsiballZ_timezone.py'
Oct 02 11:52:21 compute-0 sudo[120907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:21.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:21 compute-0 python3.9[120909]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:52:21 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 11:52:21 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 11:52:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:22 compute-0 sudo[120907]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:22 compute-0 ceph-mon[73668]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:22 compute-0 sudo[121064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbfdgrufpvnhuhjpsgsdhzjroolnuhgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405942.3491836-685-24396721942664/AnsiballZ_file.py'
Oct 02 11:52:22 compute-0 sudo[121064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:22 compute-0 python3.9[121066]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:22 compute-0 sudo[121064]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:23 compute-0 sudo[121216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crardrimxiqxibpxmuwwbeygvmrrizel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.1668942-709-273013049960713/AnsiballZ_stat.py'
Oct 02 11:52:23 compute-0 sudo[121216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:23 compute-0 python3.9[121218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:23 compute-0 sudo[121216]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:23.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:23 compute-0 sudo[121294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsnptlarxlyjgtqevkjwhrupoledbwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.1668942-709-273013049960713/AnsiballZ_file.py'
Oct 02 11:52:23 compute-0 sudo[121294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.094418) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944094506, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2355, "num_deletes": 251, "total_data_size": 3502025, "memory_usage": 3583776, "flush_reason": "Manual Compaction"}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944112328, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3433301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7574, "largest_seqno": 9927, "table_properties": {"data_size": 3423416, "index_size": 5803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25468, "raw_average_key_size": 21, "raw_value_size": 3401260, "raw_average_value_size": 2853, "num_data_blocks": 259, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405765, "oldest_key_time": 1759405765, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17996 microseconds, and 7650 cpu microseconds.
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.112418) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3433301 bytes OK
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.112454) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.114352) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.114386) EVENT_LOG_v1 {"time_micros": 1759405944114374, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.114415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3491889, prev total WAL file size 3491889, number of live WAL files 2.
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.116612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3352KB)], [20(7790KB)]
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944116701, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11410339, "oldest_snapshot_seqno": -1}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3816 keys, 9730222 bytes, temperature: kUnknown
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944172162, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9730222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9698904, "index_size": 20648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 91949, "raw_average_key_size": 24, "raw_value_size": 9624296, "raw_average_value_size": 2522, "num_data_blocks": 903, "num_entries": 3816, "num_filter_entries": 3816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.172468) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9730222 bytes
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.173861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 175.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 4337, records dropped: 521 output_compression: NoCompression
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.173884) EVENT_LOG_v1 {"time_micros": 1759405944173873, "job": 6, "event": "compaction_finished", "compaction_time_micros": 55554, "compaction_time_cpu_micros": 26450, "output_level": 6, "num_output_files": 1, "total_output_size": 9730222, "num_input_records": 4337, "num_output_records": 3816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.116435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.174064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.174073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.174075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.174077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:52:24.174080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-0 python3.9[121296]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944176410, "job": 0, "event": "table_file_deletion", "file_number": 22}
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944178726, "job": 0, "event": "table_file_deletion", "file_number": 20}
Oct 02 11:52:24 compute-0 sudo[121294]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:24.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:24 compute-0 sudo[121447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejvokoovjiiofxoylsuijgouullggpxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405944.3708951-745-70503347689480/AnsiballZ_stat.py'
Oct 02 11:52:24 compute-0 sudo[121447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:24 compute-0 python3.9[121449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:24 compute-0 sudo[121447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:25 compute-0 sudo[121525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukqhaygqrubuoqwdqhuvtzkwkzowmxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405944.3708951-745-70503347689480/AnsiballZ_file.py'
Oct 02 11:52:25 compute-0 sudo[121525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:25 compute-0 python3.9[121527]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yz2xjkin recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:25 compute-0 sudo[121525]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:25 compute-0 sudo[121677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpypwbmdaeykjpspjbppnqggfewbthoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405945.5537295-781-84517390881938/AnsiballZ_stat.py'
Oct 02 11:52:25 compute-0 sudo[121677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:26 compute-0 python3.9[121679]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:26 compute-0 sudo[121677]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:26 compute-0 sudo[121756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvskbqdygonmlopzyqgondqppkcnujb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405945.5537295-781-84517390881938/AnsiballZ_file.py'
Oct 02 11:52:26 compute-0 sudo[121756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:26 compute-0 python3.9[121758]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:26 compute-0 sudo[121756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:27 compute-0 ceph-mon[73668]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:27 compute-0 sudo[121908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxpupxldtfdjgwbnpzgykpyqnfrexdzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405946.7957644-820-73458969287480/AnsiballZ_command.py'
Oct 02 11:52:27 compute-0 sudo[121908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:27 compute-0 python3.9[121910]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:27 compute-0 sudo[121908]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:28 compute-0 ceph-mon[73668]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:28 compute-0 sudo[122062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnancxmmeqqqmvigqmclnzxedxhxdlwe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405947.755711-844-17953015583498/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:52:28 compute-0 sudo[122062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:52:28 compute-0 python3[122064]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:52:28
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', '.rgw.root']
Oct 02 11:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:52:28 compute-0 sudo[122062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:29 compute-0 sudo[122214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmpnnzugxibmywauaezsafscymlbaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405948.7942016-868-99292422258845/AnsiballZ_stat.py'
Oct 02 11:52:29 compute-0 sudo[122214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:29 compute-0 python3.9[122216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:29 compute-0 sudo[122214]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:29 compute-0 sudo[122292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyakojthfeqpzfvrwykmqupxpmokieeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405948.7942016-868-99292422258845/AnsiballZ_file.py'
Oct 02 11:52:29 compute-0 sudo[122292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:29 compute-0 python3.9[122294]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:29 compute-0 sudo[122292]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:29.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:30 compute-0 ceph-mon[73668]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:30 compute-0 sudo[122445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcabzewqfsfnhyquxtjwmzzsvkzxecna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405950.3471782-904-176190558486212/AnsiballZ_stat.py'
Oct 02 11:52:30 compute-0 sudo[122445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:30 compute-0 python3.9[122447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:30 compute-0 sudo[122445]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:31 compute-0 sudo[122523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbwntfxyipjlglivqkuwfymaosviylq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405950.3471782-904-176190558486212/AnsiballZ_file.py'
Oct 02 11:52:31 compute-0 sudo[122523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:31 compute-0 python3.9[122525]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:31 compute-0 sudo[122523]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:31.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:31 compute-0 sudo[122675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfosnzdsyxvdhsdnuikauwcwytcmkomf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405951.5675893-940-2219192091103/AnsiballZ_stat.py'
Oct 02 11:52:31 compute-0 sudo[122675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:32 compute-0 python3.9[122677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:32 compute-0 sudo[122675]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:32 compute-0 sudo[122754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgquhhqxpdydulkyfllllyztpfbxcpzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405951.5675893-940-2219192091103/AnsiballZ_file.py'
Oct 02 11:52:32 compute-0 sudo[122754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:32 compute-0 python3.9[122756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:32 compute-0 sudo[122754]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:33 compute-0 ceph-mon[73668]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:33 compute-0 sudo[122906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwqfnxtajtyqygbwaobvdlopzvmeyui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405952.7973826-976-232297782863093/AnsiballZ_stat.py'
Oct 02 11:52:33 compute-0 sudo[122906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:33 compute-0 python3.9[122908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:33 compute-0 sudo[122906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:33 compute-0 sudo[122984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgymvqksgwjngklszrsdicgpkxgwcqbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405952.7973826-976-232297782863093/AnsiballZ_file.py'
Oct 02 11:52:33 compute-0 sudo[122984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:33 compute-0 python3.9[122986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:33 compute-0 sudo[122984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:34 compute-0 ceph-mon[73668]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:34 compute-0 sudo[123137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktcuafmmoqbtexsciutnhamozqixbvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405954.0772583-1012-182293424601686/AnsiballZ_stat.py'
Oct 02 11:52:34 compute-0 sudo[123137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:34 compute-0 python3.9[123139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:34 compute-0 sudo[123137]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:34 compute-0 sudo[123215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvgrembkowenvroaajvelzluwnfmhcqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405954.0772583-1012-182293424601686/AnsiballZ_file.py'
Oct 02 11:52:34 compute-0 sudo[123215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:35 compute-0 python3.9[123217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:35 compute-0 sudo[123215]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:35 compute-0 sudo[123242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:35 compute-0 sudo[123242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:35 compute-0 sudo[123242]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:35 compute-0 sudo[123267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:35 compute-0 sudo[123267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:35 compute-0 sudo[123267]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:35 compute-0 sudo[123417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzzhljjdrgxaofuyexaxtyxwsmbrlymv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405955.4483812-1051-55963059409487/AnsiballZ_command.py'
Oct 02 11:52:35 compute-0 sudo[123417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:35 compute-0 python3.9[123419]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:35 compute-0 sudo[123417]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:36 compute-0 sudo[123573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uglmgmmdhecfakfqxiemdnnntrhiblty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405956.2188256-1075-29899788579174/AnsiballZ_blockinfile.py'
Oct 02 11:52:36 compute-0 sudo[123573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:36 compute-0 python3.9[123575]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:36 compute-0 sudo[123573]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:37 compute-0 ceph-mon[73668]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:37 compute-0 sudo[123725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rldofwipazisyuqidngiomzuwtetaclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405957.1865473-1102-158241634998496/AnsiballZ_file.py'
Oct 02 11:52:37 compute-0 sudo[123725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:37 compute-0 python3.9[123727]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:37 compute-0 sudo[123725]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:38 compute-0 sudo[123877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjchmuwxmuwsxssanlpwebmnhmmnsbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405957.8095262-1102-927045618698/AnsiballZ_file.py'
Oct 02 11:52:38 compute-0 sudo[123877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:38 compute-0 ceph-mon[73668]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:38 compute-0 python3.9[123879]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:38 compute-0 sudo[123877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:39 compute-0 sudo[124030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsmiohcypmejqmfxhpbnblrycmjolxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405958.6769438-1147-41621609418362/AnsiballZ_mount.py'
Oct 02 11:52:39 compute-0 sudo[124030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:39 compute-0 python3.9[124032]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:52:39 compute-0 sudo[124030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:52:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:52:39 compute-0 sudo[124182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wakohvuotzpfslywisdgfuuixwicgfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405959.5134592-1147-257640988080742/AnsiballZ_mount.py'
Oct 02 11:52:39 compute-0 sudo[124182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:39.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:40 compute-0 python3.9[124184]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:52:40 compute-0 sudo[124182]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:40.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:40 compute-0 sshd-session[116892]: Connection closed by 192.168.122.30 port 39296
Oct 02 11:52:40 compute-0 sshd-session[116889]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:52:40 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 02 11:52:40 compute-0 systemd[1]: session-41.scope: Consumed 31.212s CPU time.
Oct 02 11:52:40 compute-0 systemd-logind[820]: Session 41 logged out. Waiting for processes to exit.
Oct 02 11:52:40 compute-0 systemd-logind[820]: Removed session 41.
Oct 02 11:52:41 compute-0 ceph-mon[73668]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000048s ======
Oct 02 11:52:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 02 11:52:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:42 compute-0 ceph-mon[73668]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:42 compute-0 sshd-session[124209]: Received disconnect from 183.66.17.82 port 9094:11:  [preauth]
Oct 02 11:52:42 compute-0 sshd-session[124209]: Disconnected from authenticating user root 183.66.17.82 port 9094 [preauth]
Oct 02 11:52:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-0 sudo[124213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:44 compute-0 sudo[124213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-0 sudo[124213]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-0 ceph-mon[73668]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-0 sudo[124238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:44 compute-0 sudo[124238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-0 sudo[124238]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-0 sudo[124263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:44 compute-0 sudo[124263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-0 sudo[124263]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-0 sudo[124288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:52:44 compute-0 sudo[124288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:44 compute-0 sudo[124288]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:45.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:52:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:52:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:45 compute-0 sshd-session[124345]: Accepted publickey for zuul from 192.168.122.30 port 45306 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:52:45 compute-0 systemd-logind[820]: New session 42 of user zuul.
Oct 02 11:52:45 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 02 11:52:45 compute-0 sshd-session[124345]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:52:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:52:46 compute-0 sudo[124499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmicihmprkjcmavglczxjklrcrlxelvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405966.057707-23-118130090436400/AnsiballZ_tempfile.py'
Oct 02 11:52:46 compute-0 sudo[124499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 80b119d8-db0d-4f7e-9699-a4e6a92b51ba does not exist
Oct 02 11:52:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8c9f4d18-3d7d-4bb5-a32f-dd18c1870cbd does not exist
Oct 02 11:52:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 51a05968-def5-49bc-b97c-8f015beef0a4 does not exist
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:52:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:46 compute-0 sudo[124502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:46 compute-0 sudo[124502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:46 compute-0 sudo[124502]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:46 compute-0 sudo[124527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:46 compute-0 sudo[124527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:46 compute-0 sudo[124527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:46 compute-0 sudo[124552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:46 compute-0 sudo[124552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:46 compute-0 sudo[124552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:46 compute-0 python3.9[124501]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 11:52:46 compute-0 sudo[124499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:46 compute-0 sudo[124577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:52:46 compute-0 sudo[124577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-0 ceph-mon[73668]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:52:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.261199648 +0000 UTC m=+0.046978088 container create 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:52:47 compute-0 systemd[1]: Started libpod-conmon-7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209.scope.
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.241237719 +0000 UTC m=+0.027016179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.363280667 +0000 UTC m=+0.149059137 container init 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.374804554 +0000 UTC m=+0.160582994 container start 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:52:47 compute-0 silly_matsumoto[124752]: 167 167
Oct 02 11:52:47 compute-0 systemd[1]: libpod-7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209.scope: Deactivated successfully.
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.408333798 +0000 UTC m=+0.194112268 container attach 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.40968311 +0000 UTC m=+0.195461560 container died 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1907f320e60c58f357f78bae5a11dea919c75b3bcd7dcdd8ee692d710c835e8-merged.mount: Deactivated successfully.
Oct 02 11:52:47 compute-0 podman[124718]: 2025-10-02 11:52:47.493660025 +0000 UTC m=+0.279438465 container remove 7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:52:47 compute-0 systemd[1]: libpod-conmon-7009711c6ec1289aafc3c7178b9e05244ebeb79f647a961d09a09c834ebc3209.scope: Deactivated successfully.
Oct 02 11:52:47 compute-0 sudo[124822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcqayfdghpwkftqhbiuuvsdfrdbjqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405967.0348198-59-84438745718602/AnsiballZ_stat.py'
Oct 02 11:52:47 compute-0 sudo[124822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:47 compute-0 podman[124835]: 2025-10-02 11:52:47.710909397 +0000 UTC m=+0.088360551 container create 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:47 compute-0 python3.9[124829]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:52:47 compute-0 podman[124835]: 2025-10-02 11:52:47.645281353 +0000 UTC m=+0.022732527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:47 compute-0 systemd[1]: Started libpod-conmon-14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5.scope.
Oct 02 11:52:47 compute-0 sudo[124822]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:47 compute-0 podman[124835]: 2025-10-02 11:52:47.80480927 +0000 UTC m=+0.182260434 container init 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:52:47 compute-0 podman[124835]: 2025-10-02 11:52:47.813019037 +0000 UTC m=+0.190470191 container start 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:52:47 compute-0 podman[124835]: 2025-10-02 11:52:47.816683035 +0000 UTC m=+0.194134209 container attach 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:52:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:47.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:48 compute-0 sudo[125009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alanvgqwhvtrbsodcqaguibcakdaatnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405967.9404893-83-249079423453249/AnsiballZ_slurp.py'
Oct 02 11:52:48 compute-0 sudo[125009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:48 compute-0 python3.9[125011]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 02 11:52:48 compute-0 sudo[125009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:48 compute-0 condescending_hermann[124854]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:52:48 compute-0 condescending_hermann[124854]: --> relative data size: 1.0
Oct 02 11:52:48 compute-0 condescending_hermann[124854]: --> All data devices are unavailable
Oct 02 11:52:48 compute-0 systemd[1]: libpod-14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5.scope: Deactivated successfully.
Oct 02 11:52:48 compute-0 podman[124835]: 2025-10-02 11:52:48.714054794 +0000 UTC m=+1.091505948 container died 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 11:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb07e2a2143b33f64683cb2c8fcf0a237925dbae3cb5a23b3256e177ee482198-merged.mount: Deactivated successfully.
Oct 02 11:52:48 compute-0 podman[124835]: 2025-10-02 11:52:48.779496744 +0000 UTC m=+1.156947898 container remove 14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:52:48 compute-0 systemd[1]: libpod-conmon-14cc98f279c4b66f2ae58a7b3f7051c0e56b8a79e50b8bcd837d875e480fabe5.scope: Deactivated successfully.
Oct 02 11:52:48 compute-0 sudo[124577]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:48 compute-0 sudo[125086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:48 compute-0 sudo[125086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:48 compute-0 sudo[125086]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:48 compute-0 sudo[125136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:48 compute-0 sudo[125136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:48 compute-0 sudo[125136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:49 compute-0 sudo[125184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:49 compute-0 sudo[125184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:49 compute-0 sudo[125184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:49 compute-0 sudo[125233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:52:49 compute-0 sudo[125233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:49 compute-0 ceph-mon[73668]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:49 compute-0 sudo[125284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykbksrlwtmrqqhatyvkftjowbhikmnlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405968.8085816-107-151456408543066/AnsiballZ_stat.py'
Oct 02 11:52:49 compute-0 sudo[125284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:49 compute-0 python3.9[125286]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.lbul63_x follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:49 compute-0 sudo[125284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.431985797 +0000 UTC m=+0.056639539 container create 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 11:52:49 compute-0 systemd[1]: Started libpod-conmon-7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf.scope.
Oct 02 11:52:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.400164624 +0000 UTC m=+0.024818386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.562641062 +0000 UTC m=+0.187294824 container init 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.571002223 +0000 UTC m=+0.195655965 container start 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:52:49 compute-0 recursing_greider[125369]: 167 167
Oct 02 11:52:49 compute-0 systemd[1]: libpod-7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf.scope: Deactivated successfully.
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.58213221 +0000 UTC m=+0.206785982 container attach 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:52:49 compute-0 podman[125329]: 2025-10-02 11:52:49.583211195 +0000 UTC m=+0.207864937 container died 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc25cacb75088706e40b5f07c1c5aac869c335f99dd719822d51792d19f3c541-merged.mount: Deactivated successfully.
Oct 02 11:52:49 compute-0 sudo[125484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myrtzmiphrjogpwqplpelmqiulvvxulc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405968.8085816-107-151456408543066/AnsiballZ_copy.py'
Oct 02 11:52:49 compute-0 sudo[125484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:49.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:49 compute-0 python3.9[125486]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.lbul63_x mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405968.8085816-107-151456408543066/.source.lbul63_x _original_basename=.3g5lkbd6 follow=False checksum=d30a0a751a32c4c6fb89a77f8bd3d66e091396ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:50 compute-0 sudo[125484]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:50 compute-0 podman[125329]: 2025-10-02 11:52:50.052876732 +0000 UTC m=+0.677530464 container remove 7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:52:50 compute-0 systemd[1]: libpod-conmon-7658c9a06ad7b87086a9f52da319c92eb9d313096aba84cedf2a9ce73c69fecf.scope: Deactivated successfully.
Oct 02 11:52:50 compute-0 ceph-mon[73668]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:50 compute-0 podman[125518]: 2025-10-02 11:52:50.238177008 +0000 UTC m=+0.066992078 container create d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:52:50 compute-0 systemd[1]: Started libpod-conmon-d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80.scope.
Oct 02 11:52:50 compute-0 podman[125518]: 2025-10-02 11:52:50.200318279 +0000 UTC m=+0.029133379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44005eaf6c6c7b2b0a5ec77f1aa2edb5dda308d3e954d24c9819094a87b4cf03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44005eaf6c6c7b2b0a5ec77f1aa2edb5dda308d3e954d24c9819094a87b4cf03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44005eaf6c6c7b2b0a5ec77f1aa2edb5dda308d3e954d24c9819094a87b4cf03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44005eaf6c6c7b2b0a5ec77f1aa2edb5dda308d3e954d24c9819094a87b4cf03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:50 compute-0 podman[125518]: 2025-10-02 11:52:50.34205617 +0000 UTC m=+0.170871260 container init d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:52:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:50 compute-0 podman[125518]: 2025-10-02 11:52:50.351186949 +0000 UTC m=+0.180002019 container start d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:52:50 compute-0 podman[125518]: 2025-10-02 11:52:50.365956673 +0000 UTC m=+0.194771773 container attach d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:50 compute-0 sudo[125667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkmsxsxzozbvyuvozzabgxqqziemzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405970.2546608-152-160782343119794/AnsiballZ_setup.py'
Oct 02 11:52:50 compute-0 sudo[125667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:51 compute-0 kind_lamarr[125570]: {
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:     "1": [
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:         {
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "devices": [
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "/dev/loop3"
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             ],
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "lv_name": "ceph_lv0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "lv_size": "7511998464",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "name": "ceph_lv0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "tags": {
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.cluster_name": "ceph",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.crush_device_class": "",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.encrypted": "0",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.osd_id": "1",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.type": "block",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:                 "ceph.vdo": "0"
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             },
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "type": "block",
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:             "vg_name": "ceph_vg0"
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:         }
Oct 02 11:52:51 compute-0 kind_lamarr[125570]:     ]
Oct 02 11:52:51 compute-0 kind_lamarr[125570]: }
Oct 02 11:52:51 compute-0 systemd[1]: libpod-d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80.scope: Deactivated successfully.
Oct 02 11:52:51 compute-0 podman[125518]: 2025-10-02 11:52:51.208952697 +0000 UTC m=+1.037767797 container died d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:52:51 compute-0 python3.9[125669]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:52:51 compute-0 sudo[125667]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-44005eaf6c6c7b2b0a5ec77f1aa2edb5dda308d3e954d24c9819094a87b4cf03-merged.mount: Deactivated successfully.
Oct 02 11:52:51 compute-0 podman[125518]: 2025-10-02 11:52:51.420552504 +0000 UTC m=+1.249367584 container remove d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:52:51 compute-0 sudo[125233]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 systemd[1]: libpod-conmon-d3a4a710440c43cdc14dd21c5c9df9634f835e0970cda17c6da7ee5ca2d79f80.scope: Deactivated successfully.
Oct 02 11:52:51 compute-0 sudo[125712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:51 compute-0 sudo[125712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:51 compute-0 sudo[125712]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 sudo[125771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:51 compute-0 sudo[125771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:51 compute-0 sudo[125771]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 sudo[125813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:51 compute-0 sudo[125813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:51 compute-0 sudo[125813]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-0 sudo[125838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:52:51 compute-0 sudo[125838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:51 compute-0 sudo[125963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yglnrokreyqytzlxkryggjrlgxzkbuvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405971.5118077-177-234931233157663/AnsiballZ_blockinfile.py'
Oct 02 11:52:51 compute-0 sudo[125963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:52 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.05353121 +0000 UTC m=+0.025716628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.151449309 +0000 UTC m=+0.123634727 container create e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:52:52 compute-0 python3.9[125974]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=
                                              create=True mode=0644 path=/tmp/ansible.lbul63_x state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:52 compute-0 ceph-mon[73668]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:52 compute-0 sudo[125963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:52 compute-0 systemd[1]: Started libpod-conmon-e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233.scope.
Oct 02 11:52:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.401970299 +0000 UTC m=+0.374155687 container init e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.412604174 +0000 UTC m=+0.384789562 container start e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:52:52 compute-0 magical_meitner[125998]: 167 167
Oct 02 11:52:52 compute-0 systemd[1]: libpod-e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233.scope: Deactivated successfully.
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.521492556 +0000 UTC m=+0.493677944 container attach e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:52:52 compute-0 podman[125979]: 2025-10-02 11:52:52.52206253 +0000 UTC m=+0.494247928 container died e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc310f3b618229078266e599bde9af50d5587c8e3ab4839c898392a918986268-merged.mount: Deactivated successfully.
Oct 02 11:52:52 compute-0 sudo[126167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkzpqgysuvehvomgxsgwojrewmfjidkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405972.3904047-201-7181421643406/AnsiballZ_command.py'
Oct 02 11:52:52 compute-0 sudo[126167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:53 compute-0 podman[125979]: 2025-10-02 11:52:53.053826027 +0000 UTC m=+1.026011445 container remove e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:52:53 compute-0 systemd[1]: libpod-conmon-e4db1f14cf66c2a35828fdc2df36ad731b8664abf7fa6597452fa19ece377233.scope: Deactivated successfully.
Oct 02 11:52:53 compute-0 podman[126177]: 2025-10-02 11:52:53.242313549 +0000 UTC m=+0.071295771 container create 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:52:53 compute-0 python3.9[126169]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lbul63_x' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:53 compute-0 sudo[126167]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:53 compute-0 podman[126177]: 2025-10-02 11:52:53.197041593 +0000 UTC m=+0.026023845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:52:53 compute-0 systemd[1]: Started libpod-conmon-2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d.scope.
Oct 02 11:52:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fbb74e75c096a177975f86d510bbbac74368412c5ee0dc601e778d895599a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fbb74e75c096a177975f86d510bbbac74368412c5ee0dc601e778d895599a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fbb74e75c096a177975f86d510bbbac74368412c5ee0dc601e778d895599a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fbb74e75c096a177975f86d510bbbac74368412c5ee0dc601e778d895599a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:52:53 compute-0 podman[126177]: 2025-10-02 11:52:53.426464396 +0000 UTC m=+0.255446638 container init 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:52:53 compute-0 podman[126177]: 2025-10-02 11:52:53.437580293 +0000 UTC m=+0.266562525 container start 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:52:53 compute-0 podman[126177]: 2025-10-02 11:52:53.443906645 +0000 UTC m=+0.272888897 container attach 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 11:52:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:53 compute-0 sudo[126349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdjjzofdboxfaujbzvrjljczdmabxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405973.4443917-225-208976963640467/AnsiballZ_file.py'
Oct 02 11:52:53 compute-0 sudo[126349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:54 compute-0 python3.9[126351]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lbul63_x state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:54 compute-0 ceph-mon[73668]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:54 compute-0 sudo[126349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]: {
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:         "osd_id": 1,
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:         "type": "bluestore"
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]:     }
Oct 02 11:52:54 compute-0 boring_zhukovsky[126217]: }
Oct 02 11:52:54 compute-0 systemd[1]: libpod-2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d.scope: Deactivated successfully.
Oct 02 11:52:54 compute-0 podman[126177]: 2025-10-02 11:52:54.410869723 +0000 UTC m=+1.239851965 container died 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fbb74e75c096a177975f86d510bbbac74368412c5ee0dc601e778d895599a6-merged.mount: Deactivated successfully.
Oct 02 11:52:54 compute-0 podman[126177]: 2025-10-02 11:52:54.477527722 +0000 UTC m=+1.306509944 container remove 2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:52:54 compute-0 systemd[1]: libpod-conmon-2929a64c07a88957dd665324fc14edb4cdbe05b46a1ca278073619644a9a790d.scope: Deactivated successfully.
Oct 02 11:52:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:54 compute-0 sudo[125838]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:52:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:52:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 58a552e3-f1f4-4cb6-9753-d2fbe6dd3491 does not exist
Oct 02 11:52:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9534fd04-202e-495d-861f-49616e6428b0 does not exist
Oct 02 11:52:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e859f964-785a-4183-9ab3-c2a903f85479 does not exist
Oct 02 11:52:54 compute-0 sudo[126404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:54 compute-0 sudo[126404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 sudo[126404]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 sudo[126429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:52:54 compute-0 sudo[126429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-0 sudo[126429]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-0 sshd-session[124348]: Connection closed by 192.168.122.30 port 45306
Oct 02 11:52:54 compute-0 sshd-session[124345]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:52:54 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 02 11:52:54 compute-0 systemd[1]: session-42.scope: Consumed 5.469s CPU time.
Oct 02 11:52:54 compute-0 systemd-logind[820]: Session 42 logged out. Waiting for processes to exit.
Oct 02 11:52:54 compute-0 systemd-logind[820]: Removed session 42.
Oct 02 11:52:55 compute-0 sudo[126454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:55 compute-0 sudo[126454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:55 compute-0 sudo[126454]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:55 compute-0 sudo[126479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:55 compute-0 sudo[126479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:55 compute-0 sudo[126479]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:56.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:56 compute-0 ceph-mon[73668]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:52:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:57.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:58 compute-0 ceph-mon[73668]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:52:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:52:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:52:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-0 sshd-session[126506]: Accepted publickey for zuul from 192.168.122.30 port 39806 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:00 compute-0 ceph-mon[73668]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-0 systemd-logind[820]: New session 43 of user zuul.
Oct 02 11:53:00 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 02 11:53:00 compute-0 sshd-session[126506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:01 compute-0 python3.9[126660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:02 compute-0 sudo[126814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhpxjqtbxdsgjysntigiagcdrblgguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405981.695192-61-221292753210259/AnsiballZ_systemd.py'
Oct 02 11:53:02 compute-0 sudo[126814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:02 compute-0 python3.9[126817]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:53:02 compute-0 sudo[126814]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 ceph-mon[73668]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:03 compute-0 sudo[126969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqlmhpllpuhmwsczqskrimgwtpafmzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405982.8369222-85-49635937497600/AnsiballZ_systemd.py'
Oct 02 11:53:03 compute-0 sudo[126969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:03 compute-0 sshd-session[70640]: Received disconnect from 38.129.56.116 port 60878:11: disconnected by user
Oct 02 11:53:03 compute-0 sshd-session[70640]: Disconnected from user zuul 38.129.56.116 port 60878
Oct 02 11:53:03 compute-0 sshd-session[70637]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:03 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 02 11:53:03 compute-0 systemd[1]: session-19.scope: Consumed 1min 16.727s CPU time.
Oct 02 11:53:03 compute-0 systemd-logind[820]: Session 19 logged out. Waiting for processes to exit.
Oct 02 11:53:03 compute-0 systemd-logind[820]: Removed session 19.
Oct 02 11:53:03 compute-0 python3.9[126971]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:53:03 compute-0 sudo[126969]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:03.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:04 compute-0 ceph-mon[73668]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:04 compute-0 sudo[127122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znbzvjtuakzoxknzpsijieyrdvrdkwqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405983.8864603-112-135177890203948/AnsiballZ_command.py'
Oct 02 11:53:04 compute-0 sudo[127122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:04 compute-0 python3.9[127125]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:53:04 compute-0 sudo[127122]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-0 sudo[127276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdmqvzjfbspwgvgrenfotngldgjgfyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405984.7161891-136-11049469101934/AnsiballZ_stat.py'
Oct 02 11:53:05 compute-0 sudo[127276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:05 compute-0 python3.9[127278]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:05 compute-0 sudo[127276]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:05 compute-0 sudo[127428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbfofohevvxirbacyzohxxktknvzikuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405985.5063922-163-45883416372198/AnsiballZ_file.py'
Oct 02 11:53:05 compute-0 sudo[127428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:06 compute-0 python3.9[127430]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:06 compute-0 sudo[127428]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:06 compute-0 sshd-session[126510]: Connection closed by 192.168.122.30 port 39806
Oct 02 11:53:06 compute-0 sshd-session[126506]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:06 compute-0 systemd-logind[820]: Session 43 logged out. Waiting for processes to exit.
Oct 02 11:53:06 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 02 11:53:06 compute-0 systemd[1]: session-43.scope: Consumed 3.849s CPU time.
Oct 02 11:53:06 compute-0 systemd-logind[820]: Removed session 43.
Oct 02 11:53:07 compute-0 ceph-mon[73668]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:07.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-0 ceph-mon[73668]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:09.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:11 compute-0 ceph-mon[73668]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:11.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:12 compute-0 sshd-session[127458]: Accepted publickey for zuul from 192.168.122.30 port 43894 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:12 compute-0 systemd-logind[820]: New session 44 of user zuul.
Oct 02 11:53:12 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 02 11:53:12 compute-0 sshd-session[127458]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:12 compute-0 ceph-mon[73668]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:13 compute-0 python3.9[127612]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:13 compute-0 sudo[127766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonpswmedkwjbsvjatgaotgcuctxjfpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405993.5326564-67-63519736653478/AnsiballZ_setup.py'
Oct 02 11:53:13 compute-0 sudo[127766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:14 compute-0 python3.9[127768]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:53:14 compute-0 sudo[127766]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:14 compute-0 sudo[127851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznjaesligyluoajruacoxumtvtsrjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405993.5326564-67-63519736653478/AnsiballZ_dnf.py'
Oct 02 11:53:14 compute-0 sudo[127851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:15 compute-0 python3.9[127853]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:53:15 compute-0 ceph-mon[73668]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:15 compute-0 sudo[127855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:15 compute-0 sudo[127855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:15 compute-0 sudo[127855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:15 compute-0 sudo[127880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:15 compute-0 sudo[127880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:15 compute-0 sudo[127880]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:16 compute-0 ceph-mon[73668]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:16 compute-0 sudo[127851]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:17 compute-0 python3.9[128055]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:53:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:18 compute-0 python3.9[128206]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:53:19 compute-0 ceph-mon[73668]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:19 compute-0 python3.9[128357]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:19 compute-0 python3.9[128507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:19.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:20 compute-0 ceph-mon[73668]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:20 compute-0 sshd-session[127461]: Connection closed by 192.168.122.30 port 43894
Oct 02 11:53:20 compute-0 sshd-session[127458]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:20 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 02 11:53:20 compute-0 systemd[1]: session-44.scope: Consumed 6.108s CPU time.
Oct 02 11:53:20 compute-0 systemd-logind[820]: Session 44 logged out. Waiting for processes to exit.
Oct 02 11:53:20 compute-0 systemd-logind[820]: Removed session 44.
Oct 02 11:53:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:20.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:53:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:21.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:53:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000023s ======
Oct 02 11:53:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:22.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 02 11:53:23 compute-0 ceph-mon[73668]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:23.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:24 compute-0 ceph-mon[73668]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.509226) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004509317, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 747, "num_deletes": 250, "total_data_size": 1079106, "memory_usage": 1100504, "flush_reason": "Manual Compaction"}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004517355, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 701185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9928, "largest_seqno": 10674, "table_properties": {"data_size": 697888, "index_size": 1141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8288, "raw_average_key_size": 19, "raw_value_size": 691034, "raw_average_value_size": 1653, "num_data_blocks": 50, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405944, "oldest_key_time": 1759405944, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8169 microseconds, and 3354 cpu microseconds.
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.517413) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 701185 bytes OK
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.517440) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518849) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518866) EVENT_LOG_v1 {"time_micros": 1759406004518861, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1075403, prev total WAL file size 1075403, number of live WAL files 2.
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(684KB)], [23(9502KB)]
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004519654, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10431407, "oldest_snapshot_seqno": -1}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3743 keys, 7707430 bytes, temperature: kUnknown
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004582689, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7707430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7679640, "index_size": 17312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90873, "raw_average_key_size": 24, "raw_value_size": 7609251, "raw_average_value_size": 2032, "num_data_blocks": 757, "num_entries": 3743, "num_filter_entries": 3743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.582996) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7707430 bytes
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.584375) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(25.9) write-amplify(11.0) OK, records in: 4234, records dropped: 491 output_compression: NoCompression
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.584399) EVENT_LOG_v1 {"time_micros": 1759406004584387, "job": 8, "event": "compaction_finished", "compaction_time_micros": 63153, "compaction_time_cpu_micros": 23516, "output_level": 6, "num_output_files": 1, "total_output_size": 7707430, "num_input_records": 4234, "num_output_records": 3743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004584835, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004587198, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.587401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.587410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.587412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.587416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:53:24.587418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:25 compute-0 sshd-session[128535]: Accepted publickey for zuul from 192.168.122.30 port 36644 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:25 compute-0 systemd-logind[820]: New session 45 of user zuul.
Oct 02 11:53:25 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 02 11:53:25 compute-0 sshd-session[128535]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:25.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:26 compute-0 python3.9[128688]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:27 compute-0 ceph-mon[73668]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:27 compute-0 sudo[128843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnbsdwamvcogzcjkxtbqcabkkklchsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406007.1443834-116-176003852751895/AnsiballZ_file.py'
Oct 02 11:53:27 compute-0 sudo[128843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:27 compute-0 python3.9[128845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:27 compute-0 sudo[128843]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:28 compute-0 ceph-mon[73668]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:28 compute-0 sudo[128995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdnaptzwgwvberyayiiyfrupuwxhkhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406007.9339812-116-62835896104406/AnsiballZ_file.py'
Oct 02 11:53:28 compute-0 sudo[128995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:53:28 compute-0 python3.9[128997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:28 compute-0 sudo[128995]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:53:28
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'vms']
Oct 02 11:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:53:29 compute-0 sudo[129148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orofhpwimczzpvpmizntwhetqsoyfbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406008.7350612-163-181144947098176/AnsiballZ_stat.py'
Oct 02 11:53:29 compute-0 sudo[129148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:29 compute-0 python3.9[129150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:29 compute-0 sudo[129148]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:30 compute-0 sudo[129271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvptopchshuzbnwxkdjkymtuiqhwefux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406008.7350612-163-181144947098176/AnsiballZ_copy.py'
Oct 02 11:53:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:30 compute-0 sudo[129271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:30 compute-0 python3.9[129273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406008.7350612-163-181144947098176/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=246f8517d1f5b014dcefb676a7fba3fad20c82da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:30 compute-0 sudo[129271]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:30 compute-0 sudo[129424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcadntlsciurqkdgckysdtevsuqonhzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406010.4241407-163-277420550469449/AnsiballZ_stat.py'
Oct 02 11:53:30 compute-0 sudo[129424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:30 compute-0 python3.9[129426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:30 compute-0 sudo[129424]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:31 compute-0 ceph-mon[73668]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:31 compute-0 sudo[129547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqkzmhcsyudkaqgohznuckhgnedldlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406010.4241407-163-277420550469449/AnsiballZ_copy.py'
Oct 02 11:53:31 compute-0 sudo[129547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:31 compute-0 python3.9[129549]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406010.4241407-163-277420550469449/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3ecf5e4f77066a77590b5118d192b2b931dec8bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:31 compute-0 sudo[129547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:31 compute-0 sudo[129699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpzibfaqpdqnlmkpdyjzhutiwvqijwmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406011.617212-163-234420343932456/AnsiballZ_stat.py'
Oct 02 11:53:31 compute-0 sudo[129699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:31.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:32 compute-0 python3.9[129701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:32 compute-0 sudo[129699]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:32 compute-0 ceph-mon[73668]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:32 compute-0 sudo[129823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexudkpzcccnhcacyrteloggbilpdxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406011.617212-163-234420343932456/AnsiballZ_copy.py'
Oct 02 11:53:32 compute-0 sudo[129823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:32 compute-0 python3.9[129825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406011.617212-163-234420343932456/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c4c149abc26bce4e35fd3da0fff24d91ac3331a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:32 compute-0 sudo[129823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:33 compute-0 sudo[129975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cssfhjiewagywuuwqvsvidikuzgifhgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406012.8922262-300-97802045018959/AnsiballZ_file.py'
Oct 02 11:53:33 compute-0 sudo[129975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:33 compute-0 python3.9[129977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:33 compute-0 sudo[129975]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:33 compute-0 sudo[130127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcztsodgkplxdgrkohsvhxsxdrxmskid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406013.5595868-300-212492376455371/AnsiballZ_file.py'
Oct 02 11:53:33 compute-0 sudo[130127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:33.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:34 compute-0 python3.9[130129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:34 compute-0 sudo[130127]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:34 compute-0 sudo[130280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cocsbhzxmiakkullwmqqhkrbsabycddb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406014.227515-347-32337313396052/AnsiballZ_stat.py'
Oct 02 11:53:34 compute-0 sudo[130280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:34 compute-0 python3.9[130282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:34 compute-0 sudo[130280]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-0 sudo[130403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhohwfmusymwmebvihayjeyduxlhpsys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406014.227515-347-32337313396052/AnsiballZ_copy.py'
Oct 02 11:53:35 compute-0 sudo[130403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:35 compute-0 ceph-mon[73668]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:35 compute-0 python3.9[130405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406014.227515-347-32337313396052/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=632ff4f7407284a0feaaef5a544f54174506f107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:35 compute-0 sudo[130403]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-0 sudo[130555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnrwitoacgpisjnkynskckifuquauda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406015.4157932-347-280055793487824/AnsiballZ_stat.py'
Oct 02 11:53:35 compute-0 sudo[130555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:35 compute-0 sudo[130557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:35 compute-0 sudo[130557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:35 compute-0 sudo[130557]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-0 sudo[130583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:35 compute-0 sudo[130583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:35 compute-0 sudo[130583]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-0 python3.9[130560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:35 compute-0 sudo[130555]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:36 compute-0 ceph-mon[73668]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:36 compute-0 sudo[130728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvpldbeleeivaollsbebcscbvtigbvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406015.4157932-347-280055793487824/AnsiballZ_copy.py'
Oct 02 11:53:36 compute-0 sudo[130728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:36 compute-0 python3.9[130730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406015.4157932-347-280055793487824/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:36 compute-0 sudo[130728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:36 compute-0 sudo[130881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzqgvhpdppuzbmsjwqdvaetxqnpvsgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406016.6349657-347-127351534736944/AnsiballZ_stat.py'
Oct 02 11:53:36 compute-0 sudo[130881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:37 compute-0 python3.9[130883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:37 compute-0 sudo[130881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:37 compute-0 sudo[131004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfxaebydatqvbdtyxdxawvosjqrprni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406016.6349657-347-127351534736944/AnsiballZ_copy.py'
Oct 02 11:53:37 compute-0 sudo[131004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:37 compute-0 python3.9[131006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406016.6349657-347-127351534736944/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6ff722a97f2c4b21a85bd3d64d7f01c875e646f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:37 compute-0 sudo[131004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:38 compute-0 sudo[131156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elanfvwamikfjymvymnnrtotxgqpobkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406017.8323576-478-32003800111834/AnsiballZ_file.py'
Oct 02 11:53:38 compute-0 sudo[131156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:38 compute-0 python3.9[131158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:38 compute-0 sudo[131156]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:38 compute-0 sudo[131309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmckrhwlzfiipommrerzpfrgzilakjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406018.5173938-478-184132894576238/AnsiballZ_file.py'
Oct 02 11:53:38 compute-0 sudo[131309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:39 compute-0 python3.9[131311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:39 compute-0 sudo[131309]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:39 compute-0 ceph-mon[73668]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:53:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2383 writes, 10K keys, 2383 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2383 writes, 2383 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2383 writes, 10K keys, 2383 commit groups, 1.0 writes per commit group, ingest: 13.94 MB, 0.02 MB/s
                                           Interval WAL: 2383 writes, 2383 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    138.0      0.08              0.03         4    0.021       0      0       0.0       0.0
                                             L6      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    175.7    149.5      0.16              0.07         3    0.054     12K   1304       0.0       0.0
                                            Sum      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    115.7    145.6      0.25              0.10         7    0.035     12K   1304       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    117.5    147.6      0.24              0.10         6    0.040     12K   1304       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    175.7    149.5      0.16              0.07         3    0.054     12K   1304       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    143.8      0.08              0.03         3    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 308.00 MB usage: 1.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(59,916.73 KB,0.290665%) FilterBlock(8,41.73 KB,0.0132325%) IndexBlock(8,91.95 KB,0.0291552%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:53:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:39 compute-0 sudo[131461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeeybhzyckdaekfbznbatstxardxrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406019.2559736-527-140394080522376/AnsiballZ_stat.py'
Oct 02 11:53:39 compute-0 sudo[131461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:53:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:53:39 compute-0 python3.9[131463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:39 compute-0 sudo[131461]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:39.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:40 compute-0 sudo[131584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mklxnmhjwgbrwoibkxjhzwijzmmoybvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406019.2559736-527-140394080522376/AnsiballZ_copy.py'
Oct 02 11:53:40 compute-0 sudo[131584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:40 compute-0 ceph-mon[73668]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:40 compute-0 python3.9[131586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406019.2559736-527-140394080522376/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9dc8a00c25953e92dd910a9a68294e84b15966a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:40 compute-0 sudo[131584]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:40.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:40 compute-0 sudo[131737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlrmwrtqurboowgbmdijqvysytedmwfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406020.500777-527-19156759771029/AnsiballZ_stat.py'
Oct 02 11:53:40 compute-0 sudo[131737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:40 compute-0 python3.9[131739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:41 compute-0 sudo[131737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:41 compute-0 sudo[131860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlquplbymtmpjhaffifncauqkblygxqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406020.500777-527-19156759771029/AnsiballZ_copy.py'
Oct 02 11:53:41 compute-0 sudo[131860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:41 compute-0 python3.9[131862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406020.500777-527-19156759771029/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:41 compute-0 sudo[131860]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:41.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:42 compute-0 sudo[132012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjsaypgjuyllbvwohsiawercengdfdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406021.73204-527-66141724299798/AnsiballZ_stat.py'
Oct 02 11:53:42 compute-0 sudo[132012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:42 compute-0 python3.9[132014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:42 compute-0 sudo[132012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:42 compute-0 sudo[132136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjuwdozaacnsuwfrskafebmjuwdcdry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406021.73204-527-66141724299798/AnsiballZ_copy.py'
Oct 02 11:53:42 compute-0 sudo[132136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:42 compute-0 python3.9[132138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406021.73204-527-66141724299798/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a91c30a52570643cb1dc68dc40f4746ff04d4791 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:42 compute-0 sudo[132136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:43 compute-0 ceph-mon[73668]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:43 compute-0 sudo[132288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrjbnddyzqhvvdpyyfrcwjskjxllufo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406023.4449124-712-59494194352983/AnsiballZ_file.py'
Oct 02 11:53:43 compute-0 sudo[132288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:43 compute-0 python3.9[132290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:43.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:43 compute-0 sudo[132288]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:44 compute-0 ceph-mon[73668]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:44 compute-0 sudo[132441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvknnrixpveunheshtpiullsbiaozseq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406024.1138468-735-86643483655377/AnsiballZ_stat.py'
Oct 02 11:53:44 compute-0 sudo[132441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:44 compute-0 python3.9[132443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:44 compute-0 sudo[132441]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:45 compute-0 sudo[132564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykdimgovunyopnguexkaejvweocxdtym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406024.1138468-735-86643483655377/AnsiballZ_copy.py'
Oct 02 11:53:45 compute-0 sudo[132564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:45 compute-0 python3.9[132566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406024.1138468-735-86643483655377/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:45 compute-0 sudo[132564]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:45 compute-0 sudo[132716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbrfdoskiostwzplrfobqpwlnvdjknfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406025.4173787-784-27284562940160/AnsiballZ_file.py'
Oct 02 11:53:45 compute-0 sudo[132716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:45 compute-0 python3.9[132718]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:45.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:45 compute-0 sudo[132716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:46 compute-0 sudo[132869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyfapfqvgrnhhejhmvhghkwouxwkjxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406026.131514-810-202955623368272/AnsiballZ_stat.py'
Oct 02 11:53:46 compute-0 sudo[132869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:46 compute-0 python3.9[132871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:46 compute-0 sudo[132869]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:46 compute-0 sudo[132992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxqukdaootfycqdhwuwxnmkfqvucpwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406026.131514-810-202955623368272/AnsiballZ_copy.py'
Oct 02 11:53:46 compute-0 sudo[132992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:47 compute-0 ceph-mon[73668]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:47 compute-0 python3.9[132994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406026.131514-810-202955623368272/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:47 compute-0 sudo[132992]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:47 compute-0 sudo[133144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmbiyteyqyvdgfyuazukxhhukcrieszu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406027.3902354-860-73878306392066/AnsiballZ_file.py'
Oct 02 11:53:47 compute-0 sudo[133144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:47 compute-0 python3.9[133146]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:47 compute-0 sudo[133144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:47.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:48 compute-0 ceph-mon[73668]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:48 compute-0 sudo[133296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufkxxzpilmncmewqtchrmmxmosbsvqaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406028.0510275-883-275342780582686/AnsiballZ_stat.py'
Oct 02 11:53:48 compute-0 sudo[133296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:48 compute-0 python3.9[133299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:48 compute-0 sudo[133296]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:48 compute-0 sudo[133420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajoidpmmviocnnbvftgcnzcazphhpqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406028.0510275-883-275342780582686/AnsiballZ_copy.py'
Oct 02 11:53:48 compute-0 sudo[133420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:49 compute-0 python3.9[133422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406028.0510275-883-275342780582686/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:49 compute-0 sudo[133420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:49 compute-0 sudo[133572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpugwrrwbndzmjfwiueewpxscxnauex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406029.3320386-932-224805199684158/AnsiballZ_file.py'
Oct 02 11:53:49 compute-0 sudo[133572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:49 compute-0 python3.9[133574]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:49 compute-0 sudo[133572]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:49.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:50 compute-0 sudo[133724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzikfhiuvvaspnbafskfysowpkyncxve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406030.0240521-958-159417220415452/AnsiballZ_stat.py'
Oct 02 11:53:50 compute-0 sudo[133724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:50.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:50 compute-0 python3.9[133726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:50 compute-0 sudo[133724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:50 compute-0 sudo[133848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inirsyvjdmmsnsmyjtibzerwqwyjgyur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406030.0240521-958-159417220415452/AnsiballZ_copy.py'
Oct 02 11:53:50 compute-0 sudo[133848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:51 compute-0 ceph-mon[73668]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:51 compute-0 python3.9[133850]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406030.0240521-958-159417220415452/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:51 compute-0 sudo[133848]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:51 compute-0 sudo[134000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxivtckswfnkbclelznkctervrqvhio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406031.342821-1007-54712222723499/AnsiballZ_file.py'
Oct 02 11:53:51 compute-0 sudo[134000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:51 compute-0 python3.9[134002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:51 compute-0 sudo[134000]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:51.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:52 compute-0 ceph-mon[73668]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:52 compute-0 sudo[134152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkcbginsxhftokbgmzepnsflnkuiiocx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406032.0395613-1037-58418775861273/AnsiballZ_stat.py'
Oct 02 11:53:52 compute-0 sudo[134152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:52.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:52 compute-0 python3.9[134154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:52 compute-0 sudo[134152]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:52 compute-0 sudo[134276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thopivshrkgscuilwkqrjrevunfyafsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406032.0395613-1037-58418775861273/AnsiballZ_copy.py'
Oct 02 11:53:52 compute-0 sudo[134276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:53 compute-0 python3.9[134278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406032.0395613-1037-58418775861273/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:53 compute-0 sudo[134276]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:53 compute-0 sudo[134428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipkysfkxpkocatobzatgindpyeqadsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406033.2049303-1080-263803487674257/AnsiballZ_file.py'
Oct 02 11:53:53 compute-0 sudo[134428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:53 compute-0 python3.9[134430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:53 compute-0 sudo[134428]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:53.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:54 compute-0 sudo[134580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptfatpcyzzhedmgblcsconrueapiscbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406033.8432596-1102-237446370998661/AnsiballZ_stat.py'
Oct 02 11:53:54 compute-0 sudo[134580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:54 compute-0 python3.9[134582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:54 compute-0 sudo[134580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:54 compute-0 sudo[134704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeukuuduiwfzeewsopwmkkabcigoeueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406033.8432596-1102-237446370998661/AnsiballZ_copy.py'
Oct 02 11:53:54 compute-0 sudo[134704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:54 compute-0 python3.9[134706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406033.8432596-1102-237446370998661/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:54 compute-0 sudo[134704]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sudo[134731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134731]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sudo[134756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:55 compute-0 sudo[134756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 ceph-mon[73668]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:55 compute-0 sudo[134781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sshd-session[128538]: Connection closed by 192.168.122.30 port 36644
Oct 02 11:53:55 compute-0 sshd-session[128535]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:55 compute-0 sudo[134806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:53:55 compute-0 systemd-logind[820]: Session 45 logged out. Waiting for processes to exit.
Oct 02 11:53:55 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 02 11:53:55 compute-0 systemd[1]: session-45.scope: Consumed 24.084s CPU time.
Oct 02 11:53:55 compute-0 systemd-logind[820]: Removed session 45.
Oct 02 11:53:55 compute-0 sudo[134806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:53:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 13aed2c6-c8bf-43da-b51d-f127ab7545c9 does not exist
Oct 02 11:53:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 100f14bb-3fc2-4836-902a-ed380528d749 does not exist
Oct 02 11:53:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a678fe64-b6aa-4532-bd11-ef8507b3bbfc does not exist
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:53:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:55 compute-0 sudo[134862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134862]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sudo[134887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:55 compute-0 sudo[134887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sudo[134889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134889]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 sudo[134936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134936]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:55 compute-0 sudo[134942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-0 sudo[134942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-0 sudo[134942]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:56 compute-0 sudo[134986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:53:56 compute-0 sudo[134986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:53:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:56 compute-0 ceph-mon[73668]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.374170366 +0000 UTC m=+0.065966394 container create 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:53:56 compute-0 systemd[1]: Started libpod-conmon-6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a.scope.
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.330572827 +0000 UTC m=+0.022368765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:56.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.462139834 +0000 UTC m=+0.153935782 container init 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.469264434 +0000 UTC m=+0.161060352 container start 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.472913951 +0000 UTC m=+0.164709909 container attach 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:53:56 compute-0 xenodochial_solomon[135069]: 167 167
Oct 02 11:53:56 compute-0 systemd[1]: libpod-6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a.scope: Deactivated successfully.
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.476545718 +0000 UTC m=+0.168341646 container died 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6616567f8a73e7e10d67e904d6cd9faabd204968a621344eaae07605a7fe4d-merged.mount: Deactivated successfully.
Oct 02 11:53:56 compute-0 podman[135051]: 2025-10-02 11:53:56.519859001 +0000 UTC m=+0.211654929 container remove 6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_solomon, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:56 compute-0 systemd[1]: libpod-conmon-6d59eea12c453b3c6e652dc917c4973e6bc5430985244f51037f05aecd9eaa1a.scope: Deactivated successfully.
Oct 02 11:53:56 compute-0 podman[135091]: 2025-10-02 11:53:56.668832354 +0000 UTC m=+0.039591116 container create 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:53:56 compute-0 systemd[1]: Started libpod-conmon-26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e.scope.
Oct 02 11:53:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:56 compute-0 podman[135091]: 2025-10-02 11:53:56.650525887 +0000 UTC m=+0.021284679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:56 compute-0 podman[135091]: 2025-10-02 11:53:56.757710383 +0000 UTC m=+0.128469165 container init 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 11:53:56 compute-0 podman[135091]: 2025-10-02 11:53:56.766138384 +0000 UTC m=+0.136897146 container start 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:53:56 compute-0 podman[135091]: 2025-10-02 11:53:56.769864103 +0000 UTC m=+0.140622885 container attach 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:57 compute-0 sharp_bartik[135107]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:53:57 compute-0 sharp_bartik[135107]: --> relative data size: 1.0
Oct 02 11:53:57 compute-0 sharp_bartik[135107]: --> All data devices are unavailable
Oct 02 11:53:57 compute-0 systemd[1]: libpod-26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e.scope: Deactivated successfully.
Oct 02 11:53:57 compute-0 podman[135091]: 2025-10-02 11:53:57.604238182 +0000 UTC m=+0.974997034 container died 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa639a1123cb613e594aabca7a6c8c70d1bdad43970e0dc44f31f95389d0273-merged.mount: Deactivated successfully.
Oct 02 11:53:57 compute-0 podman[135091]: 2025-10-02 11:53:57.665583195 +0000 UTC m=+1.036341957 container remove 26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:53:57 compute-0 systemd[1]: libpod-conmon-26ac2aeee081a4ffdc5357069c4433756a98a8b8815b3cc2ecd89aa3c2e9695e.scope: Deactivated successfully.
Oct 02 11:53:57 compute-0 sudo[134986]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:57 compute-0 sudo[135134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:57 compute-0 sudo[135134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:57 compute-0 sudo[135134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:57 compute-0 sudo[135159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:57 compute-0 sudo[135159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:57 compute-0 sudo[135159]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:57 compute-0 sudo[135184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:57 compute-0 sudo[135184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:57 compute-0 sudo[135184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:57 compute-0 sudo[135209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:53:57 compute-0 sudo[135209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:58 compute-0 ceph-mon[73668]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.302070803 +0000 UTC m=+0.050095956 container create d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:53:58 compute-0 systemd[1]: Started libpod-conmon-d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190.scope.
Oct 02 11:53:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.281766909 +0000 UTC m=+0.029792102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.389798255 +0000 UTC m=+0.137823428 container init d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.396560006 +0000 UTC m=+0.144585159 container start d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:53:58 compute-0 suspicious_noyce[135290]: 167 167
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.401183817 +0000 UTC m=+0.149208990 container attach d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:53:58 compute-0 systemd[1]: libpod-d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190.scope: Deactivated successfully.
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.402417566 +0000 UTC m=+0.150442719 container died d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 11:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f03c4cec79fa6c627f06548f591358689d7d26462d344a722bd6787f5c85965b-merged.mount: Deactivated successfully.
Oct 02 11:53:58 compute-0 podman[135273]: 2025-10-02 11:53:58.447696876 +0000 UTC m=+0.195722029 container remove d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:53:58 compute-0 systemd[1]: libpod-conmon-d44868b6e1c9bb1ee1c7fae07244f634a4fe9cb46f1f34330da138fe64b0f190.scope: Deactivated successfully.
Oct 02 11:53:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000024s ======
Oct 02 11:53:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 02 11:53:58 compute-0 podman[135313]: 2025-10-02 11:53:58.627893564 +0000 UTC m=+0.056131880 container create dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:53:58 compute-0 systemd[1]: Started libpod-conmon-dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1.scope.
Oct 02 11:53:58 compute-0 podman[135313]: 2025-10-02 11:53:58.597053688 +0000 UTC m=+0.025292064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:53:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d09f2575d2bd36a7efa98c94efd35bd93320f1ef269232ff6ca82bd3449e04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d09f2575d2bd36a7efa98c94efd35bd93320f1ef269232ff6ca82bd3449e04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d09f2575d2bd36a7efa98c94efd35bd93320f1ef269232ff6ca82bd3449e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d09f2575d2bd36a7efa98c94efd35bd93320f1ef269232ff6ca82bd3449e04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:53:58 compute-0 podman[135313]: 2025-10-02 11:53:58.718659208 +0000 UTC m=+0.146897494 container init dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:53:58 compute-0 podman[135313]: 2025-10-02 11:53:58.725914281 +0000 UTC m=+0.154152567 container start dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:53:58 compute-0 podman[135313]: 2025-10-02 11:53:58.729588799 +0000 UTC m=+0.157827185 container attach dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:53:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:59 compute-0 lucid_clarke[135329]: {
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:     "1": [
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:         {
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "devices": [
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "/dev/loop3"
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             ],
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "lv_name": "ceph_lv0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "lv_size": "7511998464",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "name": "ceph_lv0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "tags": {
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.cluster_name": "ceph",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.crush_device_class": "",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.encrypted": "0",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.osd_id": "1",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.type": "block",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:                 "ceph.vdo": "0"
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             },
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "type": "block",
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:             "vg_name": "ceph_vg0"
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:         }
Oct 02 11:53:59 compute-0 lucid_clarke[135329]:     ]
Oct 02 11:53:59 compute-0 lucid_clarke[135329]: }
Oct 02 11:53:59 compute-0 systemd[1]: libpod-dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1.scope: Deactivated successfully.
Oct 02 11:53:59 compute-0 podman[135313]: 2025-10-02 11:53:59.595656303 +0000 UTC m=+1.023894629 container died dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 11:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d09f2575d2bd36a7efa98c94efd35bd93320f1ef269232ff6ca82bd3449e04-merged.mount: Deactivated successfully.
Oct 02 11:53:59 compute-0 podman[135313]: 2025-10-02 11:53:59.656732663 +0000 UTC m=+1.084970949 container remove dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_clarke, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:53:59 compute-0 systemd[1]: libpod-conmon-dd3610c16a8fe213d5c17c4446c630c2a36efbd3366b934cb31f6b22502588f1.scope: Deactivated successfully.
Oct 02 11:53:59 compute-0 sudo[135209]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:59 compute-0 sudo[135351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:59 compute-0 sudo[135351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:59 compute-0 sudo[135351]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:59 compute-0 sudo[135376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:59 compute-0 sudo[135376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:59 compute-0 sudo[135376]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:59 compute-0 sudo[135401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:59 compute-0 sudo[135401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:59 compute-0 sudo[135401]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:53:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:59 compute-0 sudo[135426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:53:59 compute-0 sudo[135426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.335669368 +0000 UTC m=+0.041446591 container create cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:54:00 compute-0 systemd[1]: Started libpod-conmon-cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1.scope.
Oct 02 11:54:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.317535528 +0000 UTC m=+0.023312771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.41569015 +0000 UTC m=+0.121467393 container init cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.424685683 +0000 UTC m=+0.130462906 container start cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.428498356 +0000 UTC m=+0.134275599 container attach cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:54:00 compute-0 thirsty_ride[135509]: 167 167
Oct 02 11:54:00 compute-0 systemd[1]: libpod-cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1.scope: Deactivated successfully.
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.43120839 +0000 UTC m=+0.136985613 container died cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:54:00 compute-0 sshd-session[135507]: Accepted publickey for zuul from 192.168.122.30 port 50566 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:54:00 compute-0 systemd-logind[820]: New session 46 of user zuul.
Oct 02 11:54:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:00 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 02 11:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bce95743bc30683653d060813cde3626a5c13f7d1a6cafe18733da92c7c33ef0-merged.mount: Deactivated successfully.
Oct 02 11:54:00 compute-0 sshd-session[135507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:54:00 compute-0 podman[135491]: 2025-10-02 11:54:00.523891264 +0000 UTC m=+0.229668487 container remove cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:54:00 compute-0 systemd[1]: libpod-conmon-cc06cc745ca9636501a844938fd79c7a9ff829cdf919175d26c5f0c9e9ed6af1.scope: Deactivated successfully.
Oct 02 11:54:00 compute-0 podman[135582]: 2025-10-02 11:54:00.700574428 +0000 UTC m=+0.061334128 container create 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:54:00 compute-0 systemd[1]: Started libpod-conmon-60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c.scope.
Oct 02 11:54:00 compute-0 podman[135582]: 2025-10-02 11:54:00.668422069 +0000 UTC m=+0.029181799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:54:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf04632d12a03fb99e40665167960c619de9a60bd5f6d59dca752c5578d0c08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf04632d12a03fb99e40665167960c619de9a60bd5f6d59dca752c5578d0c08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf04632d12a03fb99e40665167960c619de9a60bd5f6d59dca752c5578d0c08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf04632d12a03fb99e40665167960c619de9a60bd5f6d59dca752c5578d0c08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:54:00 compute-0 podman[135582]: 2025-10-02 11:54:00.798013561 +0000 UTC m=+0.158773271 container init 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:54:00 compute-0 podman[135582]: 2025-10-02 11:54:00.808379391 +0000 UTC m=+0.169139091 container start 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:54:00 compute-0 podman[135582]: 2025-10-02 11:54:00.813669194 +0000 UTC m=+0.174428914 container attach 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:54:01 compute-0 sudo[135707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhwzlpvwghoxyxauumtgwbodboncebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406040.577843-31-53344557314162/AnsiballZ_file.py'
Oct 02 11:54:01 compute-0 sudo[135707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:01 compute-0 ceph-mon[73668]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:01 compute-0 python3.9[135709]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:01 compute-0 sudo[135707]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:01 compute-0 dreamy_wing[135605]: {
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:         "osd_id": 1,
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:         "type": "bluestore"
Oct 02 11:54:01 compute-0 dreamy_wing[135605]:     }
Oct 02 11:54:01 compute-0 dreamy_wing[135605]: }
Oct 02 11:54:01 compute-0 systemd[1]: libpod-60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c.scope: Deactivated successfully.
Oct 02 11:54:01 compute-0 podman[135582]: 2025-10-02 11:54:01.751392081 +0000 UTC m=+1.112151781 container died 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:54:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf04632d12a03fb99e40665167960c619de9a60bd5f6d59dca752c5578d0c08-merged.mount: Deactivated successfully.
Oct 02 11:54:01 compute-0 podman[135582]: 2025-10-02 11:54:01.824343332 +0000 UTC m=+1.185103032 container remove 60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wing, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:54:01 compute-0 systemd[1]: libpod-conmon-60e9782d99988bb92cd3c16edd2492f4dedceb1609a95982a8dfb0c026c43a2c.scope: Deactivated successfully.
Oct 02 11:54:01 compute-0 sudo[135426]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:54:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:54:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e9c42a81-53c0-4fb6-a3b2-1feff30a833c does not exist
Oct 02 11:54:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 11ce91c1-7808-48e3-9eec-af21d2850fbe does not exist
Oct 02 11:54:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 618f56e1-9cc5-4e3b-85c5-bcec7561916c does not exist
Oct 02 11:54:01 compute-0 sudo[135889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-houxjieilglgxrulpkbkuelcgqashemt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406041.4664497-67-82857352464283/AnsiballZ_stat.py'
Oct 02 11:54:01 compute-0 sudo[135889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:01 compute-0 sudo[135891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:01 compute-0 sudo[135891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:01.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:01 compute-0 sudo[135891]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-0 sudo[135917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:54:02 compute-0 sudo[135917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:02 compute-0 sudo[135917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:02 compute-0 python3.9[135892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:02 compute-0 sudo[135889]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:02 compute-0 sudo[136063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzczdpchaxglvsotxgytyskfavenkuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406041.4664497-67-82857352464283/AnsiballZ_copy.py'
Oct 02 11:54:02 compute-0 sudo[136063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:02 compute-0 python3.9[136065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406041.4664497-67-82857352464283/.source.conf _original_basename=ceph.conf follow=False checksum=bc6368cedc2ad3c8a4bd89508113374e22439583 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:02 compute-0 sudo[136063]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:02 compute-0 ceph-mon[73668]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:03 compute-0 sudo[136215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcgpheiurjikglngvhilwhcdhyransqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406042.937829-67-20431942734013/AnsiballZ_stat.py'
Oct 02 11:54:03 compute-0 sudo[136215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:03 compute-0 python3.9[136217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:03 compute-0 sudo[136215]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:03 compute-0 sudo[136338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icxobmxrswagncydmgchlvyvaqnqffct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406042.937829-67-20431942734013/AnsiballZ_copy.py'
Oct 02 11:54:03 compute-0 sudo[136338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:03.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:04 compute-0 python3.9[136340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406042.937829-67-20431942734013/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=75f34a13e5eafe465b3328865c9fc53d2eab5578 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:04 compute-0 sudo[136338]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:04 compute-0 sshd-session[135528]: Connection closed by 192.168.122.30 port 50566
Oct 02 11:54:04 compute-0 sshd-session[135507]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:54:04 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 02 11:54:04 compute-0 systemd[1]: session-46.scope: Consumed 2.861s CPU time.
Oct 02 11:54:04 compute-0 systemd-logind[820]: Session 46 logged out. Waiting for processes to exit.
Oct 02 11:54:04 compute-0 systemd-logind[820]: Removed session 46.
Oct 02 11:54:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:04.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:05 compute-0 ceph-mon[73668]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:06 compute-0 ceph-mon[73668]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:08 compute-0 ceph-mon[73668]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:09 compute-0 sshd-session[136368]: Accepted publickey for zuul from 192.168.122.30 port 46268 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:54:09 compute-0 systemd-logind[820]: New session 47 of user zuul.
Oct 02 11:54:09 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 02 11:54:09 compute-0 sshd-session[136368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:54:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:10 compute-0 ceph-mon[73668]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:11 compute-0 python3.9[136522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:54:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:11.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:54:12 compute-0 sudo[136676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grivthybytnqjcryopcmocyuvfvjtydu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406051.5649714-67-1427850998073/AnsiballZ_file.py'
Oct 02 11:54:12 compute-0 sudo[136676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:12 compute-0 python3.9[136678]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:12 compute-0 sudo[136676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:12 compute-0 sudo[136829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzgnztnjojyqfwtxluvjaxhyumvbpmcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406052.4153368-67-178477905902886/AnsiballZ_file.py'
Oct 02 11:54:12 compute-0 sudo[136829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:12 compute-0 python3.9[136831]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:12 compute-0 sudo[136829]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:13 compute-0 ceph-mon[73668]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:13 compute-0 python3.9[136981]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000054s ======
Oct 02 11:54:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:13.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 02 11:54:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:14 compute-0 ceph-mon[73668]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:14 compute-0 sudo[137131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkpwqoepiacuryafvblwegxlrkcveaqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406053.8337698-136-199399336805684/AnsiballZ_seboolean.py'
Oct 02 11:54:14 compute-0 sudo[137131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:14 compute-0 python3.9[137133]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:54:15 compute-0 sudo[137131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:16 compute-0 sudo[137163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:16 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 02 11:54:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:16 compute-0 sudo[137163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 sudo[137163]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 sudo[137219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:16 compute-0 sudo[137219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:16 compute-0 sudo[137219]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-0 ceph-mon[73668]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:16 compute-0 sudo[137338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ainsdzfwkcpvrhvbevqphzvalyxmpyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406056.0364113-166-164786454581550/AnsiballZ_setup.py'
Oct 02 11:54:16 compute-0 sudo[137338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:16 compute-0 python3.9[137341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:54:16 compute-0 sudo[137338]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:17 compute-0 sudo[137423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aokyzpfozmgsspoawalpotxapkuvmzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406056.0364113-166-164786454581550/AnsiballZ_dnf.py'
Oct 02 11:54:17 compute-0 sudo[137423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:17 compute-0 python3.9[137425]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:54:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:18 compute-0 sudo[137423]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:19 compute-0 ceph-mon[73668]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:19 compute-0 sudo[137577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morlchntisjpohauayshetkbqllfsxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406059.0380619-202-76866302187576/AnsiballZ_systemd.py'
Oct 02 11:54:19 compute-0 sudo[137577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:19.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:20 compute-0 python3.9[137579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:54:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:20 compute-0 sudo[137577]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:20 compute-0 ceph-mon[73668]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:20 compute-0 sudo[137733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewjfmopzpzzlujjcuxdnfvtvenmntty ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406060.3247995-226-266531320267384/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 11:54:20 compute-0 sudo[137733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:20 compute-0 python3[137735]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 02 11:54:21 compute-0 sudo[137733]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:21 compute-0 sudo[137885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bssdnqgvunfuufbjornewvmtbedjijor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.2530606-253-138633451393806/AnsiballZ_file.py'
Oct 02 11:54:21 compute-0 sudo[137885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:21 compute-0 python3.9[137887]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:21 compute-0 sudo[137885]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:21.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:22 compute-0 sudo[138038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymioguufgppyooxkawnbnoteudhxqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.8978539-277-166768192691792/AnsiballZ_stat.py'
Oct 02 11:54:22 compute-0 sudo[138038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:22 compute-0 python3.9[138040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:22 compute-0 sudo[138038]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:22 compute-0 sudo[138116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibjnnthneapdyedrhldjmfboqzezhbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.8978539-277-166768192691792/AnsiballZ_file.py'
Oct 02 11:54:22 compute-0 sudo[138116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:23 compute-0 python3.9[138118]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:23 compute-0 sudo[138116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:23 compute-0 ceph-mon[73668]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:23 compute-0 sudo[138268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpjdhxmyhjhgfatypapdquotkvbjpgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406063.207671-313-213031459540780/AnsiballZ_stat.py'
Oct 02 11:54:23 compute-0 sudo[138268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:23 compute-0 python3.9[138270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:23 compute-0 sudo[138268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:23 compute-0 sudo[138346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njlczpzerxcytujvofqnehkxoxvmvutw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406063.207671-313-213031459540780/AnsiballZ_file.py'
Oct 02 11:54:23 compute-0 sudo[138346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:24 compute-0 python3.9[138348]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bs2wbulm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:24 compute-0 ceph-mon[73668]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:24 compute-0 sudo[138346]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:24 compute-0 sudo[138499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsbvljeblyzqptyocekjoqcucwfuyhej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406064.3383634-349-41133443482630/AnsiballZ_stat.py'
Oct 02 11:54:24 compute-0 sudo[138499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:24 compute-0 python3.9[138501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:24 compute-0 sudo[138499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:25 compute-0 sudo[138577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxthugrnmgabczhpkqhsirdzveabtycv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406064.3383634-349-41133443482630/AnsiballZ_file.py'
Oct 02 11:54:25 compute-0 sudo[138577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:25 compute-0 python3.9[138579]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:25 compute-0 sudo[138577]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:25 compute-0 sudo[138729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdoaewkdzogknskmtvndtqwfihxeppwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406065.4802496-388-266377043591523/AnsiballZ_command.py'
Oct 02 11:54:25 compute-0 sudo[138729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:25.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:26 compute-0 python3.9[138731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:26 compute-0 ceph-mon[73668]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:26 compute-0 sudo[138729]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:26 compute-0 sudo[138883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmidykldqnalxyqytuyvoibfdzsirza ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406066.3298225-412-233395922914006/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:54:26 compute-0 sudo[138883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:26 compute-0 python3[138885]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:54:26 compute-0 sudo[138883]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:27 compute-0 sudo[139035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yionwnhhdkwghjvydtvjrdbcgdomupyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406067.1519818-436-36552963368851/AnsiballZ_stat.py'
Oct 02 11:54:27 compute-0 sudo[139035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:27 compute-0 python3.9[139037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:27 compute-0 sudo[139035]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:28 compute-0 ceph-mon[73668]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:28 compute-0 sudo[139161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcovnmfnifjxvlrpomvrhfzkrajffbda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406067.1519818-436-36552963368851/AnsiballZ_copy.py'
Oct 02 11:54:28 compute-0 sudo[139161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:54:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:28 compute-0 python3.9[139163]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406067.1519818-436-36552963368851/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:28 compute-0 sudo[139161]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:54:28
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', '.rgw.root', 'backups', '.mgr']
Oct 02 11:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:54:28 compute-0 sudo[139313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsaywscictxrisbyguixpedbyebcfba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406068.681289-481-243831106098955/AnsiballZ_stat.py'
Oct 02 11:54:28 compute-0 sudo[139313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:29 compute-0 python3.9[139315]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:29 compute-0 sudo[139313]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:29 compute-0 sudo[139438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmsvpmiyueloyhbjelaafumjxhuutobc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406068.681289-481-243831106098955/AnsiballZ_copy.py'
Oct 02 11:54:29 compute-0 sudo[139438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:29 compute-0 python3.9[139440]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406068.681289-481-243831106098955/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:29 compute-0 sudo[139438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:30 compute-0 sudo[139590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrsuggxatnhvxgjoupewrhshtqkeiho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406069.8800125-526-122862565637367/AnsiballZ_stat.py'
Oct 02 11:54:30 compute-0 sudo[139590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:30 compute-0 python3.9[139592]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:30 compute-0 sudo[139590]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:30 compute-0 ceph-mon[73668]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:30 compute-0 sudo[139716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imkclftochqfvsfohzlxwinhypzkefwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406069.8800125-526-122862565637367/AnsiballZ_copy.py'
Oct 02 11:54:30 compute-0 sudo[139716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:30 compute-0 python3.9[139718]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406069.8800125-526-122862565637367/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:30 compute-0 sudo[139716]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:31 compute-0 sudo[139868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwsnjrmxxssmdrbqzabmqwjhoqpxtfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406071.1020427-571-56897578708137/AnsiballZ_stat.py'
Oct 02 11:54:31 compute-0 sudo[139868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:31 compute-0 python3.9[139870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:31 compute-0 sudo[139868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:31 compute-0 sudo[139993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldhxdjluasvklukvdyjddscajwcwvdtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406071.1020427-571-56897578708137/AnsiballZ_copy.py'
Oct 02 11:54:31 compute-0 sudo[139993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:32 compute-0 python3.9[139995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406071.1020427-571-56897578708137/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:32 compute-0 sudo[139993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:32 compute-0 ceph-mon[73668]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:32.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:32 compute-0 sudo[140146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzodbtiyznqvwxafmyucbwfmdzueczep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406072.3394923-616-189343527523842/AnsiballZ_stat.py'
Oct 02 11:54:32 compute-0 sudo[140146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:32 compute-0 python3.9[140148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:32 compute-0 sudo[140146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:33 compute-0 sudo[140271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjyuxhkfujgpmbxvqeezzevxyfbhckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406072.3394923-616-189343527523842/AnsiballZ_copy.py'
Oct 02 11:54:33 compute-0 sudo[140271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:33 compute-0 python3.9[140273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406072.3394923-616-189343527523842/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:33 compute-0 sudo[140271]: pam_unix(sudo:session): session closed for user root
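Annotation: each `.nft` template above lands through the same Ansible pair — a `stat` that fetches the destination's SHA-1, followed by a `copy` that rewrites the file only when the checksum differs. A minimal sketch of that idempotence check (hypothetical helper, not the module's actual code):

```python
import hashlib
from pathlib import Path

def sha1_of(path):
    """SHA-1 hex digest of a file, or None if it does not exist yet."""
    if not Path(path).exists():
        return None
    return hashlib.sha1(Path(path).read_bytes()).hexdigest()

def copy_if_changed(src, dest):
    """Copy src over dest only when contents differ; report whether we wrote."""
    if sha1_of(src) == sha1_of(dest):
        return False                      # checksums match: idempotent no-op
    Path(dest).write_bytes(Path(src).read_bytes())
    return True
```

Run twice against the same source, the second call reports no change — which is why a converged re-run of the play produces `stat` entries but no fresh `copy` writes.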
Oct 02 11:54:33 compute-0 sudo[140423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purpgbatwjnjusyqfqklbphudcnabhaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406073.621801-661-170391067958186/AnsiballZ_file.py'
Oct 02 11:54:33 compute-0 sudo[140423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:34 compute-0 python3.9[140425]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:34 compute-0 sudo[140423]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:34 compute-0 ceph-mon[73668]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:34 compute-0 sudo[140576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhqsanvtvkbhnyrpeinedkrxushflxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406074.277646-685-9792358662007/AnsiballZ_command.py'
Oct 02 11:54:34 compute-0 sudo[140576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:34 compute-0 python3.9[140578]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:34 compute-0 sudo[140576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:35 compute-0 sudo[140731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkqssoxirravvythvozqqrrmhvazlpjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406074.9864123-709-91616057044947/AnsiballZ_blockinfile.py'
Oct 02 11:54:35 compute-0 sudo[140731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:35 compute-0 python3.9[140733]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:35 compute-0 sudo[140731]: pam_unix(sudo:session): session closed for user root
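Annotation: given the module arguments logged above (`marker=# {mark} ANSIBLE MANAGED BLOCK`, `marker_begin=BEGIN`, `marker_end=END`), `blockinfile` would leave `/etc/sysconfig/nftables.conf` containing a block like the following — a reconstruction from the logged parameters, not a capture of the actual file:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

The `validate=nft -c -f %s` argument means the edited file is syntax-checked by `nft` before being moved into place, matching the separate dry-run (`nft -c -f -`) done over the concatenated fragments at 11:54:34.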
Oct 02 11:54:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:36 compute-0 ceph-mon[73668]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:36 compute-0 sudo[140906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbzfewjqjnmbhzthxpaoytvrfykjjedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406075.898837-736-99939079813726/AnsiballZ_command.py'
Oct 02 11:54:36 compute-0 sudo[140861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:36 compute-0 sudo[140906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:36 compute-0 sudo[140861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:36 compute-0 sudo[140861]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:36 compute-0 sudo[140911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:36 compute-0 sudo[140911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:36 compute-0 sudo[140911]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:36 compute-0 python3.9[140909]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:36 compute-0 sudo[140906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:36.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:36 compute-0 sudo[141087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghemsttaheuxdpopoxpdtjqeulwhphsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406076.5976183-760-259254287854167/AnsiballZ_stat.py'
Oct 02 11:54:36 compute-0 sudo[141087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:37 compute-0 python3.9[141089]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:54:37 compute-0 sudo[141087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:37 compute-0 sudo[141241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-focipacgqmgqjnhejzmuoehyenrnrcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406077.2466745-784-83732095459729/AnsiballZ_command.py'
Oct 02 11:54:37 compute-0 sudo[141241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:37 compute-0 python3.9[141243]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:37 compute-0 sudo[141241]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:38 compute-0 ceph-mon[73668]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:38 compute-0 sudo[141396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hphjeoicqksedleniogpydbejbibhbnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406077.9376752-808-40621914429993/AnsiballZ_file.py'
Oct 02 11:54:38 compute-0 sudo[141396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:38 compute-0 python3.9[141398]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:38 compute-0 sudo[141396]: pam_unix(sudo:session): session closed for user root
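Annotation: the `touch` of `edpm-rules.nft.changed` at 11:54:34, the `stat` at 11:54:37, and the `state=absent` delete just above form a change-flag handshake: rewriting the ruleset drops a marker file, the marker's presence gates the live `nft -f` reload, and the marker is cleared once the reload succeeds. A minimal sketch of the pattern, with hypothetical names:

```python
from pathlib import Path

def mark_changed(flag):
    """Drop the marker after rewriting the ruleset (the 'state=touch' step)."""
    Path(flag).touch()

def reload_if_changed(flag, reload_fn):
    """Reload only when the marker exists, then clear it (stat -> apply -> absent)."""
    flag = Path(flag)
    if not flag.exists():
        return False                  # nothing changed since the last reload
    reload_fn()                       # e.g. pipe the .nft fragments into `nft -f -`
    flag.unlink()                     # state=absent: the next run skips the reload
    return True
```

The payoff is that an unchanged ruleset never triggers a disruptive flush-and-reload on re-runs of the play.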
Oct 02 11:54:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:38.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:39 compute-0 python3.9[141549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:54:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
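Annotation: the raw `pg target` values in the autoscaler lines above are consistent with `pg_target = capacity_ratio * bias * total_target_pgs`, with `total_target_pgs` apparently 300 on this cluster. That relationship is inferred from the logged numbers, not taken from Ceph source, so treat it as an assumption:

```python
# Inferred from the log lines above (an assumption, not Ceph internals):
# raw pg target = fraction_of_space_used * pool_bias * total_target_pgs,
# where total_target_pgs appears to be 300 for this cluster.
TOTAL_TARGET_PGS = 300

def pg_target(capacity_ratio, bias, total=TOTAL_TARGET_PGS):
    """Reproduce the autoscaler's raw (pre-quantization) pg target."""
    return capacity_ratio * bias * total
```

The "quantized to" value is then a separate rounding step toward a power-of-two pg_num, which the raw target alone does not determine.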
Oct 02 11:54:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:54:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:54:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:40 compute-0 ceph-mon[73668]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:40.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:40 compute-0 sudo[141701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocxujjplwetgisqkvesbuqqusjyqxryq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406080.3122807-928-74004742826020/AnsiballZ_command.py'
Oct 02 11:54:40 compute-0 sudo[141701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:40 compute-0 python3.9[141703]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:40 compute-0 ovs-vsctl[141704]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 11:54:40 compute-0 sudo[141701]: pam_unix(sudo:session): session closed for user root
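Annotation: the `ovs-vsctl set open .` call above pushes the whole OVN chassis configuration as `external_ids:key=value` pairs on the Open_vSwitch record. A small parser sketch for that argument style (hypothetical helper, not part of OVS), useful when auditing such log lines:

```python
def parse_external_ids(argv):
    """Collect external_ids:key=value tokens from an ovs-vsctl argument list."""
    ids = {}
    prefix = "external_ids:"
    for token in argv:
        if token.startswith(prefix) and "=" in token:
            key, _, value = token[len(prefix):].partition("=")
            ids[key] = value.strip('"')   # values may be quoted, e.g. MAC mappings
    return ids
```

Applied to the logged command this yields, among others, `ovn-encap-type=geneve` and `ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642` — the two settings that determine how this chassis tunnels traffic and where it finds the southbound database.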
Oct 02 11:54:41 compute-0 sudo[141854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiytokbanfirzrsjyhdmnacxgjcbuqzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406081.014352-955-62937566774008/AnsiballZ_command.py'
Oct 02 11:54:41 compute-0 sudo[141854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:41 compute-0 python3.9[141856]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:41 compute-0 sudo[141854]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:41 compute-0 sudo[142009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drinuloysnxlvhasmpzsumdwelaqrhgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406081.694795-979-138808557250131/AnsiballZ_command.py'
Oct 02 11:54:41 compute-0 sudo[142009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:42 compute-0 python3.9[142011]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:42 compute-0 ovs-vsctl[142012]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 02 11:54:42 compute-0 sudo[142009]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:42 compute-0 ceph-mon[73668]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:42.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:42 compute-0 python3.9[142163]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:54:43 compute-0 sudo[142315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltczxktxywjgqvpvcvmzqskdzbqfvniu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.0804908-1030-44515547564683/AnsiballZ_file.py'
Oct 02 11:54:43 compute-0 sudo[142315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:43 compute-0 python3.9[142317]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:43 compute-0 sudo[142315]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:44 compute-0 sudo[142467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrogmrtmzuteqlkxookonfrpfzqpqpwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.733039-1054-58063128167130/AnsiballZ_stat.py'
Oct 02 11:54:44 compute-0 sudo[142467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:44 compute-0 ceph-mon[73668]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:44 compute-0 python3.9[142469]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:44 compute-0 sudo[142467]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:44 compute-0 sudo[142546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjfttnqngaumjuymtipavqysepivmjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.733039-1054-58063128167130/AnsiballZ_file.py'
Oct 02 11:54:44 compute-0 sudo[142546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:44.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:44 compute-0 python3.9[142548]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:44 compute-0 sudo[142546]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:45 compute-0 sudo[142698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offfocejvfiqugwwxcjfhbvgabfbdrak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406084.894481-1054-186198433313338/AnsiballZ_stat.py'
Oct 02 11:54:45 compute-0 sudo[142698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:45 compute-0 python3.9[142700]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:45 compute-0 sudo[142698]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:45 compute-0 sudo[142776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppfqywidbeqnolacsitvbwyxrfsouplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406084.894481-1054-186198433313338/AnsiballZ_file.py'
Oct 02 11:54:45 compute-0 sudo[142776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:45 compute-0 python3.9[142778]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:45 compute-0 sudo[142776]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:46.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:46 compute-0 ceph-mon[73668]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:46 compute-0 sudo[142928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvniafpzsffufoacwmqydeixzxyiyqgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.03962-1123-59336379351153/AnsiballZ_file.py'
Oct 02 11:54:46 compute-0 sudo[142928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:46 compute-0 python3.9[142930]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:46 compute-0 sudo[142928]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:46.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:46 compute-0 sudo[143081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgipwasaxjehtlcljxgtlpfcxswcfywz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.674563-1147-36077670624672/AnsiballZ_stat.py'
Oct 02 11:54:46 compute-0 sudo[143081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:47 compute-0 python3.9[143083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:47 compute-0 sudo[143081]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:47 compute-0 sudo[143159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzrvxkwhdfnkzpgjhzbmjbdpjnpgwtnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.674563-1147-36077670624672/AnsiballZ_file.py'
Oct 02 11:54:47 compute-0 sudo[143159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:47 compute-0 python3.9[143161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:47 compute-0 sudo[143159]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:48.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:48 compute-0 ceph-mon[73668]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:48 compute-0 sudo[143311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npqszdjibqbiolawhtamtnyjhupjbnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406087.8442776-1183-15605835397276/AnsiballZ_stat.py'
Oct 02 11:54:48 compute-0 sudo[143311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:48 compute-0 python3.9[143313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:48 compute-0 sudo[143311]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:48 compute-0 sudo[143390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yirxhjawmfdhwoonpysnewxgzdcmloco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406087.8442776-1183-15605835397276/AnsiballZ_file.py'
Oct 02 11:54:48 compute-0 sudo[143390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:48 compute-0 python3.9[143392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:48 compute-0 sudo[143390]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:49 compute-0 sudo[143542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbevjjtykkazhkzjuuldantkrbomqmjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406089.0744061-1219-257252247515018/AnsiballZ_systemd.py'
Oct 02 11:54:49 compute-0 sudo[143542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:49 compute-0 python3.9[143544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:54:49 compute-0 systemd[1]: Reloading.
Oct 02 11:54:49 compute-0 systemd-rc-local-generator[143568]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:54:49 compute-0 systemd-sysv-generator[143571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:54:50 compute-0 sudo[143542]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:50.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:50 compute-0 ceph-mon[73668]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:50 compute-0 sudo[143732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpqjhwktcxitoebdlraxfeazthwklnts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406090.1879218-1243-80775271140542/AnsiballZ_stat.py'
Oct 02 11:54:50 compute-0 sudo[143732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:50 compute-0 python3.9[143734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:50 compute-0 sudo[143732]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:50 compute-0 sudo[143810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwewifmfalgwfyjmqxgjhvrkohxeuiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406090.1879218-1243-80775271140542/AnsiballZ_file.py'
Oct 02 11:54:50 compute-0 sudo[143810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:51 compute-0 python3.9[143812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:51 compute-0 sudo[143810]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:51 compute-0 sudo[143962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drobchvrpsuaqqvnqitpdkentogogybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406091.4177694-1279-181310225689446/AnsiballZ_stat.py'
Oct 02 11:54:51 compute-0 sudo[143962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:51 compute-0 python3.9[143964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:51 compute-0 sudo[143962]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:52.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:52 compute-0 sudo[144040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtngkukpmuwqyfxrwqacgdciuhlpfvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406091.4177694-1279-181310225689446/AnsiballZ_file.py'
Oct 02 11:54:52 compute-0 sudo[144040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:52 compute-0 ceph-mon[73668]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:52 compute-0 python3.9[144042]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:52 compute-0 sudo[144040]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:52 compute-0 sudo[144193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpuegtfdmdljonvmcmcdqdpgujypwndz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406092.6447127-1315-261988666928455/AnsiballZ_systemd.py'
Oct 02 11:54:52 compute-0 sudo[144193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:53 compute-0 python3.9[144195]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:54:53 compute-0 systemd[1]: Reloading.
Oct 02 11:54:53 compute-0 systemd-rc-local-generator[144223]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:54:53 compute-0 systemd-sysv-generator[144226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:54:53 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:54:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:54:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:54:53 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:54:53 compute-0 sudo[144193]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:54 compute-0 ceph-mon[73668]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:54 compute-0 sudo[144388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvtlwvaympwkfvpswyrhcbmgpbiojvzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406093.9336874-1345-274046004520315/AnsiballZ_file.py'
Oct 02 11:54:54 compute-0 sudo[144388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:54 compute-0 python3.9[144390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:54 compute-0 sudo[144388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:54.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:54 compute-0 sudo[144541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqunrilnwfcitwxqoxuubzywevmlocja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406094.6767218-1369-221799803436738/AnsiballZ_stat.py'
Oct 02 11:54:54 compute-0 sudo[144541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:55 compute-0 python3.9[144543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:55 compute-0 sudo[144541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:55 compute-0 sudo[144664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfttwvthdksrmrzdyqegjiimpqcfupba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406094.6767218-1369-221799803436738/AnsiballZ_copy.py'
Oct 02 11:54:55 compute-0 sudo[144664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:55 compute-0 python3.9[144666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406094.6767218-1369-221799803436738/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:55 compute-0 sudo[144664]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:56.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:56 compute-0 ceph-mon[73668]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:56 compute-0 sudo[144696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:56 compute-0 sudo[144696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:56 compute-0 sudo[144696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:56 compute-0 sudo[144757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:56 compute-0 sudo[144757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:56 compute-0 sudo[144757]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:56 compute-0 sudo[144867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcdloulujdafpevgghhakyesaphtqfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406096.2960477-1420-217211106221107/AnsiballZ_file.py'
Oct 02 11:54:56 compute-0 sudo[144867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:56.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:56 compute-0 python3.9[144869]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:56 compute-0 sudo[144867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:57 compute-0 sudo[145019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslgkfyeyaoxjqnazkfvwbkyaifxbedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406097.020431-1444-134138018964732/AnsiballZ_stat.py'
Oct 02 11:54:57 compute-0 sudo[145019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:57 compute-0 python3.9[145021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:57 compute-0 sudo[145019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:57 compute-0 sudo[145142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbgyeqysfudsxitpsvyzwohxcfovirps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406097.020431-1444-134138018964732/AnsiballZ_copy.py'
Oct 02 11:54:57 compute-0 sudo[145142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:58 compute-0 python3.9[145144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406097.020431-1444-134138018964732/.source.json _original_basename=._k9u_4of follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:58 compute-0 sudo[145142]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:58 compute-0 ceph-mon[73668]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:54:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:54:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:54:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:54:58 compute-0 sudo[145295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daiglpfxvjztdyavmmrsugkuwbsxrlww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.3099024-1489-185106601261513/AnsiballZ_file.py'
Oct 02 11:54:58 compute-0 sudo[145295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:58 compute-0 python3.9[145297]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:58 compute-0 sudo[145295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:59 compute-0 sudo[145447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshpaumhstxfhdaxnmzqhmoppacjvkkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.9834769-1513-275551332563037/AnsiballZ_stat.py'
Oct 02 11:54:59 compute-0 sudo[145447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:59 compute-0 sudo[145447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:59 compute-0 sudo[145570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrjlkzlqptvzsmtklzlmjecnyqzppqmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.9834769-1513-275551332563037/AnsiballZ_copy.py'
Oct 02 11:54:59 compute-0 sudo[145570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:00 compute-0 sudo[145570]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:00 compute-0 ceph-mon[73668]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:00 compute-0 sudo[145723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urpsykgkrpmidahhhufocwsxbfardaol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406100.362018-1564-19714117404157/AnsiballZ_container_config_data.py'
Oct 02 11:55:00 compute-0 sudo[145723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:01 compute-0 python3.9[145725]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 11:55:01 compute-0 sudo[145723]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:01 compute-0 sudo[145875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfxrsvkwushtvkrzjnjoyvkjqcupxsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406101.3281355-1591-250989034844827/AnsiballZ_container_config_hash.py'
Oct 02 11:55:01 compute-0 sudo[145875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:02 compute-0 python3.9[145877]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:55:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:02 compute-0 sudo[145875]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:02 compute-0 ceph-mon[73668]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:02 compute-0 sudo[145932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:02 compute-0 sudo[145932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-0 sudo[145932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-0 sudo[145980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:02 compute-0 sudo[145980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-0 sudo[145980]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-0 sudo[146005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:02 compute-0 sudo[146005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-0 sudo[146005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-0 sudo[146030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:55:02 compute-0 sudo[146030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:02 compute-0 sudo[146145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgumzapwsqnfdbcofmbugonfutvvmvat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406102.3004892-1618-34682281246426/AnsiballZ_podman_container_info.py'
Oct 02 11:55:02 compute-0 sudo[146145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:55:02 compute-0 sudo[146030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:55:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:55:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:55:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:03 compute-0 python3.9[146149]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:55:03 compute-0 sudo[146150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:03 compute-0 sudo[146150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 sudo[146181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:03 compute-0 sudo[146181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146181]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 sudo[146225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:03 compute-0 sudo[146225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 sudo[146145]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 sudo[146250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:55:03 compute-0 sudo[146250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b099c070-199d-4625-ad52-6076720b5336 does not exist
Oct 02 11:55:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c459f9f4-0e60-4d9b-81a5-44ea87d45f28 does not exist
Oct 02 11:55:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e6d9bab8-e102-4648-b4e9-868579b40c06 does not exist
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:55:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:55:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:03 compute-0 sudo[146331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:03 compute-0 sudo[146331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-0 sudo[146356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:03 compute-0 sudo[146356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-0 sudo[146356]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:55:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:04 compute-0 sudo[146404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:04 compute-0 sudo[146404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:04 compute-0 sudo[146404]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:04 compute-0 sudo[146457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:55:04 compute-0 sudo[146457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.457715731 +0000 UTC m=+0.042727875 container create cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:04 compute-0 systemd[1]: Started libpod-conmon-cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa.scope.
Oct 02 11:55:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.440428544 +0000 UTC m=+0.025440698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.553742176 +0000 UTC m=+0.138754340 container init cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.563315824 +0000 UTC m=+0.148327958 container start cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 11:55:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:04.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.567551949 +0000 UTC m=+0.152564103 container attach cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 02 11:55:04 compute-0 hopeful_aryabhata[146586]: 167 167
Oct 02 11:55:04 compute-0 systemd[1]: libpod-cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa.scope: Deactivated successfully.
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.573460125 +0000 UTC m=+0.158472269 container died cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:55:04 compute-0 sudo[146615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqzqzapnwouwpzzhwdgywxeifthfsrsj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406103.9934397-1657-96945267587911/AnsiballZ_edpm_container_manage.py'
Oct 02 11:55:04 compute-0 sudo[146615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-eee959b413be3547f3fcc659586e77aee1cf309fff42f613db2581107f4be496-merged.mount: Deactivated successfully.
Oct 02 11:55:04 compute-0 podman[146546]: 2025-10-02 11:55:04.614770774 +0000 UTC m=+0.199782908 container remove cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:55:04 compute-0 systemd[1]: libpod-conmon-cf0ddb6e5fd11164f8cb2e0c5ff5ae0547d9d08f6fbef13b7ab0484e7bec08aa.scope: Deactivated successfully.
Oct 02 11:55:04 compute-0 podman[146639]: 2025-10-02 11:55:04.770468777 +0000 UTC m=+0.046503036 container create d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:04 compute-0 systemd[1]: Started libpod-conmon-d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f.scope.
Oct 02 11:55:04 compute-0 python3[146620]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:55:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:04 compute-0 podman[146639]: 2025-10-02 11:55:04.750709237 +0000 UTC m=+0.026743546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:04 compute-0 podman[146639]: 2025-10-02 11:55:04.863308284 +0000 UTC m=+0.139342553 container init d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 11:55:04 compute-0 podman[146639]: 2025-10-02 11:55:04.871223493 +0000 UTC m=+0.147257752 container start d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:04 compute-0 podman[146639]: 2025-10-02 11:55:04.874726495 +0000 UTC m=+0.150760754 container attach d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:55:05 compute-0 ceph-mon[73668]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:05 compute-0 condescending_meninsky[146656]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:55:05 compute-0 condescending_meninsky[146656]: --> relative data size: 1.0
Oct 02 11:55:05 compute-0 condescending_meninsky[146656]: --> All data devices are unavailable
Oct 02 11:55:05 compute-0 systemd[1]: libpod-d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f.scope: Deactivated successfully.
Oct 02 11:55:05 compute-0 podman[146639]: 2025-10-02 11:55:05.795240953 +0000 UTC m=+1.071275222 container died d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-80aea63bc62f84da4509050794b6a44f6428e5746d1f896e865d04ac365a7735-merged.mount: Deactivated successfully.
Oct 02 11:55:05 compute-0 podman[146639]: 2025-10-02 11:55:05.872427487 +0000 UTC m=+1.148461746 container remove d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:55:05 compute-0 systemd[1]: libpod-conmon-d49db88a5fe95fda8c240f1df93a5143119758019a5ebbb6d22297b5bcef839f.scope: Deactivated successfully.
Oct 02 11:55:05 compute-0 sudo[146457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 sudo[146728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:06 compute-0 sudo[146728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 sudo[146728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:06 compute-0 sudo[146753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:06 compute-0 sudo[146753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 sudo[146753]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 sudo[146778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:06 compute-0 sudo[146778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 sudo[146778]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:06 compute-0 ceph-mon[73668]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:06 compute-0 sudo[146803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:55:06 compute-0 sudo[146803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:08.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:08 compute-0 ceph-mon[73668]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:09 compute-0 podman[146889]: 2025-10-02 11:55:09.961432436 +0000 UTC m=+1.564237074 container create e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:55:09 compute-0 podman[146674]: 2025-10-02 11:55:09.984463422 +0000 UTC m=+5.080244041 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:09 compute-0 podman[146889]: 2025-10-02 11:55:09.932436881 +0000 UTC m=+1.535241539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:09 compute-0 systemd[1]: Started libpod-conmon-e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed.scope.
Oct 02 11:55:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:10 compute-0 podman[146889]: 2025-10-02 11:55:10.037319965 +0000 UTC m=+1.640124633 container init e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:55:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:10 compute-0 podman[146889]: 2025-10-02 11:55:10.044228808 +0000 UTC m=+1.647033436 container start e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:55:10 compute-0 compassionate_taussig[146947]: 167 167
Oct 02 11:55:10 compute-0 podman[146889]: 2025-10-02 11:55:10.049355333 +0000 UTC m=+1.652160001 container attach e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:10 compute-0 systemd[1]: libpod-e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed.scope: Deactivated successfully.
Oct 02 11:55:10 compute-0 podman[146889]: 2025-10-02 11:55:10.05116064 +0000 UTC m=+1.653965278 container died e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:55:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7436c9f3243a1a5af2c4a903ad5212c54e1360dacef46e4de3e2587b559652e-merged.mount: Deactivated successfully.
Oct 02 11:55:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:10 compute-0 podman[146889]: 2025-10-02 11:55:10.098720054 +0000 UTC m=+1.701524702 container remove e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:10 compute-0 systemd[1]: libpod-conmon-e7d76d864ea0048a6d7947a1c621270c84c40ffe30d3e3d78f1aec4cfc05d0ed.scope: Deactivated successfully.
Oct 02 11:55:10 compute-0 podman[146975]: 2025-10-02 11:55:10.14260133 +0000 UTC m=+0.056681065 container create aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 02 11:55:10 compute-0 podman[146975]: 2025-10-02 11:55:10.107806353 +0000 UTC m=+0.021886108 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:10 compute-0 python3[146620]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:10 compute-0 podman[147015]: 2025-10-02 11:55:10.249340803 +0000 UTC m=+0.038187658 container create 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:55:10 compute-0 systemd[1]: Started libpod-conmon-0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9.scope.
Oct 02 11:55:10 compute-0 sudo[146615]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d93e5ccecf5e99be8e7fb874ad0409e67faa07669fee20c69a26ea8ab22faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d93e5ccecf5e99be8e7fb874ad0409e67faa07669fee20c69a26ea8ab22faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d93e5ccecf5e99be8e7fb874ad0409e67faa07669fee20c69a26ea8ab22faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:10 compute-0 podman[147015]: 2025-10-02 11:55:10.232843218 +0000 UTC m=+0.021690093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d93e5ccecf5e99be8e7fb874ad0409e67faa07669fee20c69a26ea8ab22faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:10 compute-0 podman[147015]: 2025-10-02 11:55:10.343025032 +0000 UTC m=+0.131871897 container init 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:10 compute-0 podman[147015]: 2025-10-02 11:55:10.352258685 +0000 UTC m=+0.141105540 container start 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:10 compute-0 podman[147015]: 2025-10-02 11:55:10.356219799 +0000 UTC m=+0.145066694 container attach 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:10 compute-0 sudo[147200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvkhzdkforuwjtmvljqrbwijaxdgtuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406110.4765635-1681-116511420355387/AnsiballZ_stat.py'
Oct 02 11:55:10 compute-0 sudo[147200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:11 compute-0 python3.9[147202]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:11 compute-0 sudo[147200]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 ceph-mon[73668]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:11 compute-0 festive_yalow[147045]: {
Oct 02 11:55:11 compute-0 festive_yalow[147045]:     "1": [
Oct 02 11:55:11 compute-0 festive_yalow[147045]:         {
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "devices": [
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "/dev/loop3"
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             ],
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "lv_name": "ceph_lv0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "lv_size": "7511998464",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "name": "ceph_lv0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "tags": {
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.cluster_name": "ceph",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.crush_device_class": "",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.encrypted": "0",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.osd_id": "1",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.type": "block",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:                 "ceph.vdo": "0"
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             },
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "type": "block",
Oct 02 11:55:11 compute-0 festive_yalow[147045]:             "vg_name": "ceph_vg0"
Oct 02 11:55:11 compute-0 festive_yalow[147045]:         }
Oct 02 11:55:11 compute-0 festive_yalow[147045]:     ]
Oct 02 11:55:11 compute-0 festive_yalow[147045]: }
Oct 02 11:55:11 compute-0 systemd[1]: libpod-0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9.scope: Deactivated successfully.
Oct 02 11:55:11 compute-0 podman[147015]: 2025-10-02 11:55:11.21834134 +0000 UTC m=+1.007188215 container died 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 11:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d93e5ccecf5e99be8e7fb874ad0409e67faa07669fee20c69a26ea8ab22faa-merged.mount: Deactivated successfully.
Oct 02 11:55:11 compute-0 podman[147015]: 2025-10-02 11:55:11.282992733 +0000 UTC m=+1.071839588 container remove 0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:55:11 compute-0 systemd[1]: libpod-conmon-0b5e0f8961ed320f042cbc14cc0c1010cd171969039d986e61fa065e9e36f9b9.scope: Deactivated successfully.
Oct 02 11:55:11 compute-0 sudo[146803]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 sudo[147279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:11 compute-0 sudo[147279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:11 compute-0 sudo[147279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 sudo[147332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:11 compute-0 sudo[147332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:11 compute-0 sudo[147332]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 sudo[147376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:11 compute-0 sudo[147376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:11 compute-0 sudo[147376]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 sudo[147464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aymjkdirahmpomcpgvycjvtjjdfhnauu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406111.290136-1708-193553408242204/AnsiballZ_file.py'
Oct 02 11:55:11 compute-0 sudo[147464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:11 compute-0 sudo[147427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:55:11 compute-0 sudo[147427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:11 compute-0 python3.9[147468]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:11 compute-0 sudo[147464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.901932285 +0000 UTC m=+0.043985171 container create 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 11:55:11 compute-0 systemd[1]: Started libpod-conmon-6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4.scope.
Oct 02 11:55:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.88202243 +0000 UTC m=+0.024075346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.979668563 +0000 UTC m=+0.121721479 container init 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.988736642 +0000 UTC m=+0.130789528 container start 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.99205783 +0000 UTC m=+0.134110736 container attach 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:11 compute-0 peaceful_liskov[147568]: 167 167
Oct 02 11:55:11 compute-0 systemd[1]: libpod-6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4.scope: Deactivated successfully.
Oct 02 11:55:11 compute-0 podman[147514]: 2025-10-02 11:55:11.997181805 +0000 UTC m=+0.139234691 container died 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1058ed565a111a323a4bcbf8155f63f8a3bd744da4dd0847dde1a91de221fd22-merged.mount: Deactivated successfully.
Oct 02 11:55:12 compute-0 podman[147514]: 2025-10-02 11:55:12.036699036 +0000 UTC m=+0.178751922 container remove 6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:55:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:12 compute-0 systemd[1]: libpod-conmon-6af8767b2c9cf5ecbe8827dd4c1d92d6fffa1acd354ed6678f5dbb6cb77bf1a4.scope: Deactivated successfully.
Oct 02 11:55:12 compute-0 sudo[147615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmqfjaewgwgcbxqswnsaezwmoirnblri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406111.290136-1708-193553408242204/AnsiballZ_stat.py'
Oct 02 11:55:12 compute-0 sudo[147615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:12 compute-0 podman[147627]: 2025-10-02 11:55:12.189209015 +0000 UTC m=+0.043637671 container create 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:55:12 compute-0 ceph-mon[73668]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:12 compute-0 systemd[1]: Started libpod-conmon-4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7.scope.
Oct 02 11:55:12 compute-0 python3.9[147621]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:12 compute-0 sudo[147615]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:12 compute-0 podman[147627]: 2025-10-02 11:55:12.171827497 +0000 UTC m=+0.026256183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79dfd1535bdb89f0834b5219212be95725ab45c8dcd7fd6d73d2fa2ebcb4410e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79dfd1535bdb89f0834b5219212be95725ab45c8dcd7fd6d73d2fa2ebcb4410e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79dfd1535bdb89f0834b5219212be95725ab45c8dcd7fd6d73d2fa2ebcb4410e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79dfd1535bdb89f0834b5219212be95725ab45c8dcd7fd6d73d2fa2ebcb4410e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:12 compute-0 podman[147627]: 2025-10-02 11:55:12.291387668 +0000 UTC m=+0.145816344 container init 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:55:12 compute-0 podman[147627]: 2025-10-02 11:55:12.299004779 +0000 UTC m=+0.153433425 container start 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 11:55:12 compute-0 podman[147627]: 2025-10-02 11:55:12.302984434 +0000 UTC m=+0.157413090 container attach 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:55:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:12 compute-0 sudo[147797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejprdjqriearhirgtinjaaekwgwimxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.3258078-1708-242359410477033/AnsiballZ_copy.py'
Oct 02 11:55:12 compute-0 sudo[147797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:12 compute-0 python3.9[147799]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406112.3258078-1708-242359410477033/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:12 compute-0 sudo[147797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-0 bold_clarke[147643]: {
Oct 02 11:55:13 compute-0 bold_clarke[147643]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:55:13 compute-0 bold_clarke[147643]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:55:13 compute-0 bold_clarke[147643]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:55:13 compute-0 bold_clarke[147643]:         "osd_id": 1,
Oct 02 11:55:13 compute-0 bold_clarke[147643]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:55:13 compute-0 bold_clarke[147643]:         "type": "bluestore"
Oct 02 11:55:13 compute-0 bold_clarke[147643]:     }
Oct 02 11:55:13 compute-0 bold_clarke[147643]: }
Oct 02 11:55:13 compute-0 systemd[1]: libpod-4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7.scope: Deactivated successfully.
Oct 02 11:55:13 compute-0 podman[147627]: 2025-10-02 11:55:13.225544065 +0000 UTC m=+1.079972721 container died 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:55:13 compute-0 sudo[147889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhicrdnoggycbwbjaurgrhejoimkntkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.3258078-1708-242359410477033/AnsiballZ_systemd.py'
Oct 02 11:55:13 compute-0 sudo[147889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-79dfd1535bdb89f0834b5219212be95725ab45c8dcd7fd6d73d2fa2ebcb4410e-merged.mount: Deactivated successfully.
Oct 02 11:55:13 compute-0 podman[147627]: 2025-10-02 11:55:13.284296443 +0000 UTC m=+1.138725099 container remove 4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:55:13 compute-0 systemd[1]: libpod-conmon-4aba53eeaed5084a5ac21435f0cd2758c3514d60154b6ac3d61e47fd9f4793f7.scope: Deactivated successfully.
Oct 02 11:55:13 compute-0 sudo[147427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:55:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:55:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 09866adc-c1ae-4e64-b1f7-699ba695ac58 does not exist
Oct 02 11:55:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1ffecd7f-77f9-4c56-8fee-3c9c2a9a4b98 does not exist
Oct 02 11:55:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7cfc467f-fac0-43f9-80c9-469c192fa540 does not exist
Oct 02 11:55:13 compute-0 sudo[147903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:13 compute-0 sudo[147903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:13 compute-0 sudo[147903]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-0 sudo[147928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:55:13 compute-0 sudo[147928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:13 compute-0 sudo[147928]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-0 python3.9[147898]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:55:13 compute-0 systemd[1]: Reloading.
Oct 02 11:55:13 compute-0 systemd-rc-local-generator[147978]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:13 compute-0 systemd-sysv-generator[147984]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:13 compute-0 sudo[147889]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:14 compute-0 sudo[148062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgcjgdrqeicckobodzklwpjwixbavxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.3258078-1708-242359410477033/AnsiballZ_systemd.py'
Oct 02 11:55:14 compute-0 sudo[148062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:14 compute-0 ceph-mon[73668]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:14 compute-0 python3.9[148064]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:14 compute-0 systemd[1]: Reloading.
Oct 02 11:55:14 compute-0 systemd-rc-local-generator[148097]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:14 compute-0 systemd-sysv-generator[148101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:14 compute-0 systemd[1]: Starting ovn_controller container...
Oct 02 11:55:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8870dd9eeb06ee28b5205865615e54a877cbdb5d2afe0efdbc9a83220c511b9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68.
Oct 02 11:55:15 compute-0 podman[148107]: 2025-10-02 11:55:15.420584602 +0000 UTC m=+0.501702833 container init aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + sudo -E kolla_set_configs
Oct 02 11:55:15 compute-0 podman[148107]: 2025-10-02 11:55:15.456092758 +0000 UTC m=+0.537210969 container start aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:55:15 compute-0 edpm-start-podman-container[148107]: ovn_controller
Oct 02 11:55:15 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 11:55:15 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 11:55:15 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 11:55:15 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 11:55:15 compute-0 systemd[148153]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 11:55:15 compute-0 edpm-start-podman-container[148106]: Creating additional drop-in dependency for "ovn_controller" (aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68)
Oct 02 11:55:15 compute-0 podman[148130]: 2025-10-02 11:55:15.539768613 +0000 UTC m=+0.070690114 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 11:55:15 compute-0 systemd[1]: aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68-30441fdd12dbc9f4.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:55:15 compute-0 systemd[1]: aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68-30441fdd12dbc9f4.service: Failed with result 'exit-code'.
Oct 02 11:55:15 compute-0 systemd[1]: Reloading.
Oct 02 11:55:15 compute-0 systemd-rc-local-generator[148208]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:15 compute-0 systemd-sysv-generator[148212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:15 compute-0 systemd[148153]: Queued start job for default target Main User Target.
Oct 02 11:55:15 compute-0 systemd[148153]: Created slice User Application Slice.
Oct 02 11:55:15 compute-0 systemd[148153]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 11:55:15 compute-0 systemd[148153]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:55:15 compute-0 systemd[148153]: Reached target Paths.
Oct 02 11:55:15 compute-0 systemd[148153]: Reached target Timers.
Oct 02 11:55:15 compute-0 systemd[148153]: Starting D-Bus User Message Bus Socket...
Oct 02 11:55:15 compute-0 systemd[148153]: Starting Create User's Volatile Files and Directories...
Oct 02 11:55:15 compute-0 systemd[148153]: Finished Create User's Volatile Files and Directories.
Oct 02 11:55:15 compute-0 systemd[148153]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:55:15 compute-0 systemd[148153]: Reached target Sockets.
Oct 02 11:55:15 compute-0 systemd[148153]: Reached target Basic System.
Oct 02 11:55:15 compute-0 systemd[148153]: Reached target Main User Target.
Oct 02 11:55:15 compute-0 systemd[148153]: Startup finished in 151ms.
Oct 02 11:55:15 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 11:55:15 compute-0 systemd[1]: Started ovn_controller container.
Oct 02 11:55:15 compute-0 systemd[1]: Started Session c1 of User root.
Oct 02 11:55:15 compute-0 sudo[148062]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:15 compute-0 ovn_controller[148123]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:55:15 compute-0 ovn_controller[148123]: INFO:__main__:Validating config file
Oct 02 11:55:15 compute-0 ovn_controller[148123]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:55:15 compute-0 ovn_controller[148123]: INFO:__main__:Writing out command to execute
Oct 02 11:55:15 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 02 11:55:15 compute-0 ovn_controller[148123]: ++ cat /run_command
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + ARGS=
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + sudo kolla_copy_cacerts
Oct 02 11:55:15 compute-0 systemd[1]: Started Session c2 of User root.
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + [[ ! -n '' ]]
Oct 02 11:55:15 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + . kolla_extend_start
Oct 02 11:55:15 compute-0 ovn_controller[148123]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + umask 0022
Oct 02 11:55:15 compute-0 ovn_controller[148123]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 02 11:55:15 compute-0 ovn_controller[148123]: 2025-10-02T11:55:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:55:15 compute-0 ovn_controller[148123]: 2025-10-02T11:55:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:55:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0670] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0681] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0699] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0707] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0713] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 11:55:16 compute-0 kernel: br-int: entered promiscuous mode
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:55:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00019|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00020|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0928] manager: (ovn-bfdd72-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:16 compute-0 ovn_controller[148123]: 2025-10-02T11:55:16Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0935] manager: (ovn-db2221-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.0943] manager: (ovn-b95886-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 02 11:55:16 compute-0 systemd-udevd[148278]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:55:16 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 02 11:55:16 compute-0 systemd-udevd[148279]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.1187] device (genev_sys_6081): carrier: link connected
Oct 02 11:55:16 compute-0 NetworkManager[44981]: <info>  [1759406116.1192] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 02 11:55:16 compute-0 sudo[148386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubeqzghoqkchogspeahntxoqlkpmzrxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406116.1050165-1792-23288086631353/AnsiballZ_command.py'
Oct 02 11:55:16 compute-0 sudo[148386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:16 compute-0 sudo[148387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:16 compute-0 sudo[148387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:16 compute-0 sudo[148387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:16 compute-0 sudo[148414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:16 compute-0 sudo[148414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:16 compute-0 sudo[148414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:16 compute-0 ceph-mon[73668]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:16 compute-0 python3.9[148397]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:16 compute-0 ovs-vsctl[148439]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 11:55:16 compute-0 sudo[148386]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:17 compute-0 sudo[148589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miscllpsbffrnofmgngqcupmgksxsgxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406116.8329341-1816-17132443854691/AnsiballZ_command.py'
Oct 02 11:55:17 compute-0 sudo[148589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:17 compute-0 python3.9[148591]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:17 compute-0 ovs-vsctl[148593]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 11:55:17 compute-0 sudo[148589]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:18 compute-0 ceph-mon[73668]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:18 compute-0 sudo[148744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isyedfeqgtuefeaekhvovdbhemznjfps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406117.9406805-1858-58791405307524/AnsiballZ_command.py'
Oct 02 11:55:18 compute-0 sudo[148744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:18 compute-0 python3.9[148746]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:18 compute-0 ovs-vsctl[148748]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 11:55:18 compute-0 sudo[148744]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:18 compute-0 sshd-session[136371]: Connection closed by 192.168.122.30 port 46268
Oct 02 11:55:18 compute-0 sshd-session[136368]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:55:18 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 02 11:55:18 compute-0 systemd[1]: session-47.scope: Consumed 58.964s CPU time.
Oct 02 11:55:18 compute-0 systemd-logind[820]: Session 47 logged out. Waiting for processes to exit.
Oct 02 11:55:18 compute-0 systemd-logind[820]: Removed session 47.
Oct 02 11:55:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:20 compute-0 ceph-mon[73668]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:22.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:22 compute-0 ceph-mon[73668]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:24.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:24 compute-0 ceph-mon[73668]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:24 compute-0 sshd-session[148775]: Accepted publickey for zuul from 192.168.122.30 port 48868 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:55:24 compute-0 systemd-logind[820]: New session 49 of user zuul.
Oct 02 11:55:24 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 02 11:55:24 compute-0 sshd-session[148775]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:55:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:25 compute-0 python3.9[148929]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:55:26 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 11:55:26 compute-0 systemd[148153]: Activating special unit Exit the Session...
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped target Main User Target.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped target Basic System.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped target Paths.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped target Sockets.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped target Timers.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:55:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:26 compute-0 systemd[148153]: Closed D-Bus User Message Bus Socket.
Oct 02 11:55:26 compute-0 systemd[148153]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:55:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:26 compute-0 systemd[148153]: Removed slice User Application Slice.
Oct 02 11:55:26 compute-0 systemd[148153]: Reached target Shutdown.
Oct 02 11:55:26 compute-0 systemd[148153]: Finished Exit the Session.
Oct 02 11:55:26 compute-0 systemd[148153]: Reached target Exit the Session.
Oct 02 11:55:26 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 11:55:26 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 11:55:26 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 11:55:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:26 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 11:55:26 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 11:55:26 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 11:55:26 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 11:55:26 compute-0 ceph-mon[73668]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:26 compute-0 sudo[149087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msitxukjrryjjpyytycvcllgkxqvvybc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406125.9833481-67-133624672179704/AnsiballZ_file.py'
Oct 02 11:55:26 compute-0 sudo[149087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:26 compute-0 python3.9[149089]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:26 compute-0 sudo[149087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:27 compute-0 sudo[149239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielxwsghaldbehuppsacljllgjlglcfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406126.7681096-67-158015910352012/AnsiballZ_file.py'
Oct 02 11:55:27 compute-0 sudo[149239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:27 compute-0 python3.9[149241]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:27 compute-0 sudo[149239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:27 compute-0 sudo[149391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrmlhzwumppvgfkebktgjmdvsajwsvfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406127.367008-67-236168125200099/AnsiballZ_file.py'
Oct 02 11:55:27 compute-0 sudo[149391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:27 compute-0 python3.9[149393]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:27 compute-0 sudo[149391]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:28 compute-0 ceph-mon[73668]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:28 compute-0 sudo[149543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbzxkbpmtvicxaqqifvxprrlfxcisra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406127.9876504-67-168364272943982/AnsiballZ_file.py'
Oct 02 11:55:28 compute-0 sudo[149543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:55:28 compute-0 python3.9[149545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:28 compute-0 sudo[149543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:55:28
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'vms']
Oct 02 11:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:55:28 compute-0 sudo[149696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtalrwnzgkduqyoyzvuofatxvslkqoho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406128.6097958-67-232496759004179/AnsiballZ_file.py'
Oct 02 11:55:28 compute-0 sudo[149696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:29 compute-0 python3.9[149698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:29 compute-0 sudo[149696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:29 compute-0 python3.9[149848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:55:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:30 compute-0 ceph-mon[73668]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:30 compute-0 sudo[149999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsvfnyjqpphhoksxtknvubgpmlyqrjta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406130.0445576-199-26376485198624/AnsiballZ_seboolean.py'
Oct 02 11:55:30 compute-0 sudo[149999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:30 compute-0 python3.9[150001]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:55:31 compute-0 sudo[149999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:32 compute-0 python3.9[150151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:32 compute-0 ceph-mon[73668]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:32 compute-0 python3.9[150273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406131.523208-223-1405146835879/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:34.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:34 compute-0 ceph-mon[73668]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:34 compute-0 python3.9[150425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:35 compute-0 python3.9[150546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406134.3913057-268-196477630244311/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:36 compute-0 sudo[150696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwejjpbkzxyzpqxjnkdkliqiionlgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406135.7403133-319-122608406789179/AnsiballZ_setup.py'
Oct 02 11:55:36 compute-0 sudo[150696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:36 compute-0 ceph-mon[73668]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:36 compute-0 python3.9[150698]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:55:36 compute-0 sudo[150708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:36 compute-0 sudo[150708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:36 compute-0 sudo[150696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:36 compute-0 sudo[150708]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:36 compute-0 sudo[150733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:36 compute-0 sudo[150733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:36 compute-0 sudo[150733]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:37 compute-0 sudo[150831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ercwbzneizfuygletusnuywczzlsqgcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406135.7403133-319-122608406789179/AnsiballZ_dnf.py'
Oct 02 11:55:37 compute-0 sudo[150831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:37 compute-0 python3.9[150833]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:55:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:38.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:38 compute-0 ceph-mon[73668]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:38 compute-0 sudo[150831]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:38.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:39 compute-0 sudo[150985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhtggcqlzeqypqxycwazanxlgtqhhqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406138.690941-355-99233063638256/AnsiballZ_systemd.py'
Oct 02 11:55:39 compute-0 sudo[150985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:55:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 7925 writes, 32K keys, 7925 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7925 writes, 1514 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7925 writes, 32K keys, 7925 commit groups, 1.0 writes per commit group, ingest: 20.69 MB, 0.03 MB/s
                                           Interval WAL: 7925 writes, 1514 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:55:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:55:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:55:39 compute-0 python3.9[150987]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:55:39 compute-0 sudo[150985]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:40 compute-0 ceph-mon[73668]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:40 compute-0 python3.9[151140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:41 compute-0 python3.9[151262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406139.9619129-379-72357584042250/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:41 compute-0 python3.9[151412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:42 compute-0 ceph-mon[73668]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:42 compute-0 python3.9[151533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406141.4593892-379-276839375653875/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:42.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:43 compute-0 python3.9[151684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:44 compute-0 ceph-mon[73668]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:44 compute-0 python3.9[151805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406143.354861-511-188197578878850/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:44.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:44 compute-0 python3.9[151956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:45 compute-0 python3.9[152077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406144.5620105-511-75511527750811/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:46 compute-0 ovn_controller[148123]: 2025-10-02T11:55:46Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Oct 02 11:55:46 compute-0 ovn_controller[148123]: 2025-10-02T11:55:46Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Oct 02 11:55:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:46 compute-0 podman[152201]: 2025-10-02 11:55:46.079969841 +0000 UTC m=+0.098112947 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 11:55:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:46 compute-0 python3.9[152234]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:46 compute-0 ceph-mon[73668]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:55:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:46.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:55:46 compute-0 sudo[152406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfhddybkoqrymkxtidyvmmfsyasgixuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406146.4649343-625-117348324983397/AnsiballZ_file.py'
Oct 02 11:55:46 compute-0 sudo[152406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:46 compute-0 python3.9[152408]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:46 compute-0 sudo[152406]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:47 compute-0 sudo[152558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucbrdwzofuipedpgprmvxegoyruokygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406147.1440492-649-122250327695544/AnsiballZ_stat.py'
Oct 02 11:55:47 compute-0 sudo[152558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:47 compute-0 python3.9[152560]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:47 compute-0 sudo[152558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 11:55:47 compute-0 sudo[152636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmbifjmhvoxzchkaoqarcyqwftifwzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406147.1440492-649-122250327695544/AnsiballZ_file.py'
Oct 02 11:55:47 compute-0 sudo[152636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:48 compute-0 python3.9[152638]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:48 compute-0 sudo[152636]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:48 compute-0 ceph-mon[73668]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:48 compute-0 sudo[152789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqssfhauaicjzcqbsimjazsvinopwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406148.2764893-649-7374148929883/AnsiballZ_stat.py'
Oct 02 11:55:48 compute-0 sudo[152789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:48 compute-0 python3.9[152791]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:48 compute-0 sudo[152789]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:48 compute-0 sudo[152867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spikzylmvypvuznjpjikdwxzssbznxyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406148.2764893-649-7374148929883/AnsiballZ_file.py'
Oct 02 11:55:48 compute-0 sudo[152867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:49 compute-0 python3.9[152869]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:49 compute-0 sudo[152867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:49 compute-0 sudo[153019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veepgvqizqfyjhwxtltrtesvhcszfnwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406149.340015-718-196547176838035/AnsiballZ_file.py'
Oct 02 11:55:49 compute-0 sudo[153019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:49 compute-0 python3.9[153021]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:49 compute-0 sudo[153019]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:50 compute-0 ceph-mon[73668]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:50 compute-0 sudo[153172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fracpppkhpuhjrjhlydntgzfknesafcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406150.069266-742-61825444965381/AnsiballZ_stat.py'
Oct 02 11:55:50 compute-0 sudo[153172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:50 compute-0 python3.9[153174]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:50 compute-0 sudo[153172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:50.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:50 compute-0 sudo[153250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvugqbxnqdwpvfrltzzqibpclakklmab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406150.069266-742-61825444965381/AnsiballZ_file.py'
Oct 02 11:55:50 compute-0 sudo[153250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:51 compute-0 python3.9[153252]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:51 compute-0 sudo[153250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:51 compute-0 sudo[153402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zznnlobmvudyjaghfisfnynmmxglazvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406151.2200103-778-228067967019214/AnsiballZ_stat.py'
Oct 02 11:55:51 compute-0 sudo[153402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:51 compute-0 python3.9[153404]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:51 compute-0 sudo[153402]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:51 compute-0 sudo[153480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jscbetatmcewvfceukufnoadtwanlezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406151.2200103-778-228067967019214/AnsiballZ_file.py'
Oct 02 11:55:51 compute-0 sudo[153480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:52 compute-0 python3.9[153482]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:52 compute-0 sudo[153480]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:52 compute-0 ceph-mon[73668]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:52 compute-0 sudo[153633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxphgitblisaglwwesgknlzhhrklpzzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406152.3618743-814-6735491289437/AnsiballZ_systemd.py'
Oct 02 11:55:52 compute-0 sudo[153633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:52 compute-0 python3.9[153635]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:52 compute-0 systemd[1]: Reloading.
Oct 02 11:55:53 compute-0 systemd-rc-local-generator[153658]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:53 compute-0 systemd-sysv-generator[153664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:53 compute-0 sudo[153633]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:53 compute-0 sudo[153822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqnfbavzsdfewxpvavrgywptplebritl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406153.452675-838-264154172517313/AnsiballZ_stat.py'
Oct 02 11:55:53 compute-0 sudo[153822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:53 compute-0 python3.9[153824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:53 compute-0 sudo[153822]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:54 compute-0 sudo[153900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfrqztbzlaeatbrqbyqwccsmonxtxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406153.452675-838-264154172517313/AnsiballZ_file.py'
Oct 02 11:55:54 compute-0 sudo[153900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:54 compute-0 ceph-mon[73668]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:54 compute-0 python3.9[153902]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:54 compute-0 sudo[153900]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:54 compute-0 sudo[154053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekvezuhzhygrmerjscybvbekfdbqekmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406154.5976682-874-274079223975188/AnsiballZ_stat.py'
Oct 02 11:55:54 compute-0 sudo[154053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:55 compute-0 python3.9[154055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:55 compute-0 sudo[154053]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:55 compute-0 sudo[154131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkazgnivujoowtdzbwkaglxtjrpwzrjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406154.5976682-874-274079223975188/AnsiballZ_file.py'
Oct 02 11:55:55 compute-0 sudo[154131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:55 compute-0 python3.9[154133]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:55 compute-0 sudo[154131]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:56 compute-0 sudo[154283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumxgfoxczdczlskmkvpednnhilxwcxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406155.776996-910-229662309925343/AnsiballZ_systemd.py'
Oct 02 11:55:56 compute-0 sudo[154283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:56.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:56 compute-0 ceph-mon[73668]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:56 compute-0 python3.9[154285]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:56 compute-0 systemd[1]: Reloading.
Oct 02 11:55:56 compute-0 systemd-rc-local-generator[154312]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:56 compute-0 systemd-sysv-generator[154316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:56 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 11:55:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:55:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:55:56 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 11:55:56 compute-0 sudo[154283]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:56 compute-0 sudo[154326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:56 compute-0 sudo[154326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:56 compute-0 sudo[154326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:56 compute-0 sudo[154356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:56 compute-0 sudo[154356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:56 compute-0 sudo[154356]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:57 compute-0 sudo[154526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdrrlrtjxgezybdfsbuhyahrmzzbvqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.0926423-940-19356492955685/AnsiballZ_file.py'
Oct 02 11:55:57 compute-0 sudo[154526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:57 compute-0 python3.9[154528]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:57 compute-0 sudo[154526]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:58.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:58 compute-0 sudo[154678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzcvytvveaktksonaqbpuplccebltxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.8053536-964-264254591970318/AnsiballZ_stat.py'
Oct 02 11:55:58 compute-0 sudo[154678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:58 compute-0 ceph-mon[73668]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:58 compute-0 python3.9[154680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:58 compute-0 sudo[154678]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:55:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:55:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:55:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:55:58 compute-0 sudo[154802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gukaqtoydairkbngmpfnjggixepqwwqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.8053536-964-264254591970318/AnsiballZ_copy.py'
Oct 02 11:55:58 compute-0 sudo[154802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:58 compute-0 python3.9[154804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406157.8053536-964-264254591970318/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:58 compute-0 sudo[154802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:59 compute-0 sudo[154954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxqtvxigoeamyvwlvdijmxvqzdudyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.266618-1015-225941257112990/AnsiballZ_file.py'
Oct 02 11:55:59 compute-0 sudo[154954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:59 compute-0 python3.9[154956]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:59 compute-0 sudo[154954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:56:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:56:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:00 compute-0 ceph-mon[73668]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:00 compute-0 sudo[155106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhghtukhdkmcfpnomhqrkdwbafjbkky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.9747908-1039-29963671282947/AnsiballZ_stat.py'
Oct 02 11:56:00 compute-0 sudo[155106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:00 compute-0 python3.9[155108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:56:00 compute-0 sudo[155106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:00 compute-0 sudo[155230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrucrxzjhryqchljslnodtuzwdifyaew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.9747908-1039-29963671282947/AnsiballZ_copy.py'
Oct 02 11:56:00 compute-0 sudo[155230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:01 compute-0 python3.9[155232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406159.9747908-1039-29963671282947/.source.json _original_basename=.f5mfl1kf follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:01 compute-0 sudo[155230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:01 compute-0 sudo[155382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzimqsiroppawwjcitnvorkduaohwyby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406161.2509692-1084-123020597898944/AnsiballZ_file.py'
Oct 02 11:56:01 compute-0 sudo[155382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:01 compute-0 python3.9[155384]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:01 compute-0 sudo[155382]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:56:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:56:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:02 compute-0 ceph-mon[73668]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:02 compute-0 sudo[155534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhdrcfqwhxmvjjmgrcnulcxuqehxsdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406162.026613-1108-148638602043762/AnsiballZ_stat.py'
Oct 02 11:56:02 compute-0 sudo[155534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:02 compute-0 sudo[155534]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:02 compute-0 sudo[155658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nndnotrrozedmkoglftqtfarxzoqwkmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406162.026613-1108-148638602043762/AnsiballZ_copy.py'
Oct 02 11:56:02 compute-0 sudo[155658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:03 compute-0 sudo[155658]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:03 compute-0 sudo[155810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxkwwozeaiaskxadyazfyjkwmwnxonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406163.4875364-1159-152126122595328/AnsiballZ_container_config_data.py'
Oct 02 11:56:03 compute-0 sudo[155810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:04 compute-0 python3.9[155812]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 02 11:56:04 compute-0 sudo[155810]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:04 compute-0 ceph-mon[73668]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:04 compute-0 sudo[155963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybhxuvkgudcdksmjwvaelbjjcfuzkfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406164.517143-1186-26768823221569/AnsiballZ_container_config_hash.py'
Oct 02 11:56:04 compute-0 sudo[155963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:05 compute-0 python3.9[155965]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:56:05 compute-0 sudo[155963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:05 compute-0 sudo[156115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knrcesvrkvgcthimpffjggxxihvkksiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406165.4659626-1213-94167590809586/AnsiballZ_podman_container_info.py'
Oct 02 11:56:05 compute-0 sudo[156115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:06 compute-0 python3.9[156117]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:56:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:06 compute-0 ceph-mon[73668]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:06 compute-0 sudo[156115]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:56:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:56:07 compute-0 sudo[156295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcccwcrzswjtitftygqwxomoutgrqul ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406167.0645258-1252-279464342583839/AnsiballZ_edpm_container_manage.py'
Oct 02 11:56:07 compute-0 sudo[156295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:07 compute-0 python3[156297]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:56:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:08 compute-0 ceph-mon[73668]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:08.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:10.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:12 compute-0 ceph-mon[73668]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:13 compute-0 ceph-mon[73668]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-0 sudo[156378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:14 compute-0 sudo[156378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-0 sudo[156378]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-0 sudo[156403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:14 compute-0 sudo[156403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-0 sudo[156403]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-0 sudo[156429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:14 compute-0 sudo[156429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-0 sudo[156429]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-0 sudo[156454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:56:14 compute-0 sudo[156454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:14 compute-0 ceph-mon[73668]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:16 compute-0 sudo[156454]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:16 compute-0 podman[156310]: 2025-10-02 11:56:16.15097088 +0000 UTC m=+8.292438677 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 11:56:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:16 compute-0 podman[156572]: 2025-10-02 11:56:16.325908335 +0000 UTC m=+0.063705494 container create 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 11:56:16 compute-0 podman[156572]: 2025-10-02 11:56:16.290961855 +0000 UTC m=+0.028758994 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:16 compute-0 python3[156297]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.403198) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176405264, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1592, "num_deletes": 251, "total_data_size": 2967050, "memory_usage": 3009456, "flush_reason": "Manual Compaction"}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 02 11:56:16 compute-0 ceph-mon[73668]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176422113, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2913291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10676, "largest_seqno": 12266, "table_properties": {"data_size": 2905944, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14364, "raw_average_key_size": 19, "raw_value_size": 2891373, "raw_average_value_size": 3901, "num_data_blocks": 199, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406005, "oldest_key_time": 1759406005, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18960 microseconds, and 6570 cpu microseconds.
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.422168) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2913291 bytes OK
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.422189) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.423651) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.423667) EVENT_LOG_v1 {"time_micros": 1759406176423662, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.423691) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2960467, prev total WAL file size 2960467, number of live WAL files 2.
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.425670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2845KB)], [26(7526KB)]
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176425735, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10620721, "oldest_snapshot_seqno": -1}
Oct 02 11:56:16 compute-0 podman[156585]: 2025-10-02 11:56:16.444983327 +0000 UTC m=+0.097275997 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3967 keys, 8437614 bytes, temperature: kUnknown
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176462833, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8437614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8408256, "index_size": 18368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96169, "raw_average_key_size": 24, "raw_value_size": 8333704, "raw_average_value_size": 2100, "num_data_blocks": 794, "num_entries": 3967, "num_filter_entries": 3967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.463165) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8437614 bytes
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.464598) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 285.4 rd, 226.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.9) OK, records in: 4484, records dropped: 517 output_compression: NoCompression
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.464622) EVENT_LOG_v1 {"time_micros": 1759406176464612, "job": 10, "event": "compaction_finished", "compaction_time_micros": 37217, "compaction_time_cpu_micros": 19859, "output_level": 6, "num_output_files": 1, "total_output_size": 8437614, "num_input_records": 4484, "num_output_records": 3967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176465311, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176488411, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.425350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.488502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.488509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.488511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.488513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:56:16.488514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-0 sudo[156295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:16 compute-0 sudo[156658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:16 compute-0 sudo[156658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:16 compute-0 sudo[156658]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:17 compute-0 sudo[156683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:17 compute-0 sudo[156683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:17 compute-0 sudo[156683]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 11:56:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 02 11:56:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 02 11:56:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 11:56:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 11:56:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f65b809f-2bb4-41cc-952e-7e1d2febac62 does not exist
Oct 02 11:56:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f40d9d8d-942b-4f8a-885f-f916db6044ff does not exist
Oct 02 11:56:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 41efb551-2a8e-40fb-a8cb-cf6443f6b437 does not exist
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:56:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-0 ceph-mon[73668]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:56:18 compute-0 sudo[156709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:18 compute-0 sudo[156709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:18 compute-0 sudo[156709]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:18 compute-0 sudo[156734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:18 compute-0 sudo[156734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:18 compute-0 sudo[156734]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:18 compute-0 sudo[156759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:18 compute-0 sudo[156759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:18 compute-0 sudo[156759]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:18 compute-0 sudo[156784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:56:18 compute-0 sudo[156784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.198820737 +0000 UTC m=+0.043427559 container create 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:56:19 compute-0 systemd[1]: Started libpod-conmon-8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d.scope.
Oct 02 11:56:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.181141171 +0000 UTC m=+0.025748023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.290199284 +0000 UTC m=+0.134806156 container init 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.298818636 +0000 UTC m=+0.143425498 container start 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.303664226 +0000 UTC m=+0.148271068 container attach 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:19 compute-0 stoic_elbakyan[156867]: 167 167
Oct 02 11:56:19 compute-0 systemd[1]: libpod-8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d.scope: Deactivated successfully.
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.306866522 +0000 UTC m=+0.151473344 container died 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-db84d74a9f83e5efdb74157687d154d51df4e8d7fcc205a27295f971710752cc-merged.mount: Deactivated successfully.
Oct 02 11:56:19 compute-0 podman[156850]: 2025-10-02 11:56:19.36629014 +0000 UTC m=+0.210896982 container remove 8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:56:19 compute-0 systemd[1]: libpod-conmon-8acff736f3dcae373f502582cfb4994f1c158f2de4ef6d10f42832b2cca1974d.scope: Deactivated successfully.
Oct 02 11:56:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:19 compute-0 podman[156981]: 2025-10-02 11:56:19.56042022 +0000 UTC m=+0.055088922 container create 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:56:19 compute-0 sudo[157028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lackfxjpmkrqapicvmxkpafrasyptkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406179.292882-1276-158959374202002/AnsiballZ_stat.py'
Oct 02 11:56:19 compute-0 sudo[157028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:19 compute-0 systemd[1]: Started libpod-conmon-337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f.scope.
Oct 02 11:56:19 compute-0 podman[156981]: 2025-10-02 11:56:19.537127504 +0000 UTC m=+0.031796256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:19 compute-0 podman[156981]: 2025-10-02 11:56:19.662793413 +0000 UTC m=+0.157462145 container init 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:19 compute-0 podman[156981]: 2025-10-02 11:56:19.673756448 +0000 UTC m=+0.168425150 container start 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:56:19 compute-0 podman[156981]: 2025-10-02 11:56:19.677810657 +0000 UTC m=+0.172479399 container attach 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:56:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:56:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:56:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:19 compute-0 python3.9[157030]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:56:19 compute-0 sudo[157028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:20 compute-0 sudo[157196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acthkquvryduyqbskuwurnyrveiatzui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406180.221126-1303-265283657223022/AnsiballZ_file.py'
Oct 02 11:56:20 compute-0 sudo[157196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:20 compute-0 confident_khayyam[157033]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:56:20 compute-0 confident_khayyam[157033]: --> relative data size: 1.0
Oct 02 11:56:20 compute-0 confident_khayyam[157033]: --> All data devices are unavailable
Oct 02 11:56:20 compute-0 systemd[1]: libpod-337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f.scope: Deactivated successfully.
Oct 02 11:56:20 compute-0 podman[156981]: 2025-10-02 11:56:20.608835333 +0000 UTC m=+1.103504025 container died 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-99c672813291cc847c750d26b595f8c34edcba5695cb97faf46c7b04b49cd348-merged.mount: Deactivated successfully.
Oct 02 11:56:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:20 compute-0 ceph-mon[73668]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:20 compute-0 podman[156981]: 2025-10-02 11:56:20.759492274 +0000 UTC m=+1.254160966 container remove 337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:56:20 compute-0 systemd[1]: libpod-conmon-337da4ad94af39f6e94d90f54b4b4b108d162558c0d78555c268fd569775ab8f.scope: Deactivated successfully.
Oct 02 11:56:20 compute-0 sudo[156784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-0 python3.9[157201]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:20 compute-0 sudo[157196]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-0 sudo[157217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:20 compute-0 sudo[157217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:20 compute-0 sudo[157217]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-0 sudo[157254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:20 compute-0 sudo[157254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:20 compute-0 sudo[157254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-0 sudo[157291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:21 compute-0 sudo[157291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:21 compute-0 sudo[157291]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:21 compute-0 sudo[157339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:56:21 compute-0 sudo[157339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:21 compute-0 sudo[157388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bclehmeggfozeemyindahptqscbvkbbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406180.221126-1303-265283657223022/AnsiballZ_stat.py'
Oct 02 11:56:21 compute-0 sudo[157388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:21 compute-0 python3.9[157392]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:56:21 compute-0 sudo[157388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.394312683 +0000 UTC m=+0.044041135 container create f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:21 compute-0 systemd[1]: Started libpod-conmon-f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad.scope.
Oct 02 11:56:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.37337294 +0000 UTC m=+0.023101392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.475872997 +0000 UTC m=+0.125601469 container init f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.484223391 +0000 UTC m=+0.133951843 container start f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.487249373 +0000 UTC m=+0.136977855 container attach f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:56:21 compute-0 relaxed_mirzakhani[157501]: 167 167
Oct 02 11:56:21 compute-0 systemd[1]: libpod-f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad.scope: Deactivated successfully.
Oct 02 11:56:21 compute-0 conmon[157501]: conmon f0e21c5103a6eb8a9b29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad.scope/container/memory.events
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.491651621 +0000 UTC m=+0.141380073 container died f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a902d11adc9d0a1737c55340b0790217427865abd9e71dbbba375eff5d71bb44-merged.mount: Deactivated successfully.
Oct 02 11:56:21 compute-0 podman[157439]: 2025-10-02 11:56:21.527411423 +0000 UTC m=+0.177139875 container remove f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 11:56:21 compute-0 systemd[1]: libpod-conmon-f0e21c5103a6eb8a9b292bab0e0631a290da97ee8075824650c3b3c9149201ad.scope: Deactivated successfully.
Oct 02 11:56:21 compute-0 podman[157571]: 2025-10-02 11:56:21.747313896 +0000 UTC m=+0.098045068 container create d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:56:21 compute-0 podman[157571]: 2025-10-02 11:56:21.670926612 +0000 UTC m=+0.021657804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:21 compute-0 sudo[157635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uivpaxrwwjgzhchjleyznxcxwpmshfna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.3673458-1303-214719142957857/AnsiballZ_copy.py'
Oct 02 11:56:21 compute-0 sudo[157635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:21 compute-0 systemd[1]: Started libpod-conmon-d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0.scope.
Oct 02 11:56:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86ba606fe435abfa954c894383dc52ed3124a563ba6ce87c154aee8c23668eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86ba606fe435abfa954c894383dc52ed3124a563ba6ce87c154aee8c23668eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86ba606fe435abfa954c894383dc52ed3124a563ba6ce87c154aee8c23668eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86ba606fe435abfa954c894383dc52ed3124a563ba6ce87c154aee8c23668eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:22 compute-0 python3.9[157637]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406181.3673458-1303-214719142957857/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:22 compute-0 podman[157571]: 2025-10-02 11:56:22.080329161 +0000 UTC m=+0.431060363 container init d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 11:56:22 compute-0 podman[157571]: 2025-10-02 11:56:22.09144986 +0000 UTC m=+0.442181032 container start d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:56:22 compute-0 sudo[157635]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:22 compute-0 podman[157571]: 2025-10-02 11:56:22.129983856 +0000 UTC m=+0.480715048 container attach d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:56:22 compute-0 ceph-mon[73668]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:22 compute-0 sudo[157719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myipyvfzacnmibxsqstzrpocnwhkmimv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.3673458-1303-214719142957857/AnsiballZ_systemd.py'
Oct 02 11:56:22 compute-0 sudo[157719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:22 compute-0 python3.9[157721]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:56:22 compute-0 systemd[1]: Reloading.
Oct 02 11:56:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:22 compute-0 systemd-rc-local-generator[157752]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:22 compute-0 systemd-sysv-generator[157756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]: {
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:     "1": [
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:         {
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "devices": [
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "/dev/loop3"
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             ],
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "lv_name": "ceph_lv0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "lv_size": "7511998464",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "name": "ceph_lv0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "tags": {
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.cluster_name": "ceph",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.crush_device_class": "",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.encrypted": "0",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.osd_id": "1",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.type": "block",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:                 "ceph.vdo": "0"
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             },
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "type": "block",
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:             "vg_name": "ceph_vg0"
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:         }
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]:     ]
Oct 02 11:56:22 compute-0 romantic_ritchie[157640]: }
Oct 02 11:56:22 compute-0 podman[157571]: 2025-10-02 11:56:22.913992149 +0000 UTC m=+1.264723391 container died d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:56:23 compute-0 systemd[1]: libpod-d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0.scope: Deactivated successfully.
Oct 02 11:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86ba606fe435abfa954c894383dc52ed3124a563ba6ce87c154aee8c23668eb-merged.mount: Deactivated successfully.
Oct 02 11:56:23 compute-0 sudo[157719]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:23 compute-0 podman[157571]: 2025-10-02 11:56:23.068063542 +0000 UTC m=+1.418794744 container remove d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:56:23 compute-0 systemd[1]: libpod-conmon-d3962dd59545d3a00fee8dcb90be11f7635951f78f40c378b4aa8b8cd594fbd0.scope: Deactivated successfully.
Oct 02 11:56:23 compute-0 sudo[157339]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:23 compute-0 sudo[157781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:23 compute-0 sudo[157781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:23 compute-0 sudo[157781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:23 compute-0 sudo[157823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:23 compute-0 sudo[157823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:23 compute-0 sudo[157823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:23 compute-0 sudo[157921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqczwhvmajdirqmkwwbajadinibpmslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.3673458-1303-214719142957857/AnsiballZ_systemd.py'
Oct 02 11:56:23 compute-0 sudo[157873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:23 compute-0 sudo[157921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:23 compute-0 sudo[157873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:23 compute-0 sudo[157873]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:23 compute-0 sudo[157926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:56:23 compute-0 sudo[157926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:23 compute-0 python3.9[157924]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:23 compute-0 systemd[1]: Reloading.
Oct 02 11:56:23 compute-0 podman[157991]: 2025-10-02 11:56:23.719128239 +0000 UTC m=+0.044150018 container create 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:56:23 compute-0 systemd-sysv-generator[158036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:23 compute-0 systemd-rc-local-generator[158033]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:23 compute-0 podman[157991]: 2025-10-02 11:56:23.701222988 +0000 UTC m=+0.026244787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:23 compute-0 systemd[1]: Started libpod-conmon-41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09.scope.
Oct 02 11:56:23 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 02 11:56:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:24 compute-0 podman[157991]: 2025-10-02 11:56:24.019393574 +0000 UTC m=+0.344415363 container init 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:56:24 compute-0 podman[157991]: 2025-10-02 11:56:24.031658994 +0000 UTC m=+0.356680773 container start 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:56:24 compute-0 podman[157991]: 2025-10-02 11:56:24.036237557 +0000 UTC m=+0.361259356 container attach 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:56:24 compute-0 crazy_lamport[158045]: 167 167
Oct 02 11:56:24 compute-0 systemd[1]: libpod-41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09.scope: Deactivated successfully.
Oct 02 11:56:24 compute-0 podman[157991]: 2025-10-02 11:56:24.043871592 +0000 UTC m=+0.368893371 container died 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:56:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-56dc91c2713c94c01a89258eaf206e516803eea25aec0bf2b0737de5e3a32b0b-merged.mount: Deactivated successfully.
Oct 02 11:56:24 compute-0 podman[157991]: 2025-10-02 11:56:24.090253419 +0000 UTC m=+0.415275198 container remove 41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:56:24 compute-0 systemd[1]: libpod-conmon-41b47f235d447f7ef20aa9d0fb30dfc323889cf15bb1215b788c90586a2f7d09.scope: Deactivated successfully.
Oct 02 11:56:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc2ed1b9cf9641a72085a311cc7c706c9e228b13341ff6cc07eb0a44679f1a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc2ed1b9cf9641a72085a311cc7c706c9e228b13341ff6cc07eb0a44679f1a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012.
Oct 02 11:56:24 compute-0 podman[158048]: 2025-10-02 11:56:24.289361333 +0000 UTC m=+0.277114872 container init 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 11:56:24 compute-0 ceph-mon[73668]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + sudo -E kolla_set_configs
Oct 02 11:56:24 compute-0 podman[158088]: 2025-10-02 11:56:24.317396677 +0000 UTC m=+0.104502571 container create 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:56:24 compute-0 podman[158048]: 2025-10-02 11:56:24.320325396 +0000 UTC m=+0.308078915 container start 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 11:56:24 compute-0 edpm-start-podman-container[158048]: ovn_metadata_agent
Oct 02 11:56:24 compute-0 podman[158088]: 2025-10-02 11:56:24.240555331 +0000 UTC m=+0.027661255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:56:24 compute-0 systemd[1]: Started libpod-conmon-19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01.scope.
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Validating config file
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Copying service configuration files
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Writing out command to execute
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: ++ cat /run_command
Oct 02 11:56:24 compute-0 podman[158106]: 2025-10-02 11:56:24.397350677 +0000 UTC m=+0.064535726 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + CMD=neutron-ovn-metadata-agent
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + ARGS=
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + sudo kolla_copy_cacerts
Oct 02 11:56:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623fbf6eb9ddacb4355901900afbb5e7e17dda0a6b4611549f04e6b3d57fc956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623fbf6eb9ddacb4355901900afbb5e7e17dda0a6b4611549f04e6b3d57fc956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623fbf6eb9ddacb4355901900afbb5e7e17dda0a6b4611549f04e6b3d57fc956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623fbf6eb9ddacb4355901900afbb5e7e17dda0a6b4611549f04e6b3d57fc956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: Running command: 'neutron-ovn-metadata-agent'
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + [[ ! -n '' ]]
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + . kolla_extend_start
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + umask 0022
Oct 02 11:56:24 compute-0 ovn_metadata_agent[158078]: + exec neutron-ovn-metadata-agent
Oct 02 11:56:24 compute-0 podman[158088]: 2025-10-02 11:56:24.432838481 +0000 UTC m=+0.219944395 container init 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 11:56:24 compute-0 podman[158088]: 2025-10-02 11:56:24.441988167 +0000 UTC m=+0.229094071 container start 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 11:56:24 compute-0 podman[158088]: 2025-10-02 11:56:24.446064767 +0000 UTC m=+0.233170661 container attach 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:56:24 compute-0 edpm-start-podman-container[158044]: Creating additional drop-in dependency for "ovn_metadata_agent" (17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012)
Oct 02 11:56:24 compute-0 systemd[1]: Reloading.
Oct 02 11:56:24 compute-0 systemd-sysv-generator[158184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:24 compute-0 systemd-rc-local-generator[158180]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:24.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:24 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 02 11:56:24 compute-0 sudo[157921]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]: {
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:         "osd_id": 1,
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:         "type": "bluestore"
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]:     }
Oct 02 11:56:25 compute-0 crazy_lamarr[158133]: }
Oct 02 11:56:25 compute-0 sshd-session[148778]: Connection closed by 192.168.122.30 port 48868
Oct 02 11:56:25 compute-0 sshd-session[148775]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:56:25 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 02 11:56:25 compute-0 systemd[1]: session-49.scope: Consumed 57.620s CPU time.
Oct 02 11:56:25 compute-0 systemd-logind[820]: Session 49 logged out. Waiting for processes to exit.
Oct 02 11:56:25 compute-0 systemd-logind[820]: Removed session 49.
Oct 02 11:56:25 compute-0 systemd[1]: libpod-19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01.scope: Deactivated successfully.
Oct 02 11:56:25 compute-0 podman[158088]: 2025-10-02 11:56:25.390376669 +0000 UTC m=+1.177482583 container died 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 11:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-623fbf6eb9ddacb4355901900afbb5e7e17dda0a6b4611549f04e6b3d57fc956-merged.mount: Deactivated successfully.
Oct 02 11:56:25 compute-0 podman[158088]: 2025-10-02 11:56:25.457556556 +0000 UTC m=+1.244662450 container remove 19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:56:25 compute-0 systemd[1]: libpod-conmon-19e24c710101b73f3504e50f36c404a21d05080f886bf97e82346f7e15fa5b01.scope: Deactivated successfully.
Oct 02 11:56:25 compute-0 sudo[157926]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:56:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:56:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5c34d040-7a71-4b2b-a23e-e8031a4c6c0d does not exist
Oct 02 11:56:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0d676418-84c3-41d1-bfc3-66574ab1a035 does not exist
Oct 02 11:56:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c2cd03ff-34dd-4aeb-a87c-93445558fe1c does not exist
Oct 02 11:56:25 compute-0 sudo[158245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:25 compute-0 sudo[158245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:25 compute-0 sudo[158245]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:25 compute-0 sudo[158270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:56:25 compute-0 sudo[158270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:25 compute-0 sudo[158270]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.388 158104 INFO neutron.common.config [-] Logging enabled!
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.388 158104 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.388 158104 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.389 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.389 158104 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.389 158104 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.389 158104 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.389 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.390 158104 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.391 158104 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.392 158104 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.393 158104 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.394 158104 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.395 158104 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.396 158104 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.397 158104 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.398 158104 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.399 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.400 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.401 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.402 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.403 158104 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.404 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.405 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.406 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.407 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.408 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.409 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.410 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.411 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.412 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.413 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.414 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.415 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.416 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.417 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.418 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.419 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.420 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.421 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.422 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.422 158104 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.422 158104 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.461 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.462 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.462 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.462 158104 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.462 158104 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.476 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 17f11839-42bc-4ba9-92b4-53d0d88b0404 (UUID: 17f11839-42bc-4ba9-92b4-53d0d88b0404) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.499 158104 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.499 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.499 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.499 158104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.502 158104 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.507 158104 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:56:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:26 compute-0 ceph-mon[73668]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.524 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '17f11839-42bc-4ba9-92b4-53d0d88b0404'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], external_ids={}, name=17f11839-42bc-4ba9-92b4-53d0d88b0404, nb_cfg_timestamp=1759406124089, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.525 158104 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f66b80ada00>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.527 158104 INFO oslo_service.service [-] Starting 1 workers
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.533 158104 DEBUG oslo_service.service [-] Started child 158296 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.536 158296 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-242358'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.537 158104 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgpd8l8g8/privsep.sock']
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.555 158296 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.556 158296 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.556 158296 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.559 158296 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.565 158296 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:26.572 158296 INFO eventlet.wsgi.server [-] (158296) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 02 11:56:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:26.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:27 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.233 158104 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.235 158104 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgpd8l8g8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.108 158301 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.113 158301 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.115 158301 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.116 158301 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158301
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.238 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a721247d-fbc0-43e1-bb6c-a7d8ad6b09cf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.793 158301 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.794 158301 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:56:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:27.794 158301 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:28.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:28 compute-0 ceph-mon[73668]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.369 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0897ea17-0bf6-450b-a4e5-08fe33314fb9]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.372 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, column=external_ids, values=({'neutron:ovn-metadata-id': 'c28a30b4-71e8-5790-a88a-717ec5493ef2'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.381 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.388 158104 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.389 158104 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.390 158104 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.391 158104 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.392 158104 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.393 158104 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.394 158104 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.395 158104 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.396 158104 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.396 158104 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.396 158104 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.396 158104 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.396 158104 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.397 158104 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.398 158104 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.399 158104 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.400 158104 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.401 158104 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.402 158104 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.403 158104 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.404 158104 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.405 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.406 158104 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.407 158104 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.408 158104 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.409 158104 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.410 158104 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.411 158104 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.412 158104 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.413 158104 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.414 158104 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.415 158104 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.416 158104 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.417 158104 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.418 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.419 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.420 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.421 158104 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:56:28.422 158104 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:56:28
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.meta']
Oct 02 11:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:56:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:30 compute-0 ceph-mon[73668]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:30 compute-0 sshd-session[158308]: Accepted publickey for zuul from 192.168.122.30 port 43406 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:56:30 compute-0 systemd-logind[820]: New session 50 of user zuul.
Oct 02 11:56:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:30 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 02 11:56:30 compute-0 sshd-session[158308]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:56:31 compute-0 python3.9[158461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:56:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:32 compute-0 ceph-mon[73668]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:32 compute-0 sudo[158616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvfetvprqzyvrkghnfqxsthvfohdfgsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406192.3875515-67-139536453572342/AnsiballZ_command.py'
Oct 02 11:56:32 compute-0 sudo[158616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:33 compute-0 python3.9[158618]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:56:33 compute-0 sudo[158616]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:34 compute-0 sudo[158781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dglmbknscsixcqqzwkayiinjvbpiotgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406193.4749165-100-159075396199705/AnsiballZ_systemd_service.py'
Oct 02 11:56:34 compute-0 sudo[158781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:34.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:34 compute-0 ceph-mon[73668]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:34 compute-0 python3.9[158783]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:56:34 compute-0 systemd[1]: Reloading.
Oct 02 11:56:34 compute-0 systemd-rc-local-generator[158807]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:34 compute-0 systemd-sysv-generator[158814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:34 compute-0 sudo[158781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:35 compute-0 python3.9[158969]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:56:35 compute-0 network[158986]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:56:35 compute-0 network[158987]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:56:35 compute-0 network[158988]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:56:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:36.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:36 compute-0 ceph-mon[73668]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:37 compute-0 sudo[159034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:37 compute-0 sudo[159034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:37 compute-0 sudo[159034]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:37 compute-0 sudo[159063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:37 compute-0 sudo[159063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:37 compute-0 sudo[159063]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:38 compute-0 ceph-mon[73668]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:56:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:56:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:40 compute-0 ceph-mon[73668]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:40.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:41 compute-0 sudo[159304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krznwudircddlclhwnhaerztvefksozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406201.2381907-157-261671891592402/AnsiballZ_systemd_service.py'
Oct 02 11:56:41 compute-0 sudo[159304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:42 compute-0 python3.9[159306]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:42 compute-0 sudo[159304]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:42.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:42 compute-0 ceph-mon[73668]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:42 compute-0 sudo[159458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vocyrhjgggixhkipdwrxcyaeyrqbzobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406202.2802675-157-84892810899738/AnsiballZ_systemd_service.py'
Oct 02 11:56:42 compute-0 sudo[159458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:42.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:42 compute-0 python3.9[159460]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:42 compute-0 sudo[159458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:43 compute-0 sudo[159611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yylcymvrwoizkabembwfpiclheoacmfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406203.1676688-157-121736166320171/AnsiballZ_systemd_service.py'
Oct 02 11:56:43 compute-0 sudo[159611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:43 compute-0 python3.9[159613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:43 compute-0 sudo[159611]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:44.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:44 compute-0 ceph-mon[73668]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-0 sudo[159765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czrssktlefzqpjbdqwwetpvwslobscfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406204.0140083-157-42980941915684/AnsiballZ_systemd_service.py'
Oct 02 11:56:44 compute-0 sudo[159765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:44 compute-0 python3.9[159767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:44 compute-0 sudo[159765]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-0 sudo[159918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igyifndidqapqaqqgmidfjxzpvtalfau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406204.9373238-157-48608585589935/AnsiballZ_systemd_service.py'
Oct 02 11:56:45 compute-0 sudo[159918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:45 compute-0 python3.9[159920]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:45 compute-0 sudo[159918]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:46 compute-0 sudo[160071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxkeudrdxftwcrsxdwwwxirhxfovtgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406205.8450034-157-231706759781814/AnsiballZ_systemd_service.py'
Oct 02 11:56:46 compute-0 sudo[160071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:46 compute-0 ceph-mon[73668]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:46 compute-0 python3.9[160073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:46 compute-0 sudo[160071]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:46 compute-0 podman[160076]: 2025-10-02 11:56:46.623834403 +0000 UTC m=+0.093173306 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 11:56:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:47 compute-0 sudo[160250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twmcfwbvcithuxienjoeohvknxzyrbud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406206.6802592-157-12086602455785/AnsiballZ_systemd_service.py'
Oct 02 11:56:47 compute-0 sudo[160250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:47 compute-0 python3.9[160252]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:47 compute-0 sudo[160250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:48 compute-0 ceph-mon[73668]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:48 compute-0 sudo[160404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyctpdxpumesginobanlutoopvatwqij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406207.9410088-313-137415569268820/AnsiballZ_file.py'
Oct 02 11:56:48 compute-0 sudo[160404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:48 compute-0 python3.9[160406]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:48 compute-0 sudo[160404]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-0 sudo[160556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejegqlpjnvlicofsmgxvwdemerxkvij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406208.9573827-313-181602954265475/AnsiballZ_file.py'
Oct 02 11:56:49 compute-0 sudo[160556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:49 compute-0 python3.9[160558]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:49 compute-0 sudo[160556]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:49 compute-0 sudo[160708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnemfjmdgxcnwgljzcxlcuvwaolggrhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406209.6503513-313-67321027661781/AnsiballZ_file.py'
Oct 02 11:56:49 compute-0 sudo[160708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:50.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:50 compute-0 python3.9[160710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:50 compute-0 sudo[160708]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:50 compute-0 ceph-mon[73668]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:50 compute-0 sudo[160861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqhuslmugtkwrtkwsfhpgbxhazkraee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406210.3583877-313-21583084008881/AnsiballZ_file.py'
Oct 02 11:56:50 compute-0 sudo[160861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:50 compute-0 python3.9[160863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:50 compute-0 sudo[160861]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:51 compute-0 sudo[161013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdiprwypzffvezduaakbbtlfrkhcvgyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406211.0492435-313-238365423664223/AnsiballZ_file.py'
Oct 02 11:56:51 compute-0 sudo[161013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:51 compute-0 python3.9[161015]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:51 compute-0 sudo[161013]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:52 compute-0 sudo[161165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpqoyatreqqieovuqqpslnwwzlqlent ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406211.7393544-313-98639673578731/AnsiballZ_file.py'
Oct 02 11:56:52 compute-0 sudo[161165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:52.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:52 compute-0 ceph-mon[73668]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:52 compute-0 python3.9[161167]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:52 compute-0 sudo[161165]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:52 compute-0 sudo[161318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmipycpkhjlfdnczxtiehdmofxmelpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406212.380714-313-85892188745756/AnsiballZ_file.py'
Oct 02 11:56:52 compute-0 sudo[161318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:52 compute-0 python3.9[161320]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:52 compute-0 sudo[161318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:53 compute-0 sudo[161470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-burgabolwlsyhzyamnzcvswmeyidxenp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406213.4617355-463-6127081436300/AnsiballZ_file.py'
Oct 02 11:56:53 compute-0 sudo[161470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:53 compute-0 python3.9[161472]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:53 compute-0 sudo[161470]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:56:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:56:54 compute-0 ceph-mon[73668]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:54 compute-0 sudo[161623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewdahkrbulkyorsddczyhtedyenvfmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406214.088071-463-49480373587000/AnsiballZ_file.py'
Oct 02 11:56:54 compute-0 sudo[161623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:54 compute-0 python3.9[161625]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:54 compute-0 sudo[161623]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:55 compute-0 sudo[161786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cliymkhyctsozdasmbykaynwmocfswlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406214.7505157-463-72775968750076/AnsiballZ_file.py'
Oct 02 11:56:55 compute-0 sudo[161786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:55 compute-0 podman[161749]: 2025-10-02 11:56:55.103293048 +0000 UTC m=+0.062038499 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:56:55 compute-0 python3.9[161793]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:55 compute-0 sudo[161786]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:55 compute-0 sudo[161945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cklorgvpjsnkbpyaavpqunsyqqcmsbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406215.421584-463-176722616651640/AnsiballZ_file.py'
Oct 02 11:56:55 compute-0 sudo[161945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:55 compute-0 python3.9[161947]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:55 compute-0 sudo[161945]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:56 compute-0 ceph-mon[73668]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:56 compute-0 sudo[162097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqlvmttqpibrdhaupvrclzgyuaxmkeek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406216.039434-463-31654442068345/AnsiballZ_file.py'
Oct 02 11:56:56 compute-0 sudo[162097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:56 compute-0 python3.9[162099]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:56 compute-0 sudo[162097]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:56:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:56:56 compute-0 sudo[162250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpmqytidaudakkvoyzvqttqgzkfmvaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406216.6409223-463-16421743744010/AnsiballZ_file.py'
Oct 02 11:56:56 compute-0 sudo[162250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:57 compute-0 python3.9[162252]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:57 compute-0 sudo[162250]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:57 compute-0 sudo[162300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:57 compute-0 sudo[162300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:57 compute-0 sudo[162300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:57 compute-0 sudo[162354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:57 compute-0 sudo[162354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:57 compute-0 sudo[162354]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:57 compute-0 sudo[162452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifrpmohxjoaiwtxrmmrtpeluwgxuauae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406217.2752056-463-144729600271447/AnsiballZ_file.py'
Oct 02 11:56:57 compute-0 sudo[162452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:57 compute-0 python3.9[162454]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:57 compute-0 sudo[162452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:58.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:58 compute-0 ceph-mon[73668]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:56:58 compute-0 sudo[162605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohnbfkrudminbveesnowewevazerakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406218.1335864-616-41665406630528/AnsiballZ_command.py'
Oct 02 11:56:58 compute-0 sudo[162605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:58 compute-0 python3.9[162607]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:56:58 compute-0 sudo[162605]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:56:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:59 compute-0 python3.9[162759]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:57:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:00.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:00 compute-0 ceph-mon[73668]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:00 compute-0 sudo[162909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnsppovrvctgmvqwceziohukablevqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406219.9570353-670-203052835513759/AnsiballZ_systemd_service.py'
Oct 02 11:57:00 compute-0 sudo[162909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:00 compute-0 python3.9[162911]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:57:00 compute-0 systemd[1]: Reloading.
Oct 02 11:57:00 compute-0 systemd-rc-local-generator[162943]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:57:00 compute-0 systemd-sysv-generator[162946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:57:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:00.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:00 compute-0 sudo[162909]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:01 compute-0 sudo[163098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgcpnuadpogbqsqyezwajbbwlhjbyvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406221.1445374-694-246316297682945/AnsiballZ_command.py'
Oct 02 11:57:01 compute-0 sudo[163098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:01 compute-0 python3.9[163100]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:01 compute-0 sudo[163098]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:02 compute-0 sudo[163251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbsdgiqtxwhgnpnyymfogdbagxokrzaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406221.8186138-694-52017398979859/AnsiballZ_command.py'
Oct 02 11:57:02 compute-0 sudo[163251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:02 compute-0 ceph-mon[73668]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:02 compute-0 python3.9[163253]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:02 compute-0 sudo[163251]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:02.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:02 compute-0 sudo[163405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbjpcunnfyiinfcxlwkkmtueiulakxmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406222.4862142-694-260492282363397/AnsiballZ_command.py'
Oct 02 11:57:02 compute-0 sudo[163405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:03 compute-0 python3.9[163407]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:03 compute-0 sudo[163405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:03 compute-0 sudo[163558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqqmeooxunixzrllvdmdfcbkmekakko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406223.2232618-694-206685976134076/AnsiballZ_command.py'
Oct 02 11:57:03 compute-0 sudo[163558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:03 compute-0 python3.9[163560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:03 compute-0 sudo[163558]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:04 compute-0 sudo[163711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sldrpruwwbrbojqucmnyiwxygcrmqklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406223.8733165-694-202942610711416/AnsiballZ_command.py'
Oct 02 11:57:04 compute-0 sudo[163711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:04 compute-0 ceph-mon[73668]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:04 compute-0 python3.9[163713]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:04 compute-0 sudo[163711]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:04.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:04 compute-0 sudo[163865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrvwhgtfoopcuzavkohwgtrpltyvgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406224.5455525-694-110406512413110/AnsiballZ_command.py'
Oct 02 11:57:04 compute-0 sudo[163865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:05 compute-0 python3.9[163867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:05 compute-0 sudo[163865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:05 compute-0 sudo[164018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejzmqeyypnhpuytrdemspkllhgvowbfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406225.1951272-694-214943252626471/AnsiballZ_command.py'
Oct 02 11:57:05 compute-0 sudo[164018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:05 compute-0 python3.9[164020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:05 compute-0 sudo[164018]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:06 compute-0 ceph-mon[73668]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:06.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:06 compute-0 sudo[164172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdtwwyvcervukmhwjdsrxktcjrdcqtjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406226.3954933-856-18609118837743/AnsiballZ_getent.py'
Oct 02 11:57:06 compute-0 sudo[164172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:07 compute-0 python3.9[164174]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 11:57:07 compute-0 sudo[164172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:07 compute-0 sudo[164325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddmpopheflczdmezroqibdoelxztisns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406227.2314417-880-126798765045832/AnsiballZ_group.py'
Oct 02 11:57:07 compute-0 sudo[164325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:07 compute-0 python3.9[164327]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:57:07 compute-0 groupadd[164328]: group added to /etc/group: name=libvirt, GID=42473
Oct 02 11:57:07 compute-0 groupadd[164328]: group added to /etc/gshadow: name=libvirt
Oct 02 11:57:07 compute-0 groupadd[164328]: new group: name=libvirt, GID=42473
Oct 02 11:57:08 compute-0 sudo[164325]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:08 compute-0 ceph-mon[73668]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:08.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:08 compute-0 sudo[164484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnlllzecnhpezvwyxhkhvvieantygktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406228.3168964-904-238655182606456/AnsiballZ_user.py'
Oct 02 11:57:08 compute-0 sudo[164484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:09 compute-0 python3.9[164486]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:57:09 compute-0 useradd[164488]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:57:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:57:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:57:09 compute-0 sudo[164484]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:10 compute-0 sudo[164645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksmfxuhqlutcwxgvqmchlwerdukxodg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406229.8062408-937-101954542424282/AnsiballZ_setup.py'
Oct 02 11:57:10 compute-0 sudo[164645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:10 compute-0 ceph-mon[73668]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:10 compute-0 python3.9[164647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:57:10 compute-0 sudo[164645]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:10.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:11 compute-0 sudo[164730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruayghicwahflaqhjvjsxhxnjbgzkajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406229.8062408-937-101954542424282/AnsiballZ_dnf.py'
Oct 02 11:57:11 compute-0 sudo[164730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:11 compute-0 python3.9[164732]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:57:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:12 compute-0 ceph-mon[73668]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:12.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:12 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 11:57:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:14.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:14 compute-0 ceph-mon[73668]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 92 op/s
Oct 02 11:57:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:16.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:16 compute-0 ceph-mon[73668]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 92 op/s
Oct 02 11:57:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:16.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:17 compute-0 podman[164747]: 2025-10-02 11:57:17.431675985 +0000 UTC m=+0.093579814 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 11:57:17 compute-0 sudo[164774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:17 compute-0 sudo[164774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:17 compute-0 sudo[164774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:17 compute-0 sudo[164800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:17 compute-0 auditd[701]: Audit daemon rotating log files
Oct 02 11:57:17 compute-0 sudo[164800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:17 compute-0 sudo[164800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 02 11:57:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:18.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:18 compute-0 ceph-mon[73668]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 02 11:57:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:20.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:20 compute-0 ceph-mon[73668]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:22.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:22 compute-0 ceph-mon[73668]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:22.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:24.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:24 compute-0 ceph-mon[73668]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:24.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:25 compute-0 podman[165002]: 2025-10-02 11:57:25.420486924 +0000 UTC m=+0.085057649 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 11:57:26 compute-0 sudo[165021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:26 compute-0 sudo[165021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-0 sudo[165021]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:26 compute-0 sudo[165046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:26 compute-0 sudo[165046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-0 sudo[165046]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:26.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:26 compute-0 sudo[165071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:26 compute-0 sudo[165071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-0 sudo[165071]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-0 sudo[165096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:57:26 compute-0 sudo[165096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-0 ceph-mon[73668]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:57:26.425 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:57:26.426 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:57:26.426 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:57:26 compute-0 sudo[165096]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e35aa0ee-5a7b-4d16-aced-e51c92bab2f8 does not exist
Oct 02 11:57:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 487af944-2e2c-42c6-ac77-072fb047c7a9 does not exist
Oct 02 11:57:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b152c543-3c05-4489-a42f-2b7862cb1d93 does not exist
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:57:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:57:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:26 compute-0 sudo[165153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:26 compute-0 sudo[165153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-0 sudo[165153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-0 sudo[165178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:27 compute-0 sudo[165178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:27 compute-0 sudo[165178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-0 sudo[165203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:27 compute-0 sudo[165203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:27 compute-0 sudo[165203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-0 sudo[165228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:57:27 compute-0 sudo[165228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:57:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.609533087 +0000 UTC m=+0.078596001 container create 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.554836743 +0000 UTC m=+0.023899677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:27 compute-0 systemd[1]: Started libpod-conmon-32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3.scope.
Oct 02 11:57:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.793917069 +0000 UTC m=+0.262980023 container init 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.80424899 +0000 UTC m=+0.273311904 container start 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:57:27 compute-0 unruffled_saha[165310]: 167 167
Oct 02 11:57:27 compute-0 systemd[1]: libpod-32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3.scope: Deactivated successfully.
Oct 02 11:57:27 compute-0 conmon[165310]: conmon 32346f7ae2386373a67d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3.scope/container/memory.events
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.835765876 +0000 UTC m=+0.304828790 container attach 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:57:27 compute-0 podman[165294]: 2025-10-02 11:57:27.837574884 +0000 UTC m=+0.306637808 container died 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 11:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f17417e9dcf71547ae69e6c44d9049f242cfba6d20e6184de4a368e8c782886-merged.mount: Deactivated successfully.
Oct 02 11:57:28 compute-0 podman[165294]: 2025-10-02 11:57:28.038424158 +0000 UTC m=+0.507487072 container remove 32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 11:57:28 compute-0 systemd[1]: libpod-conmon-32346f7ae2386373a67d08f85a5ca353878e6ab2952cde38119527600c6c42a3.scope: Deactivated successfully.
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 11:57:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:28.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:28 compute-0 podman[165336]: 2025-10-02 11:57:28.27467612 +0000 UTC m=+0.077398650 container create 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:57:28 compute-0 podman[165336]: 2025-10-02 11:57:28.22927925 +0000 UTC m=+0.032001870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:28 compute-0 systemd[1]: Started libpod-conmon-4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56.scope.
Oct 02 11:57:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:28 compute-0 podman[165336]: 2025-10-02 11:57:28.383636666 +0000 UTC m=+0.186359196 container init 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:57:28 compute-0 podman[165336]: 2025-10-02 11:57:28.391695317 +0000 UTC m=+0.194417837 container start 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:28 compute-0 podman[165336]: 2025-10-02 11:57:28.427353212 +0000 UTC m=+0.230075842 container attach 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:57:28 compute-0 ceph-mon[73668]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:57:28
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'images']
Oct 02 11:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:57:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:29 compute-0 peaceful_blackburn[165354]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:57:29 compute-0 peaceful_blackburn[165354]: --> relative data size: 1.0
Oct 02 11:57:29 compute-0 peaceful_blackburn[165354]: --> All data devices are unavailable
Oct 02 11:57:29 compute-0 systemd[1]: libpod-4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56.scope: Deactivated successfully.
Oct 02 11:57:29 compute-0 podman[165336]: 2025-10-02 11:57:29.283608533 +0000 UTC m=+1.086331063 container died 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:57:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f268f11b2f48312df10e7b2f55a32cca0dcf449473c79c4b653c2cb9ecac9d-merged.mount: Deactivated successfully.
Oct 02 11:57:29 compute-0 podman[165336]: 2025-10-02 11:57:29.371036594 +0000 UTC m=+1.173759124 container remove 4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 11:57:29 compute-0 systemd[1]: libpod-conmon-4bd3442253102ec502711e1e66773355b9c30f921eaf64f50fe889bd8cdfbb56.scope: Deactivated successfully.
Oct 02 11:57:29 compute-0 sudo[165228]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:29 compute-0 sudo[165382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:29 compute-0 sudo[165382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:29 compute-0 sudo[165382]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:29 compute-0 sudo[165407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:29 compute-0 sudo[165407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:29 compute-0 sudo[165407]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:29 compute-0 sudo[165432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:29 compute-0 sudo[165432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:29 compute-0 sudo[165432]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:29 compute-0 sudo[165457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:57:29 compute-0 sudo[165457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.012990929 +0000 UTC m=+0.061586705 container create 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:30 compute-0 systemd[1]: Started libpod-conmon-313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01.scope.
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:29.986858674 +0000 UTC m=+0.035454480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.119562292 +0000 UTC m=+0.168158088 container init 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.128201439 +0000 UTC m=+0.176797215 container start 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:30 compute-0 amazing_wilson[165539]: 167 167
Oct 02 11:57:30 compute-0 systemd[1]: libpod-313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01.scope: Deactivated successfully.
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.136415774 +0000 UTC m=+0.185011600 container attach 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.136903097 +0000 UTC m=+0.185498893 container died 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 11:57:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 11:57:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:30.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef650e169d7148b03799d6656a259a97931c04c613a9ad01905c72e07bf5d9b0-merged.mount: Deactivated successfully.
Oct 02 11:57:30 compute-0 podman[165523]: 2025-10-02 11:57:30.289197848 +0000 UTC m=+0.337793614 container remove 313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:30 compute-0 systemd[1]: libpod-conmon-313863d8f2c7da4f1647c2dc370c3bcef948155575fad0b61bfe1a726277ac01.scope: Deactivated successfully.
Oct 02 11:57:30 compute-0 ceph-mon[73668]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 11:57:30 compute-0 podman[165566]: 2025-10-02 11:57:30.489305803 +0000 UTC m=+0.066046362 container create 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:57:30 compute-0 systemd[1]: Started libpod-conmon-6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad.scope.
Oct 02 11:57:30 compute-0 podman[165566]: 2025-10-02 11:57:30.455793685 +0000 UTC m=+0.032534274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561abc6abd7c9357b826922efa73029e533418e144b190aef6eb0597852c72c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561abc6abd7c9357b826922efa73029e533418e144b190aef6eb0597852c72c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561abc6abd7c9357b826922efa73029e533418e144b190aef6eb0597852c72c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561abc6abd7c9357b826922efa73029e533418e144b190aef6eb0597852c72c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:30 compute-0 podman[165566]: 2025-10-02 11:57:30.593913295 +0000 UTC m=+0.170653914 container init 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:57:30 compute-0 podman[165566]: 2025-10-02 11:57:30.601833203 +0000 UTC m=+0.178573802 container start 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:57:30 compute-0 podman[165566]: 2025-10-02 11:57:30.616394744 +0000 UTC m=+0.193135373 container attach 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:57:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]: {
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:     "1": [
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:         {
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "devices": [
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "/dev/loop3"
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             ],
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "lv_name": "ceph_lv0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "lv_size": "7511998464",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "name": "ceph_lv0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "tags": {
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.cluster_name": "ceph",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.crush_device_class": "",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.encrypted": "0",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.osd_id": "1",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.type": "block",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:                 "ceph.vdo": "0"
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             },
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "type": "block",
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:             "vg_name": "ceph_vg0"
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:         }
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]:     ]
Oct 02 11:57:31 compute-0 stoic_lederberg[165583]: }
Oct 02 11:57:31 compute-0 systemd[1]: libpod-6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad.scope: Deactivated successfully.
Oct 02 11:57:31 compute-0 podman[165566]: 2025-10-02 11:57:31.397884097 +0000 UTC m=+0.974624686 container died 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-561abc6abd7c9357b826922efa73029e533418e144b190aef6eb0597852c72c3-merged.mount: Deactivated successfully.
Oct 02 11:57:31 compute-0 podman[165566]: 2025-10-02 11:57:31.547276752 +0000 UTC m=+1.124017321 container remove 6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:57:31 compute-0 systemd[1]: libpod-conmon-6c2ceb836743fe2893175cfd0a4026c5f933ecee76553857b696733ba9adecad.scope: Deactivated successfully.
Oct 02 11:57:31 compute-0 sudo[165457]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:31 compute-0 sudo[165609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:31 compute-0 sudo[165609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:31 compute-0 sudo[165609]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:31 compute-0 sudo[165634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:31 compute-0 sudo[165634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:31 compute-0 sudo[165634]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:31 compute-0 sudo[165659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:31 compute-0 sudo[165659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:31 compute-0 sudo[165659]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:31 compute-0 sudo[165686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:57:31 compute-0 sudo[165686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:32.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.321583956 +0000 UTC m=+0.101265015 container create 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.268548156 +0000 UTC m=+0.048229225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:32 compute-0 systemd[1]: Started libpod-conmon-10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd.scope.
Oct 02 11:57:32 compute-0 ceph-mon[73668]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.447279941 +0000 UTC m=+0.226961080 container init 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.458229898 +0000 UTC m=+0.237910987 container start 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:57:32 compute-0 hardcore_yonath[165769]: 167 167
Oct 02 11:57:32 compute-0 systemd[1]: libpod-10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd.scope: Deactivated successfully.
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.477047761 +0000 UTC m=+0.256728890 container attach 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.477989386 +0000 UTC m=+0.257670465 container died 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-127fc71cb002a17107b881c1582737fb2ffbbc4275ef23f48ed50c2a2ff26310-merged.mount: Deactivated successfully.
Oct 02 11:57:32 compute-0 podman[165752]: 2025-10-02 11:57:32.606530373 +0000 UTC m=+0.386211432 container remove 10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 11:57:32 compute-0 systemd[1]: libpod-conmon-10f5687e5116f10f03797cf610868e487356c4143e50bb20ee62e20b363b6ebd.scope: Deactivated successfully.
Oct 02 11:57:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:32 compute-0 podman[165793]: 2025-10-02 11:57:32.905245223 +0000 UTC m=+0.123740005 container create 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:57:32 compute-0 podman[165793]: 2025-10-02 11:57:32.813498828 +0000 UTC m=+0.031993640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:57:33 compute-0 systemd[1]: Started libpod-conmon-9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9.scope.
Oct 02 11:57:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2568f7983583d5b1bb823c7d0e62e25d15818e506734a45a7f6171405d1e7e55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2568f7983583d5b1bb823c7d0e62e25d15818e506734a45a7f6171405d1e7e55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2568f7983583d5b1bb823c7d0e62e25d15818e506734a45a7f6171405d1e7e55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2568f7983583d5b1bb823c7d0e62e25d15818e506734a45a7f6171405d1e7e55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:57:33 compute-0 podman[165793]: 2025-10-02 11:57:33.129995343 +0000 UTC m=+0.348490135 container init 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:57:33 compute-0 podman[165793]: 2025-10-02 11:57:33.143930038 +0000 UTC m=+0.362424790 container start 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 11:57:33 compute-0 podman[165793]: 2025-10-02 11:57:33.177282993 +0000 UTC m=+0.395777775 container attach 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]: {
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:         "osd_id": 1,
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:         "type": "bluestore"
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]:     }
Oct 02 11:57:34 compute-0 zen_stonebraker[165811]: }
Oct 02 11:57:34 compute-0 systemd[1]: libpod-9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9.scope: Deactivated successfully.
Oct 02 11:57:34 compute-0 podman[165833]: 2025-10-02 11:57:34.099629997 +0000 UTC m=+0.038991903 container died 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 02 11:57:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2568f7983583d5b1bb823c7d0e62e25d15818e506734a45a7f6171405d1e7e55-merged.mount: Deactivated successfully.
Oct 02 11:57:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:34.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:34 compute-0 podman[165833]: 2025-10-02 11:57:34.265306139 +0000 UTC m=+0.204667995 container remove 9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:57:34 compute-0 systemd[1]: libpod-conmon-9c4ffb906dd4e48e18c075842b7f2105c77cfd2e73a41c2d2fdc2d93eceb77b9.scope: Deactivated successfully.
Oct 02 11:57:34 compute-0 sudo[165686]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:57:34 compute-0 ceph-mon[73668]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:57:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f81cd8a2-0bf4-4882-bf53-cc3790124c20 does not exist
Oct 02 11:57:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 24889cd7-136e-4b98-b8cd-d661fac49419 does not exist
Oct 02 11:57:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d5f5e06a-f922-46a4-978a-8b0b0ef3fbe7 does not exist
Oct 02 11:57:34 compute-0 sudo[165850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:34 compute-0 sudo[165850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:34 compute-0 sudo[165850]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:34 compute-0 sudo[165875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:57:34 compute-0 sudo[165875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:34 compute-0 sudo[165875]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:34.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:36.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:36 compute-0 ceph-mon[73668]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:37 compute-0 sudo[165901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:37 compute-0 sudo[165901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:37 compute-0 sudo[165901]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:37 compute-0 sudo[165926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:37 compute-0 sudo[165926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:37 compute-0 sudo[165926]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:38.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:38 compute-0 ceph-mon[73668]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:57:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:57:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:40 compute-0 ceph-mon[73668]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:40.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:42 compute-0 ceph-mon[73668]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:42.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:44 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:57:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:57:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:44.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:45 compute-0 ceph-mon[73668]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:46 compute-0 ceph-mon[73668]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:48.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:48 compute-0 ceph-mon[73668]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:48 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 02 11:57:48 compute-0 podman[165965]: 2025-10-02 11:57:48.443137497 +0000 UTC m=+0.098801201 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:57:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:50 compute-0 ceph-mon[73668]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:52 compute-0 ceph-mon[73668]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:57:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:57:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:54 compute-0 ceph-mon[73668]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:54.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:56.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:56 compute-0 podman[165999]: 2025-10-02 11:57:56.409819297 +0000 UTC m=+0.069763420 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 11:57:56 compute-0 ceph-mon[73668]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:56 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:57:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:57:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:56.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:57 compute-0 sudo[166022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:57 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 02 11:57:57 compute-0 sudo[166022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 sudo[166022]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:57 compute-0 sudo[166047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:57 compute-0 sudo[166047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:57 compute-0 sudo[166047]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:58 compute-0 ceph-mon[73668]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:57:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:57:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:57:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:58.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:57:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:00.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:00 compute-0 ceph-mon[73668]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:00.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:02 compute-0 ceph-mon[73668]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:02.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:58:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:58:04 compute-0 ceph-mon[73668]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:06 compute-0 ceph-mon[73668]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:06.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:08 compute-0 ceph-mon[73668]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:58:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:58:10 compute-0 ceph-mon[73668]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:10.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:12 compute-0 ceph-mon[73668]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:14 compute-0 ceph-mon[73668]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:58:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:58:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:16 compute-0 ceph-mon[73668]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:16.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:17 compute-0 sudo[170858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:17 compute-0 sudo[170858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:17 compute-0 sudo[170858]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:18 compute-0 sudo[170930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:18 compute-0 sudo[170930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:18 compute-0 sudo[170930]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:58:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:58:18 compute-0 ceph-mon[73668]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:18.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:19 compute-0 podman[171800]: 2025-10-02 11:58:19.43957623 +0000 UTC m=+0.096445633 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 11:58:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:20.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:20 compute-0 ceph-mon[73668]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:20.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:22 compute-0 ceph-mon[73668]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:24.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:24 compute-0 ceph-mon[73668]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:26.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:26 compute-0 ceph-mon[73668]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:58:26.433 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:58:26.435 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:58:26.435 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:27 compute-0 podman[176428]: 2025-10-02 11:58:27.389202989 +0000 UTC m=+0.058878622 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:28.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:28 compute-0 ceph-mon[73668]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:58:28
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 02 11:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:58:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:30 compute-0 ceph-mon[73668]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:58:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:58:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:32.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:32 compute-0 ceph-mon[73668]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:34.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:34 compute-0 ceph-mon[73668]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:58:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:34.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:58:34 compute-0 sudo[180789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:34 compute-0 sudo[180789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:34 compute-0 sudo[180789]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:34 compute-0 sudo[180855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:34 compute-0 sudo[180855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:34 compute-0 sudo[180855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-0 sudo[180916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:35 compute-0 sudo[180916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-0 sudo[180916]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-0 sudo[180979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:58:35 compute-0 sudo[180979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-0 sudo[180979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 478d44df-d6bf-4488-85f1-36a1709a964c does not exist
Oct 02 11:58:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e44c2141-9b69-4ff0-8948-eddaf278c801 does not exist
Oct 02 11:58:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b783ea95-756d-404e-aae3-8c105a0bcdcc does not exist
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:58:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:58:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-0 sudo[181453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:35 compute-0 sudo[181453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-0 sudo[181453]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-0 sudo[181518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:35 compute-0 sudo[181518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-0 sudo[181518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:36 compute-0 sudo[181584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:36 compute-0 sudo[181584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:36 compute-0 sudo[181584]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:36 compute-0 sudo[181651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:58:36 compute-0 sudo[181651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:36.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.425711003 +0000 UTC m=+0.045624869 container create 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:36 compute-0 systemd[1]: Started libpod-conmon-1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370.scope.
Oct 02 11:58:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.497800906 +0000 UTC m=+0.117714772 container init 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.40683592 +0000 UTC m=+0.026749796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.509236059 +0000 UTC m=+0.129149925 container start 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.513149719 +0000 UTC m=+0.133063585 container attach 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:58:36 compute-0 zen_wright[182035]: 167 167
Oct 02 11:58:36 compute-0 systemd[1]: libpod-1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370.scope: Deactivated successfully.
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.515422043 +0000 UTC m=+0.135335929 container died 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f9fee5a9e3721e2aa1638e4dbb8f59ebe20e453c44cec09784f1cc53f189476-merged.mount: Deactivated successfully.
Oct 02 11:58:36 compute-0 podman[181945]: 2025-10-02 11:58:36.580361345 +0000 UTC m=+0.200275211 container remove 1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:36 compute-0 systemd[1]: libpod-conmon-1998ebe8fb9425b63670fa411eab82a7ae3ca599704cf41c355888e3e3a38370.scope: Deactivated successfully.
Oct 02 11:58:36 compute-0 podman[182206]: 2025-10-02 11:58:36.757916995 +0000 UTC m=+0.053413498 container create 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:36 compute-0 systemd[1]: Started libpod-conmon-6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80.scope.
Oct 02 11:58:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:36 compute-0 podman[182206]: 2025-10-02 11:58:36.73649858 +0000 UTC m=+0.031995123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:36 compute-0 podman[182206]: 2025-10-02 11:58:36.843781107 +0000 UTC m=+0.139277650 container init 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 11:58:36 compute-0 podman[182206]: 2025-10-02 11:58:36.851391132 +0000 UTC m=+0.146887645 container start 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:58:36 compute-0 podman[182206]: 2025-10-02 11:58:36.856858676 +0000 UTC m=+0.152355219 container attach 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:58:36 compute-0 ceph-mon[73668]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:37 compute-0 affectionate_blackwell[182270]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:58:37 compute-0 affectionate_blackwell[182270]: --> relative data size: 1.0
Oct 02 11:58:37 compute-0 affectionate_blackwell[182270]: --> All data devices are unavailable
Oct 02 11:58:37 compute-0 systemd[1]: libpod-6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80.scope: Deactivated successfully.
Oct 02 11:58:37 compute-0 podman[182206]: 2025-10-02 11:58:37.798251493 +0000 UTC m=+1.093748036 container died 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 11:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6785dcaf83aa56cb3bab993591ca1e3382468b20d14dec936c09bd3634de56db-merged.mount: Deactivated successfully.
Oct 02 11:58:37 compute-0 podman[182206]: 2025-10-02 11:58:37.864217774 +0000 UTC m=+1.159714277 container remove 6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:37 compute-0 systemd[1]: libpod-conmon-6c5d53318ff539f75c5cb169639ff6657b41605925fdb0e206154a09257b2c80.scope: Deactivated successfully.
Oct 02 11:58:37 compute-0 sudo[181651]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:37 compute-0 sudo[182902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:37 compute-0 sudo[182902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:37 compute-0 sudo[182902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:38 compute-0 sudo[182975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:38 compute-0 sudo[182975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:38 compute-0 sudo[182975]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:38 compute-0 sudo[183040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:38 compute-0 sudo[183040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:38 compute-0 sudo[183046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:38 compute-0 sudo[183046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:38 compute-0 sudo[183040]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:38 compute-0 sudo[183046]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:38 compute-0 sudo[183128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:58:38 compute-0 sudo[183128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:38 compute-0 sudo[183135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:38 compute-0 sudo[183135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:38 compute-0 sudo[183135]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:38.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:38 compute-0 ceph-mon[73668]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.510656172 +0000 UTC m=+0.040564616 container create ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:38 compute-0 systemd[1]: Started libpod-conmon-ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad.scope.
Oct 02 11:58:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.492785537 +0000 UTC m=+0.022694001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.595765583 +0000 UTC m=+0.125674077 container init ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.603498321 +0000 UTC m=+0.133406765 container start ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:38 compute-0 pedantic_booth[183481]: 167 167
Oct 02 11:58:38 compute-0 systemd[1]: libpod-ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad.scope: Deactivated successfully.
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.612853085 +0000 UTC m=+0.142761579 container attach ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.613524664 +0000 UTC m=+0.143433118 container died ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-190a582cba292b02c14f4704983db0b3b4c13497885782c360d6dac53dbfa014-merged.mount: Deactivated successfully.
Oct 02 11:58:38 compute-0 podman[183442]: 2025-10-02 11:58:38.65239681 +0000 UTC m=+0.182305254 container remove ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:58:38 compute-0 systemd[1]: libpod-conmon-ede72653bef711551f359e0fa8d3361de54ca9dc53c0cb5e0e0b5787b676bfad.scope: Deactivated successfully.
Oct 02 11:58:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:38.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:38 compute-0 podman[183507]: 2025-10-02 11:58:38.923668523 +0000 UTC m=+0.124356389 container create 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:38 compute-0 podman[183507]: 2025-10-02 11:58:38.834787116 +0000 UTC m=+0.035475042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:39 compute-0 systemd[1]: Started libpod-conmon-226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c.scope.
Oct 02 11:58:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f94c9f49f511ee109142395cb2a2a6e1cc01cd5645a0d1ee88728caabfe27ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f94c9f49f511ee109142395cb2a2a6e1cc01cd5645a0d1ee88728caabfe27ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f94c9f49f511ee109142395cb2a2a6e1cc01cd5645a0d1ee88728caabfe27ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f94c9f49f511ee109142395cb2a2a6e1cc01cd5645a0d1ee88728caabfe27ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:39 compute-0 podman[183507]: 2025-10-02 11:58:39.141488518 +0000 UTC m=+0.342176374 container init 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:58:39 compute-0 podman[183507]: 2025-10-02 11:58:39.151998675 +0000 UTC m=+0.352686511 container start 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:39 compute-0 podman[183507]: 2025-10-02 11:58:39.162631485 +0000 UTC m=+0.363319331 container attach 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 11:58:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:58:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:58:39 compute-0 friendly_galois[183527]: {
Oct 02 11:58:39 compute-0 friendly_galois[183527]:     "1": [
Oct 02 11:58:39 compute-0 friendly_galois[183527]:         {
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "devices": [
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "/dev/loop3"
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             ],
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "lv_name": "ceph_lv0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "lv_size": "7511998464",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "name": "ceph_lv0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "tags": {
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.cluster_name": "ceph",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.crush_device_class": "",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.encrypted": "0",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.osd_id": "1",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.type": "block",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:                 "ceph.vdo": "0"
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             },
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "type": "block",
Oct 02 11:58:39 compute-0 friendly_galois[183527]:             "vg_name": "ceph_vg0"
Oct 02 11:58:39 compute-0 friendly_galois[183527]:         }
Oct 02 11:58:39 compute-0 friendly_galois[183527]:     ]
Oct 02 11:58:39 compute-0 friendly_galois[183527]: }
Oct 02 11:58:39 compute-0 systemd[1]: libpod-226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c.scope: Deactivated successfully.
Oct 02 11:58:39 compute-0 podman[183507]: 2025-10-02 11:58:39.965093314 +0000 UTC m=+1.165781150 container died 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f94c9f49f511ee109142395cb2a2a6e1cc01cd5645a0d1ee88728caabfe27ef-merged.mount: Deactivated successfully.
Oct 02 11:58:40 compute-0 podman[183507]: 2025-10-02 11:58:40.03585665 +0000 UTC m=+1.236544496 container remove 226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 11:58:40 compute-0 systemd[1]: libpod-conmon-226b72411498f0d8d9727c5e73f30ff056d066cbeb501707e971a88a633e694c.scope: Deactivated successfully.
Oct 02 11:58:40 compute-0 sudo[183128]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:40 compute-0 sudo[183550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:40 compute-0 sudo[183550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:40 compute-0 sudo[183550]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:40 compute-0 sudo[183575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:40 compute-0 sudo[183575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:40 compute-0 sudo[183575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:40.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:40 compute-0 sudo[183600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:40 compute-0 sudo[183600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:40 compute-0 sudo[183600]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:40 compute-0 ceph-mon[73668]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:40 compute-0 sudo[183626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:58:40 compute-0 sudo[183626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.732724029 +0000 UTC m=+0.047275165 container create 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:58:40 compute-0 systemd[1]: Started libpod-conmon-26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413.scope.
Oct 02 11:58:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.710871862 +0000 UTC m=+0.025423028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.81889186 +0000 UTC m=+0.133443016 container init 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.828254784 +0000 UTC m=+0.142805920 container start 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.833118391 +0000 UTC m=+0.147669527 container attach 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:58:40 compute-0 jolly_wright[183709]: 167 167
Oct 02 11:58:40 compute-0 systemd[1]: libpod-26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413.scope: Deactivated successfully.
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.836803435 +0000 UTC m=+0.151354571 container died 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e862e336bf3157fb1accc7513810199e4a29801df4207f417b8cec9a63333aeb-merged.mount: Deactivated successfully.
Oct 02 11:58:40 compute-0 podman[183693]: 2025-10-02 11:58:40.880679303 +0000 UTC m=+0.195230439 container remove 26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:58:40 compute-0 systemd[1]: libpod-conmon-26a285d15c6f441fda5ba26042dfacdf2520cab4d2c41b9f82cf6512a4ebb413.scope: Deactivated successfully.
Oct 02 11:58:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:41 compute-0 podman[183733]: 2025-10-02 11:58:41.076776585 +0000 UTC m=+0.047129580 container create fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:58:41 compute-0 systemd[1]: Started libpod-conmon-fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1.scope.
Oct 02 11:58:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf53a553d24a2d987019a488ef66d90444e9f25eb3abbd7a013dacc287068464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf53a553d24a2d987019a488ef66d90444e9f25eb3abbd7a013dacc287068464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf53a553d24a2d987019a488ef66d90444e9f25eb3abbd7a013dacc287068464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf53a553d24a2d987019a488ef66d90444e9f25eb3abbd7a013dacc287068464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:58:41 compute-0 podman[183733]: 2025-10-02 11:58:41.055799103 +0000 UTC m=+0.026152128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:58:41 compute-0 podman[183733]: 2025-10-02 11:58:41.159911601 +0000 UTC m=+0.130264616 container init fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:58:41 compute-0 podman[183733]: 2025-10-02 11:58:41.166546478 +0000 UTC m=+0.136899473 container start fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 11:58:41 compute-0 podman[183733]: 2025-10-02 11:58:41.173158184 +0000 UTC m=+0.143511209 container attach fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:58:42 compute-0 competent_moore[183751]: {
Oct 02 11:58:42 compute-0 competent_moore[183751]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:58:42 compute-0 competent_moore[183751]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:58:42 compute-0 competent_moore[183751]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:58:42 compute-0 competent_moore[183751]:         "osd_id": 1,
Oct 02 11:58:42 compute-0 competent_moore[183751]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:58:42 compute-0 competent_moore[183751]:         "type": "bluestore"
Oct 02 11:58:42 compute-0 competent_moore[183751]:     }
Oct 02 11:58:42 compute-0 competent_moore[183751]: }
Oct 02 11:58:42 compute-0 systemd[1]: libpod-fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1.scope: Deactivated successfully.
Oct 02 11:58:42 compute-0 podman[183733]: 2025-10-02 11:58:42.076618402 +0000 UTC m=+1.046971437 container died fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf53a553d24a2d987019a488ef66d90444e9f25eb3abbd7a013dacc287068464-merged.mount: Deactivated successfully.
Oct 02 11:58:42 compute-0 podman[183733]: 2025-10-02 11:58:42.173136555 +0000 UTC m=+1.143489570 container remove fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 11:58:42 compute-0 systemd[1]: libpod-conmon-fce35e7f2d4217084696dfad5a7481ee20c925b0621805fa88d6ca1700d609f1.scope: Deactivated successfully.
Oct 02 11:58:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:42 compute-0 sudo[183626]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:58:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:58:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2356cf6e-b60c-4c9f-ac4c-9c4e356ca4ec does not exist
Oct 02 11:58:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ab699247-6400-485d-b427-0dbb46ef5622 does not exist
Oct 02 11:58:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 38270e50-ae3e-4085-a557-01a9a321847e does not exist
Oct 02 11:58:42 compute-0 ceph-mon[73668]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:42.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:42 compute-0 sudo[183791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:42 compute-0 sudo[183791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:42 compute-0 sudo[183791]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:42 compute-0 sudo[183817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:58:42 compute-0 sudo[183817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:42 compute-0 sudo[183817]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:44 compute-0 ceph-mon[73668]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:44.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:46.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:46 compute-0 ceph-mon[73668]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:46.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:48 compute-0 ceph-mon[73668]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:48.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:58:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:48.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:58:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:50.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:50 compute-0 ceph-mon[73668]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:50 compute-0 podman[183846]: 2025-10-02 11:58:50.473916793 +0000 UTC m=+0.134249969 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct 02 11:58:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:50.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:52.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:52 compute-0 ceph-mon[73668]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:52 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:58:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:58:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:58:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:52.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:58:53 compute-0 groupadd[183888]: group added to /etc/group: name=dnsmasq, GID=991
Oct 02 11:58:53 compute-0 groupadd[183888]: group added to /etc/gshadow: name=dnsmasq
Oct 02 11:58:53 compute-0 groupadd[183888]: new group: name=dnsmasq, GID=991
Oct 02 11:58:53 compute-0 useradd[183895]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 02 11:58:53 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Oct 02 11:58:53 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 02 11:58:53 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Oct 02 11:58:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:54.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:54 compute-0 ceph-mon[73668]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:54 compute-0 groupadd[183910]: group added to /etc/group: name=clevis, GID=990
Oct 02 11:58:54 compute-0 groupadd[183910]: group added to /etc/gshadow: name=clevis
Oct 02 11:58:54 compute-0 groupadd[183910]: new group: name=clevis, GID=990
Oct 02 11:58:54 compute-0 useradd[183917]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 02 11:58:54 compute-0 usermod[183927]: add 'clevis' to group 'tss'
Oct 02 11:58:54 compute-0 usermod[183927]: add 'clevis' to shadow group 'tss'
Oct 02 11:58:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:58:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:54.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:58:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:56 compute-0 ceph-mon[73668]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:57 compute-0 polkitd[6323]: Reloading rules
Oct 02 11:58:57 compute-0 polkitd[6323]: Collecting garbage unconditionally...
Oct 02 11:58:57 compute-0 polkitd[6323]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:58:57 compute-0 polkitd[6323]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:58:57 compute-0 polkitd[6323]: Finished loading, compiling and executing 4 rules
Oct 02 11:58:57 compute-0 polkitd[6323]: Reloading rules
Oct 02 11:58:57 compute-0 polkitd[6323]: Collecting garbage unconditionally...
Oct 02 11:58:57 compute-0 polkitd[6323]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:58:57 compute-0 polkitd[6323]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:58:57 compute-0 polkitd[6323]: Finished loading, compiling and executing 4 rules
Oct 02 11:58:57 compute-0 podman[184022]: 2025-10-02 11:58:57.924275677 +0000 UTC m=+0.070390547 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:58 compute-0 sudo[184086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:58 compute-0 sudo[184086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:58 compute-0 sudo[184086]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:58 compute-0 sudo[184123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:58 compute-0 sudo[184123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:58 compute-0 sudo[184123]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:58:58 compute-0 groupadd[184186]: group added to /etc/group: name=ceph, GID=167
Oct 02 11:58:58 compute-0 groupadd[184186]: group added to /etc/gshadow: name=ceph
Oct 02 11:58:58 compute-0 groupadd[184186]: new group: name=ceph, GID=167
Oct 02 11:58:58 compute-0 useradd[184192]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 02 11:58:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:58:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:58.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:59 compute-0 ceph-mon[73668]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:00.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:01 compute-0 ceph-mon[73668]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:01 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 02 11:59:01 compute-0 sshd[1007]: Received signal 15; terminating.
Oct 02 11:59:01 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 02 11:59:01 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 02 11:59:01 compute-0 systemd[1]: sshd.service: Consumed 3.405s CPU time, no IO.
Oct 02 11:59:01 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 02 11:59:01 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 02 11:59:01 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:59:01 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:59:01 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:59:01 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 02 11:59:02 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 02 11:59:02 compute-0 sshd[184818]: Server listening on 0.0.0.0 port 22.
Oct 02 11:59:02 compute-0 sshd[184818]: Server listening on :: port 22.
Oct 02 11:59:02 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 02 11:59:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:59:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:59:02 compute-0 ceph-mon[73668]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:02.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:59:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:59:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:04 compute-0 ceph-mon[73668]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:04 compute-0 systemd[1]: Reloading.
Oct 02 11:59:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:04.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:04 compute-0 systemd-rc-local-generator[185078]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:04 compute-0 systemd-sysv-generator[185082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:04.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:06 compute-0 ceph-mon[73668]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:06 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 11:59:06 compute-0 PackageKit[187206]: daemon start
Oct 02 11:59:06 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 11:59:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:06.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:07 compute-0 sudo[164730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 sudo[188782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piazgzlpxdoqpaudsffulipoehqfukce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406347.4959853-973-32866175115461/AnsiballZ_systemd.py'
Oct 02 11:59:08 compute-0 sudo[188782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:08 compute-0 ceph-mon[73668]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:08.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:08 compute-0 python3.9[188821]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:08 compute-0 systemd[1]: Reloading.
Oct 02 11:59:08 compute-0 systemd-rc-local-generator[189209]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:08 compute-0 systemd-sysv-generator[189212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:08 compute-0 sudo[188782]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:09 compute-0 sudo[189969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqnczvelqnnccvxpkzgctakgoearhcvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406348.9631324-973-275528353907862/AnsiballZ_systemd.py'
Oct 02 11:59:09 compute-0 sudo[189969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:09 compute-0 python3.9[189984]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:09 compute-0 systemd[1]: Reloading.
Oct 02 11:59:09 compute-0 systemd-rc-local-generator[190335]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:09 compute-0 systemd-sysv-generator[190342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:10 compute-0 sudo[189969]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:10 compute-0 ceph-mon[73668]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:10 compute-0 sudo[191089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxcedmamozfnjsqyemddsumefvissvuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406350.139401-973-60712580120646/AnsiballZ_systemd.py'
Oct 02 11:59:10 compute-0 sudo[191089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:10 compute-0 python3.9[191109]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:10 compute-0 systemd[1]: Reloading.
Oct 02 11:59:10 compute-0 systemd-sysv-generator[191508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:10 compute-0 systemd-rc-local-generator[191503]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:10.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:11 compute-0 sudo[191089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:11 compute-0 sudo[192204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lesvhsnjdyumbdvlxdtttofywwyxyvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406351.3327398-973-134687838707018/AnsiballZ_systemd.py'
Oct 02 11:59:11 compute-0 sudo[192204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:11 compute-0 python3.9[192226]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:11 compute-0 systemd[1]: Reloading.
Oct 02 11:59:12 compute-0 systemd-rc-local-generator[192680]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:12 compute-0 systemd-sysv-generator[192683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:12 compute-0 ceph-mon[73668]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:12.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:12 compute-0 sudo[192204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-0 sudo[193495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opikmxbkttwxjaddjhenkoicetuuvnwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406352.5174975-1060-103748539667335/AnsiballZ_systemd.py'
Oct 02 11:59:12 compute-0 sudo[193495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:13 compute-0 python3.9[193516]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:13 compute-0 systemd[1]: Reloading.
Oct 02 11:59:13 compute-0 systemd-rc-local-generator[193968]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:13 compute-0 systemd-sysv-generator[193975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:13 compute-0 sudo[193495]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:59:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:59:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.903s CPU time.
Oct 02 11:59:13 compute-0 systemd[1]: run-r86bc2a49529d4e609d167c0a796a986b.service: Deactivated successfully.
Oct 02 11:59:14 compute-0 sudo[194427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbdlvqbjvzjgiuozunmvzihlsynftva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406353.661492-1060-34271517204607/AnsiballZ_systemd.py'
Oct 02 11:59:14 compute-0 sudo[194427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:14 compute-0 ceph-mon[73668]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:14 compute-0 python3.9[194429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:14 compute-0 systemd[1]: Reloading.
Oct 02 11:59:14 compute-0 systemd-rc-local-generator[194461]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:14 compute-0 systemd-sysv-generator[194465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:14 compute-0 sudo[194427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:15 compute-0 sudo[194618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aawrdhogvodngwpooigsojuukctpxpnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406354.8982427-1060-269879220975105/AnsiballZ_systemd.py'
Oct 02 11:59:15 compute-0 sudo[194618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:15 compute-0 python3.9[194620]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:15 compute-0 systemd[1]: Reloading.
Oct 02 11:59:15 compute-0 systemd-sysv-generator[194651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:15 compute-0 systemd-rc-local-generator[194648]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:15 compute-0 sudo[194618]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:16 compute-0 ceph-mon[73668]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:59:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:59:16 compute-0 sudo[194809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zocdlgorelwaxyoygfpfalrrshjqjxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406356.1022775-1060-18226831034627/AnsiballZ_systemd.py'
Oct 02 11:59:16 compute-0 sudo[194809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:16 compute-0 python3.9[194811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:16 compute-0 sudo[194809]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:17 compute-0 sudo[194964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvygdqjbcxiomjqphalxsbqztqaqrbxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406356.8848004-1060-199664079779073/AnsiballZ_systemd.py'
Oct 02 11:59:17 compute-0 sudo[194964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:17 compute-0 python3.9[194966]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:17 compute-0 systemd[1]: Reloading.
Oct 02 11:59:17 compute-0 systemd-rc-local-generator[194997]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:17 compute-0 systemd-sysv-generator[195001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:17 compute-0 sudo[194964]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:18 compute-0 sudo[195155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algnzjhrcjqrwlenooiofkpsphrxctnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406358.023986-1168-193693632287032/AnsiballZ_systemd.py'
Oct 02 11:59:18 compute-0 sudo[195155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:18 compute-0 sudo[195158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:18 compute-0 sudo[195158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:18 compute-0 sudo[195158]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:18 compute-0 sudo[195183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:18 compute-0 sudo[195183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:18 compute-0 sudo[195183]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:18 compute-0 python3.9[195157]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:18 compute-0 systemd[1]: Reloading.
Oct 02 11:59:18 compute-0 systemd-rc-local-generator[195237]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:18 compute-0 systemd-sysv-generator[195241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:19 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 02 11:59:19 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 02 11:59:19 compute-0 sudo[195155]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:19 compute-0 ceph-mon[73668]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:19 compute-0 sudo[195397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lntgehdkwooqyonqletdkabeollsifwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406359.347036-1192-232079609985118/AnsiballZ_systemd.py'
Oct 02 11:59:19 compute-0 sudo[195397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:19 compute-0 python3.9[195399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:20 compute-0 sudo[195397]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:20 compute-0 sudo[195553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbrqykzruhfpvfgwikfzgfntafbailmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406360.158994-1192-152864457050690/AnsiballZ_systemd.py'
Oct 02 11:59:20 compute-0 sudo[195553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:20 compute-0 python3.9[195555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:20 compute-0 sudo[195553]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000029s ======
Oct 02 11:59:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:20.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 02 11:59:20 compute-0 podman[195557]: 2025-10-02 11:59:20.971348938 +0000 UTC m=+0.151026392 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 11:59:21 compute-0 sudo[195732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bolgfahpxwwihrbqsigpwlsuldwqfdly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406361.0023675-1192-229215571101002/AnsiballZ_systemd.py'
Oct 02 11:59:21 compute-0 sudo[195732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:21 compute-0 ceph-mon[73668]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:21 compute-0 python3.9[195734]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:21 compute-0 sudo[195732]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:22 compute-0 sudo[195887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftftjhcnioahzzebaaehotofjsropijc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406361.7580416-1192-115912751732478/AnsiballZ_systemd.py'
Oct 02 11:59:22 compute-0 sudo[195887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:22.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:22 compute-0 python3.9[195889]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:22 compute-0 sudo[195887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:22 compute-0 sudo[196043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxuynyroomxdsezlymytxwkzllwgcyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406362.6578066-1192-163500012293428/AnsiballZ_systemd.py'
Oct 02 11:59:22 compute-0 sudo[196043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000028s ======
Oct 02 11:59:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 02 11:59:23 compute-0 python3.9[196045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:23 compute-0 sudo[196043]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:23 compute-0 ceph-mon[73668]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:23 compute-0 sudo[196198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvxbrftsxubheymltvzcogoajnxtbzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406363.517306-1192-103675928896725/AnsiballZ_systemd.py'
Oct 02 11:59:23 compute-0 sudo[196198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:24 compute-0 python3.9[196200]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:24 compute-0 sudo[196198]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:24.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:24 compute-0 sudo[196354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pymslqokzdejjvxhbxpvlzfdfjelfigc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406364.4266706-1192-1902065384365/AnsiballZ_systemd.py'
Oct 02 11:59:24 compute-0 sudo[196354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:24.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:25 compute-0 python3.9[196356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:25 compute-0 sudo[196354]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:25 compute-0 ceph-mon[73668]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:25 compute-0 sudo[196509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqtmooqumvfxezvpiiyueyschhoohot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406365.286937-1192-265698972708115/AnsiballZ_systemd.py'
Oct 02 11:59:25 compute-0 sudo[196509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:26 compute-0 python3.9[196511]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:26 compute-0 sudo[196509]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:26.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:59:26.433 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:59:26.434 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 11:59:26.434 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:26 compute-0 sudo[196665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoztwwjucmcxgahuqkvoadnojzkchuwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406366.2363398-1192-28675018412631/AnsiballZ_systemd.py'
Oct 02 11:59:26 compute-0 sudo[196665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:26 compute-0 python3.9[196667]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:59:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:59:26 compute-0 sudo[196665]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:27 compute-0 sudo[196820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebgfyfcisiggwhtsrtlkyaycegcmfiic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406367.1288998-1192-77322287827626/AnsiballZ_systemd.py'
Oct 02 11:59:27 compute-0 sudo[196820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:27 compute-0 ceph-mon[73668]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:27 compute-0 python3.9[196822]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:27 compute-0 sudo[196820]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:28 compute-0 sudo[196989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyubvqjmsdxupkpvxqpiiyyskkiicjok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406368.059203-1192-200200696958399/AnsiballZ_systemd.py'
Oct 02 11:59:28 compute-0 sudo[196989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:28 compute-0 podman[196949]: 2025-10-02 11:59:28.419240155 +0000 UTC m=+0.081793379 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_11:59:28
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta']
Oct 02 11:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 11:59:28 compute-0 python3.9[196994]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:28 compute-0 sudo[196989]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:29 compute-0 sudo[197151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtsefwwsihleafylbovlkwutzxscwmwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406368.983261-1192-137928260791787/AnsiballZ_systemd.py'
Oct 02 11:59:29 compute-0 sudo[197151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:29 compute-0 ceph-mon[73668]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:29 compute-0 python3.9[197153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:29 compute-0 sudo[197151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:30 compute-0 sudo[197306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmendoygecdgzqbmmcqztfjeqcbhoupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406369.8555772-1192-10160002147518/AnsiballZ_systemd.py'
Oct 02 11:59:30 compute-0 sudo[197306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:30 compute-0 python3.9[197308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:30 compute-0 sudo[197306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:31 compute-0 sudo[197462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgxpbwqisouxmkccujemzstufxjfhydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406370.8797731-1192-24625077701409/AnsiballZ_systemd.py'
Oct 02 11:59:31 compute-0 sudo[197462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:31 compute-0 python3.9[197464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:31 compute-0 ceph-mon[73668]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:31 compute-0 sudo[197462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:32 compute-0 sudo[197618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpcmipanjtevljhwmsemdmdlcmgvqmds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406372.0618522-1498-230851196922236/AnsiballZ_file.py'
Oct 02 11:59:32 compute-0 sudo[197618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:32 compute-0 python3.9[197620]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:32 compute-0 sudo[197618]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:33 compute-0 sudo[197770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykidwktvjnsbkxypgfwrdojyqxgooafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406372.7901628-1498-5604425460380/AnsiballZ_file.py'
Oct 02 11:59:33 compute-0 sudo[197770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:33 compute-0 python3.9[197772]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:33 compute-0 sudo[197770]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:33 compute-0 ceph-mon[73668]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:33 compute-0 sudo[197922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbcftnbiuymkwsxwjgpxrzrnvfaoccya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406373.5026128-1498-108721610448267/AnsiballZ_file.py'
Oct 02 11:59:33 compute-0 sudo[197922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:34 compute-0 python3.9[197924]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:34 compute-0 sudo[197922]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:34.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:34 compute-0 sudo[198075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbohmmomruurcsloolnpmtcvhlzucka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406374.2500558-1498-61919836589581/AnsiballZ_file.py'
Oct 02 11:59:34 compute-0 sudo[198075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:34 compute-0 python3.9[198077]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:34 compute-0 sudo[198075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:35 compute-0 sudo[198227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igxzfwgyucyoykeryyenrbmwavfytetw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406375.021877-1498-207107104056006/AnsiballZ_file.py'
Oct 02 11:59:35 compute-0 sudo[198227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:35 compute-0 python3.9[198229]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:35 compute-0 ceph-mon[73668]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:35 compute-0 sudo[198227]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:36 compute-0 sudo[198379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvscrotjxrronybqcjytrancafaocft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406375.7928967-1498-76576335297489/AnsiballZ_file.py'
Oct 02 11:59:36 compute-0 sudo[198379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:36 compute-0 python3.9[198381]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:36 compute-0 sudo[198379]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:59:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:59:37 compute-0 sudo[198532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmtwpuaalbbxylkwbinfjxplvqxvkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406376.693177-1627-104230252685657/AnsiballZ_stat.py'
Oct 02 11:59:37 compute-0 sudo[198532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:37 compute-0 python3.9[198534]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:37 compute-0 sudo[198532]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:37 compute-0 ceph-mon[73668]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:37 compute-0 sudo[198657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbklrfvihcrmjczglzwscjiwkiypakvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406376.693177-1627-104230252685657/AnsiballZ_copy.py'
Oct 02 11:59:37 compute-0 sudo[198657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:38 compute-0 python3.9[198659]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406376.693177-1627-104230252685657/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:38 compute-0 sudo[198657]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:38 compute-0 sudo[198761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:38 compute-0 sudo[198761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:38 compute-0 sudo[198761]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:38 compute-0 sudo[198816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:38 compute-0 sudo[198856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfmdpzsvylcrnpvzzkeljaiutypuitne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406378.3168292-1627-264526923336488/AnsiballZ_stat.py'
Oct 02 11:59:38 compute-0 sudo[198816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:38 compute-0 sudo[198856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:38 compute-0 sudo[198816]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:38 compute-0 python3.9[198862]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:38 compute-0 sudo[198856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:39 compute-0 sudo[198985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udyxtxckbzljnmrbtoygzylasoewxgmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406378.3168292-1627-264526923336488/AnsiballZ_copy.py'
Oct 02 11:59:39 compute-0 sudo[198985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:39 compute-0 python3.9[198987]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406378.3168292-1627-264526923336488/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:39 compute-0 sudo[198985]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:39 compute-0 ceph-mon[73668]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 11:59:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 11:59:40 compute-0 sudo[199137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvovstvwuytsqrfbigccmvmiciwrojkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406379.7944741-1627-139679967737975/AnsiballZ_stat.py'
Oct 02 11:59:40 compute-0 sudo[199137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:40 compute-0 python3.9[199139]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:59:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:59:40 compute-0 sudo[199137]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:40 compute-0 sudo[199263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmwjgjqtflomnfijakvcfuzdzntqgvpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406379.7944741-1627-139679967737975/AnsiballZ_copy.py'
Oct 02 11:59:40 compute-0 sudo[199263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:41 compute-0 python3.9[199265]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406379.7944741-1627-139679967737975/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:41 compute-0 sudo[199263]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:41 compute-0 sudo[199415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvxwvcxuihdslduoqfexuvuivmpdskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406381.2673905-1627-52688589386660/AnsiballZ_stat.py'
Oct 02 11:59:41 compute-0 sudo[199415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:41 compute-0 ceph-mon[73668]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:41 compute-0 python3.9[199417]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:41 compute-0 sudo[199415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-0 sudo[199540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmzqlrxjtqtsajbvtzgxhmdgwqfysbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406381.2673905-1627-52688589386660/AnsiballZ_copy.py'
Oct 02 11:59:42 compute-0 sudo[199540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:42 compute-0 python3.9[199542]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406381.2673905-1627-52688589386660/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:42 compute-0 sudo[199540]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-0 sudo[199591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:42 compute-0 sudo[199591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-0 sudo[199591]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-0 sudo[199644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:42 compute-0 sudo[199644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-0 sudo[199644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-0 sudo[199677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:42 compute-0 sudo[199677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-0 sudo[199677]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-0 sudo[199723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:59:42 compute-0 sudo[199723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 11:59:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:42.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 11:59:43 compute-0 sudo[199793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oansmbhqgzbpuqqdntfseowornqvpzvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406382.6605422-1627-103822352677933/AnsiballZ_stat.py'
Oct 02 11:59:43 compute-0 sudo[199793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:43 compute-0 python3.9[199795]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:43 compute-0 sudo[199793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:43 compute-0 podman[199924]: 2025-10-02 11:59:43.587408689 +0000 UTC m=+0.084170770 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 11:59:43 compute-0 sudo[200012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxnlkrnldjspxtbfgqqtaklfpekqhgox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406382.6605422-1627-103822352677933/AnsiballZ_copy.py'
Oct 02 11:59:43 compute-0 sudo[200012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:43 compute-0 ceph-mon[73668]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:43 compute-0 podman[199924]: 2025-10-02 11:59:43.763899223 +0000 UTC m=+0.260661324 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:59:43 compute-0 python3.9[200014]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406382.6605422-1627-103822352677933/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:44 compute-0 sudo[200012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 11:59:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 11:59:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:44 compute-0 podman[200231]: 2025-10-02 11:59:44.371461954 +0000 UTC m=+0.060617352 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:59:44 compute-0 podman[200231]: 2025-10-02 11:59:44.378579141 +0000 UTC m=+0.067734549 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 11:59:44 compute-0 sudo[200323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvxdsosmgsuaofzckszjsnahatsopks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406384.1537783-1627-254551568691288/AnsiballZ_stat.py'
Oct 02 11:59:44 compute-0 sudo[200323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:44 compute-0 podman[200350]: 2025-10-02 11:59:44.634148251 +0000 UTC m=+0.060446108 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=)
Oct 02 11:59:44 compute-0 podman[200350]: 2025-10-02 11:59:44.656423956 +0000 UTC m=+0.082721723 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2)
Oct 02 11:59:44 compute-0 python3.9[200330]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:44 compute-0 sudo[200323]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-0 sudo[199723]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:59:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:59:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:44 compute-0 sudo[200452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:44 compute-0 sudo[200452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:44 compute-0 sudo[200452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:45 compute-0 sudo[200498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:45 compute-0 sudo[200498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:45 compute-0 sudo[200498]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 sudo[200547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:45 compute-0 sudo[200547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:45 compute-0 sudo[200547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 sudo[200597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwmegkamoqbinusdpzgswtqndkycpyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406384.1537783-1627-254551568691288/AnsiballZ_copy.py'
Oct 02 11:59:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-0 sudo[200597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:45 compute-0 sudo[200599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:59:45 compute-0 sudo[200599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:45 compute-0 python3.9[200603]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406384.1537783-1627-254551568691288/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:45 compute-0 sudo[200597]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 sudo[200599]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 44942fa1-47a9-41bf-acaa-ab775def6e9c does not exist
Oct 02 11:59:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev aebd82cc-ddbb-4fef-a671-55752289fee8 does not exist
Oct 02 11:59:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a2179015-9d93-4ac8-a2e1-0c1cfcc84843 does not exist
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:59:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 11:59:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:45 compute-0 sudo[200808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyxesbnagopweuyngvhdrabbmxqhqnpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406385.4388902-1627-36011829217316/AnsiballZ_stat.py'
Oct 02 11:59:45 compute-0 sudo[200808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:45 compute-0 sudo[200811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:45 compute-0 sudo[200811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:45 compute-0 sudo[200811]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-0 sudo[200836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:45 compute-0 sudo[200836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:45 compute-0 sudo[200836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-0 python3.9[200810]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:46 compute-0 sudo[200808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-0 sudo[200861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:46 compute-0 sudo[200861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:46 compute-0 sudo[200861]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-0 sudo[200890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:59:46 compute-0 sudo[200890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:46 compute-0 ceph-mon[73668]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:59:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:46.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:46 compute-0 sudo[201057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyaylrujtiuzyoiqtfnqcmiiqfhhoyur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406385.4388902-1627-36011829217316/AnsiballZ_copy.py'
Oct 02 11:59:46 compute-0 sudo[201057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.570181749 +0000 UTC m=+0.058779844 container create 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:59:46 compute-0 python3.9[201061]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406385.4388902-1627-36011829217316/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:46 compute-0 systemd[1]: Started libpod-conmon-4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab.scope.
Oct 02 11:59:46 compute-0 sudo[201057]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.549879386 +0000 UTC m=+0.038477491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.666534479 +0000 UTC m=+0.155132644 container init 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.675708579 +0000 UTC m=+0.164306704 container start 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.679850378 +0000 UTC m=+0.168448503 container attach 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:46 compute-0 confident_ride[201090]: 167 167
Oct 02 11:59:46 compute-0 systemd[1]: libpod-4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab.scope: Deactivated successfully.
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.685022014 +0000 UTC m=+0.173620099 container died 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9c27d68c457c5886317979b47d8eb592f03d594cb983283c7c4b52aec9f25f-merged.mount: Deactivated successfully.
Oct 02 11:59:46 compute-0 podman[201074]: 2025-10-02 11:59:46.739679179 +0000 UTC m=+0.228277304 container remove 4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 11:59:46 compute-0 systemd[1]: libpod-conmon-4e080002895152866c02cf27d1584384bc86d5cfcd8f92980edf9f8712aaecab.scope: Deactivated successfully.
Oct 02 11:59:46 compute-0 podman[201190]: 2025-10-02 11:59:46.965655162 +0000 UTC m=+0.054290127 container create b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 11:59:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:47 compute-0 systemd[1]: Started libpod-conmon-b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce.scope.
Oct 02 11:59:47 compute-0 podman[201190]: 2025-10-02 11:59:46.944559438 +0000 UTC m=+0.033194453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:47 compute-0 podman[201190]: 2025-10-02 11:59:47.082722175 +0000 UTC m=+0.171357180 container init b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:59:47 compute-0 podman[201190]: 2025-10-02 11:59:47.094231487 +0000 UTC m=+0.182866492 container start b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:47 compute-0 podman[201190]: 2025-10-02 11:59:47.098349986 +0000 UTC m=+0.186984991 container attach b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:59:47 compute-0 sudo[201284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvombjaahsgukgclvxsmzlqxfmlqqdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406386.8081996-1627-133352055225288/AnsiballZ_stat.py'
Oct 02 11:59:47 compute-0 sudo[201284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:47 compute-0 python3.9[201286]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:47 compute-0 sudo[201284]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:47 compute-0 sudo[201413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrryyqqmftfsjserqloipaiuhwbylhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406386.8081996-1627-133352055225288/AnsiballZ_copy.py'
Oct 02 11:59:47 compute-0 sudo[201413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:47 compute-0 quirky_raman[201235]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:59:47 compute-0 quirky_raman[201235]: --> relative data size: 1.0
Oct 02 11:59:47 compute-0 quirky_raman[201235]: --> All data devices are unavailable
Oct 02 11:59:47 compute-0 systemd[1]: libpod-b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce.scope: Deactivated successfully.
Oct 02 11:59:47 compute-0 podman[201190]: 2025-10-02 11:59:47.983351921 +0000 UTC m=+1.071986936 container died b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab268078e8c5d6d3f9b858092d281f7fe9b7baf0cb0372376979c7122447d4c-merged.mount: Deactivated successfully.
Oct 02 11:59:48 compute-0 python3.9[201417]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406386.8081996-1627-133352055225288/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:48 compute-0 podman[201190]: 2025-10-02 11:59:48.07205726 +0000 UTC m=+1.160692225 container remove b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_raman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:59:48 compute-0 sudo[201413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 systemd[1]: libpod-conmon-b060e74b42af7c37c0c8247e93433c6f1862fd03e1f53d263b3986e3314dcbce.scope: Deactivated successfully.
Oct 02 11:59:48 compute-0 sudo[200890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 sudo[201439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:48 compute-0 sudo[201439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[201439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 ceph-mon[73668]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:48 compute-0 sudo[201485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:48 compute-0 sudo[201485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[201485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 sudo[201533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:48 compute-0 sudo[201533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[201533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:48 compute-0 sudo[201589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:59:48 compute-0 sudo[201589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:48 compute-0 sudo[201686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtfocnymmimnbqvtllvngneselftnks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406388.271532-1966-133191766498688/AnsiballZ_command.py'
Oct 02 11:59:48 compute-0 sudo[201686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:48 compute-0 python3.9[201690]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 11:59:48 compute-0 sudo[201686]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.845297549 +0000 UTC m=+0.058873666 container create 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:59:48 compute-0 systemd[1]: Started libpod-conmon-0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed.scope.
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.821725631 +0000 UTC m=+0.035301738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.948302144 +0000 UTC m=+0.161878281 container init 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.960132884 +0000 UTC m=+0.173708981 container start 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.963842092 +0000 UTC m=+0.177418199 container attach 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:59:48 compute-0 vigilant_dirac[201771]: 167 167
Oct 02 11:59:48 compute-0 systemd[1]: libpod-0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed.scope: Deactivated successfully.
Oct 02 11:59:48 compute-0 podman[201731]: 2025-10-02 11:59:48.968777671 +0000 UTC m=+0.182353778 container died 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-866d5e7f09c55026348593b02b1633082b0e37fbd46da4993ccf70a59ebf1c8c-merged.mount: Deactivated successfully.
Oct 02 11:59:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:49 compute-0 podman[201731]: 2025-10-02 11:59:49.018595279 +0000 UTC m=+0.232171376 container remove 0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:59:49 compute-0 systemd[1]: libpod-conmon-0e3197e01bdc925add4d883c1642369753b70d25ea582ba826cc0972542d87ed.scope: Deactivated successfully.
Oct 02 11:59:49 compute-0 podman[201846]: 2025-10-02 11:59:49.230057541 +0000 UTC m=+0.060905490 container create 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:59:49 compute-0 systemd[1]: Started libpod-conmon-85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb.scope.
Oct 02 11:59:49 compute-0 podman[201846]: 2025-10-02 11:59:49.200936566 +0000 UTC m=+0.031784575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed05ccf9952ae1ee13b759581a719cf608bef9ed33efddab9b4ac67d92b814/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed05ccf9952ae1ee13b759581a719cf608bef9ed33efddab9b4ac67d92b814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed05ccf9952ae1ee13b759581a719cf608bef9ed33efddab9b4ac67d92b814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed05ccf9952ae1ee13b759581a719cf608bef9ed33efddab9b4ac67d92b814/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:49 compute-0 podman[201846]: 2025-10-02 11:59:49.348538162 +0000 UTC m=+0.179386151 container init 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:59:49 compute-0 podman[201846]: 2025-10-02 11:59:49.362359395 +0000 UTC m=+0.193207344 container start 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:59:49 compute-0 podman[201846]: 2025-10-02 11:59:49.366705329 +0000 UTC m=+0.197553528 container attach 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 11:59:49 compute-0 sudo[201940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqwyfoovibmmrqidkzjvpbwhxfhvknl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406389.0509768-1993-255652259129506/AnsiballZ_file.py'
Oct 02 11:59:49 compute-0 sudo[201940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:49 compute-0 python3.9[201942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:49 compute-0 sudo[201940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.641078) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389641189, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1974, "num_deletes": 253, "total_data_size": 3718355, "memory_usage": 3770512, "flush_reason": "Manual Compaction"}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389665344, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3645687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12267, "largest_seqno": 14240, "table_properties": {"data_size": 3636755, "index_size": 5618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16653, "raw_average_key_size": 18, "raw_value_size": 3619026, "raw_average_value_size": 4080, "num_data_blocks": 251, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406177, "oldest_key_time": 1759406177, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 24296 microseconds, and 8267 cpu microseconds.
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665410) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3645687 bytes OK
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665438) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.668034) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.668062) EVENT_LOG_v1 {"time_micros": 1759406389668052, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.668090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3710416, prev total WAL file size 3710416, number of live WAL files 2.
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.670306) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3560KB)], [29(8239KB)]
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389670378, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12083301, "oldest_snapshot_seqno": -1}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4330 keys, 11528249 bytes, temperature: kUnknown
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389756384, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11528249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493601, "index_size": 22698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105182, "raw_average_key_size": 24, "raw_value_size": 11409840, "raw_average_value_size": 2635, "num_data_blocks": 969, "num_entries": 4330, "num_filter_entries": 4330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.756770) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11528249 bytes
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.760554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.2 rd, 133.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.5) write-amplify(3.2) OK, records in: 4854, records dropped: 524 output_compression: NoCompression
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.760576) EVENT_LOG_v1 {"time_micros": 1759406389760564, "job": 12, "event": "compaction_finished", "compaction_time_micros": 86170, "compaction_time_cpu_micros": 52550, "output_level": 6, "num_output_files": 1, "total_output_size": 11528249, "num_input_records": 4854, "num_output_records": 4330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389761431, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389762971, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.670209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.763084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.763092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.763096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.763116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-11:59:49.763120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:50 compute-0 sudo[202094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfwocplszejcysljwmzewfokiaitpnxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406389.7834692-1993-1558968151961/AnsiballZ_file.py'
Oct 02 11:59:50 compute-0 sudo[202094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:50 compute-0 epic_carver[201908]: {
Oct 02 11:59:50 compute-0 epic_carver[201908]:     "1": [
Oct 02 11:59:50 compute-0 epic_carver[201908]:         {
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "devices": [
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "/dev/loop3"
Oct 02 11:59:50 compute-0 epic_carver[201908]:             ],
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "lv_name": "ceph_lv0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "lv_size": "7511998464",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "name": "ceph_lv0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "tags": {
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.cluster_name": "ceph",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.crush_device_class": "",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.encrypted": "0",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.osd_id": "1",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.type": "block",
Oct 02 11:59:50 compute-0 epic_carver[201908]:                 "ceph.vdo": "0"
Oct 02 11:59:50 compute-0 epic_carver[201908]:             },
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "type": "block",
Oct 02 11:59:50 compute-0 epic_carver[201908]:             "vg_name": "ceph_vg0"
Oct 02 11:59:50 compute-0 epic_carver[201908]:         }
Oct 02 11:59:50 compute-0 epic_carver[201908]:     ]
Oct 02 11:59:50 compute-0 epic_carver[201908]: }
Oct 02 11:59:50 compute-0 systemd[1]: libpod-85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb.scope: Deactivated successfully.
Oct 02 11:59:50 compute-0 podman[201846]: 2025-10-02 11:59:50.175826752 +0000 UTC m=+1.006674761 container died 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ed05ccf9952ae1ee13b759581a719cf608bef9ed33efddab9b4ac67d92b814-merged.mount: Deactivated successfully.
Oct 02 11:59:50 compute-0 ceph-mon[73668]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:50 compute-0 podman[201846]: 2025-10-02 11:59:50.252402872 +0000 UTC m=+1.083250851 container remove 85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:59:50 compute-0 systemd[1]: libpod-conmon-85e504ef7d85b0ca4564c453ed9f3f6a4855e1863de3d69d02b1c4aad36c20cb.scope: Deactivated successfully.
Oct 02 11:59:50 compute-0 sudo[201589]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-0 python3.9[202096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:50 compute-0 sudo[202094]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:50 compute-0 sudo[202113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:50 compute-0 sudo[202113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:50 compute-0 sudo[202113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-0 sudo[202142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:50 compute-0 sudo[202142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:50 compute-0 sudo[202142]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-0 sudo[202192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:50 compute-0 sudo[202192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:50 compute-0 sudo[202192]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-0 sudo[202262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:59:50 compute-0 sudo[202262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:50 compute-0 sudo[202370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwsrusdjjavifkuyevyiwtnqbamtwbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406390.489777-1993-193160091016042/AnsiballZ_file.py'
Oct 02 11:59:50 compute-0 sudo[202370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.00572198 +0000 UTC m=+0.045184137 container create 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 11:59:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:51 compute-0 python3.9[202376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:51 compute-0 systemd[1]: Started libpod-conmon-951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1.scope.
Oct 02 11:59:51 compute-0 sudo[202370]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:50.98704249 +0000 UTC m=+0.026504737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.107300387 +0000 UTC m=+0.146762564 container init 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.116209371 +0000 UTC m=+0.155671538 container start 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:59:51 compute-0 admiring_nash[202423]: 167 167
Oct 02 11:59:51 compute-0 systemd[1]: libpod-951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1.scope: Deactivated successfully.
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.124737815 +0000 UTC m=+0.164199992 container attach 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.126170372 +0000 UTC m=+0.165632589 container died 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-640144908fb398d7d267882a7228817927c254a6676afee972dd17b23d2a7a01-merged.mount: Deactivated successfully.
Oct 02 11:59:51 compute-0 podman[202405]: 2025-10-02 11:59:51.167134058 +0000 UTC m=+0.206596225 container remove 951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:59:51 compute-0 podman[202422]: 2025-10-02 11:59:51.193948502 +0000 UTC m=+0.121692106 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:59:51 compute-0 systemd[1]: libpod-conmon-951fd2a47c403a2f67c1a7e1dc915e02f317ffa4319697a019d7258364d901d1.scope: Deactivated successfully.
Oct 02 11:59:51 compute-0 ceph-mon[73668]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:51 compute-0 podman[202535]: 2025-10-02 11:59:51.34091302 +0000 UTC m=+0.050438095 container create 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:59:51 compute-0 systemd[1]: Started libpod-conmon-4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86.scope.
Oct 02 11:59:51 compute-0 podman[202535]: 2025-10-02 11:59:51.316999172 +0000 UTC m=+0.026524287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:59:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 11:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac6ebe3b35cf76d69b9ea07fa0692ddf13a0f11ae481ecfcdf6abdaa58cb071f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac6ebe3b35cf76d69b9ea07fa0692ddf13a0f11ae481ecfcdf6abdaa58cb071f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac6ebe3b35cf76d69b9ea07fa0692ddf13a0f11ae481ecfcdf6abdaa58cb071f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac6ebe3b35cf76d69b9ea07fa0692ddf13a0f11ae481ecfcdf6abdaa58cb071f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:59:51 compute-0 podman[202535]: 2025-10-02 11:59:51.445573398 +0000 UTC m=+0.155098563 container init 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:51 compute-0 podman[202535]: 2025-10-02 11:59:51.460308645 +0000 UTC m=+0.169833730 container start 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 11:59:51 compute-0 podman[202535]: 2025-10-02 11:59:51.464156496 +0000 UTC m=+0.173681671 container attach 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 11:59:51 compute-0 sudo[202642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgzgfsqpumveidmtxwfxhwvbbjqfiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406391.2404249-1993-212317010546508/AnsiballZ_file.py'
Oct 02 11:59:51 compute-0 sudo[202642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:51 compute-0 python3.9[202644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:51 compute-0 sudo[202642]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:52 compute-0 sudo[202808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkzmheqgqyivsztsjeidynycfyrtnkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406391.970977-1993-213124093228315/AnsiballZ_file.py'
Oct 02 11:59:52 compute-0 sudo[202808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:52 compute-0 friendly_cray[202585]: {
Oct 02 11:59:52 compute-0 friendly_cray[202585]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 11:59:52 compute-0 friendly_cray[202585]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:59:52 compute-0 friendly_cray[202585]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:59:52 compute-0 friendly_cray[202585]:         "osd_id": 1,
Oct 02 11:59:52 compute-0 friendly_cray[202585]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 11:59:52 compute-0 friendly_cray[202585]:         "type": "bluestore"
Oct 02 11:59:52 compute-0 friendly_cray[202585]:     }
Oct 02 11:59:52 compute-0 friendly_cray[202585]: }
Oct 02 11:59:52 compute-0 systemd[1]: libpod-4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86.scope: Deactivated successfully.
Oct 02 11:59:52 compute-0 podman[202535]: 2025-10-02 11:59:52.336237611 +0000 UTC m=+1.045762726 container died 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:59:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac6ebe3b35cf76d69b9ea07fa0692ddf13a0f11ae481ecfcdf6abdaa58cb071f-merged.mount: Deactivated successfully.
Oct 02 11:59:52 compute-0 podman[202535]: 2025-10-02 11:59:52.423002359 +0000 UTC m=+1.132527444 container remove 4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:59:52 compute-0 systemd[1]: libpod-conmon-4d7f3b99a336f1810ff126f1ae3df3686b1c8a98b1accb069eb83e813fe27f86.scope: Deactivated successfully.
Oct 02 11:59:52 compute-0 python3.9[202810]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:52 compute-0 sudo[202262]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 11:59:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 11:59:52 compute-0 sudo[202808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7753beed-f4d5-4b62-a7e5-67f08965f154 does not exist
Oct 02 11:59:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7491e77e-7e6f-41cd-82d1-1604df8573e8 does not exist
Oct 02 11:59:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e2a949b0-fcc0-4dd4-9bc7-cbb436fdd15c does not exist
Oct 02 11:59:52 compute-0 sudo[202828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:52 compute-0 sudo[202828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:52 compute-0 sudo[202828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-0 sudo[202877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:59:52 compute-0 sudo[202877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:52 compute-0 sudo[202877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:53 compute-0 sudo[203027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngypapgugfasgnsrksvtgnkistfqehcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406392.6612654-1993-189493852012961/AnsiballZ_file.py'
Oct 02 11:59:53 compute-0 sudo[203027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:53 compute-0 python3.9[203029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:53 compute-0 sudo[203027]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:53 compute-0 ceph-mon[73668]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:53 compute-0 sudo[203179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwyxqrppsdwstthdcgxuphcvhlpimonb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406393.4279897-1993-171860322932275/AnsiballZ_file.py'
Oct 02 11:59:53 compute-0 sudo[203179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:53 compute-0 python3.9[203181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:54 compute-0 sudo[203179]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:54 compute-0 sudo[203332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfpueflehrtpxxbekgqfuntmuudyioet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406394.1344733-1993-123479550031962/AnsiballZ_file.py'
Oct 02 11:59:54 compute-0 sudo[203332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:54 compute-0 python3.9[203334]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:54 compute-0 sudo[203332]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:55 compute-0 sudo[203484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrznlxmyhsptnshvjfdkhqwifxknheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406394.893615-1993-76698903552835/AnsiballZ_file.py'
Oct 02 11:59:55 compute-0 sudo[203484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:55 compute-0 python3.9[203486]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:55 compute-0 sudo[203484]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:55 compute-0 ceph-mon[73668]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:55 compute-0 sudo[203636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawpknbzrtqcnfktbrsomfwnhonawjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406395.638181-1993-116227408404036/AnsiballZ_file.py'
Oct 02 11:59:55 compute-0 sudo[203636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:56 compute-0 python3.9[203638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:56 compute-0 sudo[203636]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:56 compute-0 sudo[203789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fotwzenehvdzjkuxhtkjklkgnmcaegdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406396.386245-1993-48736769758986/AnsiballZ_file.py'
Oct 02 11:59:56 compute-0 sudo[203789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:57 compute-0 python3.9[203791]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:57 compute-0 sudo[203789]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:57 compute-0 sudo[203941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrndbqkvxrbszmduabqiccuxkbdxthoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406397.2053263-1993-110385942020906/AnsiballZ_file.py'
Oct 02 11:59:57 compute-0 sudo[203941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:57 compute-0 ceph-mon[73668]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:57 compute-0 python3.9[203943]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:57 compute-0 sudo[203941]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:58 compute-0 sudo[204093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqgbwxdosbsbjfnssfborwgnbmuksvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406397.9194033-1993-174312200602190/AnsiballZ_file.py'
Oct 02 11:59:58 compute-0 sudo[204093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 11:59:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 11:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 11:59:58 compute-0 python3.9[204096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:58 compute-0 sudo[204093]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:58 compute-0 sudo[204128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:58 compute-0 sudo[204128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:58 compute-0 sudo[204128]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:58 compute-0 podman[204168]: 2025-10-02 11:59:58.830789609 +0000 UTC m=+0.055323113 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:59:58 compute-0 sudo[204174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:58 compute-0 sudo[204174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:58 compute-0 sudo[204174]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 11:59:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:59 compute-0 sudo[204314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlxlqcyewbvvylxekybthhvkzgyyljss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406398.7241707-1993-33738257139647/AnsiballZ_file.py'
Oct 02 11:59:59 compute-0 sudo[204314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:59 compute-0 python3.9[204316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:59 compute-0 sudo[204314]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:59 compute-0 ceph-mon[73668]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:59 compute-0 sudo[204466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyhuurcauawbxgkxqlxkwpoqoqzqlke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406399.5413182-2290-281357478235200/AnsiballZ_stat.py'
Oct 02 11:59:59 compute-0 sudo[204466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:00:00 compute-0 python3.9[204468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:00 compute-0 sudo[204466]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:00 compute-0 sudo[204590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlnmdluryrtozrmvrkwcyegmsfzdwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406399.5413182-2290-281357478235200/AnsiballZ_copy.py'
Oct 02 12:00:00 compute-0 sudo[204590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:00:00 compute-0 python3.9[204592]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406399.5413182-2290-281357478235200/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:00 compute-0 sudo[204590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:01 compute-0 sudo[204742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjlvgjjgbrgxyopbjtsfhfmojhykwrsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406400.9405057-2290-53334697383462/AnsiballZ_stat.py'
Oct 02 12:00:01 compute-0 sudo[204742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:01 compute-0 python3.9[204744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:01 compute-0 sudo[204742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:01 compute-0 ceph-mon[73668]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:01 compute-0 sudo[204865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmjpcpkqhicvvywhwucbzpoztrbnziwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406400.9405057-2290-53334697383462/AnsiballZ_copy.py'
Oct 02 12:00:01 compute-0 sudo[204865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:02 compute-0 python3.9[204867]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406400.9405057-2290-53334697383462/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:02 compute-0 sudo[204865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:02.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:02 compute-0 sudo[205018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbljqwybkiyaoopcbsftqkphbwnogrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406402.1783075-2290-221020823144677/AnsiballZ_stat.py'
Oct 02 12:00:02 compute-0 sudo[205018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:02 compute-0 python3.9[205020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:02 compute-0 sudo[205018]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:03 compute-0 sudo[205141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phbebvpewetxfaonxydqkfcovgcbjvmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406402.1783075-2290-221020823144677/AnsiballZ_copy.py'
Oct 02 12:00:03 compute-0 sudo[205141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:03 compute-0 python3.9[205143]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406402.1783075-2290-221020823144677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:03 compute-0 sudo[205141]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:03 compute-0 ceph-mon[73668]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:04 compute-0 sudo[205293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdjmxcilpfriqphchqvjpvudtsuxdgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406403.7132933-2290-263859908192537/AnsiballZ_stat.py'
Oct 02 12:00:04 compute-0 sudo[205293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:04 compute-0 python3.9[205295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:04 compute-0 sudo[205293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:04.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:04 compute-0 sudo[205417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwftjnrjzfxastatjjvluedvslgadoob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406403.7132933-2290-263859908192537/AnsiballZ_copy.py'
Oct 02 12:00:04 compute-0 sudo[205417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:04 compute-0 python3.9[205419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406403.7132933-2290-263859908192537/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:04 compute-0 sudo[205417]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:05 compute-0 sudo[205569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwnmkkkrmuflzyeftibhsadwvqpcalv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406404.9487433-2290-98566503269938/AnsiballZ_stat.py'
Oct 02 12:00:05 compute-0 sudo[205569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:05 compute-0 python3.9[205571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:05 compute-0 sudo[205569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:05 compute-0 ceph-mon[73668]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:05 compute-0 sudo[205692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbzcjgekrudrkcoelyaefpfymmhmvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406404.9487433-2290-98566503269938/AnsiballZ_copy.py'
Oct 02 12:00:05 compute-0 sudo[205692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:06 compute-0 python3.9[205694]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406404.9487433-2290-98566503269938/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:06 compute-0 sudo[205692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:06 compute-0 sudo[205845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahfmnwpmzaszyxnkqblcwnckcssgumzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406406.309607-2290-25634461891790/AnsiballZ_stat.py'
Oct 02 12:00:06 compute-0 sudo[205845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:06 compute-0 python3.9[205847]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:06 compute-0 sudo[205845]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:07 compute-0 sudo[205968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isotofjuwrzshphqpmdmqvvwzvrfxyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406406.309607-2290-25634461891790/AnsiballZ_copy.py'
Oct 02 12:00:07 compute-0 sudo[205968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:07 compute-0 python3.9[205970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406406.309607-2290-25634461891790/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:07 compute-0 sudo[205968]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:07 compute-0 ceph-mon[73668]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:08 compute-0 sudo[206120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmhgmugmwmqglbeapmhwxhdyvkjdetb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406407.65969-2290-67872446723489/AnsiballZ_stat.py'
Oct 02 12:00:08 compute-0 sudo[206120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:08 compute-0 python3.9[206122]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:08 compute-0 sudo[206120]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:08 compute-0 sudo[206244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astpnejsbgnilsignmgecniwecbkraoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406407.65969-2290-67872446723489/AnsiballZ_copy.py'
Oct 02 12:00:08 compute-0 sudo[206244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:08 compute-0 python3.9[206246]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406407.65969-2290-67872446723489/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:08 compute-0 sudo[206244]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:09 compute-0 sudo[206396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrndwlgikyzfshxvzkcgboevnpadcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406408.964546-2290-163765993254916/AnsiballZ_stat.py'
Oct 02 12:00:09 compute-0 sudo[206396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:09 compute-0 python3.9[206398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:09 compute-0 sudo[206396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:09 compute-0 ceph-mon[73668]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:09 compute-0 sudo[206519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otzbzawscwsnqknlgqocnfknrvqiglfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406408.964546-2290-163765993254916/AnsiballZ_copy.py'
Oct 02 12:00:09 compute-0 sudo[206519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:10 compute-0 python3.9[206521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406408.964546-2290-163765993254916/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:10 compute-0 sudo[206519]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:10.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:10 compute-0 sudo[206672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzxwaiuiqvipapmhcdpxsfrmusfobcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406410.2393887-2290-197630602578467/AnsiballZ_stat.py'
Oct 02 12:00:10 compute-0 sudo[206672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:10 compute-0 python3.9[206674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:10 compute-0 sudo[206672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:11 compute-0 sudo[206795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjquxwublyhwjitplfzxbtjsjracolwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406410.2393887-2290-197630602578467/AnsiballZ_copy.py'
Oct 02 12:00:11 compute-0 sudo[206795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:11 compute-0 python3.9[206797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406410.2393887-2290-197630602578467/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:11 compute-0 sudo[206795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:11 compute-0 ceph-mon[73668]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:11 compute-0 sudo[206947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfyfngixnllyfmlxrcbkokeekoowrpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406411.6041577-2290-199899123272160/AnsiballZ_stat.py'
Oct 02 12:00:11 compute-0 sudo[206947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:12 compute-0 python3.9[206949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:12 compute-0 sudo[206947]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:12 compute-0 sudo[207071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uktldbpkkpztuffwosslakxfrzzgmqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406411.6041577-2290-199899123272160/AnsiballZ_copy.py'
Oct 02 12:00:12 compute-0 sudo[207071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:12 compute-0 python3.9[207073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406411.6041577-2290-199899123272160/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:12 compute-0 sudo[207071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:00:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:00:13 compute-0 sudo[207223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemxzxhlxtedqqlqxrqqfopsnpdtebbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406412.833189-2290-60346486935458/AnsiballZ_stat.py'
Oct 02 12:00:13 compute-0 sudo[207223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:13 compute-0 python3.9[207225]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:13 compute-0 sudo[207223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:13 compute-0 sudo[207346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snuafqrvkpaitbjclkumuyznqwrqlrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406412.833189-2290-60346486935458/AnsiballZ_copy.py'
Oct 02 12:00:13 compute-0 sudo[207346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:13 compute-0 ceph-mon[73668]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:13 compute-0 python3.9[207348]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406412.833189-2290-60346486935458/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:13 compute-0 sudo[207346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:14 compute-0 sudo[207499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqcpnyyasywsvoxbipkameunwlzudpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406414.1230583-2290-16175295101517/AnsiballZ_stat.py'
Oct 02 12:00:14 compute-0 sudo[207499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:14 compute-0 python3.9[207501]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:14 compute-0 sudo[207499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:15.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:15 compute-0 sudo[207622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwczimewfomhxaielxqzpvsqzcxaked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406414.1230583-2290-16175295101517/AnsiballZ_copy.py'
Oct 02 12:00:15 compute-0 sudo[207622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:15 compute-0 python3.9[207624]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406414.1230583-2290-16175295101517/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:15 compute-0 sudo[207622]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-0 ceph-mon[73668]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:15 compute-0 sudo[207774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdfvknsznxzcaojhhzktfbrxphcmqsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406415.5542493-2290-143344152636876/AnsiballZ_stat.py'
Oct 02 12:00:15 compute-0 sudo[207774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:16 compute-0 python3.9[207776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:16 compute-0 sudo[207774]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:16 compute-0 sudo[207898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxgttzpencclenoqqhmcfltfwrcoyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406415.5542493-2290-143344152636876/AnsiballZ_copy.py'
Oct 02 12:00:16 compute-0 sudo[207898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:16 compute-0 python3.9[207900]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406415.5542493-2290-143344152636876/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:16 compute-0 sudo[207898]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:17.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:17 compute-0 sudo[208050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmwvlbjefkqmlthhrzggvcbbojtxadg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406416.9258811-2290-223842933607914/AnsiballZ_stat.py'
Oct 02 12:00:17 compute-0 sudo[208050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:17 compute-0 python3.9[208052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:17 compute-0 sudo[208050]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:17 compute-0 ceph-mon[73668]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:17 compute-0 sudo[208173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lksmobmagphnxwvthnmvlnauhnmaitjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406416.9258811-2290-223842933607914/AnsiballZ_copy.py'
Oct 02 12:00:17 compute-0 sudo[208173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:18 compute-0 python3.9[208175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406416.9258811-2290-223842933607914/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:18 compute-0 sudo[208173]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:18 compute-0 python3.9[208326]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:18 compute-0 sudo[208327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:18 compute-0 sudo[208327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:18 compute-0 sudo[208327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:19 compute-0 sudo[208355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:19 compute-0 sudo[208355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:19 compute-0 sudo[208355]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:19.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:19 compute-0 sudo[208529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvjxuclaxdgsiyerjszvlmzsnhybmkgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406419.1783445-2908-194546004447798/AnsiballZ_seboolean.py'
Oct 02 12:00:19 compute-0 sudo[208529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:19 compute-0 python3.9[208531]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 12:00:19 compute-0 ceph-mon[73668]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:20 compute-0 sudo[208529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:00:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:00:21 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 02 12:00:21 compute-0 podman[208561]: 2025-10-02 12:00:21.45555843 +0000 UTC m=+0.105086390 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:00:21 compute-0 ceph-mon[73668]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:22.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:23 compute-0 sudo[208714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfqlonahyiepfenptqvmtlzidzssurgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406422.962073-2932-190478460088649/AnsiballZ_copy.py'
Oct 02 12:00:23 compute-0 sudo[208714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:23 compute-0 python3.9[208716]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:23 compute-0 sudo[208714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:23 compute-0 sudo[208866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkzvicanbaiwdjhjzkepbnwxxhjficje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406423.6081047-2932-155131471404500/AnsiballZ_copy.py'
Oct 02 12:00:23 compute-0 sudo[208866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:24 compute-0 ceph-mon[73668]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:24 compute-0 python3.9[208868]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:24 compute-0 sudo[208866]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:24 compute-0 sudo[209019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjgbwnatbknymwaighykfsxpwbdvfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406424.2999048-2932-272917261645213/AnsiballZ_copy.py'
Oct 02 12:00:24 compute-0 sudo[209019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:24 compute-0 python3.9[209021]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:24 compute-0 sudo[209019]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:25 compute-0 sudo[209171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmlrfcvkkjqhiagfdfiffomgamgkxqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406425.0390606-2932-272195504854071/AnsiballZ_copy.py'
Oct 02 12:00:25 compute-0 sudo[209171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:25 compute-0 python3.9[209173]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:25 compute-0 sudo[209171]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:26 compute-0 sudo[209323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxntxccyybgplkmjrrxihvbxejutjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406425.720014-2932-233887589852934/AnsiballZ_copy.py'
Oct 02 12:00:26 compute-0 sudo[209323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:26 compute-0 ceph-mon[73668]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:26 compute-0 python3.9[209325]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:26 compute-0 sudo[209323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:00:26.435 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:00:26.435 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:00:26.435 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:26 compute-0 sudo[209476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkfthsnahltcfbnmzgimechhxmdonocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406426.4916553-3040-116936081852870/AnsiballZ_copy.py'
Oct 02 12:00:26 compute-0 sudo[209476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:27 compute-0 python3.9[209478]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:27 compute-0 sudo[209476]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:27 compute-0 sudo[209628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxksjgylnkyhjzldetkyrdlqnevmpiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406427.2071064-3040-13538045167979/AnsiballZ_copy.py'
Oct 02 12:00:27 compute-0 sudo[209628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:27 compute-0 python3.9[209630]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:27 compute-0 sudo[209628]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:28 compute-0 ceph-mon[73668]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:28 compute-0 sudo[209780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfithmouzxgowacmrmvysbcuvwbivpgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406427.9462388-3040-78016349076325/AnsiballZ_copy.py'
Oct 02 12:00:28 compute-0 sudo[209780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:00:28 compute-0 python3.9[209782]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:00:28 compute-0 sudo[209780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:00:28
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images']
Oct 02 12:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:00:28 compute-0 sudo[209946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npobfdspcnekhyhpbhxqhgbhiqhjsgyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406428.6576219-3040-198571024934431/AnsiballZ_copy.py'
Oct 02 12:00:28 compute-0 sudo[209946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:28 compute-0 podman[209907]: 2025-10-02 12:00:28.997421054 +0000 UTC m=+0.076945951 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:00:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:29.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:29 compute-0 python3.9[209954]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:29 compute-0 sudo[209946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:29 compute-0 sudo[210104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joaeqsssckyusenhkaouracjdnenqvld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406429.3997695-3040-189400104174977/AnsiballZ_copy.py'
Oct 02 12:00:29 compute-0 sudo[210104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:29 compute-0 python3.9[210106]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:29 compute-0 sudo[210104]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:30 compute-0 ceph-mon[73668]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:30 compute-0 sudo[210257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koteqqooegneophcfpssarpnwfpdhabo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406430.0961788-3148-251339956395105/AnsiballZ_systemd.py'
Oct 02 12:00:30 compute-0 sudo[210257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:30 compute-0 python3.9[210259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:30 compute-0 systemd[1]: Reloading.
Oct 02 12:00:30 compute-0 systemd-rc-local-generator[210286]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:30 compute-0 systemd-sysv-generator[210290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:31 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 02 12:00:31 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 02 12:00:31 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 02 12:00:31 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 02 12:00:31 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 02 12:00:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:00:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:31.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:00:31 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 02 12:00:31 compute-0 sudo[210257]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:31 compute-0 ceph-mon[73668]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:31 compute-0 sudo[210450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pirgotoclifrcsoilcjixchdwptfrijg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406431.4375398-3148-216740999440414/AnsiballZ_systemd.py'
Oct 02 12:00:31 compute-0 sudo[210450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:32 compute-0 python3.9[210452]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:32 compute-0 systemd[1]: Reloading.
Oct 02 12:00:32 compute-0 systemd-sysv-generator[210482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:32 compute-0 systemd-rc-local-generator[210478]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:32 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 02 12:00:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 02 12:00:32 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 02 12:00:32 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 02 12:00:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 02 12:00:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 02 12:00:32 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 12:00:32 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 12:00:32 compute-0 sudo[210450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:33 compute-0 sudo[210665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxfhjfzugcvqywtxlkhesorrxotrofzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406432.7362196-3148-108643278236331/AnsiballZ_systemd.py'
Oct 02 12:00:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:33 compute-0 sudo[210665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:33 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 02 12:00:33 compute-0 ceph-mon[73668]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:33 compute-0 python3.9[210668]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:33 compute-0 systemd[1]: Reloading.
Oct 02 12:00:33 compute-0 systemd-rc-local-generator[210696]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:33 compute-0 systemd-sysv-generator[210699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:33 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 02 12:00:33 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 02 12:00:33 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 02 12:00:33 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 02 12:00:33 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 02 12:00:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 12:00:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 12:00:33 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 02 12:00:33 compute-0 sudo[210665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:33 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 02 12:00:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:34 compute-0 sudo[210884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvsuptlchbklbkwgbutlzcetlnuypjte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406433.9773111-3148-269111520126114/AnsiballZ_systemd.py'
Oct 02 12:00:34 compute-0 sudo[210884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:34 compute-0 python3.9[210886]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:34 compute-0 systemd[1]: Reloading.
Oct 02 12:00:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:34 compute-0 systemd-rc-local-generator[210914]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:34 compute-0 systemd-sysv-generator[210917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:34 compute-0 setroubleshoot[210667]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f6cfe289-3b5b-4b0f-8d7b-4857fa862fc7
Oct 02 12:00:34 compute-0 setroubleshoot[210667]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 02 12:00:34 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 02 12:00:34 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 02 12:00:34 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 02 12:00:34 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 02 12:00:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 02 12:00:34 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 02 12:00:34 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 02 12:00:35 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 02 12:00:35 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 02 12:00:35 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 02 12:00:35 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 12:00:35 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 12:00:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:35 compute-0 sudo[210884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:35 compute-0 ceph-mon[73668]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:35 compute-0 sudo[211098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdneoufdqthsysfcxdwhnuttuorxdcee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406435.2742813-3148-151799527962149/AnsiballZ_systemd.py'
Oct 02 12:00:35 compute-0 sudo[211098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:35 compute-0 python3.9[211100]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:35 compute-0 systemd[1]: Reloading.
Oct 02 12:00:36 compute-0 systemd-sysv-generator[211126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:36 compute-0 systemd-rc-local-generator[211117]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:36 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 02 12:00:36 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 02 12:00:36 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 02 12:00:36 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 02 12:00:36 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 02 12:00:36 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 02 12:00:36 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 12:00:36 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 12:00:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:36.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:36 compute-0 sudo[211098]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:36 compute-0 sudo[211308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwxvpjcgqlqhgpllxjuxxnnfoxaairl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406436.6964338-3259-174349419657027/AnsiballZ_file.py'
Oct 02 12:00:36 compute-0 sudo[211308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:37 compute-0 python3.9[211310]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:37 compute-0 sudo[211308]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:37 compute-0 ceph-mon[73668]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:37 compute-0 sudo[211460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzeijkcbgmovaditxqbuwxyauhxssjpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406437.4869077-3283-195033316019203/AnsiballZ_find.py'
Oct 02 12:00:37 compute-0 sudo[211460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:38 compute-0 python3.9[211462]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:00:38 compute-0 sudo[211460]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:38 compute-0 sudo[211613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfnynadjkrpanorthkooljwcxwjyrsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406438.2170212-3307-262969718088098/AnsiballZ_command.py'
Oct 02 12:00:38 compute-0 sudo[211613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:38 compute-0 python3.9[211615]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:38 compute-0 sudo[211613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:39 compute-0 sudo[211665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:39 compute-0 sudo[211665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:39 compute-0 sudo[211665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:39 compute-0 sudo[211717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:39 compute-0 sudo[211717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:39 compute-0 sudo[211717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:39 compute-0 ceph-mon[73668]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:39 compute-0 python3.9[211819]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:00:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:00:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:00:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:00:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:40.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:00:40 compute-0 python3.9[211969]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:41 compute-0 python3.9[212091]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406439.9780185-3364-175647052407793/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9d9565ec21a9799171bafbb06d2141d5e5510d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:41 compute-0 ceph-mon[73668]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:41 compute-0 sudo[212241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvepxgszgknkxdhcrksqdbwjrfxberg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406441.2274249-3409-13905832301674/AnsiballZ_command.py'
Oct 02 12:00:41 compute-0 sudo[212241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:41 compute-0 python3.9[212243]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 20fdc58c-b037-5094-a8ef-d490aa7c36f3
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:41 compute-0 polkitd[6323]: Registered Authentication Agent for unix-process:212245:433727 (system bus name :1.3027 [/usr/bin/pkttyagent --process 212245 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:41 compute-0 polkitd[6323]: Unregistered Authentication Agent for unix-process:212245:433727 (system bus name :1.3027, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:41 compute-0 polkitd[6323]: Registered Authentication Agent for unix-process:212244:433726 (system bus name :1.3028 [/usr/bin/pkttyagent --process 212244 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:41 compute-0 polkitd[6323]: Unregistered Authentication Agent for unix-process:212244:433726 (system bus name :1.3028, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:41 compute-0 sudo[212241]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:42 compute-0 python3.9[212406]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 12:00:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 12:00:43 compute-0 sudo[212556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkpmwopbooahacmackdfuvsjqqmawbrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406442.9448156-3457-21105699795263/AnsiballZ_command.py'
Oct 02 12:00:43 compute-0 sudo[212556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:43 compute-0 sudo[212556]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:43 compute-0 ceph-mon[73668]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:43 compute-0 sudo[212709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaffzvdxhehjqzcnxhlofdqqwlriwhwo ; FSID=20fdc58c-b037-5094-a8ef-d490aa7c36f3 KEY=AQBLZd5oAAAAABAAIzZhCjE1jBJ2OFSOmUV6ug== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406443.69178-3481-32239713887768/AnsiballZ_command.py'
Oct 02 12:00:43 compute-0 sudo[212709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:44 compute-0 polkitd[6323]: Registered Authentication Agent for unix-process:212712:433969 (system bus name :1.3031 [/usr/bin/pkttyagent --process 212712 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:44 compute-0 polkitd[6323]: Unregistered Authentication Agent for unix-process:212712:433969 (system bus name :1.3031, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:44 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 02 12:00:44 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 02 12:00:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:45 compute-0 sudo[212709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:45 compute-0 ceph-mon[73668]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:45 compute-0 sudo[212868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irukijbglozrdzgqrpnvkbuisbmnpxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406445.4402041-3505-188287032188836/AnsiballZ_copy.py'
Oct 02 12:00:45 compute-0 sudo[212868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:45 compute-0 python3.9[212870]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:45 compute-0 sudo[212868]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:46 compute-0 sudo[213021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxzdwwmnjgyxtoticwjhrstgvfmhtrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406446.155458-3529-30535358418264/AnsiballZ_stat.py'
Oct 02 12:00:46 compute-0 sudo[213021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:46 compute-0 python3.9[213023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:46 compute-0 sudo[213021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:00:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:00:47 compute-0 sudo[213144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djvpqhuduafvcwtvbthuewurwmfkebrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406446.155458-3529-30535358418264/AnsiballZ_copy.py'
Oct 02 12:00:47 compute-0 sudo[213144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:47 compute-0 python3.9[213146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406446.155458-3529-30535358418264/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:47 compute-0 sudo[213144]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:47 compute-0 ceph-mon[73668]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:47 compute-0 sudo[213296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwbwublvmwgimhjdioynwsqkvvcgvspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406447.678952-3577-56248604984103/AnsiballZ_file.py'
Oct 02 12:00:47 compute-0 sudo[213296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:48 compute-0 python3.9[213298]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:48 compute-0 sudo[213296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:48.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:48 compute-0 sudo[213449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrhwvclhlddpmiwwpkugbtchhhhdjri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406448.5635128-3601-119276966460094/AnsiballZ_stat.py'
Oct 02 12:00:48 compute-0 sudo[213449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:49 compute-0 python3.9[213451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:49 compute-0 sudo[213449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:00:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:00:49 compute-0 sudo[213527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkunlalfedspkiewtpgdecdxeawozxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406448.5635128-3601-119276966460094/AnsiballZ_file.py'
Oct 02 12:00:49 compute-0 sudo[213527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:49 compute-0 python3.9[213529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:49 compute-0 sudo[213527]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:49 compute-0 ceph-mon[73668]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:50 compute-0 sudo[213679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojntpzjmbtshmctlzblfpskpzjfbyrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406449.7084932-3637-225552599536540/AnsiballZ_stat.py'
Oct 02 12:00:50 compute-0 sudo[213679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:50 compute-0 python3.9[213681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:50 compute-0 sudo[213679]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:50.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:50 compute-0 sudo[213758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljjhfruurxqiisdmcksgletkrumjuwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406449.7084932-3637-225552599536540/AnsiballZ_file.py'
Oct 02 12:00:50 compute-0 sudo[213758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:50 compute-0 python3.9[213760]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a8iozp1i recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:50 compute-0 sudo[213758]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:51 compute-0 sudo[213910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naspbijlbuezmfumdfbwbfgbzzfvmseb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406450.8967729-3673-179087342903524/AnsiballZ_stat.py'
Oct 02 12:00:51 compute-0 sudo[213910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:51 compute-0 python3.9[213912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:51 compute-0 sudo[213910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:51 compute-0 ceph-mon[73668]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:51 compute-0 sudo[214003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmecuzuoddvzpcxrmjqbyzirhvdxscl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406450.8967729-3673-179087342903524/AnsiballZ_file.py'
Oct 02 12:00:51 compute-0 sudo[214003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:51 compute-0 podman[213962]: 2025-10-02 12:00:51.871668116 +0000 UTC m=+0.092089658 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:00:52 compute-0 python3.9[214012]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:52 compute-0 sudo[214003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:52.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:52 compute-0 sudo[214167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcsmebhafmzjphtjzldabiyjisirxfmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406452.2874653-3712-197408097076883/AnsiballZ_command.py'
Oct 02 12:00:52 compute-0 sudo[214167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:52 compute-0 python3.9[214169]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:52 compute-0 sudo[214167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:52 compute-0 sudo[214195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:52 compute-0 sudo[214195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:52 compute-0 sudo[214195]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-0 sudo[214243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:53 compute-0 sudo[214243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-0 sudo[214243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:53 compute-0 sudo[214297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:53 compute-0 sudo[214297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-0 sudo[214297]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-0 sudo[214322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:00:53 compute-0 sudo[214322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-0 sudo[214431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjizqvijrgahpakwsapwpwbsodpocku ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406453.0091271-3736-118837863074574/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 12:00:53 compute-0 sudo[214431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:53 compute-0 python3[214435]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 12:00:53 compute-0 sudo[214431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-0 sudo[214322]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:00:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 494e6120-dcf6-42d7-8e96-2146d5d0c127 does not exist
Oct 02 12:00:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev dd87348c-d26a-4d5f-82d4-5733ddef5c8e does not exist
Oct 02 12:00:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7c182db6-61ca-419b-80fb-a38d0130c904 does not exist
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:00:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:53 compute-0 sudo[214477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:53 compute-0 sudo[214477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-0 sudo[214477]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:54 compute-0 sudo[214540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:54 compute-0 sudo[214540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:54 compute-0 sudo[214540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:54 compute-0 sudo[214579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:54 compute-0 sudo[214579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:54 compute-0 sudo[214579]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:54 compute-0 sudo[214621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:00:54 compute-0 sudo[214621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:54 compute-0 sudo[214702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkamubtdnxmnwgmjvjcjaaaokofxcww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406453.91489-3760-234223124271462/AnsiballZ_stat.py'
Oct 02 12:00:54 compute-0 sudo[214702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:54 compute-0 python3.9[214704]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:54 compute-0 sudo[214702]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.568203861 +0000 UTC m=+0.093144767 container create b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.496511949 +0000 UTC m=+0.021452875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:54 compute-0 systemd[1]: Started libpod-conmon-b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c.scope.
Oct 02 12:00:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.728302374 +0000 UTC m=+0.253243370 container init b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.741596813 +0000 UTC m=+0.266537749 container start b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.746496872 +0000 UTC m=+0.271437858 container attach b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:54 compute-0 exciting_shtern[214805]: 167 167
Oct 02 12:00:54 compute-0 systemd[1]: libpod-b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c.scope: Deactivated successfully.
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.751502833 +0000 UTC m=+0.276443779 container died b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:00:54 compute-0 sudo[214844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmyqbtvodbghopiwqsjmybhssgyfmdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406453.91489-3760-234223124271462/AnsiballZ_file.py'
Oct 02 12:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2d579a60116e42243ccf18d77fff22aef1dcb1344445b1df58914e0c51c32f8-merged.mount: Deactivated successfully.
Oct 02 12:00:54 compute-0 sudo[214844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:54 compute-0 podman[214748]: 2025-10-02 12:00:54.809854155 +0000 UTC m=+0.334795061 container remove b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:00:54 compute-0 systemd[1]: libpod-conmon-b397b789826db15b766fd8059dab0f2463de9ee6fa72437eb89226aec8e5178c.scope: Deactivated successfully.
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:00:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:55 compute-0 python3.9[214857]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:55 compute-0 podman[214865]: 2025-10-02 12:00:55.02372804 +0000 UTC m=+0.052681774 container create 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:00:55 compute-0 sudo[214844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:55 compute-0 systemd[1]: Started libpod-conmon-6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329.scope.
Oct 02 12:00:55 compute-0 podman[214865]: 2025-10-02 12:00:54.999236857 +0000 UTC m=+0.028190651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:55 compute-0 podman[214865]: 2025-10-02 12:00:55.186648838 +0000 UTC m=+0.215602672 container init 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:55 compute-0 podman[214865]: 2025-10-02 12:00:55.203253934 +0000 UTC m=+0.232207658 container start 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:00:55 compute-0 podman[214865]: 2025-10-02 12:00:55.207591937 +0000 UTC m=+0.236545711 container attach 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:00:55 compute-0 sudo[215036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riracdbhtceeqwhubmipqbweebachrhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406455.2771544-3796-65203999262979/AnsiballZ_stat.py'
Oct 02 12:00:55 compute-0 sudo[215036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:55 compute-0 ceph-mon[73668]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:56 compute-0 peaceful_aryabhata[214900]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:00:56 compute-0 python3.9[215039]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:56 compute-0 peaceful_aryabhata[214900]: --> relative data size: 1.0
Oct 02 12:00:56 compute-0 peaceful_aryabhata[214900]: --> All data devices are unavailable
Oct 02 12:00:56 compute-0 systemd[1]: libpod-6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329.scope: Deactivated successfully.
Oct 02 12:00:56 compute-0 podman[214865]: 2025-10-02 12:00:56.057293196 +0000 UTC m=+1.086247000 container died 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:00:56 compute-0 sudo[215036]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac407950f7d84fc42b540014bda662921cb94b58a754ce704f09f9556ef7e911-merged.mount: Deactivated successfully.
Oct 02 12:00:56 compute-0 podman[214865]: 2025-10-02 12:00:56.171172646 +0000 UTC m=+1.200126380 container remove 6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:00:56 compute-0 systemd[1]: libpod-conmon-6477332bc46e0c93e2c43003cf4dfa14db31e28d5fac5a4d7ca521aecc202329.scope: Deactivated successfully.
Oct 02 12:00:56 compute-0 sudo[214621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:56 compute-0 sudo[215112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:56 compute-0 sudo[215161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liirjqofyasxqpmguguqlriztfwpbpuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406455.2771544-3796-65203999262979/AnsiballZ_file.py'
Oct 02 12:00:56 compute-0 sudo[215112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:56 compute-0 sudo[215161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:56 compute-0 sudo[215112]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 sudo[215166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:56 compute-0 sudo[215166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:56 compute-0 sudo[215166]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:56 compute-0 sudo[215192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:56 compute-0 sudo[215192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:56 compute-0 sudo[215192]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 sudo[215217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:00:56 compute-0 sudo[215217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:56 compute-0 python3.9[215164]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:56 compute-0 sudo[215161]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-0 podman[215366]: 2025-10-02 12:00:56.930847779 +0000 UTC m=+0.060551100 container create 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:56 compute-0 systemd[1]: Started libpod-conmon-5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3.scope.
Oct 02 12:00:56 compute-0 podman[215366]: 2025-10-02 12:00:56.900203175 +0000 UTC m=+0.029906486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:57 compute-0 sudo[215450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppdiewtksslbgehkdmlacovvoabbjcdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406456.7186975-3832-76809142069248/AnsiballZ_stat.py'
Oct 02 12:00:57 compute-0 sudo[215450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:57 compute-0 podman[215366]: 2025-10-02 12:00:57.089494154 +0000 UTC m=+0.219197535 container init 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:57 compute-0 podman[215366]: 2025-10-02 12:00:57.105172756 +0000 UTC m=+0.234876047 container start 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:00:57 compute-0 podman[215366]: 2025-10-02 12:00:57.10837811 +0000 UTC m=+0.238081431 container attach 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:00:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:57 compute-0 brave_kowalevski[215421]: 167 167
Oct 02 12:00:57 compute-0 systemd[1]: libpod-5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3.scope: Deactivated successfully.
Oct 02 12:00:57 compute-0 conmon[215421]: conmon 5751eabed36e1ca3a868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3.scope/container/memory.events
Oct 02 12:00:57 compute-0 podman[215366]: 2025-10-02 12:00:57.115345573 +0000 UTC m=+0.245048874 container died 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7078bfabadf59aa5a1a433a66ec8d9c8ff0cab1c03a55f216e0f7ac46afe61-merged.mount: Deactivated successfully.
Oct 02 12:00:57 compute-0 podman[215366]: 2025-10-02 12:00:57.155320483 +0000 UTC m=+0.285023764 container remove 5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:00:57 compute-0 systemd[1]: libpod-conmon-5751eabed36e1ca3a868b7fe61c21f88dbae040a900512a7a2052af2e75e9da3.scope: Deactivated successfully.
Oct 02 12:00:57 compute-0 python3.9[215452]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:57 compute-0 podman[215474]: 2025-10-02 12:00:57.31938353 +0000 UTC m=+0.040734740 container create 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:00:57 compute-0 sudo[215450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:57 compute-0 systemd[1]: Started libpod-conmon-9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe.scope.
Oct 02 12:00:57 compute-0 podman[215474]: 2025-10-02 12:00:57.301623794 +0000 UTC m=+0.022975034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebea986bd355e914ad509c43a19dd5d35838156624054aef715974306fae962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebea986bd355e914ad509c43a19dd5d35838156624054aef715974306fae962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebea986bd355e914ad509c43a19dd5d35838156624054aef715974306fae962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebea986bd355e914ad509c43a19dd5d35838156624054aef715974306fae962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:57 compute-0 podman[215474]: 2025-10-02 12:00:57.458235715 +0000 UTC m=+0.179586935 container init 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:57 compute-0 podman[215474]: 2025-10-02 12:00:57.473672851 +0000 UTC m=+0.195024061 container start 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:57 compute-0 podman[215474]: 2025-10-02 12:00:57.476818893 +0000 UTC m=+0.198170103 container attach 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:00:57 compute-0 sudo[215571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbvmlwrityusyujqcbsnqrdhuikpbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406456.7186975-3832-76809142069248/AnsiballZ_file.py'
Oct 02 12:00:57 compute-0 sudo[215571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:57 compute-0 python3.9[215573]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:57 compute-0 sudo[215571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 ceph-mon[73668]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]: {
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:     "1": [
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:         {
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "devices": [
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "/dev/loop3"
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             ],
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "lv_name": "ceph_lv0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "lv_size": "7511998464",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "name": "ceph_lv0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "tags": {
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.cluster_name": "ceph",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.crush_device_class": "",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.encrypted": "0",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.osd_id": "1",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.type": "block",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:                 "ceph.vdo": "0"
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             },
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "type": "block",
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:             "vg_name": "ceph_vg0"
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:         }
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]:     ]
Oct 02 12:00:58 compute-0 xenodochial_sanderson[215497]: }
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:58 compute-0 systemd[1]: libpod-9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe.scope: Deactivated successfully.
Oct 02 12:00:58 compute-0 podman[215474]: 2025-10-02 12:00:58.313540431 +0000 UTC m=+1.034891681 container died 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:00:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ebea986bd355e914ad509c43a19dd5d35838156624054aef715974306fae962-merged.mount: Deactivated successfully.
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:58 compute-0 sudo[215738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sceplbynvqikbcwbjpiigqxlzmwydkyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406458.0609329-3868-28816871016505/AnsiballZ_stat.py'
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:00:58 compute-0 sudo[215738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:58 compute-0 podman[215474]: 2025-10-02 12:00:58.428091968 +0000 UTC m=+1.149443178 container remove 9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:58 compute-0 systemd[1]: libpod-conmon-9c0751d9f62ac88aaf5e133dc2bf27609fc92e69dba1051f58fc4d3ee0dd93fe.scope: Deactivated successfully.
Oct 02 12:00:58 compute-0 sudo[215217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 sudo[215744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:58 compute-0 sudo[215744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:58 compute-0 sudo[215744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 sudo[215769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:58 compute-0 python3.9[215740]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:58 compute-0 sudo[215769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:58 compute-0 sudo[215769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 sudo[215738]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 sudo[215795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:58 compute-0 sudo[215795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:58 compute-0 sudo[215795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-0 sudo[215831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:00:58 compute-0 sudo[215831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:58 compute-0 sudo[215919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktejaxvegksduqpewyjowqobchzsqycl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406458.0609329-3868-28816871016505/AnsiballZ_file.py'
Oct 02 12:00:58 compute-0 sudo[215919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:59 compute-0 python3.9[215921]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:00:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:59.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:59 compute-0 sudo[215919]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.222566517 +0000 UTC m=+0.064252028 container create 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:00:59 compute-0 systemd[1]: Started libpod-conmon-0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f.scope.
Oct 02 12:00:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:59 compute-0 sudo[215997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:59 compute-0 sudo[215997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:59 compute-0 sudo[215997]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.283582139 +0000 UTC m=+0.125267670 container init 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.196178014 +0000 UTC m=+0.037863575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.292460142 +0000 UTC m=+0.134145653 container start 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.295627495 +0000 UTC m=+0.137313026 container attach 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:00:59 compute-0 interesting_golick[216026]: 167 167
Oct 02 12:00:59 compute-0 systemd[1]: libpod-0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f.scope: Deactivated successfully.
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.298908211 +0000 UTC m=+0.140593722 container died 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:59 compute-0 podman[216018]: 2025-10-02 12:00:59.311202534 +0000 UTC m=+0.054878592 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:00:59 compute-0 sudo[216046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-24df37b2365fd225cfe4cc7758e3d2e9de0e7e5baadb8312976ea1dc88e37da5-merged.mount: Deactivated successfully.
Oct 02 12:00:59 compute-0 sudo[216046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:59 compute-0 sudo[216046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:59 compute-0 podman[215962]: 2025-10-02 12:00:59.354025868 +0000 UTC m=+0.195711379 container remove 0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:00:59 compute-0 systemd[1]: libpod-conmon-0a8e360af5923b164cb594e514fafa2b3fb3ca3bec88d88bd01dae7afd186e1f.scope: Deactivated successfully.
Oct 02 12:00:59 compute-0 podman[216145]: 2025-10-02 12:00:59.532763631 +0000 UTC m=+0.064816813 container create 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:00:59 compute-0 systemd[1]: Started libpod-conmon-11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391.scope.
Oct 02 12:00:59 compute-0 podman[216145]: 2025-10-02 12:00:59.50227646 +0000 UTC m=+0.034329712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:00:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b01a66e5590a3d54d86cd337b0d72d8e510630506f100356b45c9e50dabdddd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b01a66e5590a3d54d86cd337b0d72d8e510630506f100356b45c9e50dabdddd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b01a66e5590a3d54d86cd337b0d72d8e510630506f100356b45c9e50dabdddd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b01a66e5590a3d54d86cd337b0d72d8e510630506f100356b45c9e50dabdddd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:00:59 compute-0 podman[216145]: 2025-10-02 12:00:59.635056336 +0000 UTC m=+0.167109548 container init 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:00:59 compute-0 podman[216145]: 2025-10-02 12:00:59.646391454 +0000 UTC m=+0.178444616 container start 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:00:59 compute-0 podman[216145]: 2025-10-02 12:00:59.64966682 +0000 UTC m=+0.181720012 container attach 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:00:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:59 compute-0 sudo[216239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baiafhnobsjpimzoaijicttlhmikfppw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406459.315802-3904-194874733344661/AnsiballZ_stat.py'
Oct 02 12:00:59 compute-0 sudo[216239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:59 compute-0 python3.9[216241]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:59 compute-0 sudo[216239]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-0 ceph-mon[73668]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:00 compute-0 sudo[216370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqccfgdlygumuieynvhaelyhqwvnzeqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406459.315802-3904-194874733344661/AnsiballZ_copy.py'
Oct 02 12:01:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:00 compute-0 sudo[216370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:00 compute-0 clever_yalow[216201]: {
Oct 02 12:01:00 compute-0 clever_yalow[216201]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:01:00 compute-0 clever_yalow[216201]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:01:00 compute-0 clever_yalow[216201]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:01:00 compute-0 clever_yalow[216201]:         "osd_id": 1,
Oct 02 12:01:00 compute-0 clever_yalow[216201]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:01:00 compute-0 clever_yalow[216201]:         "type": "bluestore"
Oct 02 12:01:00 compute-0 clever_yalow[216201]:     }
Oct 02 12:01:00 compute-0 clever_yalow[216201]: }
Oct 02 12:01:00 compute-0 systemd[1]: libpod-11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391.scope: Deactivated successfully.
Oct 02 12:01:00 compute-0 conmon[216201]: conmon 11de7f85d8274481922c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391.scope/container/memory.events
Oct 02 12:01:00 compute-0 podman[216145]: 2025-10-02 12:01:00.574932291 +0000 UTC m=+1.106985513 container died 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b01a66e5590a3d54d86cd337b0d72d8e510630506f100356b45c9e50dabdddd-merged.mount: Deactivated successfully.
Oct 02 12:01:00 compute-0 python3.9[216375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406459.315802-3904-194874733344661/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:00 compute-0 podman[216145]: 2025-10-02 12:01:00.666430383 +0000 UTC m=+1.198483565 container remove 11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:01:00 compute-0 systemd[1]: libpod-conmon-11de7f85d8274481922c97c824ce4f71a5b3d0322b6cec85a6aeed443c63d391.scope: Deactivated successfully.
Oct 02 12:01:00 compute-0 sudo[216370]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-0 sudo[215831]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:01:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:01:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 57ca0f8f-b87a-4805-a112-e3bdef421b65 does not exist
Oct 02 12:01:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fcaabf5e-42f2-49ac-91ba-65997c5cafc2 does not exist
Oct 02 12:01:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 043fb643-79b7-41b4-af1f-74f89c570620 does not exist
Oct 02 12:01:00 compute-0 sudo[216417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:00 compute-0 sudo[216417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:00 compute-0 sudo[216417]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-0 sudo[216445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:01:00 compute-0 sudo[216445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:00 compute-0 sudo[216445]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:01 compute-0 sudo[216595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icootnekejopcfwdizvdcdztuqyblyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406460.8626266-3949-124782819513262/AnsiballZ_file.py'
Oct 02 12:01:01 compute-0 sudo[216595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:01 compute-0 python3.9[216597]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:01 compute-0 sudo[216595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:01 compute-0 CROND[216675]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-0 run-parts[216678]: (/etc/cron.hourly) starting 0anacron
Oct 02 12:01:01 compute-0 run-parts[216684]: (/etc/cron.hourly) finished 0anacron
Oct 02 12:01:01 compute-0 ceph-mon[73668]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:01 compute-0 CROND[216672]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-0 sudo[216758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnygjdoycbocgkhczdrfmlsyottdqrzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406461.6162803-3973-219207515977786/AnsiballZ_command.py'
Oct 02 12:01:01 compute-0 sudo[216758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:02 compute-0 python3.9[216760]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:02 compute-0 sudo[216758]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:02 compute-0 sudo[216914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcdkbmebuxpkzkwkeuqppqicmpcjpwno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406462.4136338-3997-224293087667647/AnsiballZ_blockinfile.py'
Oct 02 12:01:02 compute-0 sudo[216914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:03 compute-0 python3.9[216916]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:03 compute-0 sudo[216914]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:03.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:03 compute-0 sudo[217066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nliftvexfxrjkutaqsdyyqvpxxlrmowx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406463.3728297-4024-71657153331907/AnsiballZ_command.py'
Oct 02 12:01:03 compute-0 sudo[217066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:03 compute-0 ceph-mon[73668]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:03 compute-0 python3.9[217068]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:03 compute-0 sudo[217066]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:04.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:04 compute-0 sudo[217220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxuswzaxfehqmpzumkwuwgwzqdrepiky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406464.1065922-4048-55540922818022/AnsiballZ_stat.py'
Oct 02 12:01:04 compute-0 sudo[217220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:04 compute-0 python3.9[217222]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:04 compute-0 sudo[217220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:05 compute-0 sudo[217374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsbdqqkhohyzelwqepldwnkshnsxqybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406464.9806905-4072-71734613344620/AnsiballZ_command.py'
Oct 02 12:01:05 compute-0 sudo[217374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:05 compute-0 python3.9[217376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:05 compute-0 sudo[217374]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:05 compute-0 ceph-mon[73668]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:06 compute-0 sudo[217529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paocpfhiexmcefhtcfhspszsskmselua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406465.695547-4096-228044962180481/AnsiballZ_file.py'
Oct 02 12:01:06 compute-0 sudo[217529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:06 compute-0 python3.9[217531]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:06 compute-0 sudo[217529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:06 compute-0 sudo[217682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbprrauqlvkkcoilpkydptbebprvcdva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406466.4821122-4120-140660367228706/AnsiballZ_stat.py'
Oct 02 12:01:06 compute-0 sudo[217682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:07 compute-0 python3.9[217684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:07 compute-0 sudo[217682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:07.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:07 compute-0 sudo[217805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqenexfmimerpwknvdzlzkduvdgxwdpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406466.4821122-4120-140660367228706/AnsiballZ_copy.py'
Oct 02 12:01:07 compute-0 sudo[217805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:07 compute-0 python3.9[217807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406466.4821122-4120-140660367228706/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:07 compute-0 sudo[217805]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:07 compute-0 ceph-mon[73668]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:08 compute-0 sudo[217957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejevhltgrjawdflvdygrlnkolgvtdwiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406467.8993285-4165-193388864069834/AnsiballZ_stat.py'
Oct 02 12:01:08 compute-0 sudo[217957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:08 compute-0 python3.9[217959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:08 compute-0 sudo[217957]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:08.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:08 compute-0 sudo[218081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yinuleafscswrtybfowzrxmpdtbtrqox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406467.8993285-4165-193388864069834/AnsiballZ_copy.py'
Oct 02 12:01:08 compute-0 sudo[218081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:09 compute-0 python3.9[218083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406467.8993285-4165-193388864069834/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:09 compute-0 sudo[218081]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:09.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:09 compute-0 sudo[218233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgxumwrgkltrkjkqaamgxzhznaujgrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406469.2605424-4210-63814802280606/AnsiballZ_stat.py'
Oct 02 12:01:09 compute-0 sudo[218233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:09 compute-0 ceph-mon[73668]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:09 compute-0 python3.9[218235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:09 compute-0 sudo[218233]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:10 compute-0 sudo[218356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbsqjbfhowdybbekpcuezlmzgpvkincn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406469.2605424-4210-63814802280606/AnsiballZ_copy.py'
Oct 02 12:01:10 compute-0 sudo[218356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:10.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:10 compute-0 python3.9[218358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406469.2605424-4210-63814802280606/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:10 compute-0 sudo[218356]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:11 compute-0 sudo[218509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlhawmzsstbeqydwpuohxryxtmqgwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406470.740615-4255-182467379419464/AnsiballZ_systemd.py'
Oct 02 12:01:11 compute-0 sudo[218509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:11.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:11 compute-0 python3.9[218511]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:11 compute-0 systemd[1]: Reloading.
Oct 02 12:01:11 compute-0 systemd-rc-local-generator[218534]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:11 compute-0 systemd-sysv-generator[218539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:11 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 02 12:01:11 compute-0 ceph-mon[73668]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:11 compute-0 sudo[218509]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:12 compute-0 sudo[218700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lupzrkwpeiwmrqbdulumeckduezebfpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406472.0367706-4279-149658721132064/AnsiballZ_systemd.py'
Oct 02 12:01:12 compute-0 sudo[218700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:12 compute-0 python3.9[218702]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 12:01:12 compute-0 systemd[1]: Reloading.
Oct 02 12:01:12 compute-0 systemd-rc-local-generator[218724]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:12 compute-0 systemd-sysv-generator[218734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:13.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:13 compute-0 systemd[1]: Reloading.
Oct 02 12:01:13 compute-0 systemd-rc-local-generator[218767]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:13 compute-0 systemd-sysv-generator[218770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:13 compute-0 sudo[218700]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:13 compute-0 ceph-mon[73668]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:13 compute-0 sshd-session[158311]: Connection closed by 192.168.122.30 port 43406
Oct 02 12:01:13 compute-0 sshd-session[158308]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:01:13 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 02 12:01:13 compute-0 systemd[1]: session-50.scope: Consumed 3min 50.900s CPU time.
Oct 02 12:01:13 compute-0 systemd-logind[820]: Session 50 logged out. Waiting for processes to exit.
Oct 02 12:01:13 compute-0 systemd-logind[820]: Removed session 50.
Oct 02 12:01:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:15.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:15 compute-0 ceph-mon[73668]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:16.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:17 compute-0 ceph-mon[73668]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:18.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:18 compute-0 sshd-session[218804]: Accepted publickey for zuul from 192.168.122.30 port 40528 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 12:01:18 compute-0 systemd-logind[820]: New session 51 of user zuul.
Oct 02 12:01:18 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 02 12:01:18 compute-0 sshd-session[218804]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 12:01:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:19 compute-0 sudo[218905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:19 compute-0 sudo[218905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:19 compute-0 sudo[218905]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:19 compute-0 sudo[218937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:19 compute-0 sudo[218937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:19 compute-0 sudo[218937]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:19 compute-0 python3.9[219007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 12:01:19 compute-0 ceph-mon[73668]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:21.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:21 compute-0 sudo[219162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsxwscnwgphkjvzkwxnhfntdiptbvtvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406480.5845435-67-251285296600014/AnsiballZ_file.py'
Oct 02 12:01:21 compute-0 sudo[219162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:21 compute-0 python3.9[219164]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:21 compute-0 sudo[219162]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:21 compute-0 sudo[219314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwpxswczlsfibgkfffbidtptutrzdxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406481.5568287-67-203550914253069/AnsiballZ_file.py'
Oct 02 12:01:21 compute-0 sudo[219314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:21 compute-0 ceph-mon[73668]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:22 compute-0 python3.9[219316]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:22 compute-0 sudo[219314]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:22 compute-0 podman[219393]: 2025-10-02 12:01:22.463673559 +0000 UTC m=+0.133401914 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:01:22 compute-0 sudo[219493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txngbuvskgfmjusbypqrqpngcygeowgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406482.211498-67-30886058647424/AnsiballZ_file.py'
Oct 02 12:01:22 compute-0 sudo[219493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:22 compute-0 python3.9[219495]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:22 compute-0 sudo[219493]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:23.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:23 compute-0 sudo[219645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtroezliukuldtpxzgkrsiyjlghzkpbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406482.9088895-67-83843580933134/AnsiballZ_file.py'
Oct 02 12:01:23 compute-0 sudo[219645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:23 compute-0 python3.9[219647]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:01:23 compute-0 sudo[219645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:23 compute-0 sudo[219797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbdobztijqnmuaddhvokrhvoiijuwyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406483.588836-67-133206039612735/AnsiballZ_file.py'
Oct 02 12:01:23 compute-0 sudo[219797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:23 compute-0 ceph-mon[73668]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:24 compute-0 python3.9[219799]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:24 compute-0 sudo[219797]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:24 compute-0 sudo[219950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzkwtthithefpoolnzgaqhezsiisjewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406484.2628522-175-101220326566642/AnsiballZ_stat.py'
Oct 02 12:01:24 compute-0 sudo[219950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:25 compute-0 python3.9[219952]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:25 compute-0 sudo[219950]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:25.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:25 compute-0 sudo[220104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heisvvjtsvoggaotnibdesncehvobyuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406485.2429235-199-271376893237554/AnsiballZ_systemd.py'
Oct 02 12:01:25 compute-0 sudo[220104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:26 compute-0 ceph-mon[73668]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:26 compute-0 python3.9[220106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:26 compute-0 systemd[1]: Reloading.
Oct 02 12:01:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:26 compute-0 systemd-rc-local-generator[220138]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:26 compute-0 systemd-sysv-generator[220142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:01:26.436 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:01:26.437 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:01:26.437 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:26 compute-0 sudo[220104]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:27.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:27 compute-0 sudo[220294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbllktudzvrkgogdndburhoofxkhksf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406486.8240225-223-16298604252458/AnsiballZ_service_facts.py'
Oct 02 12:01:27 compute-0 sudo[220294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:27 compute-0 python3.9[220296]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:01:27 compute-0 network[220313]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:01:27 compute-0 network[220314]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:01:27 compute-0 network[220315]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:01:28 compute-0 ceph-mon[73668]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:28.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:01:28
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Oct 02 12:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:01:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:29.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:29 compute-0 podman[220366]: 2025-10-02 12:01:29.417862843 +0000 UTC m=+0.060314114 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:01:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:30 compute-0 ceph-mon[73668]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:31 compute-0 sudo[220294]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:31 compute-0 sudo[220608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjqflxkemodxihhumabozeujfadnhij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406491.353578-247-76820406526896/AnsiballZ_systemd.py'
Oct 02 12:01:31 compute-0 sudo[220608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:31 compute-0 python3.9[220610]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:32 compute-0 systemd[1]: Reloading.
Oct 02 12:01:32 compute-0 systemd-rc-local-generator[220639]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:32 compute-0 systemd-sysv-generator[220643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:32 compute-0 ceph-mon[73668]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:32 compute-0 sudo[220608]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:33.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:33 compute-0 python3.9[220798]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:34 compute-0 sudo[220948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woietqmtwfemtvrrkbfnnwbeoxoynvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406493.434951-298-124016260387525/AnsiballZ_podman_container.py'
Oct 02 12:01:34 compute-0 sudo[220948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:34 compute-0 ceph-mon[73668]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:34 compute-0 python3.9[220950]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:01:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:34 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:01:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:34 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:01:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:35 compute-0 ceph-mon[73668]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:35 compute-0 podman[220963]: 2025-10-02 12:01:35.561508432 +0000 UTC m=+1.163329158 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:01:35 compute-0 podman[221024]: 2025-10-02 12:01:35.723410706 +0000 UTC m=+0.059950984 container create ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.7516] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 02 12:01:35 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 12:01:35 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:35 compute-0 kernel: veth0: entered allmulticast mode
Oct 02 12:01:35 compute-0 kernel: veth0: entered promiscuous mode
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.7736] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 02 12:01:35 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 12:01:35 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.7765] device (veth0): carrier: link connected
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.7768] device (podman0): carrier: link connected
Oct 02 12:01:35 compute-0 systemd-udevd[221043]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:35 compute-0 systemd-udevd[221047]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:35 compute-0 podman[221024]: 2025-10-02 12:01:35.693068275 +0000 UTC m=+0.029608653 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8070] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8083] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8094] device (podman0): Activation: starting connection 'podman0' (d8286dd7-088b-4e25-9e6f-ed2fe59ede1e)
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8098] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8124] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8127] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8130] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 12:01:35 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8643] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8646] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-0 NetworkManager[44981]: <info>  [1759406495.8654] device (podman0): Activation: successful, device activated.
Oct 02 12:01:35 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 02 12:01:36 compute-0 systemd[1]: Started libpod-conmon-ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514.scope.
Oct 02 12:01:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:01:36 compute-0 podman[221024]: 2025-10-02 12:01:36.179501023 +0000 UTC m=+0.516041331 container init ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:01:36 compute-0 podman[221024]: 2025-10-02 12:01:36.190954706 +0000 UTC m=+0.527494984 container start ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:01:36 compute-0 podman[221024]: 2025-10-02 12:01:36.194229932 +0000 UTC m=+0.530770210 container attach ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:01:36 compute-0 iscsid_config[221180]: iqn.1994-05.com.redhat:b6d21f9028d8
Oct 02 12:01:36 compute-0 systemd[1]: libpod-ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514.scope: Deactivated successfully.
Oct 02 12:01:36 compute-0 podman[221024]: 2025-10-02 12:01:36.201036412 +0000 UTC m=+0.537576720 container died ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:01:36 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:36 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 02 12:01:36 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 02 12:01:36 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:36 compute-0 NetworkManager[44981]: <info>  [1759406496.2787] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:01:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:36.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:36 compute-0 systemd[1]: run-netns-netns\x2dcd1841b1\x2dd1fa\x2d8328\x2de3ee\x2d63915a53172d.mount: Deactivated successfully.
Oct 02 12:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514-userdata-shm.mount: Deactivated successfully.
Oct 02 12:01:36 compute-0 podman[221024]: 2025-10-02 12:01:36.686899056 +0000 UTC m=+1.023439374 container remove ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:01:36 compute-0 python3.9[220950]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 02 12:01:36 compute-0 systemd[1]: libpod-conmon-ceb9f4afc6657b8584bb8862cdb4b32cac587bcec66b2d209764208a6b02d514.scope: Deactivated successfully.
Oct 02 12:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd212e8cacd71432d70884d203c45e8b66e7e655a83b6f9fab36276d01c86bd-merged.mount: Deactivated successfully.
Oct 02 12:01:36 compute-0 python3.9[220950]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 02 12:01:36 compute-0 sudo[220948]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:37.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:37 compute-0 ceph-mon[73668]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:38.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:38 compute-0 sudo[221425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igvkwlvemdfrallgapeqqdrnayiktouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406498.206219-322-191504083970353/AnsiballZ_stat.py'
Oct 02 12:01:38 compute-0 sudo[221425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:38 compute-0 python3.9[221427]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:38 compute-0 sudo[221425]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:39.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:39 compute-0 sudo[221548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrkzaimyjgyffcvmtmncamspisecpjoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406498.206219-322-191504083970353/AnsiballZ_copy.py'
Oct 02 12:01:39 compute-0 sudo[221548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:39 compute-0 python3.9[221550]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406498.206219-322-191504083970353/.source.iscsi _original_basename=.s_uhtj5f follow=False checksum=b69ac513a3e54e9e70400c5073642e287b5071a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:39 compute-0 ceph-mon[73668]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:39 compute-0 sudo[221548]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-0 sudo[221558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:39 compute-0 sudo[221558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:39 compute-0 sudo[221558]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-0 sudo[221600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:39 compute-0 sudo[221600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:39 compute-0 sudo[221600]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:01:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:01:40 compute-0 sudo[221750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njndjfzeimdengjubboonxrlkupdzicl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406499.6562161-367-69187590353766/AnsiballZ_file.py'
Oct 02 12:01:40 compute-0 sudo[221750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:40 compute-0 python3.9[221752]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:40 compute-0 sudo[221750]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:40 compute-0 python3.9[221903]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:41.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:41 compute-0 ceph-mon[73668]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:41 compute-0 sudo[222055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbathcpmyeheapihvrsctdakodztvopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406501.2243156-418-219933986790372/AnsiballZ_lineinfile.py'
Oct 02 12:01:41 compute-0 sudo[222055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:41 compute-0 python3.9[222057]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:41 compute-0 sudo[222055]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:42 compute-0 sudo[222208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlmbruecphxfsjeehggbdiptilbbcehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406502.2730358-445-188151728905454/AnsiballZ_file.py'
Oct 02 12:01:42 compute-0 sudo[222208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:42 compute-0 python3.9[222210]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:42 compute-0 sudo[222208]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:43.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:43 compute-0 sudo[222360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzhvrqqutjypawhnsswarhhxedqoibox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406503.0787024-469-89680734672360/AnsiballZ_stat.py'
Oct 02 12:01:43 compute-0 sudo[222360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:43 compute-0 ceph-mon[73668]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:43 compute-0 python3.9[222362]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:43 compute-0 sudo[222360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:44 compute-0 sudo[222438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tooavdawhtrhgcivgpfglvsxjbrgbhvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406503.0787024-469-89680734672360/AnsiballZ_file.py'
Oct 02 12:01:44 compute-0 sudo[222438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:44 compute-0 python3.9[222440]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:44 compute-0 sudo[222438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.685063) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504685194, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1406, "num_deletes": 501, "total_data_size": 1989504, "memory_usage": 2028312, "flush_reason": "Manual Compaction"}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504696720, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1162013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14241, "largest_seqno": 15646, "table_properties": {"data_size": 1157196, "index_size": 1765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14875, "raw_average_key_size": 19, "raw_value_size": 1144903, "raw_average_value_size": 1465, "num_data_blocks": 81, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406390, "oldest_key_time": 1759406390, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11691 microseconds, and 6352 cpu microseconds.
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.696769) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1162013 bytes OK
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.696789) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.698452) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.698465) EVENT_LOG_v1 {"time_micros": 1759406504698461, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.698485) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1982437, prev total WAL file size 1982437, number of live WAL files 2.
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.699192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1134KB)], [32(10MB)]
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504699244, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12690262, "oldest_snapshot_seqno": -1}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4147 keys, 7897058 bytes, temperature: kUnknown
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504742746, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7897058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7868096, "index_size": 17524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 102544, "raw_average_key_size": 24, "raw_value_size": 7791784, "raw_average_value_size": 1878, "num_data_blocks": 739, "num_entries": 4147, "num_filter_entries": 4147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.742994) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7897058 bytes
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.745006) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 291.1 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(17.7) write-amplify(6.8) OK, records in: 5111, records dropped: 964 output_compression: NoCompression
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.745027) EVENT_LOG_v1 {"time_micros": 1759406504745017, "job": 14, "event": "compaction_finished", "compaction_time_micros": 43588, "compaction_time_cpu_micros": 23645, "output_level": 6, "num_output_files": 1, "total_output_size": 7897058, "num_input_records": 5111, "num_output_records": 4147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504745415, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504748139, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.699079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.748178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.748183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.748186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.748188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:01:44.748190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-0 sudo[222591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwxfqcsnczomzgjaajgysipcvkyxqrdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406504.4422405-469-205708500672688/AnsiballZ_stat.py'
Oct 02 12:01:44 compute-0 sudo[222591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:44 compute-0 python3.9[222593]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:45 compute-0 sudo[222591]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:45.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:45 compute-0 sudo[222669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllddwrrxwvatfrazhscfmdfkyihjlyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406504.4422405-469-205708500672688/AnsiballZ_file.py'
Oct 02 12:01:45 compute-0 sudo[222669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:45 compute-0 python3.9[222671]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:45 compute-0 sudo[222669]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:45 compute-0 ceph-mon[73668]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:46 compute-0 sudo[222821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jevmaxqggyjuldlmyaiimtvukaoqulic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406505.8413188-538-28432815862419/AnsiballZ_file.py'
Oct 02 12:01:46 compute-0 sudo[222821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:46 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 12:01:46 compute-0 python3.9[222823]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:46 compute-0 sudo[222821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:46.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:47 compute-0 sudo[222974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuhbahomxkowzhozlpjvnzgvzuprssyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406506.6622813-562-273922620934885/AnsiballZ_stat.py'
Oct 02 12:01:47 compute-0 sudo[222974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:47.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:47 compute-0 python3.9[222976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:47 compute-0 sudo[222974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:47 compute-0 sudo[223052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnveigiebsmqrbfucviqchicnaiwwtzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406506.6622813-562-273922620934885/AnsiballZ_file.py'
Oct 02 12:01:47 compute-0 sudo[223052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:47 compute-0 ceph-mon[73668]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:47 compute-0 python3.9[223054]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:47 compute-0 sudo[223052]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:48 compute-0 sudo[223205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxufwvzagclszowtkbupoyqrsoksgpkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406508.073494-598-67827272566600/AnsiballZ_stat.py'
Oct 02 12:01:48 compute-0 sudo[223205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:48 compute-0 python3.9[223207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:48 compute-0 sudo[223205]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:48 compute-0 sudo[223283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhnwzkzpypwncxkzznqvkwuibgmfsuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406508.073494-598-67827272566600/AnsiballZ_file.py'
Oct 02 12:01:48 compute-0 sudo[223283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:49 compute-0 python3.9[223285]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:49 compute-0 sudo[223283]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:49 compute-0 ceph-mon[73668]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:49 compute-0 sudo[223435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeydgjmmwmmfecuwbkekzvhwoempiszx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406509.3555784-634-33696551710436/AnsiballZ_systemd.py'
Oct 02 12:01:49 compute-0 sudo[223435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:50 compute-0 python3.9[223437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:50 compute-0 systemd[1]: Reloading.
Oct 02 12:01:50 compute-0 systemd-rc-local-generator[223458]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:50 compute-0 systemd-sysv-generator[223464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:50.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:50 compute-0 sudo[223435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:51 compute-0 sudo[223625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdnwjwaxisnjwrrbbxosydqkyvzspdkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406510.7466404-658-150093653464352/AnsiballZ_stat.py'
Oct 02 12:01:51 compute-0 sudo[223625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:51.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:51 compute-0 python3.9[223627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:51 compute-0 sudo[223625]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:51 compute-0 sudo[223703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-palnrnkqnrsoyfptuedwdkaipuiyrktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406510.7466404-658-150093653464352/AnsiballZ_file.py'
Oct 02 12:01:51 compute-0 sudo[223703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:51 compute-0 python3.9[223705]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:51 compute-0 sudo[223703]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:51 compute-0 ceph-mon[73668]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:52 compute-0 sudo[223856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusejqvumyibzprnquzimcoznbnhspcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406512.0074549-694-163501734921986/AnsiballZ_stat.py'
Oct 02 12:01:52 compute-0 sudo[223856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:52 compute-0 python3.9[223858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:52 compute-0 sudo[223856]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:52 compute-0 sudo[223945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thpdmfvvmtqwxllzacwmetqsuxzhqeeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406512.0074549-694-163501734921986/AnsiballZ_file.py'
Oct 02 12:01:52 compute-0 sudo[223945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:52 compute-0 podman[223908]: 2025-10-02 12:01:52.989353044 +0000 UTC m=+0.141475665 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 02 12:01:53 compute-0 python3.9[223951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:53 compute-0 sudo[223945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:53.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:53 compute-0 sudo[224113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsedebotuynqprlxnoqwwhvyddflyyad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406513.2990723-730-265770042670720/AnsiballZ_systemd.py'
Oct 02 12:01:53 compute-0 sudo[224113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:53 compute-0 ceph-mon[73668]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:53 compute-0 python3.9[224115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:53 compute-0 systemd[1]: Reloading.
Oct 02 12:01:54 compute-0 systemd-rc-local-generator[224144]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:54 compute-0 systemd-sysv-generator[224148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:54 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 12:01:54 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 12:01:54 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 12:01:54 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 12:01:54 compute-0 sudo[224113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:54.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:55 compute-0 sudo[224308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnyfrbxyztfpcaavbzunfmszdnnmxgqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406514.7779644-760-94162881087681/AnsiballZ_file.py'
Oct 02 12:01:55 compute-0 sudo[224308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:55.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:55 compute-0 python3.9[224310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:55 compute-0 sudo[224308]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:55 compute-0 sudo[224460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qknhoavfuaunydgccvrpqlulfjxqgesg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406515.534179-784-66697346731706/AnsiballZ_stat.py'
Oct 02 12:01:55 compute-0 sudo[224460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:55 compute-0 ceph-mon[73668]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:56 compute-0 python3.9[224462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:56 compute-0 sudo[224460]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:56 compute-0 sudo[224584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgrnkecpbmrgjkicexcekxzsmywjgniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406515.534179-784-66697346731706/AnsiballZ_copy.py'
Oct 02 12:01:56 compute-0 sudo[224584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:56 compute-0 python3.9[224586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406515.534179-784-66697346731706/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:56 compute-0 sudo[224584]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:01:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:57.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:01:57 compute-0 sudo[224736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buehhlofcikbwunelvkdjzaqfgykzwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.076253-835-226327641876376/AnsiballZ_file.py'
Oct 02 12:01:57 compute-0 sudo[224736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:57 compute-0 python3.9[224738]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:57 compute-0 sudo[224736]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:57 compute-0 ceph-mon[73668]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:58 compute-0 sudo[224888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdlaornxkscssltcyifdqwyixzbkgsna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.9025862-859-47561365025035/AnsiballZ_stat.py'
Oct 02 12:01:58 compute-0 sudo[224888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:01:58 compute-0 python3.9[224890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:58 compute-0 sudo[224888]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:01:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:01:58 compute-0 sudo[225012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjrcktrcrajjclvanzinrgovnayyvrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.9025862-859-47561365025035/AnsiballZ_copy.py'
Oct 02 12:01:58 compute-0 sudo[225012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:58 compute-0 python3.9[225014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406517.9025862-859-47561365025035/.source.json _original_basename=.26apg5aq follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:59 compute-0 sudo[225012]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:01:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:59.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:59 compute-0 sudo[225176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqblncqparonytktqqzslgtuytogaadw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406519.2062147-904-89398929463059/AnsiballZ_file.py'
Oct 02 12:01:59 compute-0 sudo[225176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:59 compute-0 podman[225138]: 2025-10-02 12:01:59.556511625 +0000 UTC m=+0.073749948 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:01:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:59 compute-0 sudo[225185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:59 compute-0 sudo[225185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:59 compute-0 sudo[225185]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-0 python3.9[225182]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:59 compute-0 sudo[225176]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-0 sudo[225210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:59 compute-0 sudo[225210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:59 compute-0 sudo[225210]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-0 ceph-mon[73668]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:00 compute-0 sudo[225384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqukknmagjsslauivfdocpahlpwnrhbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406519.9536748-928-18247621851734/AnsiballZ_stat.py'
Oct 02 12:02:00 compute-0 sudo[225384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:00 compute-0 sudo[225384]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:00 compute-0 sudo[225508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhijhslwaxndiwysrmaklxvlxkmjabjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406519.9536748-928-18247621851734/AnsiballZ_copy.py'
Oct 02 12:02:00 compute-0 sudo[225508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:01 compute-0 sudo[225508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:01 compute-0 sudo[225535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:01 compute-0 sudo[225535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-0 sudo[225535]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-0 sudo[225560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:01 compute-0 sudo[225560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-0 sudo[225560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-0 sudo[225585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:01 compute-0 sudo[225585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-0 sudo[225585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-0 sudo[225610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:02:01 compute-0 sudo[225610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-0 sudo[225610]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-0 sudo[225790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfdsdgslueotfoimhotvurplbqyyoee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406521.396392-979-75784523668109/AnsiballZ_container_config_data.py'
Oct 02 12:02:01 compute-0 sudo[225790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ea2877da-0bd0-45cf-a667-e38a79dba555 does not exist
Oct 02 12:02:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 649244a5-6136-47aa-869c-91beb156efd7 does not exist
Oct 02 12:02:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2d5e13f2-bbe1-4169-a28c-436a2b9c99b3 does not exist
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:02:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:02:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:02 compute-0 sudo[225793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:02 compute-0 sudo[225793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:02 compute-0 sudo[225793]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:02 compute-0 python3.9[225792]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 02 12:02:02 compute-0 sudo[225790]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:02 compute-0 sudo[225818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:02 compute-0 sudo[225818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:02 compute-0 sudo[225818]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:02 compute-0 sudo[225867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:02 compute-0 sudo[225867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:02 compute-0 sudo[225867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:02 compute-0 sudo[225892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:02:02 compute-0 sudo[225892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.607245961 +0000 UTC m=+0.055448785 container create 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:02 compute-0 systemd[1]: Started libpod-conmon-63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a.scope.
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.57955897 +0000 UTC m=+0.027761844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.73640766 +0000 UTC m=+0.184610494 container init 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.745184642 +0000 UTC m=+0.193387426 container start 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.748895459 +0000 UTC m=+0.197098283 container attach 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:02:02 compute-0 naughty_hodgkin[226073]: 167 167
Oct 02 12:02:02 compute-0 systemd[1]: libpod-63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a.scope: Deactivated successfully.
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.753886401 +0000 UTC m=+0.202089185 container died 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:02:02 compute-0 sudo[226103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdpajgktzljqxfyklkqaqilznkoxealu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406522.2501357-1006-87200055346198/AnsiballZ_container_config_hash.py'
Oct 02 12:02:02 compute-0 sudo[226103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-276d389dc201eb27352cb94a20464207266dfd9211ae5de6d2379b03e4e1c3e5-merged.mount: Deactivated successfully.
Oct 02 12:02:02 compute-0 podman[226011]: 2025-10-02 12:02:02.805877024 +0000 UTC m=+0.254079848 container remove 63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:02:02 compute-0 systemd[1]: libpod-conmon-63372a244e8a1340c642ea782d89781482e1f026ea131877c988258793a0e68a.scope: Deactivated successfully.
Oct 02 12:02:03 compute-0 podman[226125]: 2025-10-02 12:02:03.007991638 +0000 UTC m=+0.053787520 container create 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:02:03 compute-0 python3.9[226108]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:02:03 compute-0 sudo[226103]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:03 compute-0 systemd[1]: Started libpod-conmon-5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4.scope.
Oct 02 12:02:03 compute-0 podman[226125]: 2025-10-02 12:02:02.978315275 +0000 UTC m=+0.024111077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:03 compute-0 podman[226125]: 2025-10-02 12:02:03.122498801 +0000 UTC m=+0.168294563 container init 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:03 compute-0 podman[226125]: 2025-10-02 12:02:03.132204817 +0000 UTC m=+0.178000579 container start 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:02:03 compute-0 podman[226125]: 2025-10-02 12:02:03.135605707 +0000 UTC m=+0.181401469 container attach 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:02:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:03.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:03 compute-0 sudo[226295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhdhutrwxjolrefmsbccoeaevsujcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406523.3219042-1033-145233815404447/AnsiballZ_podman_container_info.py'
Oct 02 12:02:03 compute-0 sudo[226295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:03 compute-0 gifted_swirles[226141]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:02:03 compute-0 gifted_swirles[226141]: --> relative data size: 1.0
Oct 02 12:02:03 compute-0 gifted_swirles[226141]: --> All data devices are unavailable
Oct 02 12:02:03 compute-0 python3.9[226298]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 12:02:04 compute-0 ceph-mon[73668]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:04 compute-0 systemd[1]: libpod-5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4.scope: Deactivated successfully.
Oct 02 12:02:04 compute-0 podman[226125]: 2025-10-02 12:02:04.021330196 +0000 UTC m=+1.067125948 container died 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 02 12:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e26360b7931470149bc7da9fa9f81bef7b4c270a6f01c2019c1fe5081e23624d-merged.mount: Deactivated successfully.
Oct 02 12:02:04 compute-0 podman[226125]: 2025-10-02 12:02:04.093531772 +0000 UTC m=+1.139327534 container remove 5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:02:04 compute-0 systemd[1]: libpod-conmon-5e3e2c67adb64ae0b1e22a367b732e44d9e36df33b973c701bce3ab1601b8fa4.scope: Deactivated successfully.
Oct 02 12:02:04 compute-0 sudo[225892]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:04 compute-0 sudo[226344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:04 compute-0 sudo[226344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:04 compute-0 sudo[226344]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:04 compute-0 sudo[226381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:04 compute-0 sudo[226381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:04 compute-0 sudo[226381]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:04 compute-0 sudo[226418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:04 compute-0 sudo[226418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:04 compute-0 sudo[226418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:04 compute-0 sudo[226457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:02:04 compute-0 sudo[226457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:04.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.766940716 +0000 UTC m=+0.048209483 container create edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:02:04 compute-0 systemd[1]: Started libpod-conmon-edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c.scope.
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.74021794 +0000 UTC m=+0.021486737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:04 compute-0 sudo[226295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.876895598 +0000 UTC m=+0.158164465 container init edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.887744705 +0000 UTC m=+0.169013482 container start edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.891804792 +0000 UTC m=+0.173073609 container attach edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:02:04 compute-0 systemd[1]: libpod-edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c.scope: Deactivated successfully.
Oct 02 12:02:04 compute-0 jolly_napier[226645]: 167 167
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.894665647 +0000 UTC m=+0.175934424 container died edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:04 compute-0 conmon[226645]: conmon edd81679e80595137021 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c.scope/container/memory.events
Oct 02 12:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a29db65475198ea5ad12201a43c92c98e8e89a10af14ddb4751f67b800bafec-merged.mount: Deactivated successfully.
Oct 02 12:02:04 compute-0 podman[226606]: 2025-10-02 12:02:04.935875205 +0000 UTC m=+0.217143982 container remove edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_napier, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:02:04 compute-0 systemd[1]: libpod-conmon-edd81679e8059513702110e55e00d9feaeccaf585dd5c1e2396fe0a419deb95c.scope: Deactivated successfully.
Oct 02 12:02:05 compute-0 podman[226693]: 2025-10-02 12:02:05.164177871 +0000 UTC m=+0.065519840 container create 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:02:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:05.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:05 compute-0 systemd[1]: Started libpod-conmon-41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9.scope.
Oct 02 12:02:05 compute-0 podman[226693]: 2025-10-02 12:02:05.139272344 +0000 UTC m=+0.040614363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bafac1988221235314930df859e2ec6af73e608409bc1913795464cd6027ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bafac1988221235314930df859e2ec6af73e608409bc1913795464cd6027ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bafac1988221235314930df859e2ec6af73e608409bc1913795464cd6027ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bafac1988221235314930df859e2ec6af73e608409bc1913795464cd6027ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:05 compute-0 podman[226693]: 2025-10-02 12:02:05.278483078 +0000 UTC m=+0.179825077 container init 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:02:05 compute-0 podman[226693]: 2025-10-02 12:02:05.291958084 +0000 UTC m=+0.193300053 container start 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:02:05 compute-0 podman[226693]: 2025-10-02 12:02:05.297091599 +0000 UTC m=+0.198433568 container attach 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:02:06 compute-0 ceph-mon[73668]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:06 compute-0 keen_pare[226709]: {
Oct 02 12:02:06 compute-0 keen_pare[226709]:     "1": [
Oct 02 12:02:06 compute-0 keen_pare[226709]:         {
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "devices": [
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "/dev/loop3"
Oct 02 12:02:06 compute-0 keen_pare[226709]:             ],
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "lv_name": "ceph_lv0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "lv_size": "7511998464",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "name": "ceph_lv0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "tags": {
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.cluster_name": "ceph",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.crush_device_class": "",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.encrypted": "0",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.osd_id": "1",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.type": "block",
Oct 02 12:02:06 compute-0 keen_pare[226709]:                 "ceph.vdo": "0"
Oct 02 12:02:06 compute-0 keen_pare[226709]:             },
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "type": "block",
Oct 02 12:02:06 compute-0 keen_pare[226709]:             "vg_name": "ceph_vg0"
Oct 02 12:02:06 compute-0 keen_pare[226709]:         }
Oct 02 12:02:06 compute-0 keen_pare[226709]:     ]
Oct 02 12:02:06 compute-0 keen_pare[226709]: }
Oct 02 12:02:06 compute-0 systemd[1]: libpod-41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9.scope: Deactivated successfully.
Oct 02 12:02:06 compute-0 podman[226693]: 2025-10-02 12:02:06.058179299 +0000 UTC m=+0.959521238 container died 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-05bafac1988221235314930df859e2ec6af73e608409bc1913795464cd6027ad-merged.mount: Deactivated successfully.
Oct 02 12:02:06 compute-0 podman[226693]: 2025-10-02 12:02:06.132634174 +0000 UTC m=+1.033976113 container remove 41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:06 compute-0 systemd[1]: libpod-conmon-41ca13050643130d0b6910fd3fb6d0a5ee7c37b0394029d2b4823a5df060dec9.scope: Deactivated successfully.
Oct 02 12:02:06 compute-0 sudo[226457]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:06 compute-0 sudo[226864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awmxmxdekojgwmypzsljyqrtrfktvtel ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406525.6459677-1072-98255257375482/AnsiballZ_edpm_container_manage.py'
Oct 02 12:02:06 compute-0 sudo[226864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:06 compute-0 sudo[226850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:06 compute-0 sudo[226850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:06 compute-0 sudo[226850]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:06 compute-0 sudo[226884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:06 compute-0 sudo[226884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:06 compute-0 sudo[226884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:06 compute-0 sudo[226909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:06 compute-0 sudo[226909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:06 compute-0 sudo[226909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:06 compute-0 sudo[226935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:02:06 compute-0 sudo[226935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:06.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:06 compute-0 python3[226882]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.81405635 +0000 UTC m=+0.051584272 container create 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:02:06 compute-0 podman[227043]: 2025-10-02 12:02:06.84282217 +0000 UTC m=+0.064458653 container create 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 12:02:06 compute-0 podman[227043]: 2025-10-02 12:02:06.805002121 +0000 UTC m=+0.026638614 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:02:06 compute-0 python3[226882]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:02:06 compute-0 systemd[1]: Started libpod-conmon-1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f.scope.
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.79320111 +0000 UTC m=+0.030729072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.911827331 +0000 UTC m=+0.149355283 container init 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.921383453 +0000 UTC m=+0.158911395 container start 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:06 compute-0 reverent_payne[227066]: 167 167
Oct 02 12:02:06 compute-0 systemd[1]: libpod-1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f.scope: Deactivated successfully.
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.932339953 +0000 UTC m=+0.169867895 container attach 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.933039631 +0000 UTC m=+0.170567553 container died 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-64720ee4c86c462c81a8e22f151c9124c7389d973d123c1fad169109ffb649f1-merged.mount: Deactivated successfully.
Oct 02 12:02:06 compute-0 podman[227030]: 2025-10-02 12:02:06.977147475 +0000 UTC m=+0.214675397 container remove 1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:02:06 compute-0 systemd[1]: libpod-conmon-1d28cb1db0fe8e9b43a34dbfe745bf787b1cc8d25122cf7596816543a88b0e8f.scope: Deactivated successfully.
Oct 02 12:02:06 compute-0 sudo[226864]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:07 compute-0 podman[227135]: 2025-10-02 12:02:07.164383587 +0000 UTC m=+0.043303904 container create ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:07.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:07 compute-0 systemd[1]: Started libpod-conmon-ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23.scope.
Oct 02 12:02:07 compute-0 podman[227135]: 2025-10-02 12:02:07.145753016 +0000 UTC m=+0.024673363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:02:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cabeba3106513c426dedb2570161744325b867b3143473312ab5b0d53afe051/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cabeba3106513c426dedb2570161744325b867b3143473312ab5b0d53afe051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cabeba3106513c426dedb2570161744325b867b3143473312ab5b0d53afe051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cabeba3106513c426dedb2570161744325b867b3143473312ab5b0d53afe051/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:07 compute-0 podman[227135]: 2025-10-02 12:02:07.274959976 +0000 UTC m=+0.153880343 container init ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:02:07 compute-0 podman[227135]: 2025-10-02 12:02:07.283256945 +0000 UTC m=+0.162177272 container start ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:02:07 compute-0 podman[227135]: 2025-10-02 12:02:07.29062088 +0000 UTC m=+0.169541247 container attach ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:02:07 compute-0 sudo[227281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hruzcjhtbxfbthmndysfdwsvhspbadri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406527.2095532-1096-206871136310075/AnsiballZ_stat.py'
Oct 02 12:02:07 compute-0 sudo[227281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:07 compute-0 python3.9[227283]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:07 compute-0 sudo[227281]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-0 ceph-mon[73668]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]: {
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:         "osd_id": 1,
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:         "type": "bluestore"
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]:     }
Oct 02 12:02:08 compute-0 eloquent_boyd[227174]: }
Oct 02 12:02:08 compute-0 systemd[1]: libpod-ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23.scope: Deactivated successfully.
Oct 02 12:02:08 compute-0 podman[227135]: 2025-10-02 12:02:08.135671534 +0000 UTC m=+1.014591851 container died ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cabeba3106513c426dedb2570161744325b867b3143473312ab5b0d53afe051-merged.mount: Deactivated successfully.
Oct 02 12:02:08 compute-0 podman[227135]: 2025-10-02 12:02:08.203832193 +0000 UTC m=+1.082752510 container remove ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:02:08 compute-0 systemd[1]: libpod-conmon-ce4ee74272c64c31d77e262f11a95883a6257207f7d194717eee7e01b10f4d23.scope: Deactivated successfully.
Oct 02 12:02:08 compute-0 sudo[226935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:02:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:02:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3cadde2e-3024-422c-953f-f71cabd3e1be does not exist
Oct 02 12:02:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2ba592d0-10fd-4602-93e6-dc7d91f882ca does not exist
Oct 02 12:02:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bc6f07ad-9615-414f-b1bc-bf5acf96db8e does not exist
Oct 02 12:02:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:08 compute-0 sudo[227418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:08 compute-0 sudo[227418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:08 compute-0 sudo[227418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-0 sudo[227502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emudhxjkiquuvwnhyxvueqhtzgwrwmte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.0957952-1123-154360040171843/AnsiballZ_file.py'
Oct 02 12:02:08 compute-0 sudo[227502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:08 compute-0 sudo[227477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:02:08 compute-0 sudo[227477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:08 compute-0 sudo[227477]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:08 compute-0 python3.9[227513]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:08 compute-0 sudo[227502]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-0 sudo[227589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xofwelrqtnsabfolrqmzgsihbtvvxjha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.0957952-1123-154360040171843/AnsiballZ_stat.py'
Oct 02 12:02:08 compute-0 sudo[227589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:09 compute-0 python3.9[227591]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:09.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:09 compute-0 sudo[227589]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:09 compute-0 sudo[227740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhzgjqijiedhztiuzdgibhguwcfxuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406529.2781196-1123-197563204234191/AnsiballZ_copy.py'
Oct 02 12:02:09 compute-0 sudo[227740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:09 compute-0 python3.9[227742]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406529.2781196-1123-197563204234191/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:10 compute-0 sudo[227740]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:10 compute-0 ceph-mon[73668]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:10 compute-0 sudo[227816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnnuopkjwdccrrthkfwcqdhilvhexzyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406529.2781196-1123-197563204234191/AnsiballZ_systemd.py'
Oct 02 12:02:10 compute-0 sudo[227816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:10 compute-0 python3.9[227818]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:02:10 compute-0 systemd[1]: Reloading.
Oct 02 12:02:10 compute-0 systemd-rc-local-generator[227846]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:10 compute-0 systemd-sysv-generator[227850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:11 compute-0 sudo[227816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:11.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:11 compute-0 ceph-mon[73668]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:11 compute-0 sudo[227928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stydkmmeujevyczjmnftrwmbfdkgjtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406529.2781196-1123-197563204234191/AnsiballZ_systemd.py'
Oct 02 12:02:11 compute-0 sudo[227928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:11 compute-0 python3.9[227930]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:11 compute-0 systemd[1]: Reloading.
Oct 02 12:02:11 compute-0 systemd-rc-local-generator[227960]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:11 compute-0 systemd-sysv-generator[227964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:12 compute-0 systemd[1]: Starting iscsid container...
Oct 02 12:02:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d713a8afeedaf904fba71657b644c32e8ba869c2ca539bd5599ddc689d3f167/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d713a8afeedaf904fba71657b644c32e8ba869c2ca539bd5599ddc689d3f167/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d713a8afeedaf904fba71657b644c32e8ba869c2ca539bd5599ddc689d3f167/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718.
Oct 02 12:02:12 compute-0 podman[227970]: 2025-10-02 12:02:12.258015144 +0000 UTC m=+0.173748987 container init 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:02:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:12 compute-0 iscsid[227985]: + sudo -E kolla_set_configs
Oct 02 12:02:12 compute-0 podman[227970]: 2025-10-02 12:02:12.346380697 +0000 UTC m=+0.262114500 container start 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:02:12 compute-0 podman[227970]: iscsid
Oct 02 12:02:12 compute-0 systemd[1]: Started iscsid container.
Oct 02 12:02:12 compute-0 sudo[227991]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:02:12 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 12:02:12 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 12:02:12 compute-0 sudo[227928]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:12 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 12:02:12 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 12:02:12 compute-0 systemd[228009]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 12:02:12 compute-0 podman[227993]: 2025-10-02 12:02:12.479436779 +0000 UTC m=+0.110007895 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:02:12 compute-0 systemd[1]: 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718-9fb7dd8712f9d84.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 12:02:12 compute-0 systemd[1]: 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718-9fb7dd8712f9d84.service: Failed with result 'exit-code'.
Oct 02 12:02:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:12 compute-0 systemd[228009]: Queued start job for default target Main User Target.
Oct 02 12:02:12 compute-0 systemd[228009]: Created slice User Application Slice.
Oct 02 12:02:12 compute-0 systemd[228009]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 12:02:12 compute-0 systemd[228009]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:02:12 compute-0 systemd[228009]: Reached target Paths.
Oct 02 12:02:12 compute-0 systemd[228009]: Reached target Timers.
Oct 02 12:02:12 compute-0 systemd[228009]: Starting D-Bus User Message Bus Socket...
Oct 02 12:02:12 compute-0 systemd[228009]: Starting Create User's Volatile Files and Directories...
Oct 02 12:02:12 compute-0 systemd[228009]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:02:12 compute-0 systemd[228009]: Reached target Sockets.
Oct 02 12:02:12 compute-0 systemd[228009]: Finished Create User's Volatile Files and Directories.
Oct 02 12:02:12 compute-0 systemd[228009]: Reached target Basic System.
Oct 02 12:02:12 compute-0 systemd[228009]: Reached target Main User Target.
Oct 02 12:02:12 compute-0 systemd[228009]: Startup finished in 165ms.
Oct 02 12:02:12 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 12:02:12 compute-0 systemd[1]: Started Session c3 of User root.
Oct 02 12:02:12 compute-0 sudo[227991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:02:12 compute-0 iscsid[227985]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:02:12 compute-0 iscsid[227985]: INFO:__main__:Validating config file
Oct 02 12:02:12 compute-0 iscsid[227985]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:02:12 compute-0 iscsid[227985]: INFO:__main__:Writing out command to execute
Oct 02 12:02:12 compute-0 sudo[227991]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:12 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 02 12:02:12 compute-0 iscsid[227985]: ++ cat /run_command
Oct 02 12:02:12 compute-0 iscsid[227985]: + CMD='/usr/sbin/iscsid -f'
Oct 02 12:02:12 compute-0 iscsid[227985]: + ARGS=
Oct 02 12:02:12 compute-0 iscsid[227985]: + sudo kolla_copy_cacerts
Oct 02 12:02:12 compute-0 sudo[228102]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:02:12 compute-0 systemd[1]: Started Session c4 of User root.
Oct 02 12:02:12 compute-0 sudo[228102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:02:12 compute-0 sudo[228102]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:12 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 02 12:02:12 compute-0 iscsid[227985]: + [[ ! -n '' ]]
Oct 02 12:02:12 compute-0 iscsid[227985]: + . kolla_extend_start
Oct 02 12:02:12 compute-0 iscsid[227985]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 02 12:02:12 compute-0 iscsid[227985]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 02 12:02:12 compute-0 iscsid[227985]: Running command: '/usr/sbin/iscsid -f'
Oct 02 12:02:12 compute-0 iscsid[227985]: + umask 0022
Oct 02 12:02:12 compute-0 iscsid[227985]: + exec /usr/sbin/iscsid -f
Oct 02 12:02:12 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 02 12:02:13 compute-0 python3.9[228192]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:13 compute-0 ceph-mon[73668]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:13 compute-0 sudo[228342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozjnlgzyzyhycovzorbffnszclyutrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406533.521288-1234-58272611478489/AnsiballZ_file.py'
Oct 02 12:02:13 compute-0 sudo[228342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:14 compute-0 python3.9[228344]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:14 compute-0 sudo[228342]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000079s ======
Oct 02 12:02:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Oct 02 12:02:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:14 compute-0 sudo[228495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrbxcetloscxsxgiyzaybbostchnpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406534.6461813-1267-124683773965821/AnsiballZ_service_facts.py'
Oct 02 12:02:14 compute-0 sudo[228495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:15 compute-0 python3.9[228497]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:02:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:15 compute-0 network[228514]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:02:15 compute-0 network[228515]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:02:15 compute-0 network[228516]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:02:15 compute-0 ceph-mon[73668]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:17 compute-0 ceph-mon[73668]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:19 compute-0 sudo[228495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:19 compute-0 ceph-mon[73668]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:19 compute-0 sudo[228741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:19 compute-0 sudo[228741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:19 compute-0 sudo[228741]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:19 compute-0 sudo[228771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:19 compute-0 sudo[228771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:19 compute-0 sudo[228771]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:20 compute-0 sudo[228841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkrrcakdpljcusatmmrjbdimlelhxsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406539.6671865-1297-221408621180858/AnsiballZ_file.py'
Oct 02 12:02:20 compute-0 sudo[228841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:20 compute-0 python3.9[228843]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:02:20 compute-0 sudo[228841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:20 compute-0 sudo[228994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzvssfqbmbiwdfsmkdnnjbdcsaxxezzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406540.4669724-1321-192979773462387/AnsiballZ_modprobe.py'
Oct 02 12:02:20 compute-0 sudo[228994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:21 compute-0 python3.9[228996]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 02 12:02:21 compute-0 sudo[228994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:21 compute-0 ceph-mon[73668]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:21 compute-0 sudo[229150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liynrtexvqijgkjgwamqznaewzzlptii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406541.5096617-1345-23758591794908/AnsiballZ_stat.py'
Oct 02 12:02:21 compute-0 sudo[229150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:22 compute-0 python3.9[229152]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:22 compute-0 sudo[229150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:22 compute-0 sudo[229274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euowyaezqgmtryjbsbphzlznghlrsgha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406541.5096617-1345-23758591794908/AnsiballZ_copy.py'
Oct 02 12:02:22 compute-0 sudo[229274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:22.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:22 compute-0 python3.9[229276]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406541.5096617-1345-23758591794908/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:22 compute-0 sudo[229274]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:23 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 12:02:23 compute-0 systemd[228009]: Activating special unit Exit the Session...
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped target Main User Target.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped target Basic System.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped target Paths.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped target Sockets.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped target Timers.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:02:23 compute-0 systemd[228009]: Closed D-Bus User Message Bus Socket.
Oct 02 12:02:23 compute-0 systemd[228009]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:02:23 compute-0 systemd[228009]: Removed slice User Application Slice.
Oct 02 12:02:23 compute-0 systemd[228009]: Reached target Shutdown.
Oct 02 12:02:23 compute-0 systemd[228009]: Finished Exit the Session.
Oct 02 12:02:23 compute-0 systemd[228009]: Reached target Exit the Session.
Oct 02 12:02:23 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 12:02:23 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 12:02:23 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 12:02:23 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 12:02:23 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 12:02:23 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 12:02:23 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 12:02:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:02:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:02:23 compute-0 podman[229353]: 2025-10-02 12:02:23.237616714 +0000 UTC m=+0.145613795 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:02:23 compute-0 sudo[229454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teixbgqnbtvrvzjoovbpocmsybsrlmzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406542.9662528-1393-152538900663886/AnsiballZ_lineinfile.py'
Oct 02 12:02:23 compute-0 sudo[229454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:23 compute-0 ceph-mon[73668]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:23 compute-0 python3.9[229456]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:23 compute-0 sudo[229454]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:24 compute-0 sudo[229606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbdxotqfnzphtwhattrympqljtxymvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406543.805136-1417-163813899133667/AnsiballZ_systemd.py'
Oct 02 12:02:24 compute-0 sudo[229606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:24 compute-0 python3.9[229608]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:02:24 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 12:02:24 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 12:02:24 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 12:02:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 12:02:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 12:02:24 compute-0 sudo[229606]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:02:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:02:25 compute-0 sudo[229763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkiyhmqskqslyzgsgkgqamrmceisdwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406544.8633611-1441-245711899635904/AnsiballZ_file.py'
Oct 02 12:02:25 compute-0 sudo[229763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:25 compute-0 ceph-mon[73668]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:25 compute-0 python3.9[229765]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:25 compute-0 sudo[229763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:26 compute-0 sudo[229915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsykifrgxftyvtfrxetiohwvwppoyxtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406545.8221264-1468-3249471697651/AnsiballZ_stat.py'
Oct 02 12:02:26 compute-0 sudo[229915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:26 compute-0 python3.9[229917]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:26 compute-0 sudo[229915]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:02:26.437 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:02:26.437 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:02:26.438 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:26 compute-0 sudo[230068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rezvipsjlwsehmhnuitljejxywaqnooa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406546.627357-1495-23710964047526/AnsiballZ_stat.py'
Oct 02 12:02:26 compute-0 sudo[230068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:27 compute-0 python3.9[230070]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:27 compute-0 sudo[230068]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:27 compute-0 ceph-mon[73668]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:27 compute-0 sudo[230220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bscmbhnbepagcpoficdqmyjjgomiojrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406547.4084666-1519-218194692191577/AnsiballZ_stat.py'
Oct 02 12:02:27 compute-0 sudo[230220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:27 compute-0 python3.9[230222]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:27 compute-0 sudo[230220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:28 compute-0 sudo[230344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqtszllcbfveqvlmbmuwutdrenycfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406547.4084666-1519-218194692191577/AnsiballZ_copy.py'
Oct 02 12:02:28 compute-0 sudo[230344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:02:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:28 compute-0 python3.9[230346]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406547.4084666-1519-218194692191577/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:28 compute-0 sudo[230344]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:02:28
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Oct 02 12:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:02:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:02:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:02:29 compute-0 sudo[230496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwoxybswhlypztfjfifhimhzqngeidtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406548.7872758-1564-184695128113997/AnsiballZ_command.py'
Oct 02 12:02:29 compute-0 sudo[230496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:29 compute-0 ceph-mon[73668]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:29 compute-0 python3.9[230498]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:02:29 compute-0 sudo[230496]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:29 compute-0 sudo[230664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srprpdxnsaljtfrehnvoiddfomkvijjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406549.6705952-1588-196760270853772/AnsiballZ_lineinfile.py'
Oct 02 12:02:29 compute-0 sudo[230664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:29 compute-0 podman[230623]: 2025-10-02 12:02:29.993036574 +0000 UTC m=+0.060227430 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:02:30 compute-0 python3.9[230670]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:30 compute-0 sudo[230664]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:30 compute-0 sudo[230821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtawhecygtgzsgqcmffdwyhckquhrdmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406550.4516504-1612-275106327107508/AnsiballZ_replace.py'
Oct 02 12:02:30 compute-0 sudo[230821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:31 compute-0 python3.9[230823]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:31 compute-0 sudo[230821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:31.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:31 compute-0 ceph-mon[73668]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:31 compute-0 sudo[230973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbkkueaqgbyqxtxavllomkrjtrbohdqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406551.3299034-1636-194863171961289/AnsiballZ_replace.py'
Oct 02 12:02:31 compute-0 sudo[230973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:31 compute-0 python3.9[230975]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:31 compute-0 sudo[230973]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:32 compute-0 sudo[231126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfkajblychhpbvrbeyisdtokmzzdpyhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406552.1097844-1663-161749481553885/AnsiballZ_lineinfile.py'
Oct 02 12:02:32 compute-0 sudo[231126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:32 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 02 12:02:32 compute-0 python3.9[231128]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:32 compute-0 sudo[231126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:33 compute-0 sudo[231279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdvohrolaftkyawyycpouubzccnjnaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406552.8294716-1663-242847914806582/AnsiballZ_lineinfile.py'
Oct 02 12:02:33 compute-0 sudo[231279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:33 compute-0 python3.9[231281]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:33 compute-0 sudo[231279]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:33 compute-0 ceph-mon[73668]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:33 compute-0 sudo[231431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fedbbxmxztcdhvkgqtfmgataifkmhmsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406553.4705548-1663-205264480495573/AnsiballZ_lineinfile.py'
Oct 02 12:02:33 compute-0 sudo[231431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:33 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 12:02:34 compute-0 python3.9[231433]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:34 compute-0 sudo[231431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:34 compute-0 sudo[231585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txujblzutrrmjxczqjilyjlgmrdeddck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406554.1990328-1663-67520916102079/AnsiballZ_lineinfile.py'
Oct 02 12:02:34 compute-0 sudo[231585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:34 compute-0 python3.9[231587]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:34 compute-0 sudo[231585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:35 compute-0 sudo[231737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsetfnycrkzhkszdlklidbjtnwqodru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406554.9877303-1750-178659859707235/AnsiballZ_stat.py'
Oct 02 12:02:35 compute-0 sudo[231737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:35 compute-0 python3.9[231739]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:35 compute-0 ceph-mon[73668]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:35 compute-0 sudo[231737]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 sudo[231891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjcdwlupckxfyxkvxvenfjxfnfedpzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406555.7825959-1774-69300781708639/AnsiballZ_file.py'
Oct 02 12:02:36 compute-0 sudo[231891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:36 compute-0 python3.9[231893]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:36 compute-0 sudo[231891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:36.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:37 compute-0 sudo[232044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajohofjydgvnokltqotfbfibhboudobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406556.713394-1801-60795270332936/AnsiballZ_file.py'
Oct 02 12:02:37 compute-0 sudo[232044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:37.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:37 compute-0 python3.9[232046]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:37 compute-0 sudo[232044]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-0 ceph-mon[73668]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:37 compute-0 sudo[232196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoltvkhtycbrjqdwzcsrrykbwpedtpjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406557.5716808-1825-179598417046967/AnsiballZ_stat.py'
Oct 02 12:02:37 compute-0 sudo[232196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:38 compute-0 python3.9[232198]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:38 compute-0 sudo[232196]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:38 compute-0 sudo[232275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdxlercutcsivqwxfcsarkmhjxbectjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406557.5716808-1825-179598417046967/AnsiballZ_file.py'
Oct 02 12:02:38 compute-0 sudo[232275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:38 compute-0 python3.9[232277]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:38 compute-0 sudo[232275]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:39 compute-0 sudo[232427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnvjiepkxeiilpfpzkhihmgikahyuiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406558.8060687-1825-116626705529612/AnsiballZ_stat.py'
Oct 02 12:02:39 compute-0 sudo[232427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:39 compute-0 python3.9[232429]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:39 compute-0 sudo[232427]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:39 compute-0 sudo[232505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klwjcwzvfvwkapspdkwrrhdhjhfobpky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406558.8060687-1825-116626705529612/AnsiballZ_file.py'
Oct 02 12:02:39 compute-0 sudo[232505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:39 compute-0 ceph-mon[73668]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:02:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:02:39 compute-0 python3.9[232507]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:39 compute-0 sudo[232505]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 sudo[232553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:40 compute-0 sudo[232553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 sudo[232553]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 sudo[232603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:40 compute-0 sudo[232603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:40 compute-0 sudo[232603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 sudo[232707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odjmuoiicotofwxebohrwzawvxgnsbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406559.9858658-1894-99118504789284/AnsiballZ_file.py'
Oct 02 12:02:40 compute-0 sudo[232707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:40 compute-0 python3.9[232709]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:40 compute-0 sudo[232707]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:40 compute-0 sudo[232860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqiganhpsoqdnxobutraqayfuquevyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406560.6793957-1918-172507839382750/AnsiballZ_stat.py'
Oct 02 12:02:41 compute-0 sudo[232860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:41 compute-0 python3.9[232862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:41 compute-0 sudo[232860]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:41.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:41 compute-0 sudo[232938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqnupvicxcxeumsipkahwqnwuiqzlksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406560.6793957-1918-172507839382750/AnsiballZ_file.py'
Oct 02 12:02:41 compute-0 sudo[232938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:41 compute-0 ceph-mon[73668]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:41 compute-0 python3.9[232940]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:41 compute-0 sudo[232938]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 sudo[233090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uayxncnsdwwawlwgjvofdcgkcyokunar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406561.89932-1954-148754370260286/AnsiballZ_stat.py'
Oct 02 12:02:42 compute-0 sudo[233090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:42 compute-0 python3.9[233092]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:42 compute-0 sudo[233090]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:42 compute-0 sudo[233182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddljfycdryjpqrkrndktnxvibaeymwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406561.89932-1954-148754370260286/AnsiballZ_file.py'
Oct 02 12:02:42 compute-0 sudo[233182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:42 compute-0 podman[233143]: 2025-10-02 12:02:42.756660024 +0000 UTC m=+0.086030882 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:02:42 compute-0 python3.9[233190]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:42 compute-0 sudo[233182]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:43 compute-0 sudo[233341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbseuhihpydagqwcwavvfizcddhgrnma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406563.160279-1990-20296136747392/AnsiballZ_systemd.py'
Oct 02 12:02:43 compute-0 sudo[233341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:43 compute-0 ceph-mon[73668]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:43 compute-0 python3.9[233343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:43 compute-0 systemd[1]: Reloading.
Oct 02 12:02:44 compute-0 systemd-sysv-generator[233374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:44 compute-0 systemd-rc-local-generator[233371]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:44 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 02 12:02:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 12:02:44 compute-0 sudo[233341]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:02:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:02:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:44 compute-0 sudo[233533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkmbfppvuyojwyyqauytzrormtxslol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406564.5160532-2014-206802515396472/AnsiballZ_stat.py'
Oct 02 12:02:44 compute-0 sudo[233533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:45 compute-0 python3.9[233535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:45 compute-0 sudo[233533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:45 compute-0 sudo[233611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trbulhjukkqjjyqmkdvbmgzimawsbvpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406564.5160532-2014-206802515396472/AnsiballZ_file.py'
Oct 02 12:02:45 compute-0 sudo[233611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:45 compute-0 python3.9[233613]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:45 compute-0 sudo[233611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:45 compute-0 ceph-mon[73668]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:46 compute-0 sudo[233763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebtyurfmaefkxieenyymvjhirxyulbdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406565.7606635-2050-21522186517060/AnsiballZ_stat.py'
Oct 02 12:02:46 compute-0 sudo[233763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:46 compute-0 python3.9[233765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:46 compute-0 sudo[233763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:46 compute-0 sudo[233842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlhysoynihjmttedozbruemqumtlivx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406565.7606635-2050-21522186517060/AnsiballZ_file.py'
Oct 02 12:02:46 compute-0 sudo[233842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:46 compute-0 python3.9[233844]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:46 compute-0 sudo[233842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:47 compute-0 sudo[233994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfcqsfbxncwcskhcoaccoyzagxbkhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406567.1065788-2086-195443104885823/AnsiballZ_systemd.py'
Oct 02 12:02:47 compute-0 sudo[233994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:47 compute-0 ceph-mon[73668]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:47 compute-0 python3.9[233996]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:47 compute-0 systemd[1]: Reloading.
Oct 02 12:02:47 compute-0 systemd-rc-local-generator[234021]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:47 compute-0 systemd-sysv-generator[234025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:48 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 12:02:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 12:02:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 12:02:48 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 12:02:48 compute-0 sudo[233994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:49 compute-0 sudo[234187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fivaiphkmywongtiwknaxoorsityssmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406568.6850264-2116-112559515371590/AnsiballZ_file.py'
Oct 02 12:02:49 compute-0 sudo[234187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:49.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:49 compute-0 python3.9[234189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:49 compute-0 sudo[234187]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:49 compute-0 ceph-mon[73668]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:49 compute-0 sudo[234339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwbzbbyrgzhcprnbogmxwvvvlnaewlum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406569.5810502-2140-114742619991257/AnsiballZ_stat.py'
Oct 02 12:02:49 compute-0 sudo[234339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:50 compute-0 python3.9[234341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:50 compute-0 sudo[234339]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:50 compute-0 sudo[234463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurxynmeurithatqqtuyfbolyzixmqyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406569.5810502-2140-114742619991257/AnsiballZ_copy.py'
Oct 02 12:02:50 compute-0 sudo[234463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:50 compute-0 python3.9[234465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406569.5810502-2140-114742619991257/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:50 compute-0 sudo[234463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:51.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:51 compute-0 sudo[234615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwnlqevyqvcbycljpryyyshgpflvqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406571.2482579-2191-264581460977596/AnsiballZ_file.py'
Oct 02 12:02:51 compute-0 sudo[234615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:51 compute-0 ceph-mon[73668]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:51 compute-0 python3.9[234617]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:51 compute-0 sudo[234615]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:52 compute-0 sudo[234768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdyagfbtghnxjlzhzgtirlnkbdbsgyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406572.08884-2215-180883043910112/AnsiballZ_stat.py'
Oct 02 12:02:52 compute-0 sudo[234768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:52 compute-0 python3.9[234770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:52 compute-0 sudo[234768]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:53 compute-0 sudo[234891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joxmaxrmxbdjkbdjngmpkyfvlzuefjxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406572.08884-2215-180883043910112/AnsiballZ_copy.py'
Oct 02 12:02:53 compute-0 sudo[234891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:53 compute-0 python3.9[234893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406572.08884-2215-180883043910112/.source.json _original_basename=.2eatek5j follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:53.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:53 compute-0 sudo[234891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:53 compute-0 podman[234898]: 2025-10-02 12:02:53.510660648 +0000 UTC m=+0.170361958 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:02:53 compute-0 ceph-mon[73668]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:53 compute-0 sudo[235070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxbzwznutgjisurkxjzckgasaqaofsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406573.4610014-2260-96214806397403/AnsiballZ_file.py'
Oct 02 12:02:53 compute-0 sudo[235070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:54 compute-0 python3.9[235072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:54 compute-0 sudo[235070]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:54 compute-0 sudo[235223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udobxbkdafacsdtgeomqylwbgegsfpxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406574.3635151-2284-65573880483494/AnsiballZ_stat.py'
Oct 02 12:02:54 compute-0 sudo[235223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:54 compute-0 sudo[235223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:55.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:55 compute-0 sudo[235346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnalgwnqcshojvxhubptftnqhknybkkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406574.3635151-2284-65573880483494/AnsiballZ_copy.py'
Oct 02 12:02:55 compute-0 sudo[235346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:55 compute-0 sudo[235346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:55 compute-0 ceph-mon[73668]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:56 compute-0 sudo[235499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakqlvwwihbuepkzgbylgibawizvrstz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406576.093189-2335-257670328837994/AnsiballZ_container_config_data.py'
Oct 02 12:02:56 compute-0 sudo[235499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:56 compute-0 python3.9[235501]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 02 12:02:56 compute-0 sudo[235499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:57 compute-0 sudo[235651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcimpvnszoxhbfbwhhakqozaewzihtev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406577.0064795-2362-95330812201528/AnsiballZ_container_config_hash.py'
Oct 02 12:02:57 compute-0 sudo[235651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:57 compute-0 python3.9[235653]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:02:57 compute-0 sudo[235651]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:57 compute-0 ceph-mon[73668]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:58 compute-0 sudo[235803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhppizydukrdxcnrfddvbkwxhqmfndel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406577.8772337-2389-232690266872443/AnsiballZ_podman_container_info.py'
Oct 02 12:02:58 compute-0 sudo[235803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:02:58 compute-0 python3.9[235805]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 12:02:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:02:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:58.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:02:58 compute-0 sudo[235803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:02:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:59.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:59 compute-0 ceph-mon[73668]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:00 compute-0 sudo[235994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyydaygplwezyqxnjdwequgehfrdshni ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406579.8088086-2428-226335218632085/AnsiballZ_edpm_container_manage.py'
Oct 02 12:03:00 compute-0 sudo[235994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:00 compute-0 sudo[235995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:00 compute-0 sudo[235995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:00 compute-0 podman[235957]: 2025-10-02 12:03:00.215306559 +0000 UTC m=+0.102366523 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:00 compute-0 sudo[235995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:00 compute-0 sudo[236031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:00 compute-0 sudo[236031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:00 compute-0 sudo[236031]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:00 compute-0 python3[236003]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:03:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:01.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:01 compute-0 podman[236071]: 2025-10-02 12:03:01.733914953 +0000 UTC m=+1.185863672 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:01 compute-0 ceph-mon[73668]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:01 compute-0 podman[236130]: 2025-10-02 12:03:01.906526539 +0000 UTC m=+0.060052637 container create 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 02 12:03:01 compute-0 podman[236130]: 2025-10-02 12:03:01.870350264 +0000 UTC m=+0.023876442 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:01 compute-0 python3[236003]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:02 compute-0 sudo[235994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:02 compute-0 sudo[236318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdwsnssnvpoefgfvnltemuxdkefhwqgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406582.5269687-2452-48426412936863/AnsiballZ_stat.py'
Oct 02 12:03:02 compute-0 sudo[236318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:03 compute-0 python3.9[236320]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:03 compute-0 sudo[236318]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:03.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:03 compute-0 sudo[236472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efjmbrypykwyydqxfrjjpnxdbnxyqwhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406583.4518652-2479-260561329819968/AnsiballZ_file.py'
Oct 02 12:03:03 compute-0 sudo[236472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:03 compute-0 ceph-mon[73668]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:03 compute-0 python3.9[236474]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:04 compute-0 sudo[236472]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:04 compute-0 sudo[236548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgvbvgekzqrafhlprcddtooasvyziogf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406583.4518652-2479-260561329819968/AnsiballZ_stat.py'
Oct 02 12:03:04 compute-0 sudo[236548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:04 compute-0 python3.9[236550]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:04 compute-0 sudo[236548]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:05 compute-0 sudo[236700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdjuganklvdytlvegzsyyuabyrrnboyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.5610707-2479-71896099425828/AnsiballZ_copy.py'
Oct 02 12:03:05 compute-0 sudo[236700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:05 compute-0 python3.9[236702]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406584.5610707-2479-71896099425828/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:05 compute-0 sudo[236700]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:05.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:05 compute-0 sudo[236776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anttpbqlfhaxxmiosqrteiuedygskbps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.5610707-2479-71896099425828/AnsiballZ_systemd.py'
Oct 02 12:03:05 compute-0 sudo[236776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:05 compute-0 python3.9[236778]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:03:05 compute-0 systemd[1]: Reloading.
Oct 02 12:03:05 compute-0 ceph-mon[73668]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:05 compute-0 systemd-rc-local-generator[236807]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:05 compute-0 systemd-sysv-generator[236811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:06 compute-0 sudo[236776]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:06 compute-0 sudo[236888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-risfoxftzcwlpoqwyyncvzgxsmmloeux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.5610707-2479-71896099425828/AnsiballZ_systemd.py'
Oct 02 12:03:06 compute-0 sudo[236888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:06 compute-0 python3.9[236890]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:06 compute-0 systemd[1]: Reloading.
Oct 02 12:03:07 compute-0 systemd-rc-local-generator[236914]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:07 compute-0 systemd-sysv-generator[236918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:07 compute-0 systemd[1]: Starting multipathd container...
Oct 02 12:03:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eead396972afa8624141fbd99f9ab1ba25faa3a2a96e399702a7fb6a8e06595/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eead396972afa8624141fbd99f9ab1ba25faa3a2a96e399702a7fb6a8e06595/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125.
Oct 02 12:03:07 compute-0 podman[236930]: 2025-10-02 12:03:07.540212872 +0000 UTC m=+0.210717703 container init 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:03:07 compute-0 multipathd[236945]: + sudo -E kolla_set_configs
Oct 02 12:03:07 compute-0 podman[236930]: 2025-10-02 12:03:07.582259002 +0000 UTC m=+0.252763763 container start 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:07 compute-0 sudo[236951]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:03:07 compute-0 sudo[236951]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:07 compute-0 sudo[236951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:07 compute-0 podman[236930]: multipathd
Oct 02 12:03:07 compute-0 systemd[1]: Started multipathd container.
Oct 02 12:03:07 compute-0 multipathd[236945]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:03:07 compute-0 multipathd[236945]: INFO:__main__:Validating config file
Oct 02 12:03:07 compute-0 multipathd[236945]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:03:07 compute-0 multipathd[236945]: INFO:__main__:Writing out command to execute
Oct 02 12:03:07 compute-0 sudo[236888]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-0 sudo[236951]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-0 multipathd[236945]: ++ cat /run_command
Oct 02 12:03:07 compute-0 multipathd[236945]: + CMD='/usr/sbin/multipathd -d'
Oct 02 12:03:07 compute-0 multipathd[236945]: + ARGS=
Oct 02 12:03:07 compute-0 multipathd[236945]: + sudo kolla_copy_cacerts
Oct 02 12:03:07 compute-0 podman[236952]: 2025-10-02 12:03:07.70002208 +0000 UTC m=+0.090106289 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:03:07 compute-0 sudo[236974]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:03:07 compute-0 systemd[1]: 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-12574523f45f8353.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 12:03:07 compute-0 sudo[236974]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:07 compute-0 sudo[236974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:07 compute-0 systemd[1]: 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-12574523f45f8353.service: Failed with result 'exit-code'.
Oct 02 12:03:07 compute-0 sudo[236974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-0 multipathd[236945]: + [[ ! -n '' ]]
Oct 02 12:03:07 compute-0 multipathd[236945]: + . kolla_extend_start
Oct 02 12:03:07 compute-0 multipathd[236945]: Running command: '/usr/sbin/multipathd -d'
Oct 02 12:03:07 compute-0 multipathd[236945]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 12:03:07 compute-0 multipathd[236945]: + umask 0022
Oct 02 12:03:07 compute-0 multipathd[236945]: + exec /usr/sbin/multipathd -d
Oct 02 12:03:07 compute-0 multipathd[236945]: 4483.279460 | --------start up--------
Oct 02 12:03:07 compute-0 multipathd[236945]: 4483.279479 | read /etc/multipath.conf
Oct 02 12:03:07 compute-0 multipathd[236945]: 4483.287815 | path checkers start up
Oct 02 12:03:07 compute-0 ceph-mon[73668]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:08 compute-0 python3.9[237135]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:08 compute-0 sudo[237162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:08 compute-0 sudo[237162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-0 sudo[237162]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:08 compute-0 sudo[237207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:08 compute-0 sudo[237207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-0 sudo[237207]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:08 compute-0 sudo[237263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:08 compute-0 sudo[237263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-0 sudo[237263]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:09 compute-0 sudo[237312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:03:09 compute-0 sudo[237312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:09 compute-0 sudo[237387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taueejgsusoldjjxuuplafarsprbbscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406588.8256707-2587-268986036603886/AnsiballZ_command.py'
Oct 02 12:03:09 compute-0 sudo[237387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:09.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:09 compute-0 python3.9[237391]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:03:09 compute-0 sudo[237387]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:09 compute-0 sudo[237312]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:09 compute-0 ceph-mon[73668]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:10 compute-0 sudo[237584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airdokmgodigavbvdtywpqfokfwmmyxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406589.664062-2611-184624772198023/AnsiballZ_systemd.py'
Oct 02 12:03:10 compute-0 sudo[237584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:10 compute-0 python3.9[237586]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:03:10 compute-0 systemd[1]: Stopping multipathd container...
Oct 02 12:03:10 compute-0 multipathd[236945]: 4486.006071 | exit (signal)
Oct 02 12:03:10 compute-0 multipathd[236945]: 4486.006627 | --------shut down-------
Oct 02 12:03:10 compute-0 systemd[1]: libpod-23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125.scope: Deactivated successfully.
Oct 02 12:03:10 compute-0 podman[237591]: 2025-10-02 12:03:10.492938499 +0000 UTC m=+0.074903368 container died 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:10 compute-0 systemd[1]: 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-12574523f45f8353.timer: Deactivated successfully.
Oct 02 12:03:10 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125.
Oct 02 12:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-userdata-shm.mount: Deactivated successfully.
Oct 02 12:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eead396972afa8624141fbd99f9ab1ba25faa3a2a96e399702a7fb6a8e06595-merged.mount: Deactivated successfully.
Oct 02 12:03:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:10 compute-0 podman[237591]: 2025-10-02 12:03:10.634860275 +0000 UTC m=+0.216825124 container cleanup 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:03:10 compute-0 podman[237591]: multipathd
Oct 02 12:03:10 compute-0 podman[237620]: multipathd
Oct 02 12:03:10 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 02 12:03:10 compute-0 systemd[1]: Stopped multipathd container.
Oct 02 12:03:10 compute-0 systemd[1]: Starting multipathd container...
Oct 02 12:03:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:03:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:03:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eead396972afa8624141fbd99f9ab1ba25faa3a2a96e399702a7fb6a8e06595/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eead396972afa8624141fbd99f9ab1ba25faa3a2a96e399702a7fb6a8e06595/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125.
Oct 02 12:03:10 compute-0 podman[237633]: 2025-10-02 12:03:10.902294194 +0000 UTC m=+0.146374474 container init 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:03:10 compute-0 multipathd[237649]: + sudo -E kolla_set_configs
Oct 02 12:03:10 compute-0 podman[237633]: 2025-10-02 12:03:10.931541736 +0000 UTC m=+0.175621996 container start 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:03:10 compute-0 sudo[237655]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:03:10 compute-0 sudo[237655]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:10 compute-0 sudo[237655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:10 compute-0 podman[237633]: multipathd
Oct 02 12:03:10 compute-0 systemd[1]: Started multipathd container.
Oct 02 12:03:10 compute-0 sudo[237584]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 multipathd[237649]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:03:11 compute-0 multipathd[237649]: INFO:__main__:Validating config file
Oct 02 12:03:11 compute-0 multipathd[237649]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:03:11 compute-0 multipathd[237649]: INFO:__main__:Writing out command to execute
Oct 02 12:03:11 compute-0 sudo[237655]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 multipathd[237649]: ++ cat /run_command
Oct 02 12:03:11 compute-0 multipathd[237649]: + CMD='/usr/sbin/multipathd -d'
Oct 02 12:03:11 compute-0 multipathd[237649]: + ARGS=
Oct 02 12:03:11 compute-0 multipathd[237649]: + sudo kolla_copy_cacerts
Oct 02 12:03:11 compute-0 podman[237656]: 2025-10-02 12:03:11.028076574 +0000 UTC m=+0.080254139 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 12:03:11 compute-0 systemd[1]: 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-51b855a663e8b3f0.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 12:03:11 compute-0 systemd[1]: 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125-51b855a663e8b3f0.service: Failed with result 'exit-code'.
Oct 02 12:03:11 compute-0 sudo[237684]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:03:11 compute-0 sudo[237684]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:11 compute-0 sudo[237684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:11 compute-0 sudo[237684]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 multipathd[237649]: + [[ ! -n '' ]]
Oct 02 12:03:11 compute-0 multipathd[237649]: + . kolla_extend_start
Oct 02 12:03:11 compute-0 multipathd[237649]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 12:03:11 compute-0 multipathd[237649]: Running command: '/usr/sbin/multipathd -d'
Oct 02 12:03:11 compute-0 multipathd[237649]: + umask 0022
Oct 02 12:03:11 compute-0 multipathd[237649]: + exec /usr/sbin/multipathd -d
Oct 02 12:03:11 compute-0 multipathd[237649]: 4486.621238 | --------start up--------
Oct 02 12:03:11 compute-0 multipathd[237649]: 4486.621266 | read /etc/multipath.conf
Oct 02 12:03:11 compute-0 multipathd[237649]: 4486.629822 | path checkers start up
Oct 02 12:03:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:03:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:11.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:03:11 compute-0 sudo[237838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhilgnloiaijhlyutfogecdzltkvxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406591.1848278-2635-212267623738289/AnsiballZ_file.py'
Oct 02 12:03:11 compute-0 sudo[237838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b7aee78b-bed7-48a6-9ced-94771b619951 does not exist
Oct 02 12:03:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 295abbcf-7e6d-44f0-acb9-c86e4d0d1e55 does not exist
Oct 02 12:03:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0762b146-3669-4cc1-8132-643e5933a898 does not exist
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:03:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-0 sudo[237841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:11 compute-0 sudo[237841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:11 compute-0 sudo[237841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 sudo[237866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:11 compute-0 sudo[237866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:11 compute-0 python3.9[237840]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:11 compute-0 sudo[237866]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 ceph-mon[73668]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:03:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-0 sudo[237838]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 sudo[237891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:11 compute-0 sudo[237891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:11 compute-0 sudo[237891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-0 sudo[237940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:03:11 compute-0 sudo[237940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.356044147 +0000 UTC m=+0.042290498 container create 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:03:12 compute-0 systemd[1]: Started libpod-conmon-5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0.scope.
Oct 02 12:03:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.336228034 +0000 UTC m=+0.022474395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.442050037 +0000 UTC m=+0.128296398 container init 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.449417081 +0000 UTC m=+0.135663462 container start 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.452617706 +0000 UTC m=+0.138864067 container attach 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:03:12 compute-0 tender_banzai[238088]: 167 167
Oct 02 12:03:12 compute-0 systemd[1]: libpod-5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0.scope: Deactivated successfully.
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.458050199 +0000 UTC m=+0.144296550 container died 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:03:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b49eeb32e5038b859bfca437f956349fcba936788b07ea78214b21712ba29d8-merged.mount: Deactivated successfully.
Oct 02 12:03:12 compute-0 podman[238031]: 2025-10-02 12:03:12.504382792 +0000 UTC m=+0.190629133 container remove 5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_banzai, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:03:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:12 compute-0 systemd[1]: libpod-conmon-5bf4189ca3585f4ad924584e168f62ba5699c16409f051b560c94ca61aa8f3b0.scope: Deactivated successfully.
Oct 02 12:03:12 compute-0 sudo[238167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmlzjykhcnojtimvhftrkwyynvzbwaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406592.2777684-2671-103080600973143/AnsiballZ_file.py'
Oct 02 12:03:12 compute-0 sudo[238167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:12 compute-0 podman[238166]: 2025-10-02 12:03:12.707334768 +0000 UTC m=+0.067945844 container create acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:03:12 compute-0 systemd[1]: Started libpod-conmon-acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c.scope.
Oct 02 12:03:12 compute-0 podman[238166]: 2025-10-02 12:03:12.679855743 +0000 UTC m=+0.040466899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:12 compute-0 podman[238166]: 2025-10-02 12:03:12.803727972 +0000 UTC m=+0.164339138 container init acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:03:12 compute-0 podman[238166]: 2025-10-02 12:03:12.812978327 +0000 UTC m=+0.173589423 container start acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:12 compute-0 podman[238166]: 2025-10-02 12:03:12.817087115 +0000 UTC m=+0.177698191 container attach acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:12 compute-0 python3.9[238175]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:03:12 compute-0 sudo[238167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:12 compute-0 podman[238190]: 2025-10-02 12:03:12.918942974 +0000 UTC m=+0.095069081 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:03:13 compute-0 sudo[238361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninaecaahpoioxmqtmpzzkrjovigxals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406592.9936285-2695-127226667076032/AnsiballZ_modprobe.py'
Oct 02 12:03:13 compute-0 sudo[238361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:13.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:13 compute-0 python3.9[238363]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 02 12:03:13 compute-0 kernel: Key type psk registered
Oct 02 12:03:13 compute-0 elated_lumiere[238186]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:03:13 compute-0 elated_lumiere[238186]: --> relative data size: 1.0
Oct 02 12:03:13 compute-0 elated_lumiere[238186]: --> All data devices are unavailable
Oct 02 12:03:13 compute-0 sudo[238361]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-0 systemd[1]: libpod-acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c.scope: Deactivated successfully.
Oct 02 12:03:13 compute-0 podman[238166]: 2025-10-02 12:03:13.605922257 +0000 UTC m=+0.966533333 container died acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e05ca794d7b8c2a052267cbf655db045f51b31dbbfcc9b393fc5337a4c6818d-merged.mount: Deactivated successfully.
Oct 02 12:03:13 compute-0 podman[238166]: 2025-10-02 12:03:13.673715926 +0000 UTC m=+1.034326992 container remove acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:03:13 compute-0 systemd[1]: libpod-conmon-acca50463d7b7caf2d2a350f4f1cf9b09235c67bc132d0ed157b764423c6c63c.scope: Deactivated successfully.
Oct 02 12:03:13 compute-0 sudo[237940]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-0 sudo[238421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:13 compute-0 sudo[238421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:13 compute-0 sudo[238421]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-0 ceph-mon[73668]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:13 compute-0 sudo[238469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:13 compute-0 sudo[238469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:13 compute-0 sudo[238469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-0 sudo[238523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:13 compute-0 sudo[238523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:13 compute-0 sudo[238523]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-0 sudo[238569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:03:13 compute-0 sudo[238569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:14 compute-0 sudo[238646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnwzojwfuibfbeyjyuiyriatqjxyucxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406593.7680397-2719-263925536738664/AnsiballZ_stat.py'
Oct 02 12:03:14 compute-0 sudo[238646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:14 compute-0 python3.9[238650]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:03:14 compute-0 sudo[238646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.419002048 +0000 UTC m=+0.075409341 container create ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:14 compute-0 systemd[1]: Started libpod-conmon-ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a.scope.
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.38836332 +0000 UTC m=+0.044770663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.525281574 +0000 UTC m=+0.181688907 container init ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.538427131 +0000 UTC m=+0.194834414 container start ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.544572173 +0000 UTC m=+0.200979456 container attach ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:03:14 compute-0 peaceful_almeida[238727]: 167 167
Oct 02 12:03:14 compute-0 systemd[1]: libpod-ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a.scope: Deactivated successfully.
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.546772851 +0000 UTC m=+0.203180144 container died ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b66e4ec5718b7e7ff6ff24f2c076ad3cea084f64cb4cad1eff05438c7f51d306-merged.mount: Deactivated successfully.
Oct 02 12:03:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:14.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:14 compute-0 podman[238688]: 2025-10-02 12:03:14.593268838 +0000 UTC m=+0.249676131 container remove ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:03:14 compute-0 systemd[1]: libpod-conmon-ca8bc890f33fdee9632d827948c0d447fa155d858ee0d4120b103dc40775286a.scope: Deactivated successfully.
Oct 02 12:03:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:14 compute-0 podman[238822]: 2025-10-02 12:03:14.842448675 +0000 UTC m=+0.064621926 container create f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:03:14 compute-0 sudo[238861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlttrzlhhgrfjfvjiarntizqbzrkvwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406593.7680397-2719-263925536738664/AnsiballZ_copy.py'
Oct 02 12:03:14 compute-0 sudo[238861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:14 compute-0 systemd[1]: Started libpod-conmon-f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5.scope.
Oct 02 12:03:14 compute-0 podman[238822]: 2025-10-02 12:03:14.81155105 +0000 UTC m=+0.033724301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7aa3af0fdb1e5603df49a6a7967af240e35c7039a89328fb04f696ddb51f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7aa3af0fdb1e5603df49a6a7967af240e35c7039a89328fb04f696ddb51f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7aa3af0fdb1e5603df49a6a7967af240e35c7039a89328fb04f696ddb51f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7aa3af0fdb1e5603df49a6a7967af240e35c7039a89328fb04f696ddb51f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:14 compute-0 podman[238822]: 2025-10-02 12:03:14.950160708 +0000 UTC m=+0.172334009 container init f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:03:14 compute-0 podman[238822]: 2025-10-02 12:03:14.962481964 +0000 UTC m=+0.184655175 container start f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:03:14 compute-0 podman[238822]: 2025-10-02 12:03:14.980709455 +0000 UTC m=+0.202882756 container attach f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:03:15 compute-0 python3.9[238864]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406593.7680397-2719-263925536738664/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:15 compute-0 sudo[238861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:15 compute-0 nervous_wilson[238867]: {
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:     "1": [
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:         {
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "devices": [
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "/dev/loop3"
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             ],
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "lv_name": "ceph_lv0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "lv_size": "7511998464",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "name": "ceph_lv0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "tags": {
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.cluster_name": "ceph",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.crush_device_class": "",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.encrypted": "0",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.osd_id": "1",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.type": "block",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:                 "ceph.vdo": "0"
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             },
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "type": "block",
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:             "vg_name": "ceph_vg0"
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:         }
Oct 02 12:03:15 compute-0 nervous_wilson[238867]:     ]
Oct 02 12:03:15 compute-0 nervous_wilson[238867]: }
Oct 02 12:03:15 compute-0 systemd[1]: libpod-f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5.scope: Deactivated successfully.
Oct 02 12:03:15 compute-0 podman[238822]: 2025-10-02 12:03:15.673952893 +0000 UTC m=+0.896126114 container died f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:03:15 compute-0 sudo[239025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxucyheumjqlefjwrlzevdfkxfgwontg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406595.3515723-2767-89551828790607/AnsiballZ_lineinfile.py'
Oct 02 12:03:15 compute-0 sudo[239025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7aa3af0fdb1e5603df49a6a7967af240e35c7039a89328fb04f696ddb51f14-merged.mount: Deactivated successfully.
Oct 02 12:03:15 compute-0 podman[238822]: 2025-10-02 12:03:15.731814981 +0000 UTC m=+0.953988192 container remove f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:03:15 compute-0 systemd[1]: libpod-conmon-f8dd3da3d2341e62ceed709dc0e9adbe7b78b5ecb87bcc040834533eec97d4f5.scope: Deactivated successfully.
Oct 02 12:03:15 compute-0 sudo[238569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-0 sudo[239042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:15 compute-0 sudo[239042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:15 compute-0 sudo[239042]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-0 sudo[239067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:15 compute-0 python3.9[239028]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:15 compute-0 sudo[239067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:15 compute-0 sudo[239067]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-0 sudo[239025]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-0 ceph-mon[73668]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:15 compute-0 sudo[239092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:15 compute-0 sudo[239092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:15 compute-0 sudo[239092]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:16 compute-0 sudo[239141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:03:16 compute-0 sudo[239141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:16 compute-0 sudo[239332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmkuehazhwwmmvxovmqktvhcfdqeyvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406596.0466151-2791-199236734381934/AnsiballZ_systemd.py'
Oct 02 12:03:16 compute-0 sudo[239332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.440533337 +0000 UTC m=+0.044646559 container create ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:03:16 compute-0 systemd[1]: Started libpod-conmon-ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22.scope.
Oct 02 12:03:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.423724913 +0000 UTC m=+0.027838175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.529484075 +0000 UTC m=+0.133597337 container init ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.537817175 +0000 UTC m=+0.141930417 container start ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:03:16 compute-0 recursing_bose[239352]: 167 167
Oct 02 12:03:16 compute-0 systemd[1]: libpod-ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22.scope: Deactivated successfully.
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.54977136 +0000 UTC m=+0.153884582 container attach ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.5508799 +0000 UTC m=+0.154993132 container died ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:03:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:16.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ac07dd55d92d4087e92ac41cf9c6f2fd9f93d677025c58b8be1e71c2545e994-merged.mount: Deactivated successfully.
Oct 02 12:03:16 compute-0 podman[239335]: 2025-10-02 12:03:16.608506261 +0000 UTC m=+0.212619493 container remove ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:03:16 compute-0 systemd[1]: libpod-conmon-ae64e9d44496749f0e3b6a397de29040cadc9a54ad869066169825969eeebd22.scope: Deactivated successfully.
Oct 02 12:03:16 compute-0 python3.9[239337]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:03:16 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 12:03:16 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 12:03:16 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 12:03:16 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 12:03:16 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 12:03:16 compute-0 sudo[239332]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:16 compute-0 podman[239381]: 2025-10-02 12:03:16.833353436 +0000 UTC m=+0.055780684 container create 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:03:16 compute-0 systemd[1]: Started libpod-conmon-590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc.scope.
Oct 02 12:03:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:03:16 compute-0 podman[239381]: 2025-10-02 12:03:16.808849149 +0000 UTC m=+0.031276417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e73794744f499d9b27bd6cb3ceef0872b56fd3080d0ed8ae868735696bb417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e73794744f499d9b27bd6cb3ceef0872b56fd3080d0ed8ae868735696bb417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e73794744f499d9b27bd6cb3ceef0872b56fd3080d0ed8ae868735696bb417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e73794744f499d9b27bd6cb3ceef0872b56fd3080d0ed8ae868735696bb417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:16 compute-0 podman[239381]: 2025-10-02 12:03:16.916400108 +0000 UTC m=+0.138827406 container init 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:03:16 compute-0 podman[239381]: 2025-10-02 12:03:16.922951291 +0000 UTC m=+0.145378559 container start 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:03:16 compute-0 podman[239381]: 2025-10-02 12:03:16.927395998 +0000 UTC m=+0.149823246 container attach 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:03:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:17 compute-0 sudo[239552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltjyzlgmerzqjmzyfouaitqnhxamurws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406597.093239-2815-187788531158051/AnsiballZ_setup.py'
Oct 02 12:03:17 compute-0 sudo[239552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]: {
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:         "osd_id": 1,
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:         "type": "bluestore"
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]:     }
Oct 02 12:03:17 compute-0 thirsty_wozniak[239416]: }
Oct 02 12:03:17 compute-0 systemd[1]: libpod-590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc.scope: Deactivated successfully.
Oct 02 12:03:17 compute-0 podman[239381]: 2025-10-02 12:03:17.871243141 +0000 UTC m=+1.093670379 container died 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9e73794744f499d9b27bd6cb3ceef0872b56fd3080d0ed8ae868735696bb417-merged.mount: Deactivated successfully.
Oct 02 12:03:17 compute-0 python3.9[239554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 12:03:17 compute-0 podman[239381]: 2025-10-02 12:03:17.93331745 +0000 UTC m=+1.155744688 container remove 590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:03:17 compute-0 systemd[1]: libpod-conmon-590d5c4cb94476b0028574fd5dd4a9ebc928b5e6030c1009d0d1f67c69b5f3bc.scope: Deactivated successfully.
Oct 02 12:03:17 compute-0 ceph-mon[73668]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:17 compute-0 sudo[239141]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:03:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:03:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3d277c3e-fae1-4942-bec7-e2b6f78011d3 does not exist
Oct 02 12:03:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a0bfbeeb-bf5e-45af-bfdf-a6dca146e159 does not exist
Oct 02 12:03:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6230c2fb-e937-4d58-a904-4a2ea4a09b7e does not exist
Oct 02 12:03:18 compute-0 sudo[239590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:18 compute-0 sudo[239590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:18 compute-0 sudo[239590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:18 compute-0 sudo[239615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:03:18 compute-0 sudo[239615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:18 compute-0 sudo[239615]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:18 compute-0 sudo[239552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:18 compute-0 sudo[239714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpdlwqmqehlsjtjzzsuhrsaxbfuanlfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406597.093239-2815-187788531158051/AnsiballZ_dnf.py'
Oct 02 12:03:18 compute-0 sudo[239714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:18 compute-0 python3.9[239716]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 12:03:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:19.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:20 compute-0 ceph-mon[73668]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:20 compute-0 sudo[239718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:20 compute-0 sudo[239718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:20 compute-0 sudo[239718]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:20 compute-0 sudo[239744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:20 compute-0 sudo[239744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:20 compute-0 sudo[239744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:20.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:21.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:22 compute-0 ceph-mon[73668]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:24 compute-0 ceph-mon[73668]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:24 compute-0 podman[239773]: 2025-10-02 12:03:24.484511849 +0000 UTC m=+0.151013147 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:03:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:03:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:03:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:24 compute-0 systemd[1]: Reloading.
Oct 02 12:03:25 compute-0 systemd-rc-local-generator[239827]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:25 compute-0 systemd-sysv-generator[239832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:25 compute-0 systemd[1]: Reloading.
Oct 02 12:03:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:25 compute-0 systemd-rc-local-generator[239863]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:25 compute-0 systemd-sysv-generator[239867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:25 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 12:03:25 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 12:03:25 compute-0 lvm[239908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 12:03:25 compute-0 lvm[239908]: VG ceph_vg0 finished
Oct 02 12:03:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 12:03:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 12:03:26 compute-0 systemd[1]: Reloading.
Oct 02 12:03:26 compute-0 ceph-mon[73668]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:26 compute-0 systemd-rc-local-generator[239960]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:26 compute-0 systemd-sysv-generator[239964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:03:26.439 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:03:26.440 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:03:26.440 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:26 compute-0 sudo[239714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 12:03:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 12:03:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.729s CPU time.
Oct 02 12:03:27 compute-0 systemd[1]: run-r6f26e42b119140a4b0dbe8b45d49872e.service: Deactivated successfully.
Oct 02 12:03:27 compute-0 sudo[241248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdtosqbmnjfnvlxfwmerdrbemfvdrtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406607.3311808-2851-269893728717231/AnsiballZ_file.py'
Oct 02 12:03:27 compute-0 sudo[241248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:27 compute-0 python3.9[241250]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:27 compute-0 sudo[241248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:28 compute-0 ceph-mon[73668]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:03:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:03:28
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes']
Oct 02 12:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:03:28 compute-0 python3.9[241401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 12:03:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:29 compute-0 sudo[241555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijgugckvmujsdqrfvksifgdfabrfhtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406609.2784832-2903-134311368771980/AnsiballZ_file.py'
Oct 02 12:03:29 compute-0 sudo[241555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:29 compute-0 python3.9[241557]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:29 compute-0 sudo[241555]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:30 compute-0 ceph-mon[73668]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:30 compute-0 podman[241595]: 2025-10-02 12:03:30.418672703 +0000 UTC m=+0.078959675 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:03:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:30.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:31 compute-0 sudo[241725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkfzaxyzlurgynxhgnqydeqatrbvewgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406610.339199-2936-77878984118682/AnsiballZ_systemd_service.py'
Oct 02 12:03:31 compute-0 sudo[241725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:31 compute-0 python3.9[241727]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:03:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:31.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:31 compute-0 systemd[1]: Reloading.
Oct 02 12:03:31 compute-0 systemd-rc-local-generator[241752]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:31 compute-0 systemd-sysv-generator[241756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:31 compute-0 sudo[241725]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:32 compute-0 ceph-mon[73668]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:32 compute-0 python3.9[241912]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:03:32 compute-0 network[241930]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:03:32 compute-0 network[241931]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:03:32 compute-0 network[241932]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:03:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:33.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:34 compute-0 ceph-mon[73668]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:34.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:03:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:03:36 compute-0 ceph-mon[73668]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:36.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:37 compute-0 sudo[242210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbsxlsjhqglgfbpfdcwrwefrnnihkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406617.113743-2993-127403473838333/AnsiballZ_systemd_service.py'
Oct 02 12:03:37 compute-0 sudo[242210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:37 compute-0 python3.9[242212]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:37 compute-0 sudo[242210]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:38 compute-0 ceph-mon[73668]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:38 compute-0 sudo[242363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fixwnjaqilalnrsfnxhijtcxhagblijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406617.9281087-2993-10357810306931/AnsiballZ_systemd_service.py'
Oct 02 12:03:38 compute-0 sudo[242363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:38 compute-0 python3.9[242365]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:38 compute-0 sudo[242363]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:39 compute-0 sudo[242517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phektpuicmevrqgxrovbkpwwsvjbtcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406618.801535-2993-180784627461926/AnsiballZ_systemd_service.py'
Oct 02 12:03:39 compute-0 sudo[242517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:03:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3808 writes, 16K keys, 3808 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3808 writes, 3808 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1425 writes, 5780 keys, 1425 commit groups, 1.0 writes per commit group, ingest: 9.95 MB, 0.02 MB/s
                                           Interval WAL: 1425 writes, 1425 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    136.4      0.14              0.05         7    0.020       0      0       0.0       0.0
                                             L6      1/0    7.53 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    189.1    154.4      0.33              0.16         6    0.055     26K   3309       0.0       0.0
                                            Sum      1/0    7.53 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    133.0    149.1      0.47              0.21        13    0.036     26K   3309       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    152.1    152.9      0.22              0.12         6    0.037     14K   2005       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    189.1    154.4      0.33              0.16         6    0.055     26K   3309       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    139.8      0.14              0.05         6    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 308.00 MB usage: 2.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(110,1.86 MB,0.602326%) FilterBlock(14,82.17 KB,0.0260539%) IndexBlock(14,168.67 KB,0.0534801%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:03:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:39 compute-0 python3.9[242519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:39 compute-0 sudo[242517]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:03:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:03:39 compute-0 sudo[242670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kthannookimjzhovbmzkptkatibancxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406619.6384292-2993-235125795775786/AnsiballZ_systemd_service.py'
Oct 02 12:03:39 compute-0 sudo[242670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:40 compute-0 ceph-mon[73668]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:40 compute-0 python3.9[242672]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:40 compute-0 sudo[242670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:40 compute-0 sudo[242722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:40 compute-0 sudo[242722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:40 compute-0 sudo[242722]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:40 compute-0 sudo[242747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:40 compute-0 sudo[242747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:40 compute-0 sudo[242747]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:40 compute-0 sudo[242874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbafcexaftepfnvyovwrbgfkjfnursh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406620.4816115-2993-187706083464392/AnsiballZ_systemd_service.py'
Oct 02 12:03:40 compute-0 sudo[242874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:41 compute-0 python3.9[242876]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:41 compute-0 sudo[242874]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:41 compute-0 podman[242878]: 2025-10-02 12:03:41.383231914 +0000 UTC m=+0.069052303 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct 02 12:03:41 compute-0 sudo[243047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xudtbmeiofhqlxjsoealsqosphtjkwdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406621.4583788-2993-118298251702913/AnsiballZ_systemd_service.py'
Oct 02 12:03:41 compute-0 sudo[243047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:42 compute-0 python3.9[243049]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:42 compute-0 sudo[243047]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:42 compute-0 ceph-mon[73668]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:42 compute-0 sudo[243201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohctvpmcxvorxhxktmqtezjrxmnyacra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406622.2860372-2993-160666888817967/AnsiballZ_systemd_service.py'
Oct 02 12:03:42 compute-0 sudo[243201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:42.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:42 compute-0 python3.9[243203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:42 compute-0 sudo[243201]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:43.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:43 compute-0 podman[243322]: 2025-10-02 12:03:43.418422161 +0000 UTC m=+0.081628012 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:03:43 compute-0 sudo[243371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjqovvskkxmpbhzoatngyibauxaabmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406623.096285-2993-267741128422865/AnsiballZ_systemd_service.py'
Oct 02 12:03:43 compute-0 sudo[243371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:43 compute-0 python3.9[243376]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:43 compute-0 sudo[243371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:44 compute-0 ceph-mon[73668]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:44 compute-0 sudo[243528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqflbliehkzthwuvnqghqtcbakvyedhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406624.1891246-3170-98673787857266/AnsiballZ_file.py'
Oct 02 12:03:44 compute-0 sudo[243528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:03:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:44.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:03:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:44 compute-0 python3.9[243530]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.761053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624761119, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1205, "num_deletes": 255, "total_data_size": 2088935, "memory_usage": 2116368, "flush_reason": "Manual Compaction"}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 02 12:03:44 compute-0 sudo[243528]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624779924, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2055989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15647, "largest_seqno": 16851, "table_properties": {"data_size": 2050299, "index_size": 3085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11336, "raw_average_key_size": 18, "raw_value_size": 2038910, "raw_average_value_size": 3353, "num_data_blocks": 140, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406505, "oldest_key_time": 1759406505, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19160 microseconds, and 8645 cpu microseconds.
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.780213) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2055989 bytes OK
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.780331) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.782157) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.782173) EVENT_LOG_v1 {"time_micros": 1759406624782167, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.782190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2083653, prev total WAL file size 2083653, number of live WAL files 2.
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.783631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2007KB)], [35(7711KB)]
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624783693, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9953047, "oldest_snapshot_seqno": -1}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4230 keys, 9583609 bytes, temperature: kUnknown
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624818142, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9583609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9552640, "index_size": 19275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 105365, "raw_average_key_size": 24, "raw_value_size": 9473347, "raw_average_value_size": 2239, "num_data_blocks": 805, "num_entries": 4230, "num_filter_entries": 4230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.818334) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9583609 bytes
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.819662) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 288.4 rd, 277.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4755, records dropped: 525 output_compression: NoCompression
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.819676) EVENT_LOG_v1 {"time_micros": 1759406624819670, "job": 16, "event": "compaction_finished", "compaction_time_micros": 34511, "compaction_time_cpu_micros": 17808, "output_level": 6, "num_output_files": 1, "total_output_size": 9583609, "num_input_records": 4755, "num_output_records": 4230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624820093, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624821217, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.783539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.821284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.821291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.821293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.821295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:44.821297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:45.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:45 compute-0 sudo[243680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marnyvobwpascwavocpqgyikkbvoylmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406625.0543473-3170-114275207608382/AnsiballZ_file.py'
Oct 02 12:03:45 compute-0 sudo[243680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:45 compute-0 python3.9[243682]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:45 compute-0 sudo[243680]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 sudo[243832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwdhsobedzolznmfjbuenebtbmnrqgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406625.8181386-3170-99793687870752/AnsiballZ_file.py'
Oct 02 12:03:46 compute-0 sudo[243832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:46 compute-0 ceph-mon[73668]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:46 compute-0 python3.9[243834]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:46 compute-0 sudo[243832]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:46.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:46 compute-0 sudo[243985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdbgjkubatfrbxdbjpodhngbbslowhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406626.4733865-3170-42690226099076/AnsiballZ_file.py'
Oct 02 12:03:46 compute-0 sudo[243985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:47 compute-0 python3.9[243987]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:47 compute-0 sudo[243985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:47 compute-0 sudo[244137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbgthtrxdpmplkkquljrcgktejnqqcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406627.2534134-3170-34797206282620/AnsiballZ_file.py'
Oct 02 12:03:47 compute-0 sudo[244137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:47 compute-0 python3.9[244139]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:47 compute-0 sudo[244137]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:48 compute-0 sudo[244289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kobuffhtsjaoksnxssjgegrcbaiqhjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406628.000095-3170-7824772336229/AnsiballZ_file.py'
Oct 02 12:03:48 compute-0 sudo[244289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:48 compute-0 ceph-mon[73668]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:48 compute-0 python3.9[244291]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:48 compute-0 sudo[244289]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:03:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:48.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:03:48 compute-0 sudo[244442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbtvargxuooezxfzitxwczmvqqwhjxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406628.656883-3170-139382468098531/AnsiballZ_file.py'
Oct 02 12:03:48 compute-0 sudo[244442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:49 compute-0 python3.9[244444]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:49 compute-0 sudo[244442]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:49 compute-0 sudo[244594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeqhhtsmbicqqtitowmagaozpaftmboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406629.353152-3170-154171716746278/AnsiballZ_file.py'
Oct 02 12:03:49 compute-0 sudo[244594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:49 compute-0 python3.9[244596]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:49 compute-0 sudo[244594]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:50 compute-0 ceph-mon[73668]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:50 compute-0 sudo[244747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezmqobgpkvwyjadyufizdkpcufdlxrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406630.103055-3341-21470705702071/AnsiballZ_file.py'
Oct 02 12:03:50 compute-0 sudo[244747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:03:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:03:50 compute-0 python3.9[244749]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:50 compute-0 sudo[244747]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:51 compute-0 sudo[244899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esodkzgofyuyvmxximbhsbmoxybmscnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406630.821153-3341-254081436810979/AnsiballZ_file.py'
Oct 02 12:03:51 compute-0 sudo[244899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:51 compute-0 ceph-mon[73668]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:51 compute-0 python3.9[244901]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:51 compute-0 sudo[244899]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:51 compute-0 sudo[245051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jucojmujppynxzfjumnfvumozfbkrkqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406631.5494976-3341-56753013494150/AnsiballZ_file.py'
Oct 02 12:03:51 compute-0 sudo[245051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:52 compute-0 python3.9[245053]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:52 compute-0 sudo[245051]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:52.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:53 compute-0 ceph-mon[73668]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:54.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:55 compute-0 ceph-mon[73668]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.455931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635455964, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 342, "num_deletes": 251, "total_data_size": 206664, "memory_usage": 214376, "flush_reason": "Manual Compaction"}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635459680, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 204944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16852, "largest_seqno": 17193, "table_properties": {"data_size": 202801, "index_size": 307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5354, "raw_average_key_size": 18, "raw_value_size": 198624, "raw_average_value_size": 684, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406625, "oldest_key_time": 1759406625, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3788 microseconds, and 1103 cpu microseconds.
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.459718) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 204944 bytes OK
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.459732) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461254) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461283) EVENT_LOG_v1 {"time_micros": 1759406635461274, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 204355, prev total WAL file size 204355, number of live WAL files 2.
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(200KB)], [38(9358KB)]
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635462074, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9788553, "oldest_snapshot_seqno": -1}
Oct 02 12:03:55 compute-0 podman[245057]: 2025-10-02 12:03:55.463777408 +0000 UTC m=+0.128447316 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4010 keys, 7761423 bytes, temperature: kUnknown
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635510393, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7761423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7733563, "index_size": 16742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101465, "raw_average_key_size": 25, "raw_value_size": 7659674, "raw_average_value_size": 1910, "num_data_blocks": 691, "num_entries": 4010, "num_filter_entries": 4010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.510620) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7761423 bytes
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511852) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 160.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(85.6) write-amplify(37.9) OK, records in: 4520, records dropped: 510 output_compression: NoCompression
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511873) EVENT_LOG_v1 {"time_micros": 1759406635511862, "job": 18, "event": "compaction_finished", "compaction_time_micros": 48261, "compaction_time_cpu_micros": 33927, "output_level": 6, "num_output_files": 1, "total_output_size": 7761423, "num_input_records": 4520, "num_output_records": 4010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635512033, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635513923, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.514019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.514027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.514031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.514034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:03:55.514037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:56.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:57.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:57 compute-0 ceph-mon[73668]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:03:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:03:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:59 compute-0 ceph-mon[73668]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:00 compute-0 sudo[245086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:00 compute-0 sudo[245086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:00 compute-0 sudo[245086]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:00 compute-0 sudo[245117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:00 compute-0 sudo[245117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:00 compute-0 sudo[245117]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:00 compute-0 podman[245110]: 2025-10-02 12:04:00.845440795 +0000 UTC m=+0.080477680 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:01 compute-0 ceph-mon[73668]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:03 compute-0 sudo[245306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqwuekxzfgmxltfebvvnbcipjkmskin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406642.8323371-3341-237405715214178/AnsiballZ_file.py'
Oct 02 12:04:03 compute-0 sudo[245306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:04:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:04:03 compute-0 python3.9[245308]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:03 compute-0 sudo[245306]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:03 compute-0 ceph-mon[73668]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:03 compute-0 sudo[245458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsmvkgcspsvhvkxvxozfgvzwibmpgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406643.5865285-3341-34329421537651/AnsiballZ_file.py'
Oct 02 12:04:03 compute-0 sudo[245458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:04 compute-0 python3.9[245460]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:04 compute-0 sudo[245458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:04.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:04 compute-0 sudo[245611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdnwhmpoimvifgzjtwpvhgvxszjgpcxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406644.3479202-3341-235218013578626/AnsiballZ_file.py'
Oct 02 12:04:04 compute-0 sudo[245611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:04 compute-0 python3.9[245613]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:04 compute-0 sudo[245611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:05 compute-0 sudo[245763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelnckbswpjcngwxkhhrxaiqevpvkitp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406645.1159654-3341-156207365064985/AnsiballZ_file.py'
Oct 02 12:04:05 compute-0 sudo[245763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:05 compute-0 ceph-mon[73668]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:05 compute-0 python3.9[245765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:05 compute-0 sudo[245763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:06 compute-0 sudo[245915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egtafoqlligelnhitmlqjwizbmdpqyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406645.9029036-3341-129717431432812/AnsiballZ_file.py'
Oct 02 12:04:06 compute-0 sudo[245915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:06 compute-0 python3.9[245917]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:06 compute-0 sudo[245915]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:06.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:07 compute-0 sudo[246068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isirohfajhftzidrwsgrgcbwbhlvqjeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406647.0071495-3515-240993047353158/AnsiballZ_command.py'
Oct 02 12:04:07 compute-0 sudo[246068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:04:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:07.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:04:07 compute-0 python3.9[246070]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:07 compute-0 ceph-mon[73668]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:07 compute-0 sudo[246068]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:08 compute-0 python3.9[246222]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:04:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:09 compute-0 sudo[246373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deigkeclhwbkimaeoruyookzdqhmektz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406648.9549057-3569-279129308839542/AnsiballZ_systemd_service.py'
Oct 02 12:04:09 compute-0 sudo[246373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:09 compute-0 ceph-mon[73668]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:09 compute-0 python3.9[246375]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:04:09 compute-0 systemd[1]: Reloading.
Oct 02 12:04:09 compute-0 systemd-sysv-generator[246400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:04:09 compute-0 systemd-rc-local-generator[246396]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:04:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:09 compute-0 sudo[246373]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:10 compute-0 sudo[246560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dykoobfwxawptecidoqqescylekekmkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406650.2789283-3593-72968398995529/AnsiballZ_command.py'
Oct 02 12:04:10 compute-0 sudo[246560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:10.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:10 compute-0 python3.9[246562]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:11.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:11 compute-0 ceph-mon[73668]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:11 compute-0 sudo[246560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:12 compute-0 sudo[246729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwramwzzkymdlalfneicwcwxyiulfob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406652.0687325-3593-41376767043566/AnsiballZ_command.py'
Oct 02 12:04:12 compute-0 sudo[246729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:12 compute-0 podman[246686]: 2025-10-02 12:04:12.427770229 +0000 UTC m=+0.088723976 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd)
Oct 02 12:04:12 compute-0 python3.9[246736]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:12 compute-0 sudo[246729]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:13 compute-0 sudo[246887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjmykrrsxgfpglpwjzhpvqcczsycnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406652.833027-3593-262182760728378/AnsiballZ_command.py'
Oct 02 12:04:13 compute-0 sudo[246887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:13.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:13 compute-0 python3.9[246889]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:13 compute-0 sudo[246887]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:13 compute-0 ceph-mon[73668]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:13 compute-0 sudo[247057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uykrgpbtrztoinkykkphirflnqiqeqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406653.5917592-3593-155010637764400/AnsiballZ_command.py'
Oct 02 12:04:13 compute-0 podman[247014]: 2025-10-02 12:04:13.916384363 +0000 UTC m=+0.049483368 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:13 compute-0 sudo[247057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:14 compute-0 python3.9[247062]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:14 compute-0 sudo[247057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:14 compute-0 sudo[247214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bueupptljioerofeolaqbshmaulqeiaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406654.4274364-3593-32910612320706/AnsiballZ_command.py'
Oct 02 12:04:14 compute-0 sudo[247214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:14 compute-0 python3.9[247216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:15 compute-0 sudo[247214]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:15.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:15 compute-0 sudo[247367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsrzemsymonriuhsgmqwduwpvaqipbff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406655.161646-3593-237712427613965/AnsiballZ_command.py'
Oct 02 12:04:15 compute-0 sudo[247367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:15 compute-0 ceph-mon[73668]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:15 compute-0 python3.9[247369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:15 compute-0 sudo[247367]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:16 compute-0 sudo[247520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlkdtletokaonmmopvfwaeweyfdxkjth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406655.9664464-3593-16230534451115/AnsiballZ_command.py'
Oct 02 12:04:16 compute-0 sudo[247520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:16 compute-0 python3.9[247522]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:16 compute-0 sudo[247520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:17 compute-0 sudo[247674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilabnncaihnzvigeqoctdadkggrpwxyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406656.7232113-3593-263992556544086/AnsiballZ_command.py'
Oct 02 12:04:17 compute-0 sudo[247674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:17 compute-0 python3.9[247676]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:17 compute-0 sudo[247674]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:17 compute-0 ceph-mon[73668]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:18 compute-0 sudo[247703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:18 compute-0 sudo[247703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-0 sudo[247703]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-0 sudo[247728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:18 compute-0 sudo[247728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-0 sudo[247728]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:18 compute-0 sudo[247756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:18 compute-0 sudo[247756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-0 sudo[247756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-0 sudo[247813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:04:18 compute-0 sudo[247813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:19 compute-0 sudo[247942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frikxsnujfjibefszbwwsrwepqlxxshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406658.633012-3800-54800538111692/AnsiballZ_file.py'
Oct 02 12:04:19 compute-0 sudo[247942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:19 compute-0 sudo[247813]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-0 python3.9[247946]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:04:19 compute-0 sudo[247942]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:19 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 01673015-6adf-40db-85bc-b31007081c2a does not exist
Oct 02 12:04:19 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b31d2042-2069-4dfb-9f1b-0259187a5066 does not exist
Oct 02 12:04:19 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c6eccba7-b126-4dca-977c-b08b00b3d9e2 does not exist
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:04:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-0 sudo[247967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:19 compute-0 sudo[247967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:19 compute-0 sudo[247967]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:19 compute-0 sudo[248010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:19 compute-0 sudo[248010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:19 compute-0 sudo[248010]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-0 sudo[248059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:19 compute-0 sudo[248059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:19 compute-0 sudo[248059]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-0 sudo[248112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:04:19 compute-0 sudo[248112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:19 compute-0 ceph-mon[73668]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:19 compute-0 sudo[248220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofcedfpmtegiklalvaldniyinuzjubph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406659.4560266-3800-157823125737345/AnsiballZ_file.py'
Oct 02 12:04:19 compute-0 sudo[248220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:19 compute-0 python3.9[248229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.002969361 +0000 UTC m=+0.054448228 container create 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:04:20 compute-0 sudo[248220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:20 compute-0 systemd[1]: Started libpod-conmon-1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b.scope.
Oct 02 12:04:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:19.972129753 +0000 UTC m=+0.023608670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.08499155 +0000 UTC m=+0.136470477 container init 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.096967174 +0000 UTC m=+0.148446001 container start 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.103179287 +0000 UTC m=+0.154658154 container attach 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:04:20 compute-0 stoic_kowalevski[248270]: 167 167
Oct 02 12:04:20 compute-0 systemd[1]: libpod-1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b.scope: Deactivated successfully.
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.107139981 +0000 UTC m=+0.158618848 container died 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d93c8816d99890dead71697b0c21c5c7e6477a42bb131ba164681b5c4e27273-merged.mount: Deactivated successfully.
Oct 02 12:04:20 compute-0 podman[248252]: 2025-10-02 12:04:20.163916938 +0000 UTC m=+0.215395805 container remove 1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kowalevski, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 12:04:20 compute-0 systemd[1]: libpod-conmon-1b770e2e76b1f03ec4d5de53abb4d6b2395ecbea679b5739a10f65663ddd755b.scope: Deactivated successfully.
Oct 02 12:04:20 compute-0 podman[248369]: 2025-10-02 12:04:20.358436315 +0000 UTC m=+0.060467265 container create 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:04:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:20 compute-0 systemd[1]: Started libpod-conmon-41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3.scope.
Oct 02 12:04:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:20 compute-0 podman[248369]: 2025-10-02 12:04:20.343040602 +0000 UTC m=+0.045071552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:20 compute-0 podman[248369]: 2025-10-02 12:04:20.462997465 +0000 UTC m=+0.165028435 container init 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:04:20 compute-0 podman[248369]: 2025-10-02 12:04:20.478930962 +0000 UTC m=+0.180961952 container start 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:04:20 compute-0 podman[248369]: 2025-10-02 12:04:20.483120392 +0000 UTC m=+0.185151342 container attach 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:04:20 compute-0 sudo[248464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqredomtqrowwjwngcramxeavafihtkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406660.190673-3800-106631895923219/AnsiballZ_file.py'
Oct 02 12:04:20 compute-0 sudo[248464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:20.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:20 compute-0 python3.9[248466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:20 compute-0 sudo[248464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:20 compute-0 sudo[248468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:20 compute-0 sudo[248468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:20 compute-0 sudo[248468]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:20 compute-0 sudo[248516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:20 compute-0 sudo[248516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:20 compute-0 sudo[248516]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 sudo[248673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khlmsqghnrnpkiilczrmwvcimiksygbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406661.0186982-3866-107755314728622/AnsiballZ_file.py'
Oct 02 12:04:21 compute-0 sudo[248673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:21 compute-0 tender_kapitsa[248416]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:04:21 compute-0 tender_kapitsa[248416]: --> relative data size: 1.0
Oct 02 12:04:21 compute-0 tender_kapitsa[248416]: --> All data devices are unavailable
Oct 02 12:04:21 compute-0 systemd[1]: libpod-41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3.scope: Deactivated successfully.
Oct 02 12:04:21 compute-0 podman[248369]: 2025-10-02 12:04:21.36736514 +0000 UTC m=+1.069396130 container died 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c220532affb100e5939191e73225b72d31f85ae20a2f6d25355b77dffa17a4bf-merged.mount: Deactivated successfully.
Oct 02 12:04:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:21 compute-0 podman[248369]: 2025-10-02 12:04:21.446726649 +0000 UTC m=+1.148757609 container remove 41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:04:21 compute-0 systemd[1]: libpod-conmon-41fcfcfefc7de8c57062212692ff9e2bbe8a7d45dc16fe66a55760e6c55fa4d3.scope: Deactivated successfully.
Oct 02 12:04:21 compute-0 sudo[248112]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 python3.9[248677]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:21 compute-0 sudo[248673]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 sudo[248693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:21 compute-0 sudo[248693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:21 compute-0 sudo[248693]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 sudo[248742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:21 compute-0 sudo[248742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:21 compute-0 sudo[248742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 ceph-mon[73668]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:21 compute-0 sudo[248790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:21 compute-0 sudo[248790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:21 compute-0 sudo[248790]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-0 sudo[248844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:04:21 compute-0 sudo[248844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:22 compute-0 sudo[248985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xavjmrmxbmaepvrfsexguzjulhxalxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406661.6958237-3866-230079333781252/AnsiballZ_file.py'
Oct 02 12:04:22 compute-0 sudo[248985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.21884777 +0000 UTC m=+0.053855642 container create 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:04:22 compute-0 systemd[1]: Started libpod-conmon-5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887.scope.
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.194478082 +0000 UTC m=+0.029485984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.310231655 +0000 UTC m=+0.145239577 container init 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.323493232 +0000 UTC m=+0.158501134 container start 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.327195529 +0000 UTC m=+0.162203431 container attach 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:04:22 compute-0 busy_pike[249005]: 167 167
Oct 02 12:04:22 compute-0 systemd[1]: libpod-5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887.scope: Deactivated successfully.
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.330185157 +0000 UTC m=+0.165193029 container died 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:04:22 compute-0 python3.9[248989]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22d14d96269c11c8bb31a18c155f46ad7081c096a43b143b9a8abfeda4363ae-merged.mount: Deactivated successfully.
Oct 02 12:04:22 compute-0 sudo[248985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:22 compute-0 podman[248986]: 2025-10-02 12:04:22.380976328 +0000 UTC m=+0.215984200 container remove 5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:22 compute-0 systemd[1]: libpod-conmon-5b1250476ea238968e274d270051daea89aa2ec5b1293b5247d0f064b3575887.scope: Deactivated successfully.
Oct 02 12:04:22 compute-0 podman[249056]: 2025-10-02 12:04:22.58905838 +0000 UTC m=+0.058532024 container create 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:04:22 compute-0 systemd[1]: Started libpod-conmon-51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488.scope.
Oct 02 12:04:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:22 compute-0 podman[249056]: 2025-10-02 12:04:22.567260469 +0000 UTC m=+0.036734103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a88f9fbf22c82ca08fb1995a5b42dabb7055d47c210bae8f44a511139df534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a88f9fbf22c82ca08fb1995a5b42dabb7055d47c210bae8f44a511139df534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a88f9fbf22c82ca08fb1995a5b42dabb7055d47c210bae8f44a511139df534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a88f9fbf22c82ca08fb1995a5b42dabb7055d47c210bae8f44a511139df534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:22 compute-0 podman[249056]: 2025-10-02 12:04:22.70047778 +0000 UTC m=+0.169951464 container init 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:04:22 compute-0 podman[249056]: 2025-10-02 12:04:22.720217447 +0000 UTC m=+0.189691101 container start 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:04:22 compute-0 podman[249056]: 2025-10-02 12:04:22.725374042 +0000 UTC m=+0.194847696 container attach 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:04:22 compute-0 sudo[249202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdjqpzmjwxetjjrhdvomkqtuxadrgnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406662.5537233-3866-101419892856996/AnsiballZ_file.py'
Oct 02 12:04:22 compute-0 sudo[249202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:23 compute-0 python3.9[249204]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:23 compute-0 sudo[249202]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]: {
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:     "1": [
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:         {
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "devices": [
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "/dev/loop3"
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             ],
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "lv_name": "ceph_lv0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "lv_size": "7511998464",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "name": "ceph_lv0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "tags": {
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.cluster_name": "ceph",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.crush_device_class": "",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.encrypted": "0",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.osd_id": "1",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.type": "block",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:                 "ceph.vdo": "0"
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             },
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "type": "block",
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:             "vg_name": "ceph_vg0"
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:         }
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]:     ]
Oct 02 12:04:23 compute-0 wonderful_sanderson[249121]: }
Oct 02 12:04:23 compute-0 systemd[1]: libpod-51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488.scope: Deactivated successfully.
Oct 02 12:04:23 compute-0 podman[249056]: 2025-10-02 12:04:23.50654299 +0000 UTC m=+0.976016654 container died 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a88f9fbf22c82ca08fb1995a5b42dabb7055d47c210bae8f44a511139df534-merged.mount: Deactivated successfully.
Oct 02 12:04:23 compute-0 podman[249056]: 2025-10-02 12:04:23.588160129 +0000 UTC m=+1.057633783 container remove 51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:04:23 compute-0 systemd[1]: libpod-conmon-51e601b2aacb3a1333decb2043b4a0b25592d723d5ac44a2b2c730dde3ecc488.scope: Deactivated successfully.
Oct 02 12:04:23 compute-0 sudo[248844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 sudo[249391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uitflhpqutgvsjfdqweccpfxrmlcfvjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406663.355553-3866-58449279804354/AnsiballZ_file.py'
Oct 02 12:04:23 compute-0 sudo[249391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:23 compute-0 sudo[249351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:23 compute-0 sudo[249351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:23 compute-0 sudo[249351]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 ceph-mon[73668]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:23 compute-0 sudo[249398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:23 compute-0 sudo[249398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:23 compute-0 sudo[249398]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 sudo[249423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:23 compute-0 sudo[249423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:23 compute-0 sudo[249423]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 python3.9[249396]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:23 compute-0 sudo[249391]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-0 sudo[249448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:04:24 compute-0 sudo[249448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.357263139 +0000 UTC m=+0.041392325 container create 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:04:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:24 compute-0 systemd[1]: Started libpod-conmon-1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516.scope.
Oct 02 12:04:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.341030744 +0000 UTC m=+0.025159930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:24 compute-0 sudo[249682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkrjoxirvpjjxdhivgfviwwpquajhrhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406664.121252-3866-244182441400434/AnsiballZ_file.py'
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.448255534 +0000 UTC m=+0.132384780 container init 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:24 compute-0 sudo[249682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.456635713 +0000 UTC m=+0.140764949 container start 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:04:24 compute-0 systemd[1]: libpod-1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516.scope: Deactivated successfully.
Oct 02 12:04:24 compute-0 goofy_cerf[249673]: 167 167
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.463029091 +0000 UTC m=+0.147158347 container attach 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:04:24 compute-0 conmon[249673]: conmon 1a5ef0ca24ef51fb4d04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516.scope/container/memory.events
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.464095859 +0000 UTC m=+0.148225065 container died 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eb68299fa2a292c5d6b68532d3342401e5bcb4d4b565d35ed734d1bb614e717-merged.mount: Deactivated successfully.
Oct 02 12:04:24 compute-0 podman[249614]: 2025-10-02 12:04:24.510395672 +0000 UTC m=+0.194524848 container remove 1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:24 compute-0 systemd[1]: libpod-conmon-1a5ef0ca24ef51fb4d04f9018660c44e6c44cf75e4cce652a05c8894517df516.scope: Deactivated successfully.
Oct 02 12:04:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:24.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:24 compute-0 python3.9[249685]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:24 compute-0 sudo[249682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:24 compute-0 podman[249706]: 2025-10-02 12:04:24.706741776 +0000 UTC m=+0.045718438 container create 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:04:24 compute-0 systemd[1]: Started libpod-conmon-90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656.scope.
Oct 02 12:04:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:04:24 compute-0 podman[249706]: 2025-10-02 12:04:24.686590248 +0000 UTC m=+0.025566950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a40305265d2c11f5d7a1b657d98f55f9b99255f3510dee274f1227aaab058a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a40305265d2c11f5d7a1b657d98f55f9b99255f3510dee274f1227aaab058a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a40305265d2c11f5d7a1b657d98f55f9b99255f3510dee274f1227aaab058a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a40305265d2c11f5d7a1b657d98f55f9b99255f3510dee274f1227aaab058a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:04:24 compute-0 podman[249706]: 2025-10-02 12:04:24.797222987 +0000 UTC m=+0.136199669 container init 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:04:24 compute-0 podman[249706]: 2025-10-02 12:04:24.809183481 +0000 UTC m=+0.148160173 container start 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:04:24 compute-0 podman[249706]: 2025-10-02 12:04:24.813697759 +0000 UTC m=+0.152674451 container attach 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:04:25 compute-0 sudo[249877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuubbofarecxwzajyhwifmwxsmkxfvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406664.8588984-3866-21126426917670/AnsiballZ_file.py'
Oct 02 12:04:25 compute-0 sudo[249877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:25 compute-0 python3.9[249879]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:25 compute-0 sudo[249877]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]: {
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:         "osd_id": 1,
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:         "type": "bluestore"
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]:     }
Oct 02 12:04:25 compute-0 dazzling_solomon[249736]: }
Oct 02 12:04:25 compute-0 systemd[1]: libpod-90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656.scope: Deactivated successfully.
Oct 02 12:04:25 compute-0 podman[249706]: 2025-10-02 12:04:25.731802855 +0000 UTC m=+1.070779537 container died 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a40305265d2c11f5d7a1b657d98f55f9b99255f3510dee274f1227aaab058a2-merged.mount: Deactivated successfully.
Oct 02 12:04:25 compute-0 ceph-mon[73668]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:25 compute-0 podman[249706]: 2025-10-02 12:04:25.816838993 +0000 UTC m=+1.155815645 container remove 90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_solomon, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:04:25 compute-0 systemd[1]: libpod-conmon-90d32a3d203ca3c0b375d2451be405718e547955ed142bc6c1539d0003a48656.scope: Deactivated successfully.
Oct 02 12:04:25 compute-0 sudo[249448]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:04:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:04:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 88cad120-6b41-4154-8537-cf444caeab78 does not exist
Oct 02 12:04:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 69e38a87-6fcf-4001-ae78-d567d77d265a does not exist
Oct 02 12:04:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 420e5789-c7b7-4f05-88d2-a1b9a1a2f215 does not exist
Oct 02 12:04:25 compute-0 podman[249994]: 2025-10-02 12:04:25.913837665 +0000 UTC m=+0.147563308 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:04:25 compute-0 sudo[250064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:25 compute-0 sudo[250105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njkztlgwoymooopiddqesnjdvjqmqwxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406665.594453-3866-184506364730871/AnsiballZ_file.py'
Oct 02 12:04:25 compute-0 sudo[250064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:25 compute-0 sudo[250105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:25 compute-0 sudo[250064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-0 sudo[250110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:04:26 compute-0 sudo[250110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:26 compute-0 sudo[250110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-0 python3.9[250109]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:26 compute-0 sudo[250105]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:04:26.440 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:04:26.441 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:04:26.441 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:26.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:26 compute-0 sudo[250285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjaoqlpcatlmznpdumpugnkcqwkwaaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406666.33196-3866-142506272613151/AnsiballZ_file.py'
Oct 02 12:04:26 compute-0 sudo[250285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:26 compute-0 python3.9[250287]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:26 compute-0 sudo[250285]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:27 compute-0 sudo[250437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjlzpcuudwjrmjatksjjhxggkqtstsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406667.077104-3866-151968768410055/AnsiballZ_file.py'
Oct 02 12:04:27 compute-0 sudo[250437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:04:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:27.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:04:27 compute-0 python3.9[250439]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:27 compute-0 sudo[250437]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:27 compute-0 ceph-mon[73668]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:04:28
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes']
Oct 02 12:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:04:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:28.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:29 compute-0 ceph-mon[73668]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:30.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:31 compute-0 podman[250466]: 2025-10-02 12:04:31.401597782 +0000 UTC m=+0.064346937 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 12:04:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:31 compute-0 ceph-mon[73668]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:32 compute-0 sudo[250611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imswlwqfvhczusbdumtgcxwkkwefsfeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406672.5745707-4233-176364057479344/AnsiballZ_getent.py'
Oct 02 12:04:32 compute-0 sudo[250611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:33 compute-0 python3.9[250613]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 02 12:04:33 compute-0 sudo[250611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:33 compute-0 sudo[250764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclorrcoiszholtzitvwblvkrzhbdqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406673.3247585-4257-27018124142143/AnsiballZ_group.py'
Oct 02 12:04:33 compute-0 sudo[250764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:34 compute-0 ceph-mon[73668]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:34 compute-0 python3.9[250766]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 12:04:34 compute-0 groupadd[250767]: group added to /etc/group: name=nova, GID=42436
Oct 02 12:04:34 compute-0 groupadd[250767]: group added to /etc/gshadow: name=nova
Oct 02 12:04:34 compute-0 groupadd[250767]: new group: name=nova, GID=42436
Oct 02 12:04:34 compute-0 sudo[250764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:34 compute-0 sudo[250923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfccsoolsfoyzumgfpeeojamhkdteeck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406674.4279063-4281-6651740177761/AnsiballZ_user.py'
Oct 02 12:04:34 compute-0 sudo[250923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:35 compute-0 python3.9[250925]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 12:04:35 compute-0 useradd[250927]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 02 12:04:35 compute-0 useradd[250927]: add 'nova' to group 'libvirt'
Oct 02 12:04:35 compute-0 useradd[250927]: add 'nova' to shadow group 'libvirt'
Oct 02 12:04:35 compute-0 sudo[250923]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:36 compute-0 ceph-mon[73668]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:36 compute-0 sshd-session[250958]: Accepted publickey for zuul from 192.168.122.30 port 39998 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 12:04:36 compute-0 systemd-logind[820]: New session 53 of user zuul.
Oct 02 12:04:36 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 02 12:04:36 compute-0 sshd-session[250958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 12:04:36 compute-0 sshd-session[250961]: Received disconnect from 192.168.122.30 port 39998:11: disconnected by user
Oct 02 12:04:36 compute-0 sshd-session[250961]: Disconnected from user zuul 192.168.122.30 port 39998
Oct 02 12:04:36 compute-0 sshd-session[250958]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:04:36 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 02 12:04:36 compute-0 systemd-logind[820]: Session 53 logged out. Waiting for processes to exit.
Oct 02 12:04:36 compute-0 systemd-logind[820]: Removed session 53.
Oct 02 12:04:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:36.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:36 compute-0 python3.9[251112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:37 compute-0 python3.9[251233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406676.520227-4356-33711703624445/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:38 compute-0 ceph-mon[73668]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:38 compute-0 python3.9[251383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:38.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:38 compute-0 python3.9[251460]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:39 compute-0 python3.9[251610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:39.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:04:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:04:39 compute-0 python3.9[251731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406678.8761358-4356-118242300830758/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:40 compute-0 ceph-mon[73668]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:40 compute-0 python3.9[251882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:41 compute-0 sudo[251953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:41 compute-0 sudo[251953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:41 compute-0 sudo[251953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:41 compute-0 sudo[252010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:41 compute-0 sudo[252010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:41 compute-0 sudo[252010]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:41 compute-0 python3.9[252046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406680.1658642-4356-72199025063809/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:41.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:42 compute-0 ceph-mon[73668]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:42 compute-0 python3.9[252203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:42 compute-0 podman[252299]: 2025-10-02 12:04:42.593726203 +0000 UTC m=+0.082481922 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:04:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:04:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:42.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:04:42 compute-0 python3.9[252338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406681.4963295-4356-265613683899478/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:43 compute-0 sudo[252495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crdymxwgmlpappoguoubyscauvqzbkns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406683.104853-4563-29376637992241/AnsiballZ_file.py'
Oct 02 12:04:43 compute-0 sudo[252495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:43 compute-0 python3.9[252497]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:43 compute-0 sudo[252495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:44 compute-0 ceph-mon[73668]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:44 compute-0 sudo[252659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzeyaqruryghlnbagzomgdyydayazrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406683.9157667-4587-31668794681673/AnsiballZ_copy.py'
Oct 02 12:04:44 compute-0 sudo[252659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:44 compute-0 podman[252621]: 2025-10-02 12:04:44.316684037 +0000 UTC m=+0.090762329 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:04:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:44 compute-0 python3.9[252667]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:44 compute-0 sudo[252659]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:44.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:45 compute-0 sudo[252819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopgfnpgeibpmysaxcoeidqvqxpcaqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406684.697471-4611-238638939315164/AnsiballZ_stat.py'
Oct 02 12:04:45 compute-0 sudo[252819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:45 compute-0 python3.9[252821]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:04:45 compute-0 sudo[252819]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:45 compute-0 sudo[252971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxmzhafuggwasvbvbjqqqpobbnhaljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406685.4775395-4635-182050405916048/AnsiballZ_stat.py'
Oct 02 12:04:45 compute-0 sudo[252971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:46 compute-0 python3.9[252973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:46 compute-0 sudo[252971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:46 compute-0 ceph-mon[73668]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:46 compute-0 sudo[253095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfvxlwsstnmkxkohamajsjdxbgqassh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406685.4775395-4635-182050405916048/AnsiballZ_copy.py'
Oct 02 12:04:46 compute-0 sudo[253095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:46 compute-0 python3.9[253097]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759406685.4775395-4635-182050405916048/.source _original_basename=.cz_h10tn follow=False checksum=093c1c38cf0b1401ec33f87b7abc35808b57ce7e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 02 12:04:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:46.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:46 compute-0 sudo[253095]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:47 compute-0 python3.9[253249]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:04:48 compute-0 ceph-mon[73668]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:48 compute-0 python3.9[253401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:04:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:04:49 compute-0 python3.9[253523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406687.7593274-4713-145009589219106/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:49 compute-0 python3.9[253673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:50 compute-0 ceph-mon[73668]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:50 compute-0 python3.9[253794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406689.2266307-4758-138159198188099/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:04:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:04:51 compute-0 sudo[253945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxwinclahnxtuknwstdjobdwlvgdmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406690.6908357-4809-17514036217221/AnsiballZ_container_config_data.py'
Oct 02 12:04:51 compute-0 sudo[253945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:51 compute-0 python3.9[253947]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 02 12:04:51 compute-0 sudo[253945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:51 compute-0 sudo[254097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxyewohdntzzbkpznrnoglkuxxrcvfse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406691.5682657-4836-151873893725341/AnsiballZ_container_config_hash.py'
Oct 02 12:04:51 compute-0 sudo[254097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:52 compute-0 python3.9[254099]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:04:52 compute-0 sudo[254097]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:52 compute-0 ceph-mon[73668]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:52.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:52 compute-0 sudo[254250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhvwzvjcontwqtyyhkhsdgyhgsocxqm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406692.484237-4866-69121072379234/AnsiballZ_edpm_container_manage.py'
Oct 02 12:04:52 compute-0 sudo[254250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:53 compute-0 python3[254252]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:04:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:53.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:54 compute-0 ceph-mon[73668]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:54.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:55.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:56 compute-0 ceph-mon[73668]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:57 compute-0 ceph-mon[73668]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:57.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:58 compute-0 podman[254306]: 2025-10-02 12:04:58.151167621 +0000 UTC m=+1.821216630 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:04:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:04:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:59 compute-0 ceph-mon[73668]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:01 compute-0 sudo[254355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:01 compute-0 sudo[254355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:01 compute-0 sudo[254355]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:01 compute-0 sudo[254380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:01 compute-0 sudo[254380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:01 compute-0 sudo[254380]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:02 compute-0 podman[254405]: 2025-10-02 12:05:02.875964748 +0000 UTC m=+0.528975291 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 12:05:02 compute-0 ceph-mon[73668]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:02 compute-0 podman[254265]: 2025-10-02 12:05:02.933783403 +0000 UTC m=+9.714145377 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:03 compute-0 podman[254449]: 2025-10-02 12:05:03.156814627 +0000 UTC m=+0.054433727 container create 6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:05:03 compute-0 podman[254449]: 2025-10-02 12:05:03.127617822 +0000 UTC m=+0.025236902 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:03 compute-0 python3[254252]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 02 12:05:03 compute-0 sudo[254250]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:03.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:04 compute-0 ceph-mon[73668]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:04 compute-0 sudo[254638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzxvhfdmsogekaoyxfoecafdneabsgqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406704.277194-4890-90516580586710/AnsiballZ_stat.py'
Oct 02 12:05:04 compute-0 sudo[254638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:04 compute-0 python3.9[254640]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:04 compute-0 sudo[254638]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:05 compute-0 ceph-mon[73668]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:05.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:05 compute-0 sudo[254792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgoozjrqveyjqyxazhvslqwagqramyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406705.401592-4926-222926291985024/AnsiballZ_container_config_data.py'
Oct 02 12:05:05 compute-0 sudo[254792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:05 compute-0 python3.9[254794]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 02 12:05:05 compute-0 sudo[254792]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:06.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:06 compute-0 sudo[254945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxinluucoxizbhcstxigscddtruswgza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406706.3441954-4953-74371667797535/AnsiballZ_container_config_hash.py'
Oct 02 12:05:06 compute-0 sudo[254945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:06 compute-0 python3.9[254947]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:05:06 compute-0 sudo[254945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:07 compute-0 ceph-mon[73668]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:07 compute-0 sudo[255097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebnifdlmowisgnujpqnhmmvofvqqvbpe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406707.3134575-4983-77966725371251/AnsiballZ_edpm_container_manage.py'
Oct 02 12:05:07 compute-0 sudo[255097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:08 compute-0 python3[255099]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:05:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:08 compute-0 podman[255138]: 2025-10-02 12:05:08.308489358 +0000 UTC m=+0.042912065 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:09 compute-0 podman[255138]: 2025-10-02 12:05:09.343467676 +0000 UTC m=+1.077890323 container create 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct 02 12:05:09 compute-0 python3[255099]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 02 12:05:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:09 compute-0 sudo[255097]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:10 compute-0 ceph-mon[73668]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:10 compute-0 sudo[255327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugakwquqhsuhylunwwfhrkwwszeuymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406709.6913521-5007-86406504243424/AnsiballZ_stat.py'
Oct 02 12:05:10 compute-0 sudo[255327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:10 compute-0 python3.9[255329]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:10 compute-0 sudo[255327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:10.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:11 compute-0 sudo[255482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzgedlcgzmftpjmdfhuhbyyosvocueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406710.7170544-5034-62456685859687/AnsiballZ_file.py'
Oct 02 12:05:11 compute-0 sudo[255482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:11 compute-0 python3.9[255484]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:05:11 compute-0 sudo[255482]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:11 compute-0 sudo[255633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vubyleqxfquwbvzqewnkpzcgysegnjxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.3676062-5034-214861174975301/AnsiballZ_copy.py'
Oct 02 12:05:11 compute-0 sudo[255633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:11 compute-0 ceph-mon[73668]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:12 compute-0 python3.9[255635]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406711.3676062-5034-214861174975301/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:05:12 compute-0 sudo[255633]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:12 compute-0 sudo[255709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajdmjdzvquvovfipepndfwoeqhkrkaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.3676062-5034-214861174975301/AnsiballZ_systemd.py'
Oct 02 12:05:12 compute-0 sudo[255709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:12 compute-0 python3.9[255711]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:05:12 compute-0 systemd[1]: Reloading.
Oct 02 12:05:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:12.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:12 compute-0 systemd-rc-local-generator[255748]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:05:12 compute-0 systemd-sysv-generator[255751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:05:12 compute-0 podman[255714]: 2025-10-02 12:05:12.756983323 +0000 UTC m=+0.094581579 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:05:13 compute-0 sudo[255709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:13 compute-0 sudo[255839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxcqrnxxupjikuicferggoeabdyjksnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.3676062-5034-214861174975301/AnsiballZ_systemd.py'
Oct 02 12:05:13 compute-0 sudo[255839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:13.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:13 compute-0 python3.9[255841]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:05:13 compute-0 systemd[1]: Reloading.
Oct 02 12:05:13 compute-0 systemd-sysv-generator[255877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:05:13 compute-0 systemd-rc-local-generator[255871]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:05:14 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 12:05:14 compute-0 ceph-mon[73668]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:14 compute-0 podman[255881]: 2025-10-02 12:05:14.428407306 +0000 UTC m=+0.340050680 container init 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:14 compute-0 podman[255881]: 2025-10-02 12:05:14.436309873 +0000 UTC m=+0.347953267 container start 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0)
Oct 02 12:05:14 compute-0 nova_compute[255896]: + sudo -E kolla_set_configs
Oct 02 12:05:14 compute-0 podman[255881]: nova_compute
Oct 02 12:05:14 compute-0 systemd[1]: Started nova_compute container.
Oct 02 12:05:14 compute-0 sudo[255839]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Validating config file
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying service configuration files
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Deleting /etc/ceph
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Creating directory /etc/ceph
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Writing out command to execute
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-0 nova_compute[255896]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-0 nova_compute[255896]: ++ cat /run_command
Oct 02 12:05:14 compute-0 nova_compute[255896]: + CMD=nova-compute
Oct 02 12:05:14 compute-0 nova_compute[255896]: + ARGS=
Oct 02 12:05:14 compute-0 nova_compute[255896]: + sudo kolla_copy_cacerts
Oct 02 12:05:14 compute-0 podman[255905]: 2025-10-02 12:05:14.673914479 +0000 UTC m=+0.082919794 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:05:14 compute-0 nova_compute[255896]: + [[ ! -n '' ]]
Oct 02 12:05:14 compute-0 nova_compute[255896]: + . kolla_extend_start
Oct 02 12:05:14 compute-0 nova_compute[255896]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 12:05:14 compute-0 nova_compute[255896]: Running command: 'nova-compute'
Oct 02 12:05:14 compute-0 nova_compute[255896]: + umask 0022
Oct 02 12:05:14 compute-0 nova_compute[255896]: + exec nova-compute
Oct 02 12:05:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:14.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:15.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:15 compute-0 ceph-mon[73668]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:15 compute-0 python3.9[256077]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:16.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:16 compute-0 python3.9[256228]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:17 compute-0 python3.9[256379]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:17 compute-0 ceph-mon[73668]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:18 compute-0 sudo[256529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehwmthwrbmdreheercmhwsevvzcskgpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406718.0369246-5214-245919212817944/AnsiballZ_podman_container.py'
Oct 02 12:05:18 compute-0 sudo[256529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:18 compute-0 python3.9[256532]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:05:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:05:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:18.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:05:18 compute-0 sudo[256529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:19 compute-0 sudo[256705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwcpwfajrdwrcnwisrqvfuzrfnxymmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406719.059456-5238-25212618746022/AnsiballZ_systemd.py'
Oct 02 12:05:19 compute-0 sudo[256705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:19 compute-0 python3.9[256707]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:05:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:19 compute-0 systemd[1]: Stopping nova_compute container...
Oct 02 12:05:20 compute-0 systemd[1]: libpod-9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971.scope: Deactivated successfully.
Oct 02 12:05:20 compute-0 systemd[1]: libpod-9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971.scope: Consumed 2.613s CPU time.
Oct 02 12:05:20 compute-0 podman[256711]: 2025-10-02 12:05:20.205513836 +0000 UTC m=+0.362218322 container died 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:20 compute-0 ceph-mon[73668]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3611304547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2400835705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971-userdata-shm.mount: Deactivated successfully.
Oct 02 12:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943-merged.mount: Deactivated successfully.
Oct 02 12:05:21 compute-0 sudo[256740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:21 compute-0 sudo[256740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:21 compute-0 sudo[256740]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:21 compute-0 sudo[256765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:21 compute-0 sudo[256765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:21 compute-0 sudo[256765]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:21.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:22.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:23.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:24.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:26 compute-0 ceph-mds[95523]: mds.beacon.cephfs.compute-0.yqiqns missed beacon ack from the monitors
Oct 02 12:05:26 compute-0 sudo[256794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:26 compute-0 sudo[256794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-0 sudo[256794]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:05:26.441 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:05:26.441 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:05:26.442 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:26 compute-0 sudo[256820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:26 compute-0 sudo[256820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-0 sudo[256820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-0 sudo[256845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:26 compute-0 sudo[256845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-0 sudo[256845]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-0 sudo[256870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:05:26 compute-0 sudo[256870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:26.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:26 compute-0 podman[256711]: 2025-10-02 12:05:26.723325531 +0000 UTC m=+6.880029977 container cleanup 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:26 compute-0 podman[256711]: nova_compute
Oct 02 12:05:26 compute-0 ceph-mon[73668]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:26 compute-0 podman[256902]: nova_compute
Oct 02 12:05:26 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 02 12:05:26 compute-0 systemd[1]: Stopped nova_compute container.
Oct 02 12:05:26 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 12:05:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:05:26 compute-0 sudo[256870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:05:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af797bc9b2618ea29b28e9b5d927b73c0a4ebd32bd0abc69ff828353cd315943/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:27 compute-0 podman[256927]: 2025-10-02 12:05:27.002339612 +0000 UTC m=+0.159014868 container init 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:05:27 compute-0 podman[256927]: 2025-10-02 12:05:27.012791836 +0000 UTC m=+0.169467072 container start 9ca67db55674d68250cab9d4df12596b34c273f016407ee0ac61281552b67971 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:05:27 compute-0 nova_compute[256940]: + sudo -E kolla_set_configs
Oct 02 12:05:27 compute-0 podman[256927]: nova_compute
Oct 02 12:05:27 compute-0 systemd[1]: Started nova_compute container.
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Validating config file
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying service configuration files
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 12:05:27 compute-0 sudo[256705]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /etc/ceph
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Creating directory /etc/ceph
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Writing out command to execute
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:27 compute-0 nova_compute[256940]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:05:27 compute-0 nova_compute[256940]: ++ cat /run_command
Oct 02 12:05:27 compute-0 nova_compute[256940]: + CMD=nova-compute
Oct 02 12:05:27 compute-0 nova_compute[256940]: + ARGS=
Oct 02 12:05:27 compute-0 nova_compute[256940]: + sudo kolla_copy_cacerts
Oct 02 12:05:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:05:27 compute-0 nova_compute[256940]: + [[ ! -n '' ]]
Oct 02 12:05:27 compute-0 nova_compute[256940]: + . kolla_extend_start
Oct 02 12:05:27 compute-0 nova_compute[256940]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 12:05:27 compute-0 nova_compute[256940]: Running command: 'nova-compute'
Oct 02 12:05:27 compute-0 nova_compute[256940]: + umask 0022
Oct 02 12:05:27 compute-0 nova_compute[256940]: + exec nova-compute
Oct 02 12:05:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 sudo[256976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:27 compute-0 sudo[256976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-0 sudo[256976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-0 sudo[257001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:27 compute-0 sudo[257001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-0 sudo[257001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-0 sudo[257056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:27 compute-0 sudo[257056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-0 sudo[257056]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:27 compute-0 sudo[257110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:05:27 compute-0 sudo[257110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-0 sudo[257201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhreurwsxjsgalwkjfnvnzyqjqooqag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406727.3557281-5265-32765979227653/AnsiballZ_podman_container.py'
Oct 02 12:05:27 compute-0 sudo[257201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:27 compute-0 ceph-mon[73668]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-0 ceph-mon[73668]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2216606571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-0 ceph-mon[73668]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1319389839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3916071039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-0 python3.9[257205]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:05:28 compute-0 sudo[257110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:05:28 compute-0 systemd[1]: Started libpod-conmon-6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3.scope.
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e752207c-94ea-4c8c-9810-5cb076a8e31f does not exist
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7617b9ce-c78a-44da-99ca-c118a3d208e9 does not exist
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3118d515-ee64-4090-a199-40afbc13dfcd does not exist
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:05:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60545d4367ec8ebbe044085d3c5b20c88de446ad6a7bac46bcc2c515288020b6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60545d4367ec8ebbe044085d3c5b20c88de446ad6a7bac46bcc2c515288020b6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60545d4367ec8ebbe044085d3c5b20c88de446ad6a7bac46bcc2c515288020b6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:05:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-0 podman[257260]: 2025-10-02 12:05:28.23765883 +0000 UTC m=+0.147488186 container init 6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:05:28 compute-0 podman[257260]: 2025-10-02 12:05:28.245209388 +0000 UTC m=+0.155038764 container start 6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 02 12:05:28 compute-0 python3.9[257205]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 02 12:05:28 compute-0 sudo[257279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:28 compute-0 sudo[257279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[257279]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Applying nova statedir ownership
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 02 12:05:28 compute-0 nova_compute_init[257303]: INFO:nova_statedir:Nova statedir ownership complete
Oct 02 12:05:28 compute-0 systemd[1]: libpod-6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3.scope: Deactivated successfully.
Oct 02 12:05:28 compute-0 podman[257306]: 2025-10-02 12:05:28.321380393 +0000 UTC m=+0.040045700 container died 6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:05:28 compute-0 sudo[257314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:28 compute-0 sudo[257314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[257314]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-60545d4367ec8ebbe044085d3c5b20c88de446ad6a7bac46bcc2c515288020b6-merged.mount: Deactivated successfully.
Oct 02 12:05:28 compute-0 podman[257321]: 2025-10-02 12:05:28.393254006 +0000 UTC m=+0.082342948 container cleanup 6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:28 compute-0 systemd[1]: libpod-conmon-6b95ead2adc2abd7ac1195c7908a29c4b0a574ec50fae0d5f61b7b2115a20fc3.scope: Deactivated successfully.
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:28 compute-0 sudo[257357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:28 compute-0 sudo[257357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 sudo[257357]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 sudo[257201]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-0 sudo[257399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:05:28 compute-0 sudo[257399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:05:28
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta']
Oct 02 12:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:05:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:28.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2433992043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.805870737 +0000 UTC m=+0.038567502 container create 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:05:28 compute-0 systemd[1]: Started libpod-conmon-9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1.scope.
Oct 02 12:05:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.871896537 +0000 UTC m=+0.104593352 container init 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.87963768 +0000 UTC m=+0.112334405 container start 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.882288729 +0000 UTC m=+0.114985494 container attach 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:28 compute-0 busy_solomon[257507]: 167 167
Oct 02 12:05:28 compute-0 systemd[1]: libpod-9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1.scope: Deactivated successfully.
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.884866567 +0000 UTC m=+0.117563332 container died 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.790216327 +0000 UTC m=+0.022913082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:28 compute-0 podman[257491]: 2025-10-02 12:05:28.926258471 +0000 UTC m=+0.158955226 container remove 9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:05:28 compute-0 systemd[1]: libpod-conmon-9dea8ea4bdac0a128ba666be17155f72dc76739d5a692b7bcbf513e59a19a7c1.scope: Deactivated successfully.
Oct 02 12:05:28 compute-0 sshd-session[218807]: Connection closed by 192.168.122.30 port 40528
Oct 02 12:05:29 compute-0 sshd-session[218804]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:05:29 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 02 12:05:29 compute-0 systemd[1]: session-51.scope: Consumed 3min 6.771s CPU time.
Oct 02 12:05:29 compute-0 systemd-logind[820]: Session 51 logged out. Waiting for processes to exit.
Oct 02 12:05:29 compute-0 systemd-logind[820]: Removed session 51.
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.030 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.030 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.030 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.030 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 12:05:29 compute-0 podman[257534]: 2025-10-02 12:05:29.10597002 +0000 UTC m=+0.047262579 container create 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7e386f90598575ca0db272255df2ed1bac03ce9d7120bafd927208425c0291-merged.mount: Deactivated successfully.
Oct 02 12:05:29 compute-0 systemd[1]: Started libpod-conmon-2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d.scope.
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.171 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:29 compute-0 podman[257534]: 2025-10-02 12:05:29.084532578 +0000 UTC m=+0.025825117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.201 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:29 compute-0 podman[257534]: 2025-10-02 12:05:29.22085294 +0000 UTC m=+0.162145489 container init 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:05:29 compute-0 podman[257534]: 2025-10-02 12:05:29.237497496 +0000 UTC m=+0.178790015 container start 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:29 compute-0 podman[257534]: 2025-10-02 12:05:29.24757941 +0000 UTC m=+0.188871989 container attach 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:05:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:29.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.682 2 INFO nova.virt.driver [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.874 2 INFO nova.compute.provider_config [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.883 2 DEBUG oslo_concurrency.lockutils [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.883 2 DEBUG oslo_concurrency.lockutils [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.884 2 DEBUG oslo_concurrency.lockutils [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.884 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.884 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.884 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.885 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.885 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.885 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.886 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.886 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.886 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.886 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.886 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.887 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.887 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.887 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.887 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.887 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.888 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.888 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.888 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.888 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.888 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.889 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.889 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.889 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.889 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.890 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.890 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.890 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.890 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.891 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.891 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.891 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.891 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.891 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.892 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.892 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.892 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.892 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.892 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.893 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.893 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.893 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.893 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.894 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.894 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.894 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.894 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.894 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.895 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.895 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.895 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.895 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.896 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.896 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.896 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.896 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.896 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.897 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.897 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.897 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.897 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.897 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.898 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.898 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.898 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.898 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.898 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.899 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.899 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.899 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.899 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.899 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.900 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.900 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.900 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.900 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.900 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.901 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.901 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.901 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.901 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.902 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.902 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.902 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.902 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.903 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.903 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.903 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.903 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.903 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.904 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.904 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.904 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.904 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.904 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.905 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.905 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.905 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.905 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.905 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.906 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.906 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.906 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.906 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.907 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.907 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.907 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.907 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.907 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.908 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.908 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.908 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.908 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.909 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.909 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.909 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.909 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.909 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.910 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.910 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.910 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.910 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.910 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.911 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.911 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.911 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.911 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.911 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.912 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.912 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.912 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.912 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.912 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.913 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.913 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.913 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.913 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.913 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.914 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.915 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.916 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.917 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.918 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.919 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.920 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.921 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.922 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.923 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.924 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.925 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.926 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.927 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.928 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.929 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.930 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.931 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.932 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.933 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.934 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.935 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.936 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.937 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.938 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.939 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.940 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.941 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.942 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.943 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.944 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.945 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.946 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.946 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.946 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.946 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.946 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.947 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.948 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.949 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.950 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.951 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.952 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.953 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.954 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.955 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.956 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.957 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.958 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.959 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.960 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.961 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.962 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.963 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.964 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.965 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 WARNING oslo_config.cfg [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 12:05:29 compute-0 nova_compute[256940]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 12:05:29 compute-0 nova_compute[256940]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 12:05:29 compute-0 nova_compute[256940]: and ``live_migration_inbound_addr`` respectively.
Oct 02 12:05:29 compute-0 nova_compute[256940]: ).  Its value may be silently ignored in the future.
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.967 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.968 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.969 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.970 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.971 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.972 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.973 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.974 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.975 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.976 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.977 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.978 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.979 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.980 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.981 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.982 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.983 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.984 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.985 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.986 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.987 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.988 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.989 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.990 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.990 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.990 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.990 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.990 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.991 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.992 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.993 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.994 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.995 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.996 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.997 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.998 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:29 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:29.999 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.000 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.001 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.002 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.003 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.004 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.005 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.006 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.007 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.008 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.009 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.010 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.011 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.012 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.013 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.014 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.015 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.016 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.017 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.018 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.019 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.020 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.021 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.022 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.023 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.024 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.025 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.026 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.027 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.028 2 DEBUG oslo_service.service [None req-cff72ec6-c0fa-4bff-89ea-aa2ad7b090cb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.030 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.043 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.044 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.044 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.044 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 12:05:30 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 12:05:30 compute-0 amazing_swanson[257551]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:05:30 compute-0 amazing_swanson[257551]: --> relative data size: 1.0
Oct 02 12:05:30 compute-0 amazing_swanson[257551]: --> All data devices are unavailable
Oct 02 12:05:30 compute-0 systemd[1]: libpod-2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d.scope: Deactivated successfully.
Oct 02 12:05:30 compute-0 podman[257534]: 2025-10-02 12:05:30.089950902 +0000 UTC m=+1.031243411 container died 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:05:30 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 12:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9591587ae615e7bf024d2f8c705d3b16e3894b9b34f4fb067034a0e12d1f0f54-merged.mount: Deactivated successfully.
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.124 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f98e1b15460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.137 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f98e1b15460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.139 2 INFO nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Connection event '1' reason 'None'
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.151 2 WARNING nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 12:05:30 compute-0 nova_compute[256940]: 2025-10-02 12:05:30.153 2 DEBUG nova.virt.libvirt.volume.mount [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 12:05:30 compute-0 podman[257534]: 2025-10-02 12:05:30.154032631 +0000 UTC m=+1.095325140 container remove 2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:05:30 compute-0 systemd[1]: libpod-conmon-2a6f883303433492254fd1cdfa019389d0b660c4cec247a616009bb7e932f54d.scope: Deactivated successfully.
Oct 02 12:05:30 compute-0 sudo[257399]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:30 compute-0 ceph-mon[73668]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:30 compute-0 sudo[257623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:30 compute-0 sudo[257623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:30 compute-0 sudo[257623]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:30 compute-0 sudo[257655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:30 compute-0 sudo[257655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:30 compute-0 sudo[257655]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:30 compute-0 sudo[257682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:30 compute-0 sudo[257682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:30 compute-0 sudo[257682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:30 compute-0 sudo[257707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:05:30 compute-0 sudo[257707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.845411396 +0000 UTC m=+0.042209987 container create b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:05:30 compute-0 systemd[1]: Started libpod-conmon-b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77.scope.
Oct 02 12:05:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.826442849 +0000 UTC m=+0.023241480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.937775177 +0000 UTC m=+0.134573848 container init b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.944465512 +0000 UTC m=+0.141264113 container start b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.948005055 +0000 UTC m=+0.144803676 container attach b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:05:30 compute-0 stupefied_buck[257796]: 167 167
Oct 02 12:05:30 compute-0 systemd[1]: libpod-b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77.scope: Deactivated successfully.
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.951494836 +0000 UTC m=+0.148293467 container died b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f83c4450d3ffcb160debeb65ce82ebe485a18645262c619bdf0c83deee5b1c5a-merged.mount: Deactivated successfully.
Oct 02 12:05:30 compute-0 podman[257780]: 2025-10-02 12:05:30.998649712 +0000 UTC m=+0.195448313 container remove b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:31 compute-0 systemd[1]: libpod-conmon-b24880e1c100e05ce802191d69cdf1fbbf6f6e5eb63160bbd9b1c79f07844d77.scope: Deactivated successfully.
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.165 2 INFO nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]: 
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <host>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <uuid>8a1e3318-b91c-48d1-8474-e3593dbdcd45</uuid>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <arch>x86_64</arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model>EPYC-Rome-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <vendor>AMD</vendor>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <microcode version='16777317'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <signature family='23' model='49' stepping='0'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='x2apic'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='tsc-deadline'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='osxsave'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='hypervisor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='tsc_adjust'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='spec-ctrl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='stibp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='arch-capabilities'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='cmp_legacy'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='topoext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='virt-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='lbrv'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='tsc-scale'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='vmcb-clean'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='pause-filter'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='pfthreshold'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='svme-addr-chk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='rdctl-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='mds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature name='pschange-mc-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <pages unit='KiB' size='4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <pages unit='KiB' size='2048'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <pages unit='KiB' size='1048576'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <power_management>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <suspend_mem/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </power_management>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <iommu support='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <migration_features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <live/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <uri_transports>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <uri_transport>tcp</uri_transport>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <uri_transport>rdma</uri_transport>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </uri_transports>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </migration_features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <topology>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <cells num='1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <cell id='0'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <memory unit='KiB'>7864108</memory>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <pages unit='KiB' size='4'>1966027</pages>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <distances>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <sibling id='0' value='10'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           </distances>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           <cpus num='8'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:           </cpus>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         </cell>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </cells>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </topology>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <cache>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </cache>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <secmodel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model>selinux</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <doi>0</doi>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </secmodel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <secmodel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model>dac</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <doi>0</doi>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </secmodel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </host>
Oct 02 12:05:31 compute-0 nova_compute[256940]: 
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <guest>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <os_type>hvm</os_type>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <arch name='i686'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <wordsize>32</wordsize>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <domain type='qemu'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <domain type='kvm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <pae/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <nonpae/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <apic default='on' toggle='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <cpuselection/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <deviceboot/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <externalSnapshot/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </guest>
Oct 02 12:05:31 compute-0 nova_compute[256940]: 
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <guest>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <os_type>hvm</os_type>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <arch name='x86_64'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <wordsize>64</wordsize>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <domain type='qemu'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <domain type='kvm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <apic default='on' toggle='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <cpuselection/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <deviceboot/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <externalSnapshot/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </guest>
Oct 02 12:05:31 compute-0 nova_compute[256940]: 
Oct 02 12:05:31 compute-0 nova_compute[256940]: </capabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]: 
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.178 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:31 compute-0 podman[257820]: 2025-10-02 12:05:31.192061769 +0000 UTC m=+0.066242616 container create d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.213 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 12:05:31 compute-0 nova_compute[256940]: <domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <domain>kvm</domain>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <arch>i686</arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <vcpu max='240'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <iothreads supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <os supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='firmware'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <loader supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>rom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pflash</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='readonly'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>yes</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='secure'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </loader>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='maximumMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <vendor>AMD</vendor>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='succor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='custom' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-128'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-256'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-512'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 podman[257820]: 2025-10-02 12:05:31.168898452 +0000 UTC m=+0.043079279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <memoryBacking supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='sourceType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>file</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>anonymous</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>memfd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </memoryBacking>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <disk supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='diskDevice'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>disk</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cdrom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>floppy</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>lun</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ide</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>fdc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>sata</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <graphics supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vnc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egl-headless</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>dbus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <video supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='modelType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vga</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cirrus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>none</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>bochs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ramfb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hostdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='mode'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>subsystem</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='startupPolicy'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>mandatory</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>requisite</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>optional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='subsysType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pci</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='capsType'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='pciBackend'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hostdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <rng supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>random</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <filesystem supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='driverType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>path</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>handle</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtiofs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </filesystem>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <tpm supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-tis</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-crb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emulator</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>external</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendVersion'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>2.0</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </tpm>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <redirdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </redirdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <channel supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pty</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>unix</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </channel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <crypto supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>qemu</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </crypto>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <interface supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>passt</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <panic supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>isa</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>hyperv</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </panic>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <gic supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <genid supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backup supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <async-teardown supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <ps2 supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sev supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sgx supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hyperv supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='features'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>relaxed</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vapic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>spinlocks</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vpindex</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>runtime</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>synic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>stimer</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reset</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vendor_id</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>frequencies</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reenlightenment</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tlbflush</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ipi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>avic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emsr_bitmap</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>xmm_input</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hyperv>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <launchSecurity supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]: </domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.222 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 12:05:31 compute-0 nova_compute[256940]: <domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <domain>kvm</domain>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <arch>i686</arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <vcpu max='4096'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <iothreads supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <os supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='firmware'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <loader supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>rom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pflash</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='readonly'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>yes</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='secure'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </loader>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='maximumMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <vendor>AMD</vendor>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='succor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:31 compute-0 systemd[1]: Started libpod-conmon-d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0.scope.
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='custom' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43addf4394f1bd764342e21f8e44cdeaf5b4af63073821b318e49458af22cecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43addf4394f1bd764342e21f8e44cdeaf5b4af63073821b318e49458af22cecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43addf4394f1bd764342e21f8e44cdeaf5b4af63073821b318e49458af22cecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-128'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-256'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-512'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43addf4394f1bd764342e21f8e44cdeaf5b4af63073821b318e49458af22cecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270'>
Oct 02 12:05:31 compute-0 podman[257820]: 2025-10-02 12:05:31.339560444 +0000 UTC m=+0.213741301 container init d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <memoryBacking supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='sourceType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>file</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>anonymous</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>memfd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </memoryBacking>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <disk supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='diskDevice'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>disk</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cdrom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>floppy</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>lun</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>fdc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>sata</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <graphics supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vnc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egl-headless</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>dbus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <video supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='modelType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vga</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cirrus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>none</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>bochs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ramfb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hostdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='mode'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>subsystem</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='startupPolicy'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>mandatory</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>requisite</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>optional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='subsysType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pci</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='capsType'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='pciBackend'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hostdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <rng supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>random</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <filesystem supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='driverType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>path</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>handle</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtiofs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </filesystem>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <tpm supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-tis</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-crb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emulator</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>external</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendVersion'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>2.0</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </tpm>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <redirdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </redirdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <channel supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pty</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>unix</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </channel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <crypto supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>qemu</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </crypto>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <interface supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>passt</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <panic supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>isa</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>hyperv</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </panic>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <gic supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <genid supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backup supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <async-teardown supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <ps2 supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sev supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sgx supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hyperv supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='features'>
Oct 02 12:05:31 compute-0 podman[257820]: 2025-10-02 12:05:31.346214988 +0000 UTC m=+0.220395845 container start d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>relaxed</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vapic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>spinlocks</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vpindex</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>runtime</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>synic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>stimer</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reset</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vendor_id</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>frequencies</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reenlightenment</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tlbflush</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ipi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>avic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emsr_bitmap</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>xmm_input</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hyperv>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <launchSecurity supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]: </domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.275 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.280 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 12:05:31 compute-0 nova_compute[256940]: <domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <domain>kvm</domain>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <arch>x86_64</arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <vcpu max='240'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <iothreads supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <os supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='firmware'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <loader supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>rom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pflash</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='readonly'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>yes</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='secure'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </loader>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='maximumMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <vendor>AMD</vendor>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='succor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='custom' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 podman[257820]: 2025-10-02 12:05:31.350163052 +0000 UTC m=+0.224343899 container attach d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-128'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-256'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-512'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <memoryBacking supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='sourceType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>file</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>anonymous</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>memfd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </memoryBacking>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <disk supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='diskDevice'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>disk</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cdrom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>floppy</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>lun</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ide</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>fdc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>sata</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <graphics supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vnc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egl-headless</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>dbus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <video supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='modelType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vga</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cirrus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>none</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>bochs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ramfb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hostdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='mode'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>subsystem</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='startupPolicy'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>mandatory</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>requisite</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>optional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='subsysType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pci</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='capsType'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='pciBackend'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hostdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <rng supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>random</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <filesystem supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='driverType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>path</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>handle</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtiofs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </filesystem>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <tpm supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-tis</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-crb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emulator</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>external</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendVersion'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>2.0</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </tpm>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <redirdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </redirdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <channel supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pty</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>unix</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </channel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <crypto supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>qemu</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </crypto>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <interface supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>passt</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <panic supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>isa</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>hyperv</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </panic>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <gic supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <genid supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backup supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <async-teardown supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <ps2 supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sev supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sgx supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hyperv supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='features'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>relaxed</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vapic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>spinlocks</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vpindex</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>runtime</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>synic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>stimer</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reset</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vendor_id</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>frequencies</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reenlightenment</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tlbflush</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ipi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>avic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emsr_bitmap</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>xmm_input</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hyperv>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <launchSecurity supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]: </domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.342 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 12:05:31 compute-0 nova_compute[256940]: <domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <domain>kvm</domain>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <arch>x86_64</arch>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <vcpu max='4096'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <iothreads supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <os supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='firmware'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>efi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <loader supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>rom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pflash</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='readonly'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>yes</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='secure'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>yes</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>no</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </loader>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='maximumMigratable'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>on</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>off</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <vendor>AMD</vendor>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='succor'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <mode name='custom' supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Denverton-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='auto-ibrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amd-psfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='stibp-always-on'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='EPYC-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-128'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-256'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx10-512'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='prefetchiti'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Haswell-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512er'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512pf'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fma4'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tbm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xop'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='amx-tile'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-bf16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-fp16'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bitalg'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrc'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fzrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='la57'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='taa-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xfd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ifma'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cmpccxadd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fbsdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='fsrs'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ibrs-all'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mcdt-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pbrsb-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='psdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='serialize'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vaes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='hle'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='rtm'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512bw'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512cd'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512dq'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512f'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='avx512vl'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='invpcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pcid'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='pku'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='mpx'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='core-capability'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='split-lock-detect'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='cldemote'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='erms'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='gfni'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdir64b'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='movdiri'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='xsaves'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='athlon-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='core2duo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='coreduo-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='n270-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='ss'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <blockers model='phenom-v1'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnow'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <feature name='3dnowext'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </blockers>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </mode>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <memoryBacking supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <enum name='sourceType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>file</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>anonymous</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <value>memfd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </memoryBacking>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <disk supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='diskDevice'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>disk</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cdrom</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>floppy</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>lun</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>fdc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>sata</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <graphics supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vnc</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egl-headless</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>dbus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <video supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='modelType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vga</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>cirrus</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>none</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>bochs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ramfb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hostdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='mode'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>subsystem</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='startupPolicy'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>mandatory</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>requisite</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>optional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='subsysType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pci</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>scsi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='capsType'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='pciBackend'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hostdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <rng supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtio-non-transitional</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>random</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>egd</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <filesystem supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='driverType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>path</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>handle</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>virtiofs</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </filesystem>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <tpm supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-tis</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tpm-crb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emulator</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>external</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendVersion'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>2.0</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </tpm>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <redirdev supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='bus'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>usb</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </redirdev>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <channel supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>pty</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>unix</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </channel>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <crypto supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='type'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>qemu</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendModel'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>builtin</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </crypto>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <interface supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='backendType'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>default</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>passt</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <panic supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='model'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>isa</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>hyperv</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </panic>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <gic supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <genid supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <backup supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <async-teardown supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <ps2 supported='yes'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sev supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <sgx supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <hyperv supported='yes'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       <enum name='features'>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>relaxed</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vapic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>spinlocks</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vpindex</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>runtime</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>synic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>stimer</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reset</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>vendor_id</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>frequencies</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>reenlightenment</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>tlbflush</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>ipi</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>avic</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>emsr_bitmap</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:         <value>xmm_input</value>
Oct 02 12:05:31 compute-0 nova_compute[256940]:       </enum>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     </hyperv>
Oct 02 12:05:31 compute-0 nova_compute[256940]:     <launchSecurity supported='no'/>
Oct 02 12:05:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:05:31 compute-0 nova_compute[256940]: </domainCapabilities>
Oct 02 12:05:31 compute-0 nova_compute[256940]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.397 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.397 2 INFO nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Secure Boot support detected
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.404 2 INFO nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.415 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] cpu compare xml: <cpu match="exact">
Oct 02 12:05:31 compute-0 nova_compute[256940]:   <model>Nehalem</model>
Oct 02 12:05:31 compute-0 nova_compute[256940]: </cpu>
Oct 02 12:05:31 compute-0 nova_compute[256940]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.419 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.446 2 INFO nova.virt.node [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Determined node identity 8733289a-aa77-4139-9e88-bac686174c8d from /var/lib/nova/compute_id
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.460 2 WARNING nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Compute nodes ['8733289a-aa77-4139-9e88-bac686174c8d'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.487 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 12:05:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:31.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.531 2 WARNING nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.532 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.532 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.532 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.532 2 DEBUG nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.533 2 DEBUG oslo_concurrency.processutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051053204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:31 compute-0 nova_compute[256940]: 2025-10-02 12:05:31.968 2 DEBUG oslo_concurrency.processutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:32 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 12:05:32 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 12:05:32 compute-0 laughing_jennings[257841]: {
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:     "1": [
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:         {
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "devices": [
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "/dev/loop3"
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             ],
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "lv_name": "ceph_lv0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "lv_size": "7511998464",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "name": "ceph_lv0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "tags": {
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.cluster_name": "ceph",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.crush_device_class": "",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.encrypted": "0",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.osd_id": "1",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.type": "block",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:                 "ceph.vdo": "0"
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             },
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "type": "block",
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:             "vg_name": "ceph_vg0"
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:         }
Oct 02 12:05:32 compute-0 laughing_jennings[257841]:     ]
Oct 02 12:05:32 compute-0 laughing_jennings[257841]: }
Oct 02 12:05:32 compute-0 systemd[1]: libpod-d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0.scope: Deactivated successfully.
Oct 02 12:05:32 compute-0 podman[257893]: 2025-10-02 12:05:32.248246892 +0000 UTC m=+0.061377259 container died d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:32 compute-0 ceph-mon[73668]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1051053204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-43addf4394f1bd764342e21f8e44cdeaf5b4af63073821b318e49458af22cecc-merged.mount: Deactivated successfully.
Oct 02 12:05:32 compute-0 podman[257893]: 2025-10-02 12:05:32.321602594 +0000 UTC m=+0.134732891 container remove d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:05:32 compute-0 systemd[1]: libpod-conmon-d05a1b6d5715e8da8718bbe5a947cb75c5a235b634cc19c5f992f0019aec5cd0.scope: Deactivated successfully.
Oct 02 12:05:32 compute-0 sudo[257707]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.393 2 WARNING nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.395 2 DEBUG nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.396 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.396 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.417 2 WARNING nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] No compute node record for compute-0.ctlplane.example.com:8733289a-aa77-4139-9e88-bac686174c8d: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 8733289a-aa77-4139-9e88-bac686174c8d could not be found.
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.437 2 INFO nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 8733289a-aa77-4139-9e88-bac686174c8d
Oct 02 12:05:32 compute-0 sudo[257910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:32 compute-0 sudo[257910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.467 2 DEBUG nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.468 2 DEBUG nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:05:32 compute-0 sudo[257910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 sudo[257935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:32 compute-0 sudo[257935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:32 compute-0 sudo[257935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.576 2 INFO nova.scheduler.client.report [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [req-9be187ab-4a79-49d8-865f-1907247aa4d0] Created resource provider record via placement API for resource provider with UUID 8733289a-aa77-4139-9e88-bac686174c8d and name compute-0.ctlplane.example.com.
Oct 02 12:05:32 compute-0 nova_compute[256940]: 2025-10-02 12:05:32.631 2 DEBUG oslo_concurrency.processutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:32 compute-0 sudo[257960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:32 compute-0 sudo[257960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:32 compute-0 sudo[257960]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:32.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:32 compute-0 sudo[257986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:05:32 compute-0 sudo[257986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429792564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.067 2 DEBUG oslo_concurrency.processutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:33 compute-0 podman[258072]: 2025-10-02 12:05:33.089362231 +0000 UTC m=+0.036669022 container create 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.105 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 02 12:05:33 compute-0 nova_compute[256940]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.106 2 INFO nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] kernel doesn't support AMD SEV
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.106 2 DEBUG nova.compute.provider_tree [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.107 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.109 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Libvirt baseline CPU <cpu>
Oct 02 12:05:33 compute-0 nova_compute[256940]:   <arch>x86_64</arch>
Oct 02 12:05:33 compute-0 nova_compute[256940]:   <model>Nehalem</model>
Oct 02 12:05:33 compute-0 nova_compute[256940]:   <vendor>AMD</vendor>
Oct 02 12:05:33 compute-0 nova_compute[256940]:   <topology sockets="8" cores="1" threads="1"/>
Oct 02 12:05:33 compute-0 nova_compute[256940]: </cpu>
Oct 02 12:05:33 compute-0 nova_compute[256940]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Oct 02 12:05:33 compute-0 systemd[1]: Started libpod-conmon-34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6.scope.
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.150 2 DEBUG nova.scheduler.client.report [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Updated inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.151 2 DEBUG nova.compute.provider_tree [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Updating resource provider 8733289a-aa77-4139-9e88-bac686174c8d generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.151 2 DEBUG nova.compute.provider_tree [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:05:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:33 compute-0 podman[258072]: 2025-10-02 12:05:33.072871189 +0000 UTC m=+0.020178030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:33 compute-0 podman[258072]: 2025-10-02 12:05:33.183822126 +0000 UTC m=+0.131128967 container init 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:05:33 compute-0 podman[258072]: 2025-10-02 12:05:33.190892241 +0000 UTC m=+0.138199072 container start 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:05:33 compute-0 podman[258072]: 2025-10-02 12:05:33.195122322 +0000 UTC m=+0.142429123 container attach 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:33 compute-0 nostalgic_brattain[258093]: 167 167
Oct 02 12:05:33 compute-0 systemd[1]: libpod-34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6.scope: Deactivated successfully.
Oct 02 12:05:33 compute-0 conmon[258093]: conmon 34c1c0f175f4ae322191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6.scope/container/memory.events
Oct 02 12:05:33 compute-0 podman[258089]: 2025-10-02 12:05:33.200567815 +0000 UTC m=+0.062942220 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:05:33 compute-0 podman[258092]: 2025-10-02 12:05:33.237743189 +0000 UTC m=+0.100117514 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:05:33 compute-0 podman[258129]: 2025-10-02 12:05:33.243367326 +0000 UTC m=+0.031478105 container died 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.252 2 DEBUG nova.compute.provider_tree [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Updating resource provider 8733289a-aa77-4139-9e88-bac686174c8d generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac82143ef05dff2bed64a4bff7b50c6124e07b11d9e3c5f82f6d46fb97410ef-merged.mount: Deactivated successfully.
Oct 02 12:05:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1429792564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.279 2 DEBUG nova.compute.resource_tracker [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.279 2 DEBUG oslo_concurrency.lockutils [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.280 2 DEBUG nova.service [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 02 12:05:33 compute-0 podman[258129]: 2025-10-02 12:05:33.284409612 +0000 UTC m=+0.072520321 container remove 34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:05:33 compute-0 systemd[1]: libpod-conmon-34c1c0f175f4ae322191f4a13ed69d4788f79c4f8c9d2871c2f009956392f5c6.scope: Deactivated successfully.
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.354 2 DEBUG nova.service [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 02 12:05:33 compute-0 nova_compute[256940]: 2025-10-02 12:05:33.355 2 DEBUG nova.servicegroup.drivers.db [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 02 12:05:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:33 compute-0 podman[258158]: 2025-10-02 12:05:33.535636574 +0000 UTC m=+0.066654077 container create 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:05:33 compute-0 systemd[1]: Started libpod-conmon-7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8.scope.
Oct 02 12:05:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946194c8336e58601f9dfa1dc628255512ec399ada9c74cf53133a61b80ab94c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946194c8336e58601f9dfa1dc628255512ec399ada9c74cf53133a61b80ab94c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946194c8336e58601f9dfa1dc628255512ec399ada9c74cf53133a61b80ab94c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946194c8336e58601f9dfa1dc628255512ec399ada9c74cf53133a61b80ab94c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:33 compute-0 podman[258158]: 2025-10-02 12:05:33.611244625 +0000 UTC m=+0.142262148 container init 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:33 compute-0 podman[258158]: 2025-10-02 12:05:33.519601784 +0000 UTC m=+0.050619287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:05:33 compute-0 podman[258158]: 2025-10-02 12:05:33.619713007 +0000 UTC m=+0.150730530 container start 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:05:33 compute-0 podman[258158]: 2025-10-02 12:05:33.62287704 +0000 UTC m=+0.153894643 container attach 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:05:34 compute-0 ceph-mon[73668]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]: {
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:         "osd_id": 1,
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:         "type": "bluestore"
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]:     }
Oct 02 12:05:34 compute-0 awesome_rosalind[258174]: }
Oct 02 12:05:34 compute-0 systemd[1]: libpod-7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8.scope: Deactivated successfully.
Oct 02 12:05:34 compute-0 conmon[258174]: conmon 7a96c489df2ea250b9c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8.scope/container/memory.events
Oct 02 12:05:34 compute-0 podman[258158]: 2025-10-02 12:05:34.5613487 +0000 UTC m=+1.092366213 container died 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-946194c8336e58601f9dfa1dc628255512ec399ada9c74cf53133a61b80ab94c-merged.mount: Deactivated successfully.
Oct 02 12:05:34 compute-0 podman[258158]: 2025-10-02 12:05:34.63462947 +0000 UTC m=+1.165647003 container remove 7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:05:34 compute-0 systemd[1]: libpod-conmon-7a96c489df2ea250b9c11057ad554a691323c80b4857f4aca320bd6c4ccb1fa8.scope: Deactivated successfully.
Oct 02 12:05:34 compute-0 sudo[257986]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:05:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:05:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f53729cc-aa9a-47ff-a288-d467af460996 does not exist
Oct 02 12:05:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ebb71403-f1ca-4c00-ab16-ac8792e89232 does not exist
Oct 02 12:05:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e33b2edf-8ee9-4f70-8cb7-2ba92e0614cc does not exist
Oct 02 12:05:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:34.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:34 compute-0 sudo[258210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:34 compute-0 sudo[258210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:34 compute-0 sudo[258210]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:34 compute-0 sudo[258235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:05:34 compute-0 sudo[258235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:34 compute-0 sudo[258235]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:35 compute-0 ceph-mon[73668]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:37 compute-0 ceph-mon[73668]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:05:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 8543 writes, 33K keys, 8543 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8543 writes, 1802 syncs, 4.74 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 618 writes, 973 keys, 618 commit groups, 1.0 writes per commit group, ingest: 0.31 MB, 0.00 MB/s
                                           Interval WAL: 618 writes, 288 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 12:05:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:39.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:39 compute-0 ceph-mon[73668]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:05:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:05:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:41.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:41 compute-0 sudo[258263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:41 compute-0 sudo[258263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:41 compute-0 sudo[258263]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:41 compute-0 sudo[258288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:41 compute-0 sudo[258288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:41 compute-0 sudo[258288]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:41 compute-0 ceph-mon[73668]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:42.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:43 compute-0 podman[258314]: 2025-10-02 12:05:43.443495477 +0000 UTC m=+0.090688428 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:05:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:43 compute-0 ceph-mon[73668]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:45 compute-0 podman[258336]: 2025-10-02 12:05:45.384877777 +0000 UTC m=+0.053147054 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:05:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:45 compute-0 ceph-mon[73668]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:46 compute-0 nova_compute[256940]: 2025-10-02 12:05:46.356 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:46 compute-0 nova_compute[256940]: 2025-10-02 12:05:46.381 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:46.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:05:47 compute-0 ceph-mon[73668]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:49 compute-0 ceph-mon[73668]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:52 compute-0 ceph-mon[73668]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:52.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:54 compute-0 ceph-mon[73668]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:05:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:54.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:05:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:05:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:05:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:56 compute-0 ceph-mon[73668]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:56.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:58 compute-0 ceph-mon[73668]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:05:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:58.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:05:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:59.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:00 compute-0 ceph-mon[73668]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:01.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:01 compute-0 sudo[258364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:01 compute-0 sudo[258364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:01 compute-0 sudo[258364]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:01 compute-0 sudo[258389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:01 compute-0 sudo[258389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:01 compute-0 sudo[258389]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:02 compute-0 ceph-mon[73668]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:03 compute-0 podman[258415]: 2025-10-02 12:06:03.416987414 +0000 UTC m=+0.078669984 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:06:03 compute-0 podman[258416]: 2025-10-02 12:06:03.459663323 +0000 UTC m=+0.117431939 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:06:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:04 compute-0 ceph-mon[73668]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:05.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:05 compute-0 ceph-mon[73668]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:07 compute-0 ceph-mon[73668]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:09 compute-0 ceph-mon[73668]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:10.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:11.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:11 compute-0 ceph-mon[73668]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:06:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:06:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:06:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:13.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:06:13 compute-0 ceph-mon[73668]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:14 compute-0 podman[258464]: 2025-10-02 12:06:14.427409981 +0000 UTC m=+0.087029155 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:06:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:14.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:15 compute-0 ceph-mon[73668]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:16 compute-0 podman[258485]: 2025-10-02 12:06:16.410730942 +0000 UTC m=+0.077100562 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:06:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:17.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:17 compute-0 ceph-mon[73668]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:18.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:19.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:19 compute-0 ceph-mon[73668]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:21 compute-0 ceph-mon[73668]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:21 compute-0 sudo[258508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:21 compute-0 sudo[258508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:21 compute-0 sudo[258508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:21 compute-0 sudo[258533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:21 compute-0 sudo[258533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:21 compute-0 sudo[258533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:23 compute-0 ceph-mon[73668]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:24.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:25.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:25 compute-0 ceph-mon[73668]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3315944606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:06:26.442 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:06:26.442 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:06:26.442 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:26.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1921985533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2515429145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:27.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:27 compute-0 ceph-mon[73668]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1963014453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:06:28
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'vms']
Oct 02 12:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:06:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.232 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.234 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.234 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.234 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.234 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.234 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.259 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.260 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.260 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.260 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.261 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:29.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29750166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.752 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:29 compute-0 ceph-mon[73668]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/29750166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.945 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.947 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5254MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.947 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:29 compute-0 nova_compute[256940]: 2025-10-02 12:06:29.947 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.236 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.268 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6871257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.731 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.741 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.767 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.770 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:06:30 compute-0 nova_compute[256940]: 2025-10-02 12:06:30.770 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/6871257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:31.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:31 compute-0 ceph-mon[73668]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:33 compute-0 ceph-mon[73668]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:34 compute-0 podman[258608]: 2025-10-02 12:06:34.436306544 +0000 UTC m=+0.091131243 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:06:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:34 compute-0 podman[258609]: 2025-10-02 12:06:34.479990631 +0000 UTC m=+0.129940711 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:06:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:35 compute-0 sudo[258654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-0 sudo[258654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[258654]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 sudo[258679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:35 compute-0 sudo[258679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[258679]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 sudo[258704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-0 sudo[258704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 sudo[258704]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 sudo[258729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:06:35 compute-0 sudo[258729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:06:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:06:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:35 compute-0 sudo[258729]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-0 ceph-mon[73668]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e80b35a5-2a18-40a0-86e5-5cfca9d4fa31 does not exist
Oct 02 12:06:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6905d169-56bc-4cc7-aa69-a4fe166f4668 does not exist
Oct 02 12:06:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ac7b3947-f65c-4e3d-a245-7df03ee32727 does not exist
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:06:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:36 compute-0 sudo[258786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:36 compute-0 sudo[258786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:36 compute-0 sudo[258786]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:36 compute-0 sudo[258811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:36 compute-0 sudo[258811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:36 compute-0 sudo[258811]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:36 compute-0 sudo[258836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:36 compute-0 sudo[258836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:36 compute-0 sudo[258836]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:36 compute-0 sudo[258861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:06:36 compute-0 sudo[258861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:06:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.327046918 +0000 UTC m=+0.067516759 container create e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:06:37 compute-0 systemd[1]: Started libpod-conmon-e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b.scope.
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.297877195 +0000 UTC m=+0.038347116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.439362621 +0000 UTC m=+0.179832472 container init e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.448962265 +0000 UTC m=+0.189432096 container start e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:06:37 compute-0 boring_moore[258942]: 167 167
Oct 02 12:06:37 compute-0 systemd[1]: libpod-e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b.scope: Deactivated successfully.
Oct 02 12:06:37 compute-0 conmon[258942]: conmon e5c0def6737a1bc2db5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b.scope/container/memory.events
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.458535588 +0000 UTC m=+0.199005419 container attach e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.460032158 +0000 UTC m=+0.200501989 container died e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d564b902f621c10c61f336d754cc0669f2d05ff6c6fb0c6e9b49ceeaf963f179-merged.mount: Deactivated successfully.
Oct 02 12:06:37 compute-0 podman[258926]: 2025-10-02 12:06:37.5194048 +0000 UTC m=+0.259874671 container remove e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:06:37 compute-0 systemd[1]: libpod-conmon-e5c0def6737a1bc2db5dcb9793479e55b130fa772961a5840351b206e10cb10b.scope: Deactivated successfully.
Oct 02 12:06:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:37.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:37 compute-0 podman[258967]: 2025-10-02 12:06:37.702083086 +0000 UTC m=+0.050783966 container create a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:06:37 compute-0 systemd[1]: Started libpod-conmon-a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10.scope.
Oct 02 12:06:37 compute-0 podman[258967]: 2025-10-02 12:06:37.681216553 +0000 UTC m=+0.029917483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:37 compute-0 podman[258967]: 2025-10-02 12:06:37.831708887 +0000 UTC m=+0.180409847 container init a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:06:37 compute-0 podman[258967]: 2025-10-02 12:06:37.840455079 +0000 UTC m=+0.189155999 container start a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:06:37 compute-0 podman[258967]: 2025-10-02 12:06:37.851934753 +0000 UTC m=+0.200635673 container attach a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:06:38 compute-0 ceph-mon[73668]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:38 compute-0 clever_carson[258984]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:06:38 compute-0 clever_carson[258984]: --> relative data size: 1.0
Oct 02 12:06:38 compute-0 clever_carson[258984]: --> All data devices are unavailable
Oct 02 12:06:38 compute-0 systemd[1]: libpod-a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10.scope: Deactivated successfully.
Oct 02 12:06:38 compute-0 podman[258967]: 2025-10-02 12:06:38.685839398 +0000 UTC m=+1.034540298 container died a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:06:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c395500af92d14a28ab01bff421048d46ae53d96cc3fd80e1299b205423bf8-merged.mount: Deactivated successfully.
Oct 02 12:06:39 compute-0 podman[258967]: 2025-10-02 12:06:39.093598502 +0000 UTC m=+1.442299402 container remove a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:06:39 compute-0 systemd[1]: libpod-conmon-a886ad94668a1e844cb19a716d7be0c401c8634cb492608ba0f1ccbf5fc4ae10.scope: Deactivated successfully.
Oct 02 12:06:39 compute-0 sudo[258861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:39 compute-0 sudo[259014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:39 compute-0 sudo[259014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:39 compute-0 sudo[259014]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:39 compute-0 sudo[259039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:39 compute-0 sudo[259039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:39 compute-0 sudo[259039]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:39 compute-0 sudo[259064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:39 compute-0 sudo[259064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:39 compute-0 sudo[259064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:39 compute-0 sudo[259089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:06:39 compute-0 sudo[259089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:39.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:06:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:06:39 compute-0 podman[259155]: 2025-10-02 12:06:39.894921025 +0000 UTC m=+0.051327610 container create 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:06:39 compute-0 systemd[1]: Started libpod-conmon-96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d.scope.
Oct 02 12:06:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:39 compute-0 podman[259155]: 2025-10-02 12:06:39.876314712 +0000 UTC m=+0.032721317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:39 compute-0 podman[259155]: 2025-10-02 12:06:39.995011805 +0000 UTC m=+0.151418420 container init 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:40 compute-0 podman[259155]: 2025-10-02 12:06:40.002918804 +0000 UTC m=+0.159325389 container start 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:40 compute-0 podman[259155]: 2025-10-02 12:06:40.006479768 +0000 UTC m=+0.162886583 container attach 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:06:40 compute-0 vigilant_taussig[259171]: 167 167
Oct 02 12:06:40 compute-0 systemd[1]: libpod-96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d.scope: Deactivated successfully.
Oct 02 12:06:40 compute-0 conmon[259171]: conmon 96e089cfaf520b28aeb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d.scope/container/memory.events
Oct 02 12:06:40 compute-0 podman[259155]: 2025-10-02 12:06:40.011546452 +0000 UTC m=+0.167953037 container died 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b60fb703eb9a643f54ee4ecdef02a427dabf8ef142d44c969ab44614922eacd-merged.mount: Deactivated successfully.
Oct 02 12:06:40 compute-0 podman[259155]: 2025-10-02 12:06:40.065339466 +0000 UTC m=+0.221746061 container remove 96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:06:40 compute-0 systemd[1]: libpod-conmon-96e089cfaf520b28aeb10e22d31a38723160fee758327b3d012f9559babd055d.scope: Deactivated successfully.
Oct 02 12:06:40 compute-0 ceph-mon[73668]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:40 compute-0 podman[259199]: 2025-10-02 12:06:40.273357483 +0000 UTC m=+0.070509938 container create 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:06:40 compute-0 podman[259199]: 2025-10-02 12:06:40.226158753 +0000 UTC m=+0.023311288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:40 compute-0 systemd[1]: Started libpod-conmon-749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3.scope.
Oct 02 12:06:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d68f9bf218c371135102ccea0a5d0172a92afda32038a05cc1d68acc2f664c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d68f9bf218c371135102ccea0a5d0172a92afda32038a05cc1d68acc2f664c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d68f9bf218c371135102ccea0a5d0172a92afda32038a05cc1d68acc2f664c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d68f9bf218c371135102ccea0a5d0172a92afda32038a05cc1d68acc2f664c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:40 compute-0 podman[259199]: 2025-10-02 12:06:40.446227038 +0000 UTC m=+0.243379543 container init 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:40 compute-0 podman[259199]: 2025-10-02 12:06:40.456960492 +0000 UTC m=+0.254112957 container start 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:06:40 compute-0 podman[259199]: 2025-10-02 12:06:40.479938241 +0000 UTC m=+0.277090796 container attach 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:06:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:06:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:06:41 compute-0 nifty_burnell[259217]: {
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:     "1": [
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:         {
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "devices": [
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "/dev/loop3"
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             ],
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "lv_name": "ceph_lv0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "lv_size": "7511998464",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "name": "ceph_lv0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "tags": {
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.cluster_name": "ceph",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.crush_device_class": "",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.encrypted": "0",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.osd_id": "1",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.type": "block",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:                 "ceph.vdo": "0"
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             },
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "type": "block",
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:             "vg_name": "ceph_vg0"
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:         }
Oct 02 12:06:41 compute-0 nifty_burnell[259217]:     ]
Oct 02 12:06:41 compute-0 nifty_burnell[259217]: }
Oct 02 12:06:41 compute-0 systemd[1]: libpod-749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3.scope: Deactivated successfully.
Oct 02 12:06:41 compute-0 podman[259199]: 2025-10-02 12:06:41.169885825 +0000 UTC m=+0.967038300 container died 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d68f9bf218c371135102ccea0a5d0172a92afda32038a05cc1d68acc2f664c-merged.mount: Deactivated successfully.
Oct 02 12:06:41 compute-0 podman[259199]: 2025-10-02 12:06:41.232933344 +0000 UTC m=+1.030085799 container remove 749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:06:41 compute-0 systemd[1]: libpod-conmon-749c477afdc7430d5834f0dd46139870295dd3670bbaf9c58ccf6f6b3f1537b3.scope: Deactivated successfully.
Oct 02 12:06:41 compute-0 sudo[259089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:41 compute-0 sudo[259239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:41 compute-0 sudo[259239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:41 compute-0 sudo[259239]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:41 compute-0 sudo[259264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:41 compute-0 sudo[259264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:41 compute-0 sudo[259264]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:41 compute-0 sudo[259289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:41 compute-0 sudo[259289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:41 compute-0 sudo[259289]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:41.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:41 compute-0 sudo[259314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:06:41 compute-0 sudo[259314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:42 compute-0 sudo[259387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:42 compute-0 sudo[259387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:42 compute-0 sudo[259387]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.027913239 +0000 UTC m=+0.081249622 container create bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:06:42 compute-0 systemd[1]: Started libpod-conmon-bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9.scope.
Oct 02 12:06:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.007370395 +0000 UTC m=+0.060706798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:42 compute-0 sudo[259419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:42 compute-0 sudo[259419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:42 compute-0 sudo[259419]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.113675539 +0000 UTC m=+0.167011942 container init bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:06:42 compute-0 ceph-mon[73668]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.124944548 +0000 UTC m=+0.178280931 container start bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:06:42 compute-0 elastic_bartik[259445]: 167 167
Oct 02 12:06:42 compute-0 systemd[1]: libpod-bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9.scope: Deactivated successfully.
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.163691513 +0000 UTC m=+0.217027896 container attach bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.164050163 +0000 UTC m=+0.217386546 container died bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5760b03d157398bc51c54aa404ff438c36a5b7e6c272bf93f6d141dc75acc1e-merged.mount: Deactivated successfully.
Oct 02 12:06:42 compute-0 podman[259380]: 2025-10-02 12:06:42.433767513 +0000 UTC m=+0.487103926 container remove bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:06:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:42 compute-0 systemd[1]: libpod-conmon-bc01525c103482f1629c7f768ac8b82f5e9e22f01834266fb3c4998acf2dfde9.scope: Deactivated successfully.
Oct 02 12:06:42 compute-0 podman[259474]: 2025-10-02 12:06:42.685454796 +0000 UTC m=+0.100249245 container create fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:06:42 compute-0 podman[259474]: 2025-10-02 12:06:42.615034482 +0000 UTC m=+0.029828941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:06:42 compute-0 systemd[1]: Started libpod-conmon-fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c.scope.
Oct 02 12:06:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e959fc24810432f28d30bef81c1bced42ce2511fb4506187776a97c2d7bfddab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e959fc24810432f28d30bef81c1bced42ce2511fb4506187776a97c2d7bfddab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e959fc24810432f28d30bef81c1bced42ce2511fb4506187776a97c2d7bfddab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e959fc24810432f28d30bef81c1bced42ce2511fb4506187776a97c2d7bfddab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:06:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:42 compute-0 podman[259474]: 2025-10-02 12:06:42.848782259 +0000 UTC m=+0.263576788 container init fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:06:42 compute-0 podman[259474]: 2025-10-02 12:06:42.856095353 +0000 UTC m=+0.270889782 container start fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 02 12:06:42 compute-0 podman[259474]: 2025-10-02 12:06:42.861485936 +0000 UTC m=+0.276280425 container attach fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:06:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:43 compute-0 reverent_sammet[259491]: {
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:         "osd_id": 1,
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:         "type": "bluestore"
Oct 02 12:06:43 compute-0 reverent_sammet[259491]:     }
Oct 02 12:06:43 compute-0 reverent_sammet[259491]: }
Oct 02 12:06:43 compute-0 systemd[1]: libpod-fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c.scope: Deactivated successfully.
Oct 02 12:06:43 compute-0 podman[259474]: 2025-10-02 12:06:43.773560309 +0000 UTC m=+1.188354778 container died fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e959fc24810432f28d30bef81c1bced42ce2511fb4506187776a97c2d7bfddab-merged.mount: Deactivated successfully.
Oct 02 12:06:43 compute-0 podman[259474]: 2025-10-02 12:06:43.838993432 +0000 UTC m=+1.253787871 container remove fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:06:43 compute-0 systemd[1]: libpod-conmon-fdfa45ff1a2c73d0398b04d6eb22f68d70213d7530251f8dec48fe8c5668c88c.scope: Deactivated successfully.
Oct 02 12:06:43 compute-0 sudo[259314]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:06:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:06:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 132b8188-6803-4274-9d6b-392412b6b5fd does not exist
Oct 02 12:06:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0973f968-8905-4d47-a3d3-65f02b0c7ea8 does not exist
Oct 02 12:06:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8be64320-9903-4808-99c3-642db8dbd44d does not exist
Oct 02 12:06:43 compute-0 sudo[259526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:44 compute-0 sudo[259526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:44 compute-0 sudo[259526]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:44 compute-0 sudo[259551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:06:44 compute-0 sudo[259551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:44 compute-0 sudo[259551]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:44 compute-0 ceph-mon[73668]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:45 compute-0 podman[259577]: 2025-10-02 12:06:45.447831771 +0000 UTC m=+0.111851972 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:06:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:45.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:46 compute-0 ceph-mon[73668]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:47 compute-0 podman[259599]: 2025-10-02 12:06:47.411251996 +0000 UTC m=+0.068450162 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:06:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:47.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:48 compute-0 ceph-mon[73668]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:49 compute-0 ceph-mon[73668]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:06:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:49.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:06:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:51.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:51 compute-0 ceph-mon[73668]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:52.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:53.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:53 compute-0 ceph-mon[73668]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:54.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:55 compute-0 ceph-mon[73668]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:57 compute-0 ceph-mon[73668]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:06:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:06:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:59.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:59 compute-0 ceph-mon[73668]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:00.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:01.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:02 compute-0 ceph-mon[73668]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:02 compute-0 sudo[259626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:02 compute-0 sudo[259626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:02 compute-0 sudo[259626]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:02 compute-0 sudo[259651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:02 compute-0 sudo[259651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:02 compute-0 sudo[259651]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:03.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:04 compute-0 ceph-mon[73668]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:04.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:05 compute-0 podman[259678]: 2025-10-02 12:07:05.386061705 +0000 UTC m=+0.053612970 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:07:05 compute-0 podman[259679]: 2025-10-02 12:07:05.453175452 +0000 UTC m=+0.121084517 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:07:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:06 compute-0 ceph-mon[73668]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:07:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:07:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:06.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:07.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:08 compute-0 ceph-mon[73668]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:08.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:09.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:10 compute-0 ceph-mon[73668]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:11.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:12 compute-0 ceph-mon[73668]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:12.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:13.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:13 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 12:07:14 compute-0 ceph-mon[73668]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:15.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:16 compute-0 ceph-mon[73668]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:16 compute-0 podman[259725]: 2025-10-02 12:07:16.448080389 +0000 UTC m=+0.098443927 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:07:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 102 op/s
Oct 02 12:07:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:16.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:18 compute-0 ceph-mon[73668]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 102 op/s
Oct 02 12:07:18 compute-0 podman[259746]: 2025-10-02 12:07:18.423130734 +0000 UTC m=+0.078457628 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 12:07:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:18.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:19.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:20 compute-0 ceph-mon[73668]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 12:07:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 02 12:07:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:20.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:21.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:22 compute-0 ceph-mon[73668]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 02 12:07:22 compute-0 sudo[259769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:22 compute-0 sudo[259769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:22 compute-0 sudo[259769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:22 compute-0 sudo[259795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:22 compute-0 sudo[259795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:22 compute-0 sudo[259795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:22.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:23.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:24 compute-0 ceph-mon[73668]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:25.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:26 compute-0 ceph-mon[73668]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:26.443 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:26.444 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:26.444 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1281353787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:27.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:28 compute-0 ceph-mon[73668]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/787876386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1801310458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:07:28
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'images', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Oct 02 12:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:07:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:29.270 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:07:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:29.271 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:07:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:07:29.272 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/135022925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:29.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:30 compute-0 ceph-mon[73668]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Oct 02 12:07:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.764 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.765 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.791 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.791 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.792 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:07:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.827 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.827 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.828 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.828 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.829 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.829 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:30 compute-0 nova_compute[256940]: 2025-10-02 12:07:30.829 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:07:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:31 compute-0 ceph-mon[73668]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 12:07:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958073629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:31.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.704 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.891 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.893 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5234MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.893 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.893 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.947 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.948 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:07:31 compute-0 nova_compute[256940]: 2025-10-02 12:07:31.964 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860265621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-0 nova_compute[256940]: 2025-10-02 12:07:32.396 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:32 compute-0 nova_compute[256940]: 2025-10-02 12:07:32.405 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:07:32 compute-0 nova_compute[256940]: 2025-10-02 12:07:32.425 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:07:32 compute-0 nova_compute[256940]: 2025-10-02 12:07:32.428 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:07:32 compute-0 nova_compute[256940]: 2025-10-02 12:07:32.428 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 12:07:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2958073629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1860265621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:33.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:33 compute-0 ceph-mon[73668]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 12:07:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:35.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:36 compute-0 ceph-mon[73668]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:36 compute-0 podman[259870]: 2025-10-02 12:07:36.425849852 +0000 UTC m=+0.087098266 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:07:36 compute-0 podman[259871]: 2025-10-02 12:07:36.463674313 +0000 UTC m=+0.115285932 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:07:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:36.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:38 compute-0 ceph-mon[73668]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:39.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:07:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:07:40 compute-0 ceph-mon[73668]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:41 compute-0 ceph-mon[73668]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:07:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:41.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:07:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:42 compute-0 sudo[259920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:42 compute-0 sudo[259920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 sudo[259920]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:42 compute-0 sudo[259945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:42 compute-0 sudo[259945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:42 compute-0 sudo[259945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:43 compute-0 ceph-mon[73668]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:44 compute-0 sudo[259971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-0 sudo[259971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 sudo[259971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.556365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864556412, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 251, "total_data_size": 3972761, "memory_usage": 4035520, "flush_reason": "Manual Compaction"}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 02 12:07:44 compute-0 sudo[259996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:44 compute-0 sudo[259996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864580645, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3896006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17194, "largest_seqno": 19303, "table_properties": {"data_size": 3886472, "index_size": 6092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18976, "raw_average_key_size": 20, "raw_value_size": 3867534, "raw_average_value_size": 4088, "num_data_blocks": 272, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406636, "oldest_key_time": 1759406636, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 24335 microseconds, and 9236 cpu microseconds.
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580698) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3896006 bytes OK
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580725) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582780) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582794) EVENT_LOG_v1 {"time_micros": 1759406864582789, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3964316, prev total WAL file size 3964316, number of live WAL files 2.
Oct 02 12:07:44 compute-0 sudo[259996]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.583997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3804KB)], [41(7579KB)]
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864584067, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11657429, "oldest_snapshot_seqno": -1}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4437 keys, 9637185 bytes, temperature: kUnknown
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864652194, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9637185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9604962, "index_size": 20017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110763, "raw_average_key_size": 24, "raw_value_size": 9522013, "raw_average_value_size": 2146, "num_data_blocks": 832, "num_entries": 4437, "num_filter_entries": 4437, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.652411) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9637185 bytes
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.653443) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.9 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4956, records dropped: 519 output_compression: NoCompression
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.653461) EVENT_LOG_v1 {"time_micros": 1759406864653452, "job": 20, "event": "compaction_finished", "compaction_time_micros": 68200, "compaction_time_cpu_micros": 24590, "output_level": 6, "num_output_files": 1, "total_output_size": 9637185, "num_input_records": 4956, "num_output_records": 4437, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864654287, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864655750, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.583894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.655829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.655837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.655840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.655843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:07:44.655847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-0 sudo[260021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-0 sudo[260021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 sudo[260021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-0 sudo[260046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:07:44 compute-0 sudo[260046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:07:45 compute-0 sudo[260046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ec73ed40-e53d-498f-9989-f8213df73ef7 does not exist
Oct 02 12:07:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7ad1f5e8-21f2-496a-aa66-1f2a28c3e5a2 does not exist
Oct 02 12:07:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7582f840-a069-43ae-b348-92abce04570d does not exist
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:07:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-0 sudo[260101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:45 compute-0 sudo[260101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:45 compute-0 sudo[260101]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:45 compute-0 sudo[260126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:45 compute-0 sudo[260126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:45 compute-0 sudo[260126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:45 compute-0 ceph-mon[73668]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:07:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-0 sudo[260151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:45 compute-0 sudo[260151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:45 compute-0 sudo[260151]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:45 compute-0 sudo[260176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:07:45 compute-0 sudo[260176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:45.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.126787025 +0000 UTC m=+0.088214736 container create b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.076233027 +0000 UTC m=+0.037660778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:46 compute-0 systemd[1]: Started libpod-conmon-b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349.scope.
Oct 02 12:07:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.265310592 +0000 UTC m=+0.226738353 container init b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.279308073 +0000 UTC m=+0.240735794 container start b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:07:46 compute-0 angry_jemison[260258]: 167 167
Oct 02 12:07:46 compute-0 systemd[1]: libpod-b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349.scope: Deactivated successfully.
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.289135623 +0000 UTC m=+0.250563394 container attach b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.291339971 +0000 UTC m=+0.252767682 container died b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-37783056d41ac91514640452c6e5f98742183aa22a565f0bc2f06c435f257d20-merged.mount: Deactivated successfully.
Oct 02 12:07:46 compute-0 podman[260241]: 2025-10-02 12:07:46.424724233 +0000 UTC m=+0.386151904 container remove b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:07:46 compute-0 systemd[1]: libpod-conmon-b4cdfd62db1388c2d89a94ed8a1a8f2eb4e80d78486c0b6a82cee63f81631349.scope: Deactivated successfully.
Oct 02 12:07:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:46 compute-0 podman[260284]: 2025-10-02 12:07:46.68829319 +0000 UTC m=+0.069980444 container create ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:46 compute-0 podman[260284]: 2025-10-02 12:07:46.646244787 +0000 UTC m=+0.027932071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:46 compute-0 systemd[1]: Started libpod-conmon-ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181.scope.
Oct 02 12:07:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:46 compute-0 podman[260284]: 2025-10-02 12:07:46.843669943 +0000 UTC m=+0.225357217 container init ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:07:46 compute-0 podman[260298]: 2025-10-02 12:07:46.852164678 +0000 UTC m=+0.111230946 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:07:46 compute-0 podman[260284]: 2025-10-02 12:07:46.856077931 +0000 UTC m=+0.237765205 container start ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:46 compute-0 podman[260284]: 2025-10-02 12:07:46.884503364 +0000 UTC m=+0.266190628 container attach ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:07:47 compute-0 ceph-mon[73668]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:47 compute-0 compassionate_zhukovsky[260310]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:07:47 compute-0 compassionate_zhukovsky[260310]: --> relative data size: 1.0
Oct 02 12:07:47 compute-0 compassionate_zhukovsky[260310]: --> All data devices are unavailable
Oct 02 12:07:47 compute-0 systemd[1]: libpod-ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181.scope: Deactivated successfully.
Oct 02 12:07:47 compute-0 conmon[260310]: conmon ee590fc64b70ffae5a57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181.scope/container/memory.events
Oct 02 12:07:47 compute-0 podman[260284]: 2025-10-02 12:07:47.715882852 +0000 UTC m=+1.097570136 container died ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:07:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:47.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1424dcdd5aef68b39a495a2053212ef3f984ce19adccee1929a75959b93af2e-merged.mount: Deactivated successfully.
Oct 02 12:07:47 compute-0 podman[260284]: 2025-10-02 12:07:47.799140646 +0000 UTC m=+1.180827900 container remove ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:47 compute-0 systemd[1]: libpod-conmon-ee590fc64b70ffae5a57d7f6cf02da5150438c8d3a0e8184f66739f5d2dbf181.scope: Deactivated successfully.
Oct 02 12:07:47 compute-0 sudo[260176]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:47 compute-0 sudo[260345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:47 compute-0 sudo[260345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:47 compute-0 sudo[260345]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:47 compute-0 sudo[260370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:48 compute-0 sudo[260370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 sudo[260370]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:48 compute-0 sudo[260395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:48 compute-0 sudo[260395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 sudo[260395]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:48 compute-0 sudo[260420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:07:48 compute-0 sudo[260420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.554176533 +0000 UTC m=+0.051928466 container create 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:07:48 compute-0 systemd[1]: Started libpod-conmon-33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f.scope.
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.533023663 +0000 UTC m=+0.030775616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.694002605 +0000 UTC m=+0.191754578 container init 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:07:48 compute-0 podman[260500]: 2025-10-02 12:07:48.698723859 +0000 UTC m=+0.095254902 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.708037616 +0000 UTC m=+0.205789559 container start 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:07:48 compute-0 silly_lehmann[260503]: 167 167
Oct 02 12:07:48 compute-0 systemd[1]: libpod-33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f.scope: Deactivated successfully.
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.745784135 +0000 UTC m=+0.243536108 container attach 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.747256124 +0000 UTC m=+0.245008057 container died 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-41e43bd200c1b8a2bc9081faa5cf9c3efeb5cd20249fedaf01e9699069e32aa7-merged.mount: Deactivated successfully.
Oct 02 12:07:48 compute-0 podman[260486]: 2025-10-02 12:07:48.795738608 +0000 UTC m=+0.293490531 container remove 33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:48 compute-0 systemd[1]: libpod-conmon-33430d393800f47ebef38ad0de3f7cdc04ed50ebbc60a2cb6c0827d7280b9b6f.scope: Deactivated successfully.
Oct 02 12:07:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:48.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.015482745 +0000 UTC m=+0.072438839 container create 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:48.980843838 +0000 UTC m=+0.037799982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:49 compute-0 systemd[1]: Started libpod-conmon-40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd.scope.
Oct 02 12:07:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48d89d7d328b5309bc85e1f216100e1a5b6bc177f2744592cf98e00b13665f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48d89d7d328b5309bc85e1f216100e1a5b6bc177f2744592cf98e00b13665f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48d89d7d328b5309bc85e1f216100e1a5b6bc177f2744592cf98e00b13665f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48d89d7d328b5309bc85e1f216100e1a5b6bc177f2744592cf98e00b13665f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.130195222 +0000 UTC m=+0.187151386 container init 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.143286918 +0000 UTC m=+0.200243052 container start 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.148130256 +0000 UTC m=+0.205086390 container attach 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:07:49 compute-0 ceph-mon[73668]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:49.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:49 compute-0 hungry_carver[260567]: {
Oct 02 12:07:49 compute-0 hungry_carver[260567]:     "1": [
Oct 02 12:07:49 compute-0 hungry_carver[260567]:         {
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "devices": [
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "/dev/loop3"
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             ],
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "lv_name": "ceph_lv0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "lv_size": "7511998464",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "name": "ceph_lv0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "tags": {
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.cluster_name": "ceph",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.crush_device_class": "",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.encrypted": "0",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.osd_id": "1",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.type": "block",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:                 "ceph.vdo": "0"
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             },
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "type": "block",
Oct 02 12:07:49 compute-0 hungry_carver[260567]:             "vg_name": "ceph_vg0"
Oct 02 12:07:49 compute-0 hungry_carver[260567]:         }
Oct 02 12:07:49 compute-0 hungry_carver[260567]:     ]
Oct 02 12:07:49 compute-0 hungry_carver[260567]: }
Oct 02 12:07:49 compute-0 systemd[1]: libpod-40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd.scope: Deactivated successfully.
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.913715773 +0000 UTC m=+0.970671867 container died 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e48d89d7d328b5309bc85e1f216100e1a5b6bc177f2744592cf98e00b13665f-merged.mount: Deactivated successfully.
Oct 02 12:07:49 compute-0 podman[260551]: 2025-10-02 12:07:49.968882643 +0000 UTC m=+1.025838737 container remove 40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:07:49 compute-0 systemd[1]: libpod-conmon-40fc90f361110a456b4a28e27f3aace1cc575a66677a8a09e2a87ec319d483dd.scope: Deactivated successfully.
Oct 02 12:07:50 compute-0 sudo[260420]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 sudo[260590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:50 compute-0 sudo[260590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 sudo[260590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 sudo[260615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:50 compute-0 sudo[260615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 sudo[260615]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 sudo[260640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:50 compute-0 sudo[260640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 sudo[260640]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:50 compute-0 sudo[260665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:07:50 compute-0 sudo[260665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.805613114 +0000 UTC m=+0.059787814 container create a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:07:50 compute-0 systemd[1]: Started libpod-conmon-a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b.scope.
Oct 02 12:07:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:50.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.783074617 +0000 UTC m=+0.037249317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.899947791 +0000 UTC m=+0.154122541 container init a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.910221113 +0000 UTC m=+0.164395783 container start a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.913695655 +0000 UTC m=+0.167870405 container attach a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:07:50 compute-0 trusting_lehmann[260748]: 167 167
Oct 02 12:07:50 compute-0 systemd[1]: libpod-a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b.scope: Deactivated successfully.
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.920070904 +0000 UTC m=+0.174245604 container died a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c874116307408945d8494a39243dd4c4b167a0250049e93f99b4535a1ba69da1-merged.mount: Deactivated successfully.
Oct 02 12:07:50 compute-0 podman[260732]: 2025-10-02 12:07:50.968343772 +0000 UTC m=+0.222518472 container remove a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lehmann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:07:50 compute-0 systemd[1]: libpod-conmon-a74195a05090c39ab81bda162c3d71f7bd195738bb96bfaa1a7c9a2b7fd7d72b.scope: Deactivated successfully.
Oct 02 12:07:51 compute-0 podman[260772]: 2025-10-02 12:07:51.24026901 +0000 UTC m=+0.093782934 container create c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:07:51 compute-0 podman[260772]: 2025-10-02 12:07:51.187448772 +0000 UTC m=+0.040962726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:07:51 compute-0 systemd[1]: Started libpod-conmon-c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa.scope.
Oct 02 12:07:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb1a23d991917c7115c620ed46d9d6b67a2db6ed34755d61b099a2f40275d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb1a23d991917c7115c620ed46d9d6b67a2db6ed34755d61b099a2f40275d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb1a23d991917c7115c620ed46d9d6b67a2db6ed34755d61b099a2f40275d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb1a23d991917c7115c620ed46d9d6b67a2db6ed34755d61b099a2f40275d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:07:51 compute-0 podman[260772]: 2025-10-02 12:07:51.351653269 +0000 UTC m=+0.205167233 container init c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 12:07:51 compute-0 podman[260772]: 2025-10-02 12:07:51.363016339 +0000 UTC m=+0.216530263 container start c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:07:51 compute-0 podman[260772]: 2025-10-02 12:07:51.367039566 +0000 UTC m=+0.220553520 container attach c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:07:51 compute-0 ceph-mon[73668]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:52 compute-0 crazy_cannon[260788]: {
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:         "osd_id": 1,
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:         "type": "bluestore"
Oct 02 12:07:52 compute-0 crazy_cannon[260788]:     }
Oct 02 12:07:52 compute-0 crazy_cannon[260788]: }
Oct 02 12:07:52 compute-0 systemd[1]: libpod-c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa.scope: Deactivated successfully.
Oct 02 12:07:52 compute-0 podman[260772]: 2025-10-02 12:07:52.264513453 +0000 UTC m=+1.118027367 container died c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbb1a23d991917c7115c620ed46d9d6b67a2db6ed34755d61b099a2f40275d47-merged.mount: Deactivated successfully.
Oct 02 12:07:52 compute-0 podman[260772]: 2025-10-02 12:07:52.445210637 +0000 UTC m=+1.298724531 container remove c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:07:52 compute-0 systemd[1]: libpod-conmon-c83086cafbe913a4eb3317771292408216bb439a21ee80a99e5fee06f65561fa.scope: Deactivated successfully.
Oct 02 12:07:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:52 compute-0 sudo[260665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:07:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:07:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5ddc7572-f711-483d-936d-57c405ff41c4 does not exist
Oct 02 12:07:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 41d5bed8-224a-4924-a52c-ddf7fb0b9448 does not exist
Oct 02 12:07:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6290b553-d5b6-4212-a6cf-cde30980ec88 does not exist
Oct 02 12:07:52 compute-0 sudo[260824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:52 compute-0 sudo[260824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-0 sudo[260824]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-0 sudo[260849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:07:52 compute-0 sudo[260849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-0 sudo[260849]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:53 compute-0 ceph-mon[73668]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:07:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:53.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:07:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:55 compute-0 ceph-mon[73668]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:55.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:57 compute-0 ceph-mon[73668]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:07:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:07:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:59.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:59 compute-0 ceph-mon[73668]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:01 compute-0 ceph-mon[73668]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:02 compute-0 sudo[260879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:02 compute-0 sudo[260879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:02 compute-0 sudo[260879]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:02 compute-0 sudo[260904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:02 compute-0 sudo[260904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:02 compute-0 sudo[260904]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:03 compute-0 ceph-mon[73668]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:05.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:05 compute-0 ceph-mon[73668]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:07 compute-0 podman[260931]: 2025-10-02 12:08:07.416555308 +0000 UTC m=+0.078588871 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:08:07 compute-0 podman[260932]: 2025-10-02 12:08:07.475943766 +0000 UTC m=+0.133559651 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:08:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:07.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:07 compute-0 ceph-mon[73668]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:09.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:10 compute-0 ceph-mon[73668]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:11.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:12 compute-0 ceph-mon[73668]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:13.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:14 compute-0 ceph-mon[73668]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 12:08:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 12:08:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:16 compute-0 ceph-mon[73668]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:17 compute-0 podman[260983]: 2025-10-02 12:08:17.408119164 +0000 UTC m=+0.070793713 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:08:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:18 compute-0 ceph-mon[73668]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:18.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:19 compute-0 podman[261005]: 2025-10-02 12:08:19.395327238 +0000 UTC m=+0.066992441 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:08:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:20 compute-0 ceph-mon[73668]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:20.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:21 compute-0 ceph-mon[73668]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:21.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000080s ======
Oct 02 12:08:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:22.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Oct 02 12:08:22 compute-0 sudo[261029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:22 compute-0 sudo[261029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:22 compute-0 sudo[261029]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:23 compute-0 sudo[261054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:23 compute-0 sudo[261054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:23 compute-0 sudo[261054]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:23.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:24 compute-0 ceph-mon[73668]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:24.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:25.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:08:26.444 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:08:26.445 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:08:26.445 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:26 compute-0 ceph-mon[73668]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:27 compute-0 ceph-mon[73668]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:27.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:08:28
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root']
Oct 02 12:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:08:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:28.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2064557662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3308308964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3330560456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:29.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:30 compute-0 ceph-mon[73668]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2031141129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:30 compute-0 nova_compute[256940]: 2025-10-02 12:08:30.430 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:30 compute-0 nova_compute[256940]: 2025-10-02 12:08:30.431 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:30.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:08:31 compute-0 ceph-mon[73668]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.754 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.754 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.755 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.755 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-0 nova_compute[256940]: 2025-10-02 12:08:31.756 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:08:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:31.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:32 compute-0 nova_compute[256940]: 2025-10-02 12:08:32.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:32 compute-0 PackageKit[187206]: daemon quit
Oct 02 12:08:32 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 12:08:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:32.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.661 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.662 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.662 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.662 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:08:33 compute-0 nova_compute[256940]: 2025-10-02 12:08:33.662 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:33.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:33 compute-0 ceph-mon[73668]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23628137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.093 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.284 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.286 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5229MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.286 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.287 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.358 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.359 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.374 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641892421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.783 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.791 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.816 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.819 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:08:34 compute-0 nova_compute[256940]: 2025-10-02 12:08:34.819 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:34.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/23628137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3641892421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:36 compute-0 ceph-mon[73668]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:36.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:38 compute-0 ceph-mon[73668]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:38 compute-0 podman[261130]: 2025-10-02 12:08:38.429481824 +0000 UTC m=+0.089699769 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 12:08:38 compute-0 podman[261131]: 2025-10-02 12:08:38.478654599 +0000 UTC m=+0.131752643 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:08:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:38.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:08:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:08:40 compute-0 ceph-mon[73668]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:40.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:41.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:42 compute-0 ceph-mon[73668]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=404 latency=0.005000134s ======
Oct 02 12:08:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.641 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.005000134s
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000054s ======
Oct 02 12:08:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - - [02/Oct/2025:12:08:42.662 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.002000054s
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:43 compute-0 sudo[261177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:43 compute-0 sudo[261177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:43 compute-0 sudo[261177]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:43 compute-0 sudo[261202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:43 compute-0 sudo[261202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:43 compute-0 sudo[261202]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:43 compute-0 ceph-mon[73668]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:45 compute-0 ceph-mon[73668]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:46.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:47.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 02 12:08:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 02 12:08:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 02 12:08:48 compute-0 podman[261229]: 2025-10-02 12:08:48.408908777 +0000 UTC m=+0.077048461 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:08:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:48 compute-0 ceph-mon[73668]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:48.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 02 12:08:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 02 12:08:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 02 12:08:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:50 compute-0 ceph-mon[73668]: osdmap e129: 3 total, 3 up, 3 in
Oct 02 12:08:50 compute-0 ceph-mon[73668]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:50 compute-0 ceph-mon[73668]: osdmap e130: 3 total, 3 up, 3 in
Oct 02 12:08:50 compute-0 podman[261251]: 2025-10-02 12:08:50.41488501 +0000 UTC m=+0.083470233 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:08:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s wr, 0 op/s
Oct 02 12:08:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:50.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 02 12:08:52 compute-0 ceph-mon[73668]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s wr, 0 op/s
Oct 02 12:08:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:08:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:52.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:08:53 compute-0 sudo[261273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-0 sudo[261273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[261273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 sudo[261298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:53 compute-0 sudo[261298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[261298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 sudo[261323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-0 sudo[261323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 sudo[261323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 sudo[261348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:08:53 compute-0 sudo[261348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-0 ceph-mon[73668]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 02 12:08:53 compute-0 ceph-mon[73668]: osdmap e131: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:53 compute-0 sudo[261348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:08:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1148e790-5c30-4309-9ade-898195903261 does not exist
Oct 02 12:08:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3b89b6d2-d89e-48ef-915b-5b3eb441e0bf does not exist
Oct 02 12:08:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 07cdddd0-8035-4fdc-bc5f-7d1d3d8d3100 does not exist
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:08:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:08:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-0 sudo[261403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:54 compute-0 sudo[261403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[261403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[261428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:54 compute-0 sudo[261428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[261428]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[261453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:54 compute-0 sudo[261453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 sudo[261453]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:54 compute-0 sudo[261478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:08:54 compute-0 sudo[261478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.4 KiB/s wr, 8 op/s
Oct 02 12:08:54 compute-0 podman[261544]: 2025-10-02 12:08:54.778040077 +0000 UTC m=+0.046163335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:08:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:54 compute-0 podman[261544]: 2025-10-02 12:08:54.969996648 +0000 UTC m=+0.238119886 container create b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:08:55 compute-0 systemd[1]: Started libpod-conmon-b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f.scope.
Oct 02 12:08:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:55 compute-0 podman[261544]: 2025-10-02 12:08:55.237431517 +0000 UTC m=+0.505554785 container init b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:08:55 compute-0 podman[261544]: 2025-10-02 12:08:55.251215016 +0000 UTC m=+0.519338284 container start b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:08:55 compute-0 focused_heisenberg[261560]: 167 167
Oct 02 12:08:55 compute-0 systemd[1]: libpod-b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f.scope: Deactivated successfully.
Oct 02 12:08:55 compute-0 podman[261544]: 2025-10-02 12:08:55.31571676 +0000 UTC m=+0.583840028 container attach b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:08:55 compute-0 podman[261544]: 2025-10-02 12:08:55.316343387 +0000 UTC m=+0.584466705 container died b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-88267d169faea7c865b4f5ccbadc940b168e3bd46ec481bc592a148d7367ea4b-merged.mount: Deactivated successfully.
Oct 02 12:08:55 compute-0 podman[261544]: 2025-10-02 12:08:55.787615385 +0000 UTC m=+1.055738653 container remove b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:08:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:55 compute-0 systemd[1]: libpod-conmon-b431ceb7e1cbf9c28589f8f48c5e64ff3d0324919df5d7a869fe4d71c7242a1f.scope: Deactivated successfully.
Oct 02 12:08:56 compute-0 podman[261587]: 2025-10-02 12:08:56.00999082 +0000 UTC m=+0.042747144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 02 12:08:56 compute-0 podman[261587]: 2025-10-02 12:08:56.157197204 +0000 UTC m=+0.189953528 container create 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:08:56 compute-0 ceph-mon[73668]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.4 KiB/s wr, 8 op/s
Oct 02 12:08:56 compute-0 systemd[1]: Started libpod-conmon-9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886.scope.
Oct 02 12:08:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 02 12:08:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:08:56 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 02 12:08:56 compute-0 podman[261587]: 2025-10-02 12:08:56.392315169 +0000 UTC m=+0.425071553 container init 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:08:56 compute-0 podman[261587]: 2025-10-02 12:08:56.410024412 +0000 UTC m=+0.442780736 container start 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:08:56 compute-0 podman[261587]: 2025-10-02 12:08:56.445756867 +0000 UTC m=+0.478513191 container attach 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:08:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Oct 02 12:08:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:56.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:57 compute-0 gifted_sammet[261603]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:08:57 compute-0 gifted_sammet[261603]: --> relative data size: 1.0
Oct 02 12:08:57 compute-0 gifted_sammet[261603]: --> All data devices are unavailable
Oct 02 12:08:57 compute-0 systemd[1]: libpod-9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886.scope: Deactivated successfully.
Oct 02 12:08:57 compute-0 podman[261587]: 2025-10-02 12:08:57.284460888 +0000 UTC m=+1.317217182 container died 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:08:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 02 12:08:57 compute-0 ceph-mon[73668]: osdmap e132: 3 total, 3 up, 3 in
Oct 02 12:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-db34630caa11529ea5b831449501ce001b6a56118df172d24b209c7741abf01a-merged.mount: Deactivated successfully.
Oct 02 12:08:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 02 12:08:57 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 02 12:08:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:57.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:58 compute-0 podman[261587]: 2025-10-02 12:08:58.119169452 +0000 UTC m=+2.151925776 container remove 9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 12:08:58 compute-0 sudo[261478]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 systemd[1]: libpod-conmon-9d43f5c45d0e270c697fa1d7514f401228bb1954a5197583604e94d5cdc54886.scope: Deactivated successfully.
Oct 02 12:08:58 compute-0 sudo[261633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:58 compute-0 sudo[261633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[261633]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 sudo[261658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:58 compute-0 sudo[261658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[261658]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:08:58 compute-0 sudo[261684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:58 compute-0 sudo[261684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 sudo[261684]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 25 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.1 MiB/s wr, 44 op/s
Oct 02 12:08:58 compute-0 sudo[261709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:08:58 compute-0 sudo[261709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:58 compute-0 ceph-mon[73668]: pgmap v859: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Oct 02 12:08:58 compute-0 ceph-mon[73668]: osdmap e133: 3 total, 3 up, 3 in
Oct 02 12:08:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:08:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:58.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.107337378 +0000 UTC m=+0.134681001 container create 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.018529854 +0000 UTC m=+0.045873527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:08:59 compute-0 systemd[1]: Started libpod-conmon-25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11.scope.
Oct 02 12:08:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.280447406 +0000 UTC m=+0.307791049 container init 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.292013165 +0000 UTC m=+0.319356788 container start 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:08:59 compute-0 hungry_cohen[261791]: 167 167
Oct 02 12:08:59 compute-0 systemd[1]: libpod-25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11.scope: Deactivated successfully.
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.354419443 +0000 UTC m=+0.381763056 container attach 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.354789193 +0000 UTC m=+0.382132796 container died 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dd074a15e464bbdb3bddf26f70faded6f2e7a79f684a810335244b82c25a9f6-merged.mount: Deactivated successfully.
Oct 02 12:08:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:08:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:08:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:08:59 compute-0 podman[261775]: 2025-10-02 12:08:59.870236912 +0000 UTC m=+0.897580505 container remove 25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:08:59 compute-0 systemd[1]: libpod-conmon-25cd76ff0b1e76a1a7c3d53586474ad593e14547b6f96a30076f7e3076192d11.scope: Deactivated successfully.
Oct 02 12:09:00 compute-0 ceph-mon[73668]: pgmap v861: 305 pgs: 305 active+clean; 25 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.1 MiB/s wr, 44 op/s
Oct 02 12:09:00 compute-0 podman[261817]: 2025-10-02 12:09:00.069501039 +0000 UTC m=+0.032110500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:09:00 compute-0 podman[261817]: 2025-10-02 12:09:00.185381556 +0000 UTC m=+0.147990967 container create 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:09:00 compute-0 systemd[1]: Started libpod-conmon-249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a.scope.
Oct 02 12:09:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79575f3d1d39ef45f4a808d40f7420c1c71817eaa62e47d642508dd3aab37849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79575f3d1d39ef45f4a808d40f7420c1c71817eaa62e47d642508dd3aab37849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79575f3d1d39ef45f4a808d40f7420c1c71817eaa62e47d642508dd3aab37849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79575f3d1d39ef45f4a808d40f7420c1c71817eaa62e47d642508dd3aab37849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 33 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.8 MiB/s wr, 40 op/s
Oct 02 12:09:00 compute-0 podman[261817]: 2025-10-02 12:09:00.642414804 +0000 UTC m=+0.605024255 container init 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:09:00 compute-0 podman[261817]: 2025-10-02 12:09:00.65571141 +0000 UTC m=+0.618320811 container start 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:09:00 compute-0 podman[261817]: 2025-10-02 12:09:00.792781624 +0000 UTC m=+0.755391085 container attach 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:09:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:01 compute-0 adoring_margulis[261833]: {
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:     "1": [
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:         {
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "devices": [
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "/dev/loop3"
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             ],
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "lv_name": "ceph_lv0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "lv_size": "7511998464",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "name": "ceph_lv0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "tags": {
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.cluster_name": "ceph",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.crush_device_class": "",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.encrypted": "0",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.osd_id": "1",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.type": "block",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:                 "ceph.vdo": "0"
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             },
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "type": "block",
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:             "vg_name": "ceph_vg0"
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:         }
Oct 02 12:09:01 compute-0 adoring_margulis[261833]:     ]
Oct 02 12:09:01 compute-0 adoring_margulis[261833]: }
Oct 02 12:09:01 compute-0 systemd[1]: libpod-249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a.scope: Deactivated successfully.
Oct 02 12:09:01 compute-0 podman[261817]: 2025-10-02 12:09:01.522789419 +0000 UTC m=+1.485398830 container died 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-79575f3d1d39ef45f4a808d40f7420c1c71817eaa62e47d642508dd3aab37849-merged.mount: Deactivated successfully.
Oct 02 12:09:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:09:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:09:02 compute-0 ceph-mon[73668]: pgmap v862: 305 pgs: 305 active+clean; 33 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.8 MiB/s wr, 40 op/s
Oct 02 12:09:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Oct 02 12:09:02 compute-0 podman[261817]: 2025-10-02 12:09:02.575298235 +0000 UTC m=+2.537907646 container remove 249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:09:02 compute-0 systemd[1]: libpod-conmon-249260b2cb1fc871a43e5358715e1ffeb5b82a56a9850b516717ccc8930c5f3a.scope: Deactivated successfully.
Oct 02 12:09:02 compute-0 sudo[261709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:02 compute-0 sudo[261857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:02 compute-0 sudo[261857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:02 compute-0 sudo[261857]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:02 compute-0 sudo[261882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:09:02 compute-0 sudo[261882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:02 compute-0 sudo[261882]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:02.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:02 compute-0 sudo[261907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:02 compute-0 sudo[261907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:02 compute-0 sudo[261907]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:03 compute-0 sudo[261932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:09:03 compute-0 sudo[261932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:03 compute-0 sudo[261964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:03 compute-0 sudo[261964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:03 compute-0 sudo[261964]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:03 compute-0 sudo[262006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:03 compute-0 sudo[262006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:03 compute-0 sudo[262006]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:03 compute-0 podman[262045]: 2025-10-02 12:09:03.574515086 +0000 UTC m=+0.039587170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:09:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:09:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:09:03 compute-0 podman[262045]: 2025-10-02 12:09:03.867776956 +0000 UTC m=+0.332849000 container create 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:09:04 compute-0 ceph-mon[73668]: pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Oct 02 12:09:04 compute-0 systemd[1]: Started libpod-conmon-4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052.scope.
Oct 02 12:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:09:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 4.0 MiB/s wr, 27 op/s
Oct 02 12:09:04 compute-0 podman[262045]: 2025-10-02 12:09:04.587865396 +0000 UTC m=+1.052937510 container init 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:09:04 compute-0 podman[262045]: 2025-10-02 12:09:04.600283537 +0000 UTC m=+1.065355581 container start 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:09:04 compute-0 crazy_stonebraker[262063]: 167 167
Oct 02 12:09:04 compute-0 systemd[1]: libpod-4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052.scope: Deactivated successfully.
Oct 02 12:09:04 compute-0 podman[262045]: 2025-10-02 12:09:04.651735343 +0000 UTC m=+1.116807447 container attach 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:09:04 compute-0 podman[262045]: 2025-10-02 12:09:04.652845043 +0000 UTC m=+1.117917097 container died 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:09:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:04.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-34ba847421f687f3b10949205c6f124c5afbb9c4ca3661acd85139f645b573a5-merged.mount: Deactivated successfully.
Oct 02 12:09:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:09:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:09:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:09:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:09:05 compute-0 podman[262045]: 2025-10-02 12:09:05.565844649 +0000 UTC m=+2.030916683 container remove 4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:09:05 compute-0 systemd[1]: libpod-conmon-4ef3676b4ef1f2ca16ffc7ad0b7936037a9ff13ba7cfe33c7190cda67201f052.scope: Deactivated successfully.
Oct 02 12:09:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:09:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:09:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:05 compute-0 podman[262089]: 2025-10-02 12:09:05.795599601 +0000 UTC m=+0.041116520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:09:05 compute-0 podman[262089]: 2025-10-02 12:09:05.990251634 +0000 UTC m=+0.235768543 container create 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:09:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:06 compute-0 systemd[1]: Started libpod-conmon-14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6.scope.
Oct 02 12:09:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072fee70c94dcefc454029a071e4ee30b063f3637db97ec8c7fe1b53348a2d2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072fee70c94dcefc454029a071e4ee30b063f3637db97ec8c7fe1b53348a2d2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072fee70c94dcefc454029a071e4ee30b063f3637db97ec8c7fe1b53348a2d2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072fee70c94dcefc454029a071e4ee30b063f3637db97ec8c7fe1b53348a2d2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:09:06 compute-0 podman[262089]: 2025-10-02 12:09:06.424955235 +0000 UTC m=+0.670472154 container init 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:09:06 compute-0 podman[262089]: 2025-10-02 12:09:06.437020657 +0000 UTC m=+0.682537536 container start 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:09:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 3.3 MiB/s wr, 22 op/s
Oct 02 12:09:06 compute-0 podman[262089]: 2025-10-02 12:09:06.700866171 +0000 UTC m=+0.946383130 container attach 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:09:06 compute-0 ceph-mon[73668]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 4.0 MiB/s wr, 27 op/s
Oct 02 12:09:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:06.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:07 compute-0 gifted_germain[262105]: {
Oct 02 12:09:07 compute-0 gifted_germain[262105]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:09:07 compute-0 gifted_germain[262105]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:09:07 compute-0 gifted_germain[262105]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:09:07 compute-0 gifted_germain[262105]:         "osd_id": 1,
Oct 02 12:09:07 compute-0 gifted_germain[262105]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:09:07 compute-0 gifted_germain[262105]:         "type": "bluestore"
Oct 02 12:09:07 compute-0 gifted_germain[262105]:     }
Oct 02 12:09:07 compute-0 gifted_germain[262105]: }
Oct 02 12:09:07 compute-0 systemd[1]: libpod-14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6.scope: Deactivated successfully.
Oct 02 12:09:07 compute-0 podman[262089]: 2025-10-02 12:09:07.377774045 +0000 UTC m=+1.623290944 container died 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:09:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:07.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-072fee70c94dcefc454029a071e4ee30b063f3637db97ec8c7fe1b53348a2d2b-merged.mount: Deactivated successfully.
Oct 02 12:09:08 compute-0 ceph-mon[73668]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 3.3 MiB/s wr, 22 op/s
Oct 02 12:09:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Oct 02 12:09:08 compute-0 podman[262089]: 2025-10-02 12:09:08.925745785 +0000 UTC m=+3.171262684 container remove 14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:09:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:08.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:08 compute-0 systemd[1]: libpod-conmon-14a77ce9d7d26ee5f6a92ed5114cebd648b35574ebed5dd1e5e746ba1634cdd6.scope: Deactivated successfully.
Oct 02 12:09:08 compute-0 sudo[261932]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:09:09 compute-0 podman[262140]: 2025-10-02 12:09:09.108715156 +0000 UTC m=+0.093458479 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:09:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:09:09 compute-0 podman[262142]: 2025-10-02 12:09:09.172473461 +0000 UTC m=+0.158003125 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 02 12:09:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 80870850-5213-4f15-be93-8aff369510c0 does not exist
Oct 02 12:09:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6f3e5901-2bae-4afb-b30a-9538229de986 does not exist
Oct 02 12:09:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 38d364db-052e-4b4c-9d7f-1f53f13f61fc does not exist
Oct 02 12:09:09 compute-0 sudo[262183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:09 compute-0 sudo[262183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:09 compute-0 sudo[262183]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:09 compute-0 sudo[262208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:09:09 compute-0 sudo[262208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:09 compute-0 sudo[262208]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:10 compute-0 ceph-mon[73668]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Oct 02 12:09:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 1.3 MiB/s wr, 4 op/s
Oct 02 12:09:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:10.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:11 compute-0 ceph-mon[73668]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 1.3 MiB/s wr, 4 op/s
Oct 02 12:09:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:11.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 683 KiB/s wr, 4 op/s
Oct 02 12:09:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:12.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:13.703 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:09:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:13.704 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:09:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:13.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:14 compute-0 ceph-mon[73668]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 683 KiB/s wr, 4 op/s
Oct 02 12:09:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:14.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:15.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:16 compute-0 ceph-mon[73668]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:18 compute-0 ceph-mon[73668]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:19 compute-0 podman[262238]: 2025-10-02 12:09:19.417026181 +0000 UTC m=+0.078332695 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 02 12:09:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:20 compute-0 ceph-mon[73668]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4267939077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:20.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:21 compute-0 podman[262261]: 2025-10-02 12:09:21.39100623 +0000 UTC m=+0.062600284 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:09:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:21.707 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:22 compute-0 ceph-mon[73668]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1 op/s
Oct 02 12:09:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:22.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:23 compute-0 sudo[262282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:23 compute-0 sudo[262282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:23 compute-0 sudo[262282]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:23 compute-0 sudo[262307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:23 compute-0 sudo[262307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:23 compute-0 sudo[262307]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 02 12:09:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 02 12:09:24 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 02 12:09:24 compute-0 ceph-mon[73668]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1 op/s
Oct 02 12:09:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1 op/s
Oct 02 12:09:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 02 12:09:25 compute-0 ceph-mon[73668]: osdmap e134: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:26 compute-0 ceph-mon[73668]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1 op/s
Oct 02 12:09:26 compute-0 ceph-mon[73668]: osdmap e135: 3 total, 3 up, 3 in
Oct 02 12:09:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1302507799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:26.445 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:26.446 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:09:26.446 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 51 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 218 KiB/s wr, 14 op/s
Oct 02 12:09:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:26.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4019412554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:28 compute-0 ceph-mon[73668]: pgmap v877: 305 pgs: 305 active+clean; 51 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 218 KiB/s wr, 14 op/s
Oct 02 12:09:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2150631146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 59 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 551 KiB/s wr, 16 op/s
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:09:28
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Oct 02 12:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:09:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1331342296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:29.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 68 op/s
Oct 02 12:09:30 compute-0 ceph-mon[73668]: pgmap v878: 305 pgs: 305 active+clean; 59 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 551 KiB/s wr, 16 op/s
Oct 02 12:09:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1708688043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/44836204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1664810939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:09:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:09:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:31 compute-0 ceph-mon[73668]: pgmap v879: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 68 op/s
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.820 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.821 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.821 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.822 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.822 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-0 nova_compute[256940]: 2025-10-02 12:09:31.822 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:09:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:32 compute-0 nova_compute[256940]: 2025-10-02 12:09:32.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 84 op/s
Oct 02 12:09:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:33 compute-0 nova_compute[256940]: 2025-10-02 12:09:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:33 compute-0 nova_compute[256940]: 2025-10-02 12:09:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:09:33 compute-0 nova_compute[256940]: 2025-10-02 12:09:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:09:33 compute-0 nova_compute[256940]: 2025-10-02 12:09:33.231 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:09:33 compute-0 nova_compute[256940]: 2025-10-02 12:09:33.231 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:34 compute-0 ceph-mon[73668]: pgmap v880: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 84 op/s
Oct 02 12:09:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:09:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118983961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.659 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.821 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.822 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5200MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.822 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.822 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.900 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.900 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:09:34 compute-0 nova_compute[256940]: 2025-10-02 12:09:34.929 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459032079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.458 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.464 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.490 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.492 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.492 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2118983961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.529 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.530 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.560 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.661 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.662 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.668 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.669 2 INFO nova.compute.claims [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:09:35 compute-0 nova_compute[256940]: 2025-10-02 12:09:35.847 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791544486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.292 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.300 2 DEBUG nova.compute.provider_tree [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.321 2 DEBUG nova.scheduler.client.report [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.345 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.346 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:09:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 02 12:09:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 02 12:09:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.414 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.414 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.450 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.470 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:09:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 02 12:09:36 compute-0 ceph-mon[73668]: pgmap v881: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:09:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2459032079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/791544486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1063715503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1301256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-0 ceph-mon[73668]: osdmap e136: 3 total, 3 up, 3 in
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.608 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.609 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.610 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Creating image(s)
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.653 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.690 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.723 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.727 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:36 compute-0 nova_compute[256940]: 2025-10-02 12:09:36.727 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:37 compute-0 ceph-mon[73668]: pgmap v883: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 02 12:09:37 compute-0 nova_compute[256940]: 2025-10-02 12:09:37.774 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Automatically allocating a network for project fa15236c63df4c43bf19989029fcda0f. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Oct 02 12:09:37 compute-0 nova_compute[256940]: 2025-10-02 12:09:37.784 2 DEBUG nova.virt.libvirt.imagebackend [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:09:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Oct 02 12:09:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:38.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:39 compute-0 podman[262460]: 2025-10-02 12:09:39.408495666 +0000 UTC m=+0.076688601 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:09:39 compute-0 podman[262461]: 2025-10-02 12:09:39.442452673 +0000 UTC m=+0.107206517 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:09:39 compute-0 ceph-mon[73668]: pgmap v884: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:09:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:09:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 129 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 107 op/s
Oct 02 12:09:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:40.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:41 compute-0 nova_compute[256940]: 2025-10-02 12:09:41.647 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:41 compute-0 ceph-mon[73668]: pgmap v885: 305 pgs: 305 active+clean; 129 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 107 op/s
Oct 02 12:09:41 compute-0 nova_compute[256940]: 2025-10-02 12:09:41.733 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:41 compute-0 nova_compute[256940]: 2025-10-02 12:09:41.735 2 DEBUG nova.virt.images [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] 423b8b5f-aab8-418b-8fad-d82c90818bdd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:09:41 compute-0 nova_compute[256940]: 2025-10-02 12:09:41.738 2 DEBUG nova.privsep.utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:09:41 compute-0 nova_compute[256940]: 2025-10-02 12:09:41.738 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.077 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.087 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.179 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.182 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.226 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.232 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.699 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.819 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] resizing rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:09:42 compute-0 nova_compute[256940]: 2025-10-02 12:09:42.964 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'migration_context' on Instance uuid 6392cd0b-cf6b-485c-bbaa-69525ae0191c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:43 compute-0 nova_compute[256940]: 2025-10-02 12:09:43.003 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:09:43 compute-0 nova_compute[256940]: 2025-10-02 12:09:43.004 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Ensure instance console log exists: /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:09:43 compute-0 nova_compute[256940]: 2025-10-02 12:09:43.005 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:43 compute-0 nova_compute[256940]: 2025-10-02 12:09:43.005 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:43 compute-0 nova_compute[256940]: 2025-10-02 12:09:43.006 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:43 compute-0 ceph-mon[73668]: pgmap v886: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:43 compute-0 sudo[262630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:43 compute-0 sudo[262630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:43 compute-0 sudo[262630]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:43 compute-0 sudo[262655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:43 compute-0 sudo[262655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:43 compute-0 sudo[262655]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:09:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:09:45 compute-0 ceph-mon[73668]: pgmap v887: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 226 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 161 op/s
Oct 02 12:09:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:47 compute-0 ceph-mon[73668]: pgmap v888: 305 pgs: 305 active+clean; 226 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 161 op/s
Oct 02 12:09:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 156 op/s
Oct 02 12:09:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:48.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:49 compute-0 ceph-mon[73668]: pgmap v889: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 156 op/s
Oct 02 12:09:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3418279144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:50 compute-0 podman[262683]: 2025-10-02 12:09:50.426933123 +0000 UTC m=+0.090028218 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:09:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 156 op/s
Oct 02 12:09:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3378946537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:51 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 12:09:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:52 compute-0 podman[262705]: 2025-10-02 12:09:52.430307048 +0000 UTC m=+0.095255767 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:09:52 compute-0 ceph-mon[73668]: pgmap v890: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 156 op/s
Oct 02 12:09:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 128 op/s
Oct 02 12:09:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:52.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:53 compute-0 ceph-mon[73668]: pgmap v891: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 128 op/s
Oct 02 12:09:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:53.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 278 KiB/s rd, 5.7 MiB/s wr, 103 op/s
Oct 02 12:09:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:09:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:54.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:09:55 compute-0 ceph-mon[73668]: pgmap v892: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 278 KiB/s rd, 5.7 MiB/s wr, 103 op/s
Oct 02 12:09:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:55.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 306 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 6.9 MiB/s wr, 129 op/s
Oct 02 12:09:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:56.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:57 compute-0 ceph-mon[73668]: pgmap v893: 305 pgs: 305 active+clean; 306 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 6.9 MiB/s wr, 129 op/s
Oct 02 12:09:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:09:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:09:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.2 MiB/s wr, 47 op/s
Oct 02 12:09:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:59 compute-0 ceph-mon[73668]: pgmap v894: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.2 MiB/s wr, 47 op/s
Oct 02 12:09:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:09:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:10:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:10:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:00.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:02 compute-0 ceph-mon[73668]: pgmap v895: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:10:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:02.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:10:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1286063853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:03.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:03 compute-0 sudo[262730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:03 compute-0 sudo[262730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:03 compute-0 sudo[262730]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 sudo[262755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:04 compute-0 sudo[262755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:04 compute-0 sudo[262755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:04 compute-0 ceph-mon[73668]: pgmap v896: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3848430570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:05.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:05 compute-0 ceph-mon[73668]: pgmap v897: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3716205795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2617089515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:05.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:06 compute-0 nova_compute[256940]: 2025-10-02 12:10:06.550 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Automatically allocated network: {'id': 'b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'name': 'auto_allocated_network', 'tenant_id': 'fa15236c63df4c43bf19989029fcda0f', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['227aa610-f00f-4cec-b799-1839354b34be', 'c08cc57a-142e-470f-ae51-6b2f21e9e17e'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T12:09:38Z', 'updated_at': '2025-10-02T12:09:57Z', 'revision_number': 4, 'project_id': 'fa15236c63df4c43bf19989029fcda0f'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Oct 02 12:10:06 compute-0 nova_compute[256940]: 2025-10-02 12:10:06.562 2 WARNING oslo_policy.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 12:10:06 compute-0 nova_compute[256940]: 2025-10-02 12:10:06.563 2 WARNING oslo_policy.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 12:10:06 compute-0 nova_compute[256940]: 2025-10-02 12:10:06.565 2 DEBUG nova.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b81237ef015d48dfa022b6761d706e36', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fa15236c63df4c43bf19989029fcda0f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:10:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:07 compute-0 ceph-mon[73668]: pgmap v898: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:07.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 op/s
Oct 02 12:10:08 compute-0 nova_compute[256940]: 2025-10-02 12:10:08.558 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Successfully created port: cfc8529e-4a3c-4d06-af03-c52fb2f6b463 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:10:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:10:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:09.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:10:09 compute-0 ceph-mon[73668]: pgmap v899: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 op/s
Oct 02 12:10:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:09 compute-0 sudo[262783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:09 compute-0 sudo[262783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:09 compute-0 sudo[262783]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-0 sudo[262825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:10 compute-0 sudo[262825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-0 sudo[262825]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-0 podman[262807]: 2025-10-02 12:10:10.089015616 +0000 UTC m=+0.122298837 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:10:10 compute-0 podman[262808]: 2025-10-02 12:10:10.102996723 +0000 UTC m=+0.129097136 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:10 compute-0 sudo[262877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:10 compute-0 sudo[262877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-0 sudo[262877]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-0 sudo[262903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:10:10 compute-0 sudo[262903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 852 B/s wr, 77 op/s
Oct 02 12:10:10 compute-0 nova_compute[256940]: 2025-10-02 12:10:10.746 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Successfully updated port: cfc8529e-4a3c-4d06-af03-c52fb2f6b463 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:10:10 compute-0 nova_compute[256940]: 2025-10-02 12:10:10.778 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:10 compute-0 nova_compute[256940]: 2025-10-02 12:10:10.779 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquired lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:10 compute-0 nova_compute[256940]: 2025-10-02 12:10:10.779 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:10:10 compute-0 podman[263003]: 2025-10-02 12:10:10.824149008 +0000 UTC m=+0.101200744 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:10:10 compute-0 podman[263003]: 2025-10-02 12:10:10.925571857 +0000 UTC m=+0.202623573 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:11 compute-0 nova_compute[256940]: 2025-10-02 12:10:11.072 2 DEBUG nova.compute.manager [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-changed-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:11 compute-0 nova_compute[256940]: 2025-10-02 12:10:11.073 2 DEBUG nova.compute.manager [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Refreshing instance network info cache due to event network-changed-cfc8529e-4a3c-4d06-af03-c52fb2f6b463. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:11 compute-0 nova_compute[256940]: 2025-10-02 12:10:11.073 2 DEBUG oslo_concurrency.lockutils [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:11 compute-0 nova_compute[256940]: 2025-10-02 12:10:11.259 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:10:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.619183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011619239, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1558, "num_deletes": 251, "total_data_size": 2637860, "memory_usage": 2679496, "flush_reason": "Manual Compaction"}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011669286, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1572669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19304, "largest_seqno": 20861, "table_properties": {"data_size": 1567204, "index_size": 2669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14134, "raw_average_key_size": 20, "raw_value_size": 1555080, "raw_average_value_size": 2263, "num_data_blocks": 120, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406865, "oldest_key_time": 1759406865, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 50153 microseconds, and 4289 cpu microseconds.
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.669342) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1572669 bytes OK
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.669361) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.699204) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.699246) EVENT_LOG_v1 {"time_micros": 1759407011699238, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.699267) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2631251, prev total WAL file size 2634986, number of live WAL files 2.
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.700194) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1535KB)], [44(9411KB)]
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011700238, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11209854, "oldest_snapshot_seqno": -1}
Oct 02 12:10:11 compute-0 podman[263141]: 2025-10-02 12:10:11.729519593 +0000 UTC m=+0.235187347 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:10:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4672 keys, 8369699 bytes, temperature: kUnknown
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011761424, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8369699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338434, "index_size": 18483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116096, "raw_average_key_size": 24, "raw_value_size": 8253787, "raw_average_value_size": 1766, "num_data_blocks": 765, "num_entries": 4672, "num_filter_entries": 4672, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.761625) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8369699 bytes
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.762940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.0 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(12.4) write-amplify(5.3) OK, records in: 5124, records dropped: 452 output_compression: NoCompression
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.762961) EVENT_LOG_v1 {"time_micros": 1759407011762948, "job": 22, "event": "compaction_finished", "compaction_time_micros": 61248, "compaction_time_cpu_micros": 17865, "output_level": 6, "num_output_files": 1, "total_output_size": 8369699, "num_input_records": 5124, "num_output_records": 4672, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011763305, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011764702, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.700044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.764748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.764755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.764759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.764761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:10:11.764765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-0 ceph-mon[73668]: pgmap v900: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 852 B/s wr, 77 op/s
Oct 02 12:10:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:11 compute-0 podman[263162]: 2025-10-02 12:10:11.811283836 +0000 UTC m=+0.060485207 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:10:11 compute-0 podman[263141]: 2025-10-02 12:10:11.817819527 +0000 UTC m=+0.323487291 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:10:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:11.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:12 compute-0 podman[263207]: 2025-10-02 12:10:12.024532597 +0000 UTC m=+0.054147151 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Oct 02 12:10:12 compute-0 podman[263207]: 2025-10-02 12:10:12.045412194 +0000 UTC m=+0.075026698 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9)
Oct 02 12:10:12 compute-0 sudo[262903]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 sudo[263259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:12 compute-0 sudo[263259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:12 compute-0 sudo[263259]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-0 sudo[263284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:12 compute-0 sudo[263284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:12 compute-0 sudo[263284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-0 sudo[263309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:12 compute-0 sudo[263309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:12 compute-0 sudo[263309]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-0 sudo[263335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:10:12 compute-0 sudo[263335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 sudo[263335]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev df6b5d10-9829-4d34-b705-595ffdebcd73 does not exist
Oct 02 12:10:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 441849c8-d5a9-4517-926f-465b2adb5daa does not exist
Oct 02 12:10:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a4510c02-983f-4a43-b6f3-851fbee640b1 does not exist
Oct 02 12:10:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:10:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:10:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:10:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:13 compute-0 sudo[263393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:13 compute-0 sudo[263393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 sudo[263393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:13 compute-0 sudo[263418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:13 compute-0 sudo[263418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 sudo[263418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:13 compute-0 sudo[263443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:13 compute-0 sudo[263443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 sudo[263443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:13 compute-0 sudo[263468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:10:13 compute-0 sudo[263468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:13 compute-0 ceph-mon[73668]: pgmap v901: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:10:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.845639789 +0000 UTC m=+0.101038780 container create 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.770192931 +0000 UTC m=+0.025591952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:13 compute-0 systemd[1]: Started libpod-conmon-34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361.scope.
Oct 02 12:10:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.971901109 +0000 UTC m=+0.227300110 container init 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.985867445 +0000 UTC m=+0.241266456 container start 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:13 compute-0 naughty_allen[263551]: 167 167
Oct 02 12:10:13 compute-0 systemd[1]: libpod-34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361.scope: Deactivated successfully.
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.996442473 +0000 UTC m=+0.251841484 container attach 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:10:13 compute-0 podman[263534]: 2025-10-02 12:10:13.998023904 +0000 UTC m=+0.253422905 container died 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f88e7a051c7f1584a4b8eb94ab4f49f8f2d01eec0ad3d151a39ad575855e3e11-merged.mount: Deactivated successfully.
Oct 02 12:10:14 compute-0 podman[263534]: 2025-10-02 12:10:14.087151661 +0000 UTC m=+0.342550672 container remove 34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:10:14 compute-0 systemd[1]: libpod-conmon-34d26cc7766a593d73a39f078cd12aa5f2df00bd876bf3b5f28966279584e361.scope: Deactivated successfully.
Oct 02 12:10:14 compute-0 podman[263574]: 2025-10-02 12:10:14.316886383 +0000 UTC m=+0.054449138 container create 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:10:14 compute-0 systemd[1]: Started libpod-conmon-2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c.scope.
Oct 02 12:10:14 compute-0 podman[263574]: 2025-10-02 12:10:14.291903248 +0000 UTC m=+0.029466053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:14 compute-0 podman[263574]: 2025-10-02 12:10:14.426490067 +0000 UTC m=+0.164052882 container init 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:10:14 compute-0 podman[263574]: 2025-10-02 12:10:14.43804743 +0000 UTC m=+0.175610165 container start 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:14 compute-0 podman[263574]: 2025-10-02 12:10:14.441787758 +0000 UTC m=+0.179350513 container attach 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 02 12:10:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.724 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Updating instance_info_cache with network_info: [{"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.789 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Releasing lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.790 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Instance network_info: |[{"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.790 2 DEBUG oslo_concurrency.lockutils [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.790 2 DEBUG nova.network.neutron [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Refreshing network info cache for port cfc8529e-4a3c-4d06-af03-c52fb2f6b463 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.794 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Start _get_guest_xml network_info=[{"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.800 2 WARNING nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.805 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.806 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.810 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.811 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.812 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.812 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.813 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.813 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.813 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.814 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.814 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.814 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.815 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.815 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.815 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.816 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.820 2 DEBUG nova.privsep.utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:10:14 compute-0 nova_compute[256940]: 2025-10-02 12:10:14.820 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:15.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:15 compute-0 strange_wing[263591]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:10:15 compute-0 strange_wing[263591]: --> relative data size: 1.0
Oct 02 12:10:15 compute-0 strange_wing[263591]: --> All data devices are unavailable
Oct 02 12:10:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095754142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:15 compute-0 systemd[1]: libpod-2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c.scope: Deactivated successfully.
Oct 02 12:10:15 compute-0 podman[263574]: 2025-10-02 12:10:15.30283023 +0000 UTC m=+1.040392985 container died 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.302 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.351 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.364 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b1fd048fb4e410b1df7110c716ecb618e92a6eab004cc41834654f03d2a9b37-merged.mount: Deactivated successfully.
Oct 02 12:10:15 compute-0 podman[263574]: 2025-10-02 12:10:15.405552734 +0000 UTC m=+1.143115489 container remove 2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:10:15 compute-0 systemd[1]: libpod-conmon-2fab3c6c70feeec0c60f3c1df7eb798a1154378c7ef79fedeb7f77ad31e2296c.scope: Deactivated successfully.
Oct 02 12:10:15 compute-0 sudo[263468]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:15 compute-0 sudo[263659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:15 compute-0 sudo[263659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:15 compute-0 sudo[263659]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:15 compute-0 sudo[263703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:15 compute-0 sudo[263703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:15 compute-0 sudo[263703]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:15 compute-0 sudo[263728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:15 compute-0 sudo[263728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:15 compute-0 sudo[263728]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:15 compute-0 sudo[263753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:10:15 compute-0 sudo[263753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511910208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.841 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.845 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-1',id=2,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=6392cd0b-cf6b-485c-bbaa-69525ae0191c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.846 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.848 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.853 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6392cd0b-cf6b-485c-bbaa-69525ae0191c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.884 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <uuid>6392cd0b-cf6b-485c-bbaa-69525ae0191c</uuid>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <name>instance-00000002</name>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:name>tempest-tempest.common.compute-instance-1858146006-1</nova:name>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:10:14</nova:creationTime>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:user uuid="b81237ef015d48dfa022b6761d706e36">tempest-AutoAllocateNetworkTest-1017519520-project-member</nova:user>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:project uuid="fa15236c63df4c43bf19989029fcda0f">tempest-AutoAllocateNetworkTest-1017519520</nova:project>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <nova:port uuid="cfc8529e-4a3c-4d06-af03-c52fb2f6b463">
Oct 02 12:10:15 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="fdfe:381f:8400:1::33c" ipVersion="6"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.1.0.80" ipVersion="4"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <system>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="serial">6392cd0b-cf6b-485c-bbaa-69525ae0191c</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="uuid">6392cd0b-cf6b-485c-bbaa-69525ae0191c</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </system>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <os>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </os>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <features>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </features>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk">
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </source>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config">
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </source>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:10:15 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:dc:f6:5d"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <target dev="tapcfc8529e-4a"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/console.log" append="off"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <video>
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </video>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:10:15 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:10:15 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:10:15 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:10:15 compute-0 nova_compute[256940]: </domain>
Oct 02 12:10:15 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.885 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Preparing to wait for external event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.885 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.886 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.886 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.887 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-1',id=2,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=6392cd0b-cf6b-485c-bbaa-69525ae0191c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.888 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.889 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.890 2 DEBUG os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:15.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.960 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.961 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.961 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 ceph-mon[73668]: pgmap v902: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2095754142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1511910208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.996 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:15 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:15.999 2 INFO oslo.privsep.daemon [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2j32r49b/privsep.sock']
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.238123201 +0000 UTC m=+0.060386175 container create 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:10:16 compute-0 systemd[1]: Started libpod-conmon-43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028.scope.
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.214013918 +0000 UTC m=+0.036276952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.337783983 +0000 UTC m=+0.160046997 container init 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.3460642 +0000 UTC m=+0.168327204 container start 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:10:16 compute-0 bold_goldwasser[263841]: 167 167
Oct 02 12:10:16 compute-0 systemd[1]: libpod-43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028.scope: Deactivated successfully.
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.367210845 +0000 UTC m=+0.189473859 container attach 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.36778986 +0000 UTC m=+0.190052864 container died 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e72da4fcf75985d1b68a866211343af88aa9fc5030943c7241fca0f19493bdd-merged.mount: Deactivated successfully.
Oct 02 12:10:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:16 compute-0 podman[263824]: 2025-10-02 12:10:16.433276177 +0000 UTC m=+0.255539181 container remove 43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:16 compute-0 systemd[1]: libpod-conmon-43aca17007b2cd4cc1be4b9f89c932d0f3dceab98d77a2073b9fbcacb5de0028.scope: Deactivated successfully.
Oct 02 12:10:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 02 12:10:16 compute-0 podman[263868]: 2025-10-02 12:10:16.63059911 +0000 UTC m=+0.028360495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:16.723 2 INFO oslo.privsep.daemon [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:16.588 521 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:16.595 521 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:16.599 521 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 02 12:10:16 compute-0 nova_compute[256940]: 2025-10-02 12:10:16.599 521 INFO oslo.privsep.daemon [-] privsep daemon running as pid 521
Oct 02 12:10:16 compute-0 podman[263868]: 2025-10-02 12:10:16.730719995 +0000 UTC m=+0.128481360 container create 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:10:16 compute-0 systemd[1]: Started libpod-conmon-8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727.scope.
Oct 02 12:10:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2c577c14efe91c1f3d80f409aeb2935d48dae477a8cbfc3ccdbc51a0d88c05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2c577c14efe91c1f3d80f409aeb2935d48dae477a8cbfc3ccdbc51a0d88c05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2c577c14efe91c1f3d80f409aeb2935d48dae477a8cbfc3ccdbc51a0d88c05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2c577c14efe91c1f3d80f409aeb2935d48dae477a8cbfc3ccdbc51a0d88c05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 02 12:10:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:17.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc8529e-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcfc8529e-4a, col_values=(('external_ids', {'iface-id': 'cfc8529e-4a3c-4d06-af03-c52fb2f6b463', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:f6:5d', 'vm-uuid': '6392cd0b-cf6b-485c-bbaa-69525ae0191c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:17 compute-0 podman[263868]: 2025-10-02 12:10:17.049784589 +0000 UTC m=+0.447545994 container init 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:10:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 02 12:10:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:17 compute-0 NetworkManager[44981]: <info>  [1759407017.0853] manager: (tapcfc8529e-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 02 12:10:17 compute-0 podman[263868]: 2025-10-02 12:10:17.0894934 +0000 UTC m=+0.487254805 container start 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.092 2 INFO os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a')
Oct 02 12:10:17 compute-0 podman[263868]: 2025-10-02 12:10:17.121405727 +0000 UTC m=+0.519167132 container attach 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.202 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.203 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.203 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No VIF found with MAC fa:16:3e:dc:f6:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.204 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Using config drive
Oct 02 12:10:17 compute-0 nova_compute[256940]: 2025-10-02 12:10:17.233 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:17 compute-0 charming_lamarr[263886]: {
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:     "1": [
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:         {
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "devices": [
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "/dev/loop3"
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             ],
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "lv_name": "ceph_lv0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "lv_size": "7511998464",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "name": "ceph_lv0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "tags": {
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.cluster_name": "ceph",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.crush_device_class": "",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.encrypted": "0",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.osd_id": "1",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.type": "block",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:                 "ceph.vdo": "0"
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             },
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "type": "block",
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:             "vg_name": "ceph_vg0"
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:         }
Oct 02 12:10:17 compute-0 charming_lamarr[263886]:     ]
Oct 02 12:10:17 compute-0 charming_lamarr[263886]: }
Oct 02 12:10:17 compute-0 systemd[1]: libpod-8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727.scope: Deactivated successfully.
Oct 02 12:10:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:17.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:17 compute-0 podman[263917]: 2025-10-02 12:10:17.956375477 +0000 UTC m=+0.034708181 container died 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:10:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 02 12:10:18 compute-0 ceph-mon[73668]: pgmap v903: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 02 12:10:18 compute-0 ceph-mon[73668]: osdmap e137: 3 total, 3 up, 3 in
Oct 02 12:10:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1883701792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.239 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Creating config drive at /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.243 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsuopu_ks execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2c577c14efe91c1f3d80f409aeb2935d48dae477a8cbfc3ccdbc51a0d88c05-merged.mount: Deactivated successfully.
Oct 02 12:10:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.410 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsuopu_ks" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.443 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.447 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.468 2 DEBUG nova.network.neutron [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Updated VIF entry in instance network info cache for port cfc8529e-4a3c-4d06-af03-c52fb2f6b463. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.469 2 DEBUG nova.network.neutron [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Updating instance_info_cache with network_info: [{"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.488 2 DEBUG oslo_concurrency.lockutils [req-1a8c8868-68da-40ba-ad02-96f7a6b1cb43 req-14172244-3dc6-47d9-8d63-1e3cc07ff40d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6392cd0b-cf6b-485c-bbaa-69525ae0191c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 54 KiB/s wr, 106 op/s
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.657 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config 6392cd0b-cf6b-485c-bbaa-69525ae0191c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.658 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Deleting local config drive /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c/disk.config because it was imported into RBD.
Oct 02 12:10:18 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 12:10:18 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 12:10:18 compute-0 podman[263917]: 2025-10-02 12:10:18.767996434 +0000 UTC m=+0.846329098 container remove 8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:10:18 compute-0 systemd[1]: libpod-conmon-8055ec647beb735f0168ebb727f0b7e715e6bcd617125c13e75f95dc50337727.scope: Deactivated successfully.
Oct 02 12:10:18 compute-0 sudo[263753]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:18 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 02 12:10:18 compute-0 kernel: tapcfc8529e-4a: entered promiscuous mode
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:18 compute-0 ovn_controller[148123]: 2025-10-02T12:10:18Z|00027|binding|INFO|Claiming lport cfc8529e-4a3c-4d06-af03-c52fb2f6b463 for this chassis.
Oct 02 12:10:18 compute-0 ovn_controller[148123]: 2025-10-02T12:10:18Z|00028|binding|INFO|cfc8529e-4a3c-4d06-af03-c52fb2f6b463: Claiming fa:16:3e:dc:f6:5d 10.1.0.80 fdfe:381f:8400:1::33c
Oct 02 12:10:18 compute-0 NetworkManager[44981]: <info>  [1759407018.8524] manager: (tapcfc8529e-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:18.872 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f6:5d 10.1.0.80 fdfe:381f:8400:1::33c'], port_security=['fa:16:3e:dc:f6:5d 10.1.0.80 fdfe:381f:8400:1::33c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.80/26 fdfe:381f:8400:1::33c/64', 'neutron:device_id': '6392cd0b-cf6b-485c-bbaa-69525ae0191c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cfc8529e-4a3c-4d06-af03-c52fb2f6b463) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:18.874 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cfc8529e-4a3c-4d06-af03-c52fb2f6b463 in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 bound to our chassis
Oct 02 12:10:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:18.880 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:18.882 158104 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp37col2_o/privsep.sock']
Oct 02 12:10:18 compute-0 systemd-udevd[264030]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:18 compute-0 sudo[263999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:18 compute-0 sudo[263999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:18 compute-0 sudo[263999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:18 compute-0 ovn_controller[148123]: 2025-10-02T12:10:18Z|00029|binding|INFO|Setting lport cfc8529e-4a3c-4d06-af03-c52fb2f6b463 ovn-installed in OVS
Oct 02 12:10:18 compute-0 ovn_controller[148123]: 2025-10-02T12:10:18Z|00030|binding|INFO|Setting lport cfc8529e-4a3c-4d06-af03-c52fb2f6b463 up in Southbound
Oct 02 12:10:18 compute-0 nova_compute[256940]: 2025-10-02 12:10:18.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:18 compute-0 NetworkManager[44981]: <info>  [1759407018.9428] device (tapcfc8529e-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:18 compute-0 NetworkManager[44981]: <info>  [1759407018.9443] device (tapcfc8529e-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:18 compute-0 systemd-machined[210927]: New machine qemu-1-instance-00000002.
Oct 02 12:10:18 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 02 12:10:18 compute-0 sudo[264040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:19 compute-0 sudo[264040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:19 compute-0 sudo[264040]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:19.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:19 compute-0 sudo[264066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:19 compute-0 sudo[264066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:19 compute-0 sudo[264066]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:19 compute-0 sudo[264096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:10:19 compute-0 sudo[264096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2438634577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:19 compute-0 ceph-mon[73668]: osdmap e138: 3 total, 3 up, 3 in
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.308 2 DEBUG nova.compute.manager [req-4de1575c-530f-4f58-971e-d8d1f3dfc66c req-e48bf5ea-1303-49ca-82f0-45dc285cdf3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.308 2 DEBUG oslo_concurrency.lockutils [req-4de1575c-530f-4f58-971e-d8d1f3dfc66c req-e48bf5ea-1303-49ca-82f0-45dc285cdf3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.309 2 DEBUG oslo_concurrency.lockutils [req-4de1575c-530f-4f58-971e-d8d1f3dfc66c req-e48bf5ea-1303-49ca-82f0-45dc285cdf3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.309 2 DEBUG oslo_concurrency.lockutils [req-4de1575c-530f-4f58-971e-d8d1f3dfc66c req-e48bf5ea-1303-49ca-82f0-45dc285cdf3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.309 2 DEBUG nova.compute.manager [req-4de1575c-530f-4f58-971e-d8d1f3dfc66c req-e48bf5ea-1303-49ca-82f0-45dc285cdf3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Processing event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.425 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:19 compute-0 nova_compute[256940]: 2025-10-02 12:10:19.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.563207981 +0000 UTC m=+0.053268907 container create c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:10:19 compute-0 systemd[1]: Started libpod-conmon-c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7.scope.
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.534281623 +0000 UTC m=+0.024342579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.685 158104 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.686 158104 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp37col2_o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.578 264216 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.585 264216 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.589 264216 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.589 264216 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264216
Oct 02 12:10:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:19.689 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[92bdb2a6-7393-40eb-980b-cdc753ba45f2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.789763331 +0000 UTC m=+0.279824337 container init c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.80268671 +0000 UTC m=+0.292747636 container start c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:10:19 compute-0 wizardly_bose[264219]: 167 167
Oct 02 12:10:19 compute-0 systemd[1]: libpod-c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7.scope: Deactivated successfully.
Oct 02 12:10:19 compute-0 conmon[264219]: conmon c1a8ff5033f27aa93763 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7.scope/container/memory.events
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.814770206 +0000 UTC m=+0.304831252 container attach c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:10:19 compute-0 podman[264202]: 2025-10-02 12:10:19.815432694 +0000 UTC m=+0.305493660 container died c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-190ecf96d462e2b5b8d0136e50ed752a2a4ea5b746398e7833b8eb5d04211423-merged.mount: Deactivated successfully.
Oct 02 12:10:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.019 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407020.0188677, 6392cd0b-cf6b-485c-bbaa-69525ae0191c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.020 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] VM Started (Lifecycle Event)
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.023 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:20 compute-0 podman[264202]: 2025-10-02 12:10:20.026809315 +0000 UTC m=+0.516870281 container remove c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bose, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.046 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.054 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.060 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:20 compute-0 systemd[1]: libpod-conmon-c1a8ff5033f27aa937639be95d655f4157e8336b0305b11608359930559afec7.scope: Deactivated successfully.
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.064 2 INFO nova.virt.libvirt.driver [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Instance spawned successfully.
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.064 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.095 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.095 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407020.0191154, 6392cd0b-cf6b-485c-bbaa-69525ae0191c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.096 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] VM Paused (Lifecycle Event)
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.122 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.133 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.134 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.134 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.135 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.136 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.136 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.141 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407020.029253, 6392cd0b-cf6b-485c-bbaa-69525ae0191c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.141 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] VM Resumed (Lifecycle Event)
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.176 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.194 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.216 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Took 43.61 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.218 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.225 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:20 compute-0 podman[264248]: 2025-10-02 12:10:20.277288002 +0000 UTC m=+0.046977113 container create 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:10:20 compute-0 ceph-mon[73668]: pgmap v906: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 54 KiB/s wr, 106 op/s
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.299 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Took 44.68 seconds to build instance.
Oct 02 12:10:20 compute-0 nova_compute[256940]: 2025-10-02 12:10:20.320 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 44.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:20 compute-0 systemd[1]: Started libpod-conmon-000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef.scope.
Oct 02 12:10:20 compute-0 podman[264248]: 2025-10-02 12:10:20.254853834 +0000 UTC m=+0.024542955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:10:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d174a832f84a4ae2b6979e229659ef322fbd6ed7841e0b116cd998bb1c2d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d174a832f84a4ae2b6979e229659ef322fbd6ed7841e0b116cd998bb1c2d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d174a832f84a4ae2b6979e229659ef322fbd6ed7841e0b116cd998bb1c2d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d174a832f84a4ae2b6979e229659ef322fbd6ed7841e0b116cd998bb1c2d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:20 compute-0 podman[264248]: 2025-10-02 12:10:20.419365917 +0000 UTC m=+0.189055048 container init 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:10:20 compute-0 podman[264248]: 2025-10-02 12:10:20.429986505 +0000 UTC m=+0.199675606 container start 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:10:20 compute-0 podman[264248]: 2025-10-02 12:10:20.438331984 +0000 UTC m=+0.208021085 container attach 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:10:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 602 KiB/s rd, 33 KiB/s wr, 37 op/s
Oct 02 12:10:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:21.108 264216 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:21.108 264216 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:21.108 264216 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:21 compute-0 blissful_bohr[264265]: {
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:         "osd_id": 1,
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:         "type": "bluestore"
Oct 02 12:10:21 compute-0 blissful_bohr[264265]:     }
Oct 02 12:10:21 compute-0 blissful_bohr[264265]: }
Oct 02 12:10:21 compute-0 systemd[1]: libpod-000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef.scope: Deactivated successfully.
Oct 02 12:10:21 compute-0 podman[264248]: 2025-10-02 12:10:21.31777647 +0000 UTC m=+1.087465591 container died 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:10:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d174a832f84a4ae2b6979e229659ef322fbd6ed7841e0b116cd998bb1c2d99-merged.mount: Deactivated successfully.
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.545 2 DEBUG nova.compute.manager [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.546 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.546 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.546 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.546 2 DEBUG nova.compute.manager [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] No waiting events found dispatching network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:21 compute-0 nova_compute[256940]: 2025-10-02 12:10:21.546 2 WARNING nova.compute.manager [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received unexpected event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 for instance with vm_state active and task_state None.
Oct 02 12:10:21 compute-0 podman[264248]: 2025-10-02 12:10:21.747150756 +0000 UTC m=+1.516839887 container remove 000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:10:21 compute-0 podman[264289]: 2025-10-02 12:10:21.771901485 +0000 UTC m=+0.437561952 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:10:21 compute-0 sudo[264096]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:21 compute-0 systemd[1]: libpod-conmon-000290658a0db92af91d07ee27f512083ceec9c4cf6567e594576544abc2f9ef.scope: Deactivated successfully.
Oct 02 12:10:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:10:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:10:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 451aeb47-455b-48d1-b6fa-23a8defe8366 does not exist
Oct 02 12:10:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c89e7e3f-cb89-49ef-b920-ade5a486e39c does not exist
Oct 02 12:10:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5a92a948-98bc-4f07-9f5c-843c9dd6f4fa does not exist
Oct 02 12:10:22 compute-0 sudo[264318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:22 compute-0 sudo[264318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:22 compute-0 sudo[264318]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:22 compute-0 nova_compute[256940]: 2025-10-02 12:10:22.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-0 sudo[264343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:10:22 compute-0 sudo[264343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:22 compute-0 sudo[264343]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:22 compute-0 ceph-mon[73668]: pgmap v907: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 602 KiB/s rd, 33 KiB/s wr, 37 op/s
Oct 02 12:10:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 90 op/s
Oct 02 12:10:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:23 compute-0 podman[264369]: 2025-10-02 12:10:23.398880968 +0000 UTC m=+0.067675105 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:10:23 compute-0 ceph-mon[73668]: pgmap v908: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 90 op/s
Oct 02 12:10:23 compute-0 nova_compute[256940]: 2025-10-02 12:10:23.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.693 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[39a03940-597e-4076-a132-233040db7062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.694 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4aadb38-81 in ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.699 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4aadb38-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.699 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa2a046-d748-405a-8e8d-77972387305c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.761 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[613d64c5-6e0c-4130-97e0-be6f849c16c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.792 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[cafce3d8-0942-44be-a947-b81c0766cf5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.818 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13cc078c-75b6-4fbb-9b87-eff570b83ad9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:23.820 158104 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpg5ud8tlv/privsep.sock']
Oct 02 12:10:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:23.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:24 compute-0 sudo[264398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:24 compute-0 sudo[264398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:24 compute-0 sudo[264398]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:24 compute-0 sudo[264424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:24 compute-0 sudo[264424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:24 compute-0 sudo[264424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 653 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.636 158104 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.638 158104 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpg5ud8tlv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.452 264450 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.456 264450 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.458 264450 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.459 264450 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264450
Oct 02 12:10:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:24.642 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[02445e65-e31b-4a95-afff-0acbd5789a31]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.181 264450 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.182 264450 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.182 264450 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.763 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[24024d33-0e42-413e-9663-7b3efdfe30c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 NetworkManager[44981]: <info>  [1759407025.7759] manager: (tapb4aadb38-80): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.774 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[721e59da-6bca-4f51-bc06-d7f3a7303732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.816 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[011413a5-7348-4a4c-811c-0786cf07124c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.820 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c06a16ff-d6ec-457e-b884-3ee4125f4fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 systemd-udevd[264460]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:25 compute-0 NetworkManager[44981]: <info>  [1759407025.8547] device (tapb4aadb38-80): carrier: link connected
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.863 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0968c6bc-6935-4c1a-9924-1a39f06d3ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.890 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[51b25532-b270-49be-b4f8-9b9cfb8f01c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492133, 'reachable_time': 16248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264479, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.911 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[08370b01-92d4-44f3-8de8-339de455814b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:b633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492133, 'tstamp': 492133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264480, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.932 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46581da5-0aeb-4651-a818-18c350d9e990]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492133, 'reachable_time': 16248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264481, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:25.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:25.997 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fa330bff-a942-4b18-a8a8-bcaa8b99d7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ceph-mon[73668]: pgmap v909: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 653 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.091 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba4c410-e9f3-47a7-863c-2cd011cd204e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.094 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.094 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.095 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4aadb38-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 NetworkManager[44981]: <info>  [1759407026.0989] manager: (tapb4aadb38-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 02 12:10:26 compute-0 kernel: tapb4aadb38-80: entered promiscuous mode
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.102 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4aadb38-80, col_values=(('external_ids', {'iface-id': 'de74dbb2-fac5-494f-b65c-51300143a2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_controller[148123]: 2025-10-02T12:10:26Z|00031|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.124 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.125 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2429bf28-01c7-4b1f-b852-49b6b7ed953c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.127 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.129 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'env', 'PROCESS_TAG=haproxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.446 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.447 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.448 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 02 12:10:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 02 12:10:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 362 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.3 MiB/s wr, 359 op/s
Oct 02 12:10:26 compute-0 podman[264515]: 2025-10-02 12:10:26.595800889 +0000 UTC m=+0.097150838 container create 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:10:26 compute-0 podman[264515]: 2025-10-02 12:10:26.547060771 +0000 UTC m=+0.048410760 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:26 compute-0 systemd[1]: Started libpod-conmon-4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3.scope.
Oct 02 12:10:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb60072cba4fa203e609c0009235e525b98306bb74eaa7eb75ab9996441844e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:26 compute-0 podman[264515]: 2025-10-02 12:10:26.754438017 +0000 UTC m=+0.255787996 container init 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.763 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.764 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.765 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.765 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.766 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-0 podman[264515]: 2025-10-02 12:10:26.767147201 +0000 UTC m=+0.268497170 container start 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.768 2 INFO nova.compute.manager [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Terminating instance
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.769 2 DEBUG nova.compute.manager [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:10:26 compute-0 kernel: tapcfc8529e-4a (unregistering): left promiscuous mode
Oct 02 12:10:26 compute-0 NetworkManager[44981]: <info>  [1759407026.8189] device (tapcfc8529e-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:10:26 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [NOTICE]   (264534) : New worker (264538) forked
Oct 02 12:10:26 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [NOTICE]   (264534) : Loading success.
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_controller[148123]: 2025-10-02T12:10:26Z|00032|binding|INFO|Releasing lport cfc8529e-4a3c-4d06-af03-c52fb2f6b463 from this chassis (sb_readonly=0)
Oct 02 12:10:26 compute-0 ovn_controller[148123]: 2025-10-02T12:10:26Z|00033|binding|INFO|Setting lport cfc8529e-4a3c-4d06-af03-c52fb2f6b463 down in Southbound
Oct 02 12:10:26 compute-0 ovn_controller[148123]: 2025-10-02T12:10:26Z|00034|binding|INFO|Removing iface tapcfc8529e-4a ovn-installed in OVS
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.868 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f6:5d 10.1.0.80 fdfe:381f:8400:1::33c'], port_security=['fa:16:3e:dc:f6:5d 10.1.0.80 fdfe:381f:8400:1::33c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.80/26 fdfe:381f:8400:1::33c/64', 'neutron:device_id': '6392cd0b-cf6b-485c-bbaa-69525ae0191c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cfc8529e-4a3c-4d06-af03-c52fb2f6b463) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:26 compute-0 nova_compute[256940]: 2025-10-02 12:10:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.879 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.880 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cfc8529e-4a3c-4d06-af03-c52fb2f6b463 in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 unbound from our chassis
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.882 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.882 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[193e54b0-53e1-48b5-8a09-94c986a0053f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:26.883 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace which is not needed anymore
Oct 02 12:10:26 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 02 12:10:26 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 7.705s CPU time.
Oct 02 12:10:26 compute-0 systemd-machined[210927]: Machine qemu-1-instance-00000002 terminated.
Oct 02 12:10:26 compute-0 NetworkManager[44981]: <info>  [1759407026.9927] manager: (tapcfc8529e-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.014 2 INFO nova.virt.libvirt.driver [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Instance destroyed successfully.
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.014 2 DEBUG nova.objects.instance [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'resources' on Instance uuid 6392cd0b-cf6b-485c-bbaa-69525ae0191c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:27.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.035 2 DEBUG nova.virt.libvirt.vif [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-1',id=2,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:20Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=6392cd0b-cf6b-485c-bbaa-69525ae0191c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.035 2 DEBUG nova.network.os_vif_util [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "address": "fa:16:3e:dc:f6:5d", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::33c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfc8529e-4a", "ovs_interfaceid": "cfc8529e-4a3c-4d06-af03-c52fb2f6b463", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.036 2 DEBUG nova.network.os_vif_util [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.036 2 DEBUG os_vif [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc8529e-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.045 2 INFO os_vif [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f6:5d,bridge_name='br-int',has_traffic_filtering=True,id=cfc8529e-4a3c-4d06-af03-c52fb2f6b463,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfc8529e-4a')
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [NOTICE]   (264534) : haproxy version is 2.8.14-c23fe91
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [NOTICE]   (264534) : path to executable is /usr/sbin/haproxy
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [WARNING]  (264534) : Exiting Master process...
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [ALERT]    (264534) : Current worker (264538) exited with code 143 (Terminated)
Oct 02 12:10:27 compute-0 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[264530]: [WARNING]  (264534) : All workers exited. Exiting... (0)
Oct 02 12:10:27 compute-0 systemd[1]: libpod-4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3.scope: Deactivated successfully.
Oct 02 12:10:27 compute-0 podman[264567]: 2025-10-02 12:10:27.070676238 +0000 UTC m=+0.063983038 container died 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb60072cba4fa203e609c0009235e525b98306bb74eaa7eb75ab9996441844e-merged.mount: Deactivated successfully.
Oct 02 12:10:27 compute-0 podman[264567]: 2025-10-02 12:10:27.553152137 +0000 UTC m=+0.546458967 container cleanup 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:10:27 compute-0 systemd[1]: libpod-conmon-4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3.scope: Deactivated successfully.
Oct 02 12:10:27 compute-0 ceph-mon[73668]: osdmap e139: 3 total, 3 up, 3 in
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.757 2 DEBUG nova.compute.manager [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-unplugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.758 2 DEBUG oslo_concurrency.lockutils [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.759 2 DEBUG oslo_concurrency.lockutils [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.759 2 DEBUG oslo_concurrency.lockutils [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.760 2 DEBUG nova.compute.manager [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] No waiting events found dispatching network-vif-unplugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:27 compute-0 nova_compute[256940]: 2025-10-02 12:10:27.760 2 DEBUG nova.compute.manager [req-11133dc3-ac1c-41bf-b520-108d9d70d983 req-daa9cce4-9382-4923-b641-cb9304899ef7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-unplugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:10:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:27.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.405 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.407 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:28 compute-0 podman[264625]: 2025-10-02 12:10:28.419398257 +0000 UTC m=+0.829899018 container remove 4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.427 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6406ad3-0ff3-48e5-b8b4-85dd8e5a2a1f]: (4, ('Thu Oct  2 12:10:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3)\n4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3\nThu Oct  2 12:10:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3)\n4b69c14d9a6bafad99d85a624cc1eec882f3b2b9883a0e8e9f6321969e82bcb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.429 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[53df540a-0b89-47e2-a5fe-ecb2f0f15a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.430 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:28 compute-0 kernel: tapb4aadb38-80: left promiscuous mode
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.474 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.503 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[54bf0445-7c5f-46a9-83ea-4e96b82108eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.530 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a466f48b-c677-4af5-b50a-465fb945216f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.532 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0446f0-c478-475e-a954-2f1286895adb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 372 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.557 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ae087c-8ccb-4a40-9a62-986659ae3772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492124, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264644, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 systemd[1]: run-netns-ovnmeta\x2db4aadb38\x2d89a4\x2d463f\x2db7b5\x2d8bb4dcce7d32.mount: Deactivated successfully.
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.571 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:10:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:28.572 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[eae84e50-a88c-4026-b59a-e2e135c85f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.619 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.628 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.628 2 INFO nova.compute.claims [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.636 2 INFO nova.virt.libvirt.driver [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Deleting instance files /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c_del
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.637 2 INFO nova.virt.libvirt.driver [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Deletion of /var/lib/nova/instances/6392cd0b-cf6b-485c-bbaa-69525ae0191c_del complete
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:10:28
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 12:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.748 2 DEBUG nova.virt.libvirt.host [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.749 2 INFO nova.virt.libvirt.host [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] UEFI support detected
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.751 2 INFO nova.compute.manager [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Took 1.98 seconds to destroy the instance on the hypervisor.
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.751 2 DEBUG oslo.service.loopingcall [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.752 2 DEBUG nova.compute.manager [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.752 2 DEBUG nova.network.neutron [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:10:28 compute-0 ceph-mon[73668]: pgmap v911: 305 pgs: 305 active+clean; 362 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.3 MiB/s wr, 359 op/s
Oct 02 12:10:28 compute-0 nova_compute[256940]: 2025-10-02 12:10:28.940 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:29.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.321 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.328 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.328 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.355 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20603787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.394 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.401 2 DEBUG nova.compute.provider_tree [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.457 2 ERROR nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [req-f2cfb51e-a9c9-4de5-9ed7-a619e9f88c90] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 8733289a-aa77-4139-9e88-bac686174c8d.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-f2cfb51e-a9c9-4de5-9ed7-a619e9f88c90"}]}
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.486 2 DEBUG nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.577 2 DEBUG nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.578 2 DEBUG nova.compute.provider_tree [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.609 2 DEBUG nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.655 2 DEBUG nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:10:29 compute-0 nova_compute[256940]: 2025-10-02 12:10:29.748 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:29.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.076 2 DEBUG nova.compute.manager [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.077 2 DEBUG oslo_concurrency.lockutils [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.077 2 DEBUG oslo_concurrency.lockutils [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.077 2 DEBUG oslo_concurrency.lockutils [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.078 2 DEBUG nova.compute.manager [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] No waiting events found dispatching network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.078 2 WARNING nova.compute.manager [req-270d82fc-ca6d-488d-8a42-0ca674782cb7 req-9ff40495-4368-4b02-8fa3-b2a1ab764886 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received unexpected event network-vif-plugged-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 for instance with vm_state active and task_state deleting.
Oct 02 12:10:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922378935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:30 compute-0 ceph-mon[73668]: pgmap v912: 305 pgs: 305 active+clean; 372 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Oct 02 12:10:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/20603787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.200 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.208 2 DEBUG nova.compute.provider_tree [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.265 2 DEBUG nova.network.neutron [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.373 2 DEBUG nova.scheduler.client.report [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updated inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.374 2 DEBUG nova.compute.provider_tree [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating resource provider 8733289a-aa77-4139-9e88-bac686174c8d generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.374 2 DEBUG nova.compute.provider_tree [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.399 2 INFO nova.compute.manager [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Took 1.65 seconds to deallocate network for instance.
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.404 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.406 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.406 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:10:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 341 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 330 op/s
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.590 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.591 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.632 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.633 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.662 2 INFO nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.691 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.696 2 DEBUG oslo_concurrency.processutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.937 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.939 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.939 2 INFO nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Creating image(s)
Oct 02 12:10:30 compute-0 nova_compute[256940]: 2025-10-02 12:10:30.972 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.002 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:10:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:31.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.036 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.041 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.076 2 DEBUG nova.policy [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '40989194be64450c86bad57eee45a7b5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c05c3b0c35e4dab854814086f385daa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.101 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.101 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.102 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.102 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.135 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.140 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100158935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.167 2 DEBUG oslo_concurrency.processutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.174 2 DEBUG nova.compute.provider_tree [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.198 2 DEBUG nova.scheduler.client.report [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.209 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.231 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.289 2 INFO nova.scheduler.client.report [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Deleted allocations for instance 6392cd0b-cf6b-485c-bbaa-69525ae0191c
Oct 02 12:10:31 compute-0 nova_compute[256940]: 2025-10-02 12:10:31.418 2 DEBUG oslo_concurrency.lockutils [None req-33bc87cb-5342-441a-ac1c-e5a8b0860b44 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "6392cd0b-cf6b-485c-bbaa-69525ae0191c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2922378935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/204562109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/100158935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:31.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.168 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.224 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.224 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.278 2 DEBUG nova.compute.manager [req-56266df1-f8a1-494a-ae8d-78089b2c459b req-070f3567-0bc8-4b25-8286-ebb53487f562 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Received event network-vif-deleted-cfc8529e-4a3c-4d06-af03-c52fb2f6b463 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.285 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] resizing rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.426 2 DEBUG nova.objects.instance [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lazy-loading 'migration_context' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:32 compute-0 ceph-mon[73668]: pgmap v913: 305 pgs: 305 active+clean; 341 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 330 op/s
Oct 02 12:10:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3606299484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1286005547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.636 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.636 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Ensure instance console log exists: /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.637 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.637 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.638 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:32 compute-0 nova_compute[256940]: 2025-10-02 12:10:32.900 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Successfully created port: 612a7e9e-f5f3-4f20-aacb-e35a26532f7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:10:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:33.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.235 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.236 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.236 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.237 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:33 compute-0 nova_compute[256940]: 2025-10-02 12:10:33.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-0 ceph-mon[73668]: pgmap v914: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:33.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:34 compute-0 nova_compute[256940]: 2025-10-02 12:10:34.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2268975107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:35.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.178 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Successfully updated port: 612a7e9e-f5f3-4f20-aacb-e35a26532f7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.207 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.207 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquired lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.208 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.770 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.810 2 DEBUG nova.compute.manager [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-changed-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.810 2 DEBUG nova.compute.manager [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Refreshing instance network info cache due to event network-changed-612a7e9e-f5f3-4f20-aacb-e35a26532f7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:35 compute-0 nova_compute[256940]: 2025-10-02 12:10:35.811 2 DEBUG oslo_concurrency.lockutils [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:35 compute-0 ceph-mon[73668]: pgmap v915: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:10:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:35.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 312 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 749 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 12:10:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46767875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:36 compute-0 nova_compute[256940]: 2025-10-02 12:10:36.723 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:36.881 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2230665049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/46767875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.017 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.019 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4892MB free_disk=20.824901580810547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.019 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.020 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:37.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.045 2 DEBUG nova.network.neutron [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updating instance_info_cache with network_info: [{"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:37 compute-0 nova_compute[256940]: 2025-10-02 12:10:37.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:37.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:38 compute-0 ceph-mon[73668]: pgmap v916: 305 pgs: 305 active+clean; 312 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 749 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 12:10:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1891091328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 260 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 663 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.903 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Releasing lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.904 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Instance network_info: |[{"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.905 2 DEBUG oslo_concurrency.lockutils [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.905 2 DEBUG nova.network.neutron [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Refreshing network info cache for port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.910 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Start _get_guest_xml network_info=[{"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.915 2 WARNING nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.928 2 DEBUG nova.virt.libvirt.host [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.929 2 DEBUG nova.virt.libvirt.host [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.942 2 DEBUG nova.virt.libvirt.host [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.942 2 DEBUG nova.virt.libvirt.host [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.944 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.945 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.945 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.946 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.946 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.947 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.947 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.948 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.948 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.948 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.949 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.949 2 DEBUG nova.virt.hardware [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:38 compute-0 nova_compute[256940]: 2025-10-02 12:10:38.953 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:39.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.133 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 30d82f4f-be7f-4957-a953-d4d82be0e42c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.133 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.133 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.269 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584573527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.467 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.503 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.509 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103885911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.674 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.681 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.704 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.731 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.731 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005777767599571933 of space, bias 1.0, pg target 1.73333027987158 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:10:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:10:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193053227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.983 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.986 2 DEBUG nova.virt.libvirt.vif [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1439905541',display_name='tempest-VolumesAssistedSnapshotsTest-server-1439905541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1439905541',id=6,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1z0h/VT4LwY6YBDvStM1+9oURDQPQhuK2fEi1Kfqi8zH173Egyyq471LsZFBynS5g1TC34LgQ1g8SybYc1LajPOY4cBHWowUSFKffwiPM2ft4sDwA2HAdWi4Wqa1YypA==',key_name='tempest-keypair-1970104438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c05c3b0c35e4dab854814086f385daa',ramdisk_id='',reservation_id='r-wkqwqh0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1314821025',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1314821025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40989194be64450c86bad57eee45a7b5',uuid=30d82f4f-be7f-4957-a953-d4d82be0e42c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.987 2 DEBUG nova.network.os_vif_util [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converting VIF {"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.989 2 DEBUG nova.network.os_vif_util [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:39.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:39 compute-0 nova_compute[256940]: 2025-10-02 12:10:39.991 2 DEBUG nova.objects.instance [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lazy-loading 'pci_devices' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.009 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <uuid>30d82f4f-be7f-4957-a953-d4d82be0e42c</uuid>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <name>instance-00000006</name>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:name>tempest-VolumesAssistedSnapshotsTest-server-1439905541</nova:name>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:10:38</nova:creationTime>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:user uuid="40989194be64450c86bad57eee45a7b5">tempest-VolumesAssistedSnapshotsTest-1314821025-project-member</nova:user>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:project uuid="3c05c3b0c35e4dab854814086f385daa">tempest-VolumesAssistedSnapshotsTest-1314821025</nova:project>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <nova:port uuid="612a7e9e-f5f3-4f20-aacb-e35a26532f7e">
Oct 02 12:10:40 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <system>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="serial">30d82f4f-be7f-4957-a953-d4d82be0e42c</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="uuid">30d82f4f-be7f-4957-a953-d4d82be0e42c</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </system>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <os>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </os>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <features>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </features>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/30d82f4f-be7f-4957-a953-d4d82be0e42c_disk">
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </source>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config">
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </source>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:10:40 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:7e:d9:0f"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <target dev="tap612a7e9e-f5"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/console.log" append="off"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <video>
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </video>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:10:40 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:10:40 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:10:40 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:10:40 compute-0 nova_compute[256940]: </domain>
Oct 02 12:10:40 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.010 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Preparing to wait for external event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.011 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.012 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.012 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.014 2 DEBUG nova.virt.libvirt.vif [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1439905541',display_name='tempest-VolumesAssistedSnapshotsTest-server-1439905541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1439905541',id=6,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1z0h/VT4LwY6YBDvStM1+9oURDQPQhuK2fEi1Kfqi8zH173Egyyq471LsZFBynS5g1TC34LgQ1g8SybYc1LajPOY4cBHWowUSFKffwiPM2ft4sDwA2HAdWi4Wqa1YypA==',key_name='tempest-keypair-1970104438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c05c3b0c35e4dab854814086f385daa',ramdisk_id='',reservation_id='r-wkqwqh0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1314821025',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1314821025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40989194be64450c86bad57eee45a7b5',uuid=30d82f4f-be7f-4957-a953-d4d82be0e42c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.014 2 DEBUG nova.network.os_vif_util [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converting VIF {"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.016 2 DEBUG nova.network.os_vif_util [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.016 2 DEBUG os_vif [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap612a7e9e-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap612a7e9e-f5, col_values=(('external_ids', {'iface-id': '612a7e9e-f5f3-4f20-aacb-e35a26532f7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:d9:0f', 'vm-uuid': '30d82f4f-be7f-4957-a953-d4d82be0e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:40 compute-0 NetworkManager[44981]: <info>  [1759407040.0306] manager: (tap612a7e9e-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.039 2 INFO os_vif [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5')
Oct 02 12:10:40 compute-0 ceph-mon[73668]: pgmap v917: 305 pgs: 305 active+clean; 260 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 663 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 02 12:10:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3584573527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/103885911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4193053227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.212 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.212 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.212 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] No VIF found with MAC fa:16:3e:7e:d9:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.213 2 INFO nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Using config drive
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.249 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.256 2 DEBUG nova.network.neutron [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updated VIF entry in instance network info cache for port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.257 2 DEBUG nova.network.neutron [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updating instance_info_cache with network_info: [{"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.339 2 DEBUG oslo_concurrency.lockutils [req-9163bbf0-4470-443f-a05d-3717cc127a97 req-ea47dd41-110e-49d9-bca6-abfb3833b53c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:40 compute-0 podman[265010]: 2025-10-02 12:10:40.400694168 +0000 UTC m=+0.068927188 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 12:10:40 compute-0 podman[265011]: 2025-10-02 12:10:40.450183375 +0000 UTC m=+0.118567229 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller)
Oct 02 12:10:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 174 op/s
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.712 2 INFO nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Creating config drive at /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.724 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqm15etl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.875 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqm15etl" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.923 2 DEBUG nova.storage.rbd_utils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] rbd image 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:40 compute-0 nova_compute[256940]: 2025-10-02 12:10:40.929 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.131 2 DEBUG oslo_concurrency.processutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config 30d82f4f-be7f-4957-a953-d4d82be0e42c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.132 2 INFO nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Deleting local config drive /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c/disk.config because it was imported into RBD.
Oct 02 12:10:41 compute-0 kernel: tap612a7e9e-f5: entered promiscuous mode
Oct 02 12:10:41 compute-0 ovn_controller[148123]: 2025-10-02T12:10:41Z|00035|binding|INFO|Claiming lport 612a7e9e-f5f3-4f20-aacb-e35a26532f7e for this chassis.
Oct 02 12:10:41 compute-0 ovn_controller[148123]: 2025-10-02T12:10:41Z|00036|binding|INFO|612a7e9e-f5f3-4f20-aacb-e35a26532f7e: Claiming fa:16:3e:7e:d9:0f 10.100.0.12
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.1927] manager: (tap612a7e9e-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.209 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:d9:0f 10.100.0.12'], port_security=['fa:16:3e:7e:d9:0f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30d82f4f-be7f-4957-a953-d4d82be0e42c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c05c3b0c35e4dab854814086f385daa', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfa752d8-a0f0-4663-8bef-3da58b45dff3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6941c30-9635-4c81-a0e3-272f76b5cabf, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=612a7e9e-f5f3-4f20-aacb-e35a26532f7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.211 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e in datapath e1d61fb0-ffae-4b83-8fcd-761f606eb71c bound to our chassis
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.215 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e1d61fb0-ffae-4b83-8fcd-761f606eb71c
Oct 02 12:10:41 compute-0 systemd-udevd[265107]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.236 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[213ee60a-578f-4be1-b768-5cde07b965e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.238 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape1d61fb0-f1 in ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:41 compute-0 systemd-machined[210927]: New machine qemu-2-instance-00000006.
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.2437] device (tap612a7e9e-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.2447] device (tap612a7e9e-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.242 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape1d61fb0-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.242 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fce66a-d88c-4754-9383-901dd5aaaa12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.243 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[94f748d7-7151-4fc5-8d9d-8384565c6902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.262 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[7423a363-5d4a-4e4e-9c93-2ce670b3f1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 ovn_controller[148123]: 2025-10-02T12:10:41Z|00037|binding|INFO|Setting lport 612a7e9e-f5f3-4f20-aacb-e35a26532f7e ovn-installed in OVS
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.299 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3e12d0-9267-4173-9376-d8993d3ffda5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_controller[148123]: 2025-10-02T12:10:41Z|00038|binding|INFO|Setting lport 612a7e9e-f5f3-4f20-aacb-e35a26532f7e up in Southbound
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.346 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9b51f225-4bf2-4868-9350-3d081c45dbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 systemd-udevd[265111]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.353 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a935df97-8ad6-431f-9051-a866458023fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.3573] manager: (tape1d61fb0-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.389 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cc98ffad-c2af-4939-9fe6-dac0ae1660b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.396 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7131dffa-3fc1-46be-80ed-0a81568bae31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.4216] device (tape1d61fb0-f0): carrier: link connected
Oct 02 12:10:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.427 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[65225694-f7d5-4ce1-9de7-e98a6f0e0263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.453 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[df3180cc-cda1-4106-ba51-74466c2de2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1d61fb0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493690, 'reachable_time': 24498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265141, 'error': None, 'target': 'ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.478 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61901d53-65cd-4ce4-bb95-57d99e41beab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:478a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493690, 'tstamp': 493690}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265143, 'error': None, 'target': 'ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.498 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09c84c9c-773f-4411-98fc-2ba4e622caa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1d61fb0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493690, 'reachable_time': 24498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265144, 'error': None, 'target': 'ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/764252114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.550 2 DEBUG nova.compute.manager [req-195592b6-4aa0-4413-bea7-89bc8bf6a7ee req-5f13ffec-297c-4214-8851-bf2b1b7644c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.550 2 DEBUG oslo_concurrency.lockutils [req-195592b6-4aa0-4413-bea7-89bc8bf6a7ee req-5f13ffec-297c-4214-8851-bf2b1b7644c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.550 2 DEBUG oslo_concurrency.lockutils [req-195592b6-4aa0-4413-bea7-89bc8bf6a7ee req-5f13ffec-297c-4214-8851-bf2b1b7644c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.551 2 DEBUG oslo_concurrency.lockutils [req-195592b6-4aa0-4413-bea7-89bc8bf6a7ee req-5f13ffec-297c-4214-8851-bf2b1b7644c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.551 2 DEBUG nova.compute.manager [req-195592b6-4aa0-4413-bea7-89bc8bf6a7ee req-5f13ffec-297c-4214-8851-bf2b1b7644c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Processing event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.551 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b262ffea-a148-44f4-b558-62e3d1d0c523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.630 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[feffc936-a228-4d25-a50d-9dbac41d0d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.631 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1d61fb0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.631 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.632 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1d61fb0-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 NetworkManager[44981]: <info>  [1759407041.6346] manager: (tape1d61fb0-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 02 12:10:41 compute-0 kernel: tape1d61fb0-f0: entered promiscuous mode
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.637 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape1d61fb0-f0, col_values=(('external_ids', {'iface-id': '2abb223e-a39f-4e23-8682-229343d6c8ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 ovn_controller[148123]: 2025-10-02T12:10:41Z|00039|binding|INFO|Releasing lport 2abb223e-a39f-4e23-8682-229343d6c8ed from this chassis (sb_readonly=0)
Oct 02 12:10:41 compute-0 nova_compute[256940]: 2025-10-02 12:10:41.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.658 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e1d61fb0-ffae-4b83-8fcd-761f606eb71c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e1d61fb0-ffae-4b83-8fcd-761f606eb71c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.659 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[34f42cbc-0429-443c-8c5c-4fb951fd1e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.661 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e1d61fb0-ffae-4b83-8fcd-761f606eb71c
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e1d61fb0-ffae-4b83-8fcd-761f606eb71c.pid.haproxy
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e1d61fb0-ffae-4b83-8fcd-761f606eb71c
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:10:41.662 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'env', 'PROCESS_TAG=haproxy-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e1d61fb0-ffae-4b83-8fcd-761f606eb71c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:41.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.012 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407027.0114632, 6392cd0b-cf6b-485c-bbaa-69525ae0191c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.013 2 INFO nova.compute.manager [-] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] VM Stopped (Lifecycle Event)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.036 2 DEBUG nova.compute.manager [None req-a535a9d2-b583-4f55-859c-173a21537b01 - - - - - -] [instance: 6392cd0b-cf6b-485c-bbaa-69525ae0191c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 podman[265218]: 2025-10-02 12:10:42.044514943 +0000 UTC m=+0.037262978 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:42 compute-0 podman[265218]: 2025-10-02 12:10:42.273315971 +0000 UTC m=+0.266064026 container create 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.276 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407042.2757292, 30d82f4f-be7f-4957-a953-d4d82be0e42c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.278 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] VM Started (Lifecycle Event)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.281 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.287 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.291 2 INFO nova.virt.libvirt.driver [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Instance spawned successfully.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.291 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.323 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 systemd[1]: Started libpod-conmon-3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71.scope.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.330 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.330 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.331 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.331 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.331 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.332 2 DEBUG nova.virt.libvirt.driver [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.337 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd978805aeae3cca1009fac5c71fe7f6997030c4b6150cecbb7e512b9338008/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.375 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.376 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407042.2782195, 30d82f4f-be7f-4957-a953-d4d82be0e42c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.376 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] VM Paused (Lifecycle Event)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.394 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.398 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407042.2868726, 30d82f4f-be7f-4957-a953-d4d82be0e42c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.399 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] VM Resumed (Lifecycle Event)
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.419 2 INFO nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Took 11.48 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.419 2 DEBUG nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.421 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.428 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:42 compute-0 podman[265218]: 2025-10-02 12:10:42.436676344 +0000 UTC m=+0.429424369 container init 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:42 compute-0 podman[265218]: 2025-10-02 12:10:42.443356749 +0000 UTC m=+0.436104754 container start 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:10:42 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [NOTICE]   (265238) : New worker (265240) forked
Oct 02 12:10:42 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [NOTICE]   (265238) : Loading success.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.476 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.514 2 INFO nova.compute.manager [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Took 13.95 seconds to build instance.
Oct 02 12:10:42 compute-0 ceph-mon[73668]: pgmap v918: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 174 op/s
Oct 02 12:10:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct 02 12:10:42 compute-0 nova_compute[256940]: 2025-10-02 12:10:42.580 2 DEBUG oslo_concurrency.lockutils [None req-ea6280f6-7838-4348-a690-c15e05b7fa08 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:43 compute-0 ceph-mon[73668]: pgmap v919: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct 02 12:10:43 compute-0 nova_compute[256940]: 2025-10-02 12:10:43.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:44 compute-0 sudo[265249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:44 compute-0 sudo[265249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:44 compute-0 sudo[265249]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:44 compute-0 sudo[265274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:44 compute-0 sudo[265274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:44 compute-0 sudo[265274]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 384 KiB/s rd, 3.4 MiB/s wr, 149 op/s
Oct 02 12:10:45 compute-0 nova_compute[256940]: 2025-10-02 12:10:45.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:45 compute-0 ceph-mon[73668]: pgmap v920: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 384 KiB/s rd, 3.4 MiB/s wr, 149 op/s
Oct 02 12:10:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:45.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 220 MiB data, 342 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct 02 12:10:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2705683967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/946783354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:47.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.413 2 DEBUG nova.compute.manager [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.414 2 DEBUG oslo_concurrency.lockutils [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.415 2 DEBUG oslo_concurrency.lockutils [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.415 2 DEBUG oslo_concurrency.lockutils [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.415 2 DEBUG nova.compute.manager [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] No waiting events found dispatching network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:47 compute-0 nova_compute[256940]: 2025-10-02 12:10:47.416 2 WARNING nova.compute.manager [req-412e53cf-ee49-4535-b6b5-bd1e65f98871 req-408339c9-de81-4df9-8665-b5abce3174dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received unexpected event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e for instance with vm_state active and task_state None.
Oct 02 12:10:47 compute-0 ceph-mon[73668]: pgmap v921: 305 pgs: 305 active+clean; 220 MiB data, 342 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct 02 12:10:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2087921985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 02 12:10:48 compute-0 nova_compute[256940]: 2025-10-02 12:10:48.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:49 compute-0 nova_compute[256940]: 2025-10-02 12:10:49.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6861] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/34)
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6874] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6897] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/35)
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6907] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6928] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6941] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6950] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:10:49 compute-0 NetworkManager[44981]: <info>  [1759407049.6957] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:10:49 compute-0 ovn_controller[148123]: 2025-10-02T12:10:49Z|00040|binding|INFO|Releasing lport 2abb223e-a39f-4e23-8682-229343d6c8ed from this chassis (sb_readonly=0)
Oct 02 12:10:49 compute-0 nova_compute[256940]: 2025-10-02 12:10:49.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:49 compute-0 ovn_controller[148123]: 2025-10-02T12:10:49Z|00041|binding|INFO|Releasing lport 2abb223e-a39f-4e23-8682-229343d6c8ed from this chassis (sb_readonly=0)
Oct 02 12:10:49 compute-0 nova_compute[256940]: 2025-10-02 12:10:49.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:50.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:50 compute-0 nova_compute[256940]: 2025-10-02 12:10:50.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-0 ceph-mon[73668]: pgmap v922: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 02 12:10:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Oct 02 12:10:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:10:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:52.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:10:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3951825793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:52 compute-0 podman[265304]: 2025-10-02 12:10:52.473176252 +0000 UTC m=+0.129806704 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:10:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 02 12:10:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:53.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:53 compute-0 ceph-mon[73668]: pgmap v923: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Oct 02 12:10:53 compute-0 nova_compute[256940]: 2025-10-02 12:10:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:54 compute-0 ceph-mon[73668]: pgmap v924: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 02 12:10:54 compute-0 podman[265326]: 2025-10-02 12:10:54.45549329 +0000 UTC m=+0.115303654 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 02 12:10:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:55.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.137 2 DEBUG nova.compute.manager [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-changed-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.137 2 DEBUG nova.compute.manager [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Refreshing instance network info cache due to event network-changed-612a7e9e-f5f3-4f20-aacb-e35a26532f7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.138 2 DEBUG oslo_concurrency.lockutils [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.138 2 DEBUG oslo_concurrency.lockutils [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:55 compute-0 nova_compute[256940]: 2025-10-02 12:10:55.138 2 DEBUG nova.network.neutron [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Refreshing network info cache for port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:55 compute-0 ceph-mon[73668]: pgmap v925: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:10:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:56.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 234 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 168 op/s
Oct 02 12:10:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:10:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:10:57 compute-0 ovn_controller[148123]: 2025-10-02T12:10:57Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:d9:0f 10.100.0.12
Oct 02 12:10:57 compute-0 ovn_controller[148123]: 2025-10-02T12:10:57Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:d9:0f 10.100.0.12
Oct 02 12:10:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:58.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:58 compute-0 ceph-mon[73668]: pgmap v926: 305 pgs: 305 active+clean; 234 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 168 op/s
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:10:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 237 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 771 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Oct 02 12:10:58 compute-0 nova_compute[256940]: 2025-10-02 12:10:58.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:10:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:59 compute-0 nova_compute[256940]: 2025-10-02 12:10:59.187 2 DEBUG nova.network.neutron [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updated VIF entry in instance network info cache for port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:59 compute-0 nova_compute[256940]: 2025-10-02 12:10:59.187 2 DEBUG nova.network.neutron [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updating instance_info_cache with network_info: [{"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:59 compute-0 nova_compute[256940]: 2025-10-02 12:10:59.249 2 DEBUG oslo_concurrency.lockutils [req-8f324e81-54a7-43f6-b1b3-c1279a9668b0 req-ccc629b2-0ec1-4efc-a4ee-81f057a81719 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-30d82f4f-be7f-4957-a953-d4d82be0e42c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:00 compute-0 nova_compute[256940]: 2025-10-02 12:11:00.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 242 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 02 12:11:00 compute-0 ceph-mon[73668]: pgmap v927: 305 pgs: 305 active+clean; 237 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 771 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Oct 02 12:11:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:01.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:02.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:02 compute-0 ovn_controller[148123]: 2025-10-02T12:11:02Z|00042|binding|INFO|Releasing lport 2abb223e-a39f-4e23-8682-229343d6c8ed from this chassis (sb_readonly=0)
Oct 02 12:11:02 compute-0 ceph-mon[73668]: pgmap v928: 305 pgs: 305 active+clean; 242 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 02 12:11:02 compute-0 nova_compute[256940]: 2025-10-02 12:11:02.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:11:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:03 compute-0 nova_compute[256940]: 2025-10-02 12:11:03.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:04.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:04 compute-0 ceph-mon[73668]: pgmap v929: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:11:04 compute-0 sudo[265354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:04 compute-0 sudo[265354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:04 compute-0 sudo[265354]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 02 12:11:04 compute-0 sudo[265379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:04 compute-0 sudo[265379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:04 compute-0 sudo[265379]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:05 compute-0 nova_compute[256940]: 2025-10-02 12:11:05.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:05.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:11:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:11:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:06.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:11:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.762088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066762411, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 832, "num_deletes": 255, "total_data_size": 1100092, "memory_usage": 1119136, "flush_reason": "Manual Compaction"}
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066805079, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1077109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20862, "largest_seqno": 21693, "table_properties": {"data_size": 1072939, "index_size": 1822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9373, "raw_average_key_size": 18, "raw_value_size": 1064325, "raw_average_value_size": 2150, "num_data_blocks": 80, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407011, "oldest_key_time": 1759407011, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 43074 microseconds, and 7405 cpu microseconds.
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.805201) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1077109 bytes OK
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.805231) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.808229) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.808245) EVENT_LOG_v1 {"time_micros": 1759407066808239, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.808271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1095981, prev total WAL file size 1095981, number of live WAL files 2.
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.809169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1051KB)], [47(8173KB)]
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066809246, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9446808, "oldest_snapshot_seqno": -1}
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4638 keys, 9302698 bytes, temperature: kUnknown
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066978650, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9302698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9270287, "index_size": 19693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116618, "raw_average_key_size": 25, "raw_value_size": 9184935, "raw_average_value_size": 1980, "num_data_blocks": 814, "num_entries": 4638, "num_filter_entries": 4638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:06 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.979034) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9302698 bytes
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.014298) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.7 rd, 54.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 5167, records dropped: 529 output_compression: NoCompression
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.014354) EVENT_LOG_v1 {"time_micros": 1759407067014334, "job": 24, "event": "compaction_finished", "compaction_time_micros": 169506, "compaction_time_cpu_micros": 38045, "output_level": 6, "num_output_files": 1, "total_output_size": 9302698, "num_input_records": 5167, "num_output_records": 4638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067014899, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067018250, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:06.808990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.018382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.018389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.018392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.018395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:07.018398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:07.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:07 compute-0 ceph-mon[73668]: pgmap v930: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 02 12:11:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 02 12:11:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 02 12:11:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 02 12:11:08 compute-0 ceph-mon[73668]: pgmap v931: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:11:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 234 KiB/s wr, 112 op/s
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.764636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068765098, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 287, "num_deletes": 251, "total_data_size": 62075, "memory_usage": 67784, "flush_reason": "Manual Compaction"}
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068821341, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 61781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21694, "largest_seqno": 21980, "table_properties": {"data_size": 59852, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5051, "raw_average_key_size": 18, "raw_value_size": 56007, "raw_average_value_size": 204, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407067, "oldest_key_time": 1759407067, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 56418 microseconds, and 1695 cpu microseconds.
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:08 compute-0 nova_compute[256940]: 2025-10-02 12:11:08.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.821415) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 61781 bytes OK
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.821449) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.868305) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.868375) EVENT_LOG_v1 {"time_micros": 1759407068868359, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.868415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 59947, prev total WAL file size 59947, number of live WAL files 2.
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.869192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(60KB)], [50(9084KB)]
Oct 02 12:11:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068869244, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9364479, "oldest_snapshot_seqno": -1}
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4399 keys, 7342855 bytes, temperature: kUnknown
Oct 02 12:11:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:09.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407069075976, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7342855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7313756, "index_size": 17021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 112342, "raw_average_key_size": 25, "raw_value_size": 7234193, "raw_average_value_size": 1644, "num_data_blocks": 693, "num_entries": 4399, "num_filter_entries": 4399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.076611) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7342855 bytes
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.142800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.3 rd, 35.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(270.4) write-amplify(118.9) OK, records in: 4912, records dropped: 513 output_compression: NoCompression
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.142854) EVENT_LOG_v1 {"time_micros": 1759407069142824, "job": 26, "event": "compaction_finished", "compaction_time_micros": 206911, "compaction_time_cpu_micros": 30719, "output_level": 6, "num_output_files": 1, "total_output_size": 7342855, "num_input_records": 4912, "num_output_records": 4399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407069143056, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407069145976, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:08.868999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.146032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.146039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.146043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.146046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:11:09.146049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:09 compute-0 ceph-mon[73668]: osdmap e140: 3 total, 3 up, 3 in
Oct 02 12:11:09 compute-0 ceph-mon[73668]: pgmap v933: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 234 KiB/s wr, 112 op/s
Oct 02 12:11:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:10 compute-0 nova_compute[256940]: 2025-10-02 12:11:10.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 200 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Oct 02 12:11:10 compute-0 nova_compute[256940]: 2025-10-02 12:11:10.945 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:10 compute-0 nova_compute[256940]: 2025-10-02 12:11:10.946 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:10 compute-0 nova_compute[256940]: 2025-10-02 12:11:10.969 2 DEBUG nova.objects.instance [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lazy-loading 'flavor' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.022 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.283 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.284 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.284 2 INFO nova.compute.manager [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Attaching volume 4eee902f-54cd-4f12-8095-b5f306236ae9 to /dev/vdb
Oct 02 12:11:11 compute-0 podman[265407]: 2025-10-02 12:11:11.428238018 +0000 UTC m=+0.087328270 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:11:11 compute-0 podman[265408]: 2025-10-02 12:11:11.482165602 +0000 UTC m=+0.136135310 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.529 2 DEBUG os_brick.utils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:11:11 compute-0 nova_compute[256940]: 2025-10-02 12:11:11.530 2 INFO oslo.privsep.daemon [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpb6ahvl2v/privsep.sock']
Oct 02 12:11:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.245 2 INFO oslo.privsep.daemon [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.117 1002 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.124 1002 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.127 1002 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.128 1002 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1002
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.252 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[add0c96e-070f-4249-86ae-050c99f582c6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 02 12:11:12 compute-0 ceph-mon[73668]: pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 200 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Oct 02 12:11:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.356 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.376 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.377 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[e542d92e-d959-418d-8063-48badcd8c018]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.378 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.387 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.388 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4212364f-cb32-40a8-9f6a-42f3fc53b99f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.390 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.402 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.402 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf557a4-7ed1-41bb-bafc-630d0b4ca433]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.404 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[cec4a66d-5e86-4c0d-bb57-316a75f72532]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.405 2 DEBUG oslo_concurrency.processutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.427 2 DEBUG oslo_concurrency.processutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.430 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.431 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.431 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.432 2 DEBUG os_brick.utils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] <== get_connector_properties: return (902ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:11:12 compute-0 nova_compute[256940]: 2025-10-02 12:11:12.432 2 DEBUG nova.virt.block_device [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updating existing volume attachment record: f7717d65-077c-46d0-a16c-a7f5d3d10a2f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:11:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Oct 02 12:11:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.305 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.306 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.307 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.315 2 DEBUG nova.objects.instance [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lazy-loading 'flavor' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.338 2 DEBUG nova.virt.libvirt.driver [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Attempting to attach volume 4eee902f-54cd-4f12-8095-b5f306236ae9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.340 2 DEBUG nova.virt.libvirt.guest [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:11:13 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4eee902f-54cd-4f12-8095-b5f306236ae9">
Oct 02 12:11:13 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   </source>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:11:13 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:11:13 compute-0 nova_compute[256940]:   <serial>4eee902f-54cd-4f12-8095-b5f306236ae9</serial>
Oct 02 12:11:13 compute-0 nova_compute[256940]: </disk>
Oct 02 12:11:13 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:11:13 compute-0 ceph-mon[73668]: osdmap e141: 3 total, 3 up, 3 in
Oct 02 12:11:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2090179139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1377094611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.608 2 DEBUG nova.virt.libvirt.driver [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.610 2 DEBUG nova.virt.libvirt.driver [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.610 2 DEBUG nova.virt.libvirt.driver [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.610 2 DEBUG nova.virt.libvirt.driver [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] No VIF found with MAC fa:16:3e:7e:d9:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.847 2 DEBUG oslo_concurrency.lockutils [None req-4a01043f-2e67-4884-a17b-8af06db170a9 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:13 compute-0 nova_compute[256940]: 2025-10-02 12:11:13.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:14 compute-0 ceph-mon[73668]: pgmap v936: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Oct 02 12:11:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.214 2 DEBUG nova.virt.libvirt.driver [None req-4fe9aecd-7991-4a0d-8f02-0fa1cb494344 da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] volume_snapshot_create: create_info: {'snapshot_id': '937a3aef-9901-48ab-87ea-9a8e5ca77a4b', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [None req-4fe9aecd-7991-4a0d-8f02-0fa1cb494344 da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]     self._volume_snapshot_create(context, instance, guest,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]     raise exception.InternalError(msg)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] nova.exception.InternalError: Found no disk to snapshot.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.219 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 DEBUG nova.virt.libvirt.driver [None req-585a6096-de1b-42c1-b99c-0f04714db5ea da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] volume_snapshot_delete: delete_info: {'volume_id': '4eee902f-54cd-4f12-8095-b5f306236ae9'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [None req-585a6096-de1b-42c1-b99c-0f04714db5ea da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]     self._volume_snapshot_delete(context, instance, volume_id,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c]     if delete_info['type'] != 'qcow2':
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] KeyError: 'type'
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.332 2 ERROR nova.virt.libvirt.driver [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver [None req-4fe9aecd-7991-4a0d-8f02-0fa1cb494344 da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 937a3aef-9901-48ab-87ea-9a8e5ca77a4b could not be found.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot 937a3aef-9901-48ab-87ea-9a8e5ca77a4b could not be found. (HTTP 404) (Request-ID: req-9c14461a-1981-43ca-8192-2c4404947155)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot 937a3aef-9901-48ab-87ea-9a8e5ca77a4b could not be found.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.355 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server [None req-4fe9aecd-7991-4a0d-8f02-0fa1cb494344 da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.359 2 ERROR oslo_messaging.rpc.server 
Oct 02 12:11:15 compute-0 ceph-mon[73668]: pgmap v937: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver [None req-585a6096-de1b-42c1-b99c-0f04714db5ea da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver KeyError: 'type'
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-1eca417e-2052-4b58-9c27-9e6fb6c2b8bd)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.654 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server [None req-585a6096-de1b-42c1-b99c-0f04714db5ea da379ea0e8494b4da47c8d2ccb79c0a8 fc53700185164ac1b7e78cb5b064a7b3 - - default default] Exception during message handling: KeyError: 'type'
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server KeyError: 'type'
Oct 02 12:11:15 compute-0 nova_compute[256940]: 2025-10-02 12:11:15.658 2 ERROR oslo_messaging.rpc.server 
Oct 02 12:11:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:16.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.415 2 DEBUG oslo_concurrency.lockutils [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.415 2 DEBUG oslo_concurrency.lockutils [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.443 2 INFO nova.compute.manager [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Detaching volume 4eee902f-54cd-4f12-8095-b5f306236ae9
Oct 02 12:11:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 696 KiB/s rd, 3.1 MiB/s wr, 190 op/s
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.641 2 INFO nova.virt.block_device [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Attempting to driver detach volume 4eee902f-54cd-4f12-8095-b5f306236ae9 from mountpoint /dev/vdb
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.651 2 DEBUG nova.virt.libvirt.driver [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Attempting to detach device vdb from instance 30d82f4f-be7f-4957-a953-d4d82be0e42c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.652 2 DEBUG nova.virt.libvirt.guest [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4eee902f-54cd-4f12-8095-b5f306236ae9">
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   </source>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <serial>4eee902f-54cd-4f12-8095-b5f306236ae9</serial>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]: </disk>
Oct 02 12:11:16 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.663 2 INFO nova.virt.libvirt.driver [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Successfully detached device vdb from instance 30d82f4f-be7f-4957-a953-d4d82be0e42c from the persistent domain config.
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.664 2 DEBUG nova.virt.libvirt.driver [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 30d82f4f-be7f-4957-a953-d4d82be0e42c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.664 2 DEBUG nova.virt.libvirt.guest [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4eee902f-54cd-4f12-8095-b5f306236ae9">
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   </source>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <serial>4eee902f-54cd-4f12-8095-b5f306236ae9</serial>
Oct 02 12:11:16 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:11:16 compute-0 nova_compute[256940]: </disk>
Oct 02 12:11:16 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:11:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.782 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759407076.7817607, 30d82f4f-be7f-4957-a953-d4d82be0e42c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.784 2 DEBUG nova.virt.libvirt.driver [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 30d82f4f-be7f-4957-a953-d4d82be0e42c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:11:16 compute-0 nova_compute[256940]: 2025-10-02 12:11:16.786 2 INFO nova.virt.libvirt.driver [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Successfully detached device vdb from instance 30d82f4f-be7f-4957-a953-d4d82be0e42c from the live domain config.
Oct 02 12:11:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:17.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:17 compute-0 nova_compute[256940]: 2025-10-02 12:11:17.172 2 DEBUG nova.objects.instance [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lazy-loading 'flavor' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:17 compute-0 nova_compute[256940]: 2025-10-02 12:11:17.213 2 DEBUG oslo_concurrency.lockutils [None req-76d8b829-3459-406d-a34d-4ccb5c4149f7 3cd040a5b7f04e9c9dca681a59e324f6 a5b7ebf61d0a43d39a837b14d0530c61 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:17 compute-0 ceph-mon[73668]: pgmap v938: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 696 KiB/s rd, 3.1 MiB/s wr, 190 op/s
Oct 02 12:11:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:18.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 574 KiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:11:18 compute-0 nova_compute[256940]: 2025-10-02 12:11:18.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:19.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:19 compute-0 ceph-mon[73668]: pgmap v939: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 574 KiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:11:19 compute-0 nova_compute[256940]: 2025-10-02 12:11:19.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:19.713 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:19.716 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:11:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:11:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:11:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:20.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:20 compute-0 nova_compute[256940]: 2025-10-02 12:11:20.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 140 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 520 KiB/s rd, 1.0 MiB/s wr, 137 op/s
Oct 02 12:11:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.145 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.146 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.146 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.147 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.147 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.149 2 INFO nova.compute.manager [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Terminating instance
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.150 2 DEBUG nova.compute.manager [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:11:21 compute-0 kernel: tap612a7e9e-f5 (unregistering): left promiscuous mode
Oct 02 12:11:21 compute-0 NetworkManager[44981]: <info>  [1759407081.2087] device (tap612a7e9e-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:11:21 compute-0 ovn_controller[148123]: 2025-10-02T12:11:21Z|00043|binding|INFO|Releasing lport 612a7e9e-f5f3-4f20-aacb-e35a26532f7e from this chassis (sb_readonly=0)
Oct 02 12:11:21 compute-0 ovn_controller[148123]: 2025-10-02T12:11:21Z|00044|binding|INFO|Setting lport 612a7e9e-f5f3-4f20-aacb-e35a26532f7e down in Southbound
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 ovn_controller[148123]: 2025-10-02T12:11:21Z|00045|binding|INFO|Removing iface tap612a7e9e-f5 ovn-installed in OVS
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.222 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:d9:0f 10.100.0.12'], port_security=['fa:16:3e:7e:d9:0f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30d82f4f-be7f-4957-a953-d4d82be0e42c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c05c3b0c35e4dab854814086f385daa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dfa752d8-a0f0-4663-8bef-3da58b45dff3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6941c30-9635-4c81-a0e3-272f76b5cabf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=612a7e9e-f5f3-4f20-aacb-e35a26532f7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.223 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 612a7e9e-f5f3-4f20-aacb-e35a26532f7e in datapath e1d61fb0-ffae-4b83-8fcd-761f606eb71c unbound from our chassis
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.225 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1d61fb0-ffae-4b83-8fcd-761f606eb71c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.226 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[01378fd9-1fc0-44bf-b66c-104f3590924b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.227 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c namespace which is not needed anymore
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 02 12:11:21 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 15.264s CPU time.
Oct 02 12:11:21 compute-0 systemd-machined[210927]: Machine qemu-2-instance-00000006 terminated.
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.392 2 INFO nova.virt.libvirt.driver [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Instance destroyed successfully.
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.393 2 DEBUG nova.objects.instance [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lazy-loading 'resources' on Instance uuid 30d82f4f-be7f-4957-a953-d4d82be0e42c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [NOTICE]   (265238) : haproxy version is 2.8.14-c23fe91
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [NOTICE]   (265238) : path to executable is /usr/sbin/haproxy
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [WARNING]  (265238) : Exiting Master process...
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [WARNING]  (265238) : Exiting Master process...
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.410 2 DEBUG nova.virt.libvirt.vif [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1439905541',display_name='tempest-VolumesAssistedSnapshotsTest-server-1439905541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1439905541',id=6,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1z0h/VT4LwY6YBDvStM1+9oURDQPQhuK2fEi1Kfqi8zH173Egyyq471LsZFBynS5g1TC34LgQ1g8SybYc1LajPOY4cBHWowUSFKffwiPM2ft4sDwA2HAdWi4Wqa1YypA==',key_name='tempest-keypair-1970104438',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c05c3b0c35e4dab854814086f385daa',ramdisk_id='',reservation_id='r-wkqwqh0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1314821025',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1314821025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40989194be64450c86bad57eee45a7b5',uuid=30d82f4f-be7f-4957-a953-d4d82be0e42c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.411 2 DEBUG nova.network.os_vif_util [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converting VIF {"id": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "address": "fa:16:3e:7e:d9:0f", "network": {"id": "e1d61fb0-ffae-4b83-8fcd-761f606eb71c", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-492733149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c05c3b0c35e4dab854814086f385daa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap612a7e9e-f5", "ovs_interfaceid": "612a7e9e-f5f3-4f20-aacb-e35a26532f7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.411 2 DEBUG nova.network.os_vif_util [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.412 2 DEBUG os_vif [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap612a7e9e-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [ALERT]    (265238) : Current worker (265240) exited with code 143 (Terminated)
Oct 02 12:11:21 compute-0 neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c[265233]: [WARNING]  (265238) : All workers exited. Exiting... (0)
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.419 2 INFO os_vif [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:d9:0f,bridge_name='br-int',has_traffic_filtering=True,id=612a7e9e-f5f3-4f20-aacb-e35a26532f7e,network=Network(e1d61fb0-ffae-4b83-8fcd-761f606eb71c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap612a7e9e-f5')
Oct 02 12:11:21 compute-0 systemd[1]: libpod-3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71.scope: Deactivated successfully.
Oct 02 12:11:21 compute-0 podman[265516]: 2025-10-02 12:11:21.431758941 +0000 UTC m=+0.066725990 container died 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.451 2 DEBUG nova.compute.manager [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-unplugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.452 2 DEBUG oslo_concurrency.lockutils [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.452 2 DEBUG oslo_concurrency.lockutils [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.453 2 DEBUG oslo_concurrency.lockutils [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.453 2 DEBUG nova.compute.manager [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] No waiting events found dispatching network-vif-unplugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.453 2 DEBUG nova.compute.manager [req-ca9e4b5a-0580-41a5-9694-bda3012bb40d req-df5f3137-ccaf-4829-be20-4778353d8189 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-unplugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71-userdata-shm.mount: Deactivated successfully.
Oct 02 12:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fd978805aeae3cca1009fac5c71fe7f6997030c4b6150cecbb7e512b9338008-merged.mount: Deactivated successfully.
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 podman[265516]: 2025-10-02 12:11:21.491259861 +0000 UTC m=+0.126226920 container cleanup 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:11:21 compute-0 systemd[1]: libpod-conmon-3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71.scope: Deactivated successfully.
Oct 02 12:11:21 compute-0 podman[265576]: 2025-10-02 12:11:21.573830205 +0000 UTC m=+0.049853458 container remove 3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.581 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f231e4a-e635-4bae-9a80-4de72250c9dd]: (4, ('Thu Oct  2 12:11:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c (3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71)\n3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71\nThu Oct  2 12:11:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c (3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71)\n3f469ca5475dd0d21799a49c3b53027e877625923ed81cec60b497a1758aaa71\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.583 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[66d24c0d-3463-4a0d-a1e7-0edb3063bb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.584 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1d61fb0-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 kernel: tape1d61fb0-f0: left promiscuous mode
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.607 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d535d535-29ce-4c8b-9b7f-f9977a3a6a59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.631 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f4995cd0-b586-40c2-80f5-686589c0a429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.633 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7614c404-ae06-4bbd-9075-8105d1b939c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.652 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfc356c-a39f-4377-83ae-40c24cd27b29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493682, 'reachable_time': 31112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265589, 'error': None, 'target': 'ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 systemd[1]: run-netns-ovnmeta\x2de1d61fb0\x2dffae\x2d4b83\x2d8fcd\x2d761f606eb71c.mount: Deactivated successfully.
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.659 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e1d61fb0-ffae-4b83-8fcd-761f606eb71c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:11:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:21.659 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[d395dd17-14da-4487-a7b3-acc55ae65187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 02 12:11:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 02 12:11:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 02 12:11:21 compute-0 ceph-mon[73668]: pgmap v940: 305 pgs: 305 active+clean; 140 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 520 KiB/s rd, 1.0 MiB/s wr, 137 op/s
Oct 02 12:11:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1749779519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:21 compute-0 ceph-mon[73668]: osdmap e142: 3 total, 3 up, 3 in
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.951 2 INFO nova.virt.libvirt.driver [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Deleting instance files /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c_del
Oct 02 12:11:21 compute-0 nova_compute[256940]: 2025-10-02 12:11:21.952 2 INFO nova.virt.libvirt.driver [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Deletion of /var/lib/nova/instances/30d82f4f-be7f-4957-a953-d4d82be0e42c_del complete
Oct 02 12:11:22 compute-0 nova_compute[256940]: 2025-10-02 12:11:22.000 2 INFO nova.compute.manager [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 02 12:11:22 compute-0 nova_compute[256940]: 2025-10-02 12:11:22.003 2 DEBUG oslo.service.loopingcall [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:11:22 compute-0 nova_compute[256940]: 2025-10-02 12:11:22.004 2 DEBUG nova.compute.manager [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:11:22 compute-0 nova_compute[256940]: 2025-10-02 12:11:22.005 2 DEBUG nova.network.neutron [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:11:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:22.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:22 compute-0 sudo[265592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:22 compute-0 sudo[265592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-0 sudo[265592]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:22 compute-0 sudo[265619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:22 compute-0 sudo[265619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-0 sudo[265619]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-0 podman[265616]: 2025-10-02 12:11:22.698082849 +0000 UTC m=+0.148262667 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 02 12:11:22 compute-0 sudo[265658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:22 compute-0 sudo[265658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-0 sudo[265658]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-0 sudo[265687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:11:22 compute-0 sudo[265687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:23 compute-0 sudo[265687]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7404a8e0-0cf8-4f92-8d1c-f548e32abf93 does not exist
Oct 02 12:11:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d344672e-f650-43cd-bf36-f4561e569feb does not exist
Oct 02 12:11:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a6a60e6f-b9b3-41c2-a31b-a7717a7b3453 does not exist
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:11:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.550 2 DEBUG nova.compute.manager [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.551 2 DEBUG oslo_concurrency.lockutils [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.552 2 DEBUG oslo_concurrency.lockutils [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.552 2 DEBUG oslo_concurrency.lockutils [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:23 compute-0 sudo[265743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.552 2 DEBUG nova.compute.manager [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] No waiting events found dispatching network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.553 2 WARNING nova.compute.manager [req-b4eeb8e4-660a-4a50-9a06-74d034089c79 req-91ec7a07-a7a8-4403-af1e-1139fe142f55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received unexpected event network-vif-plugged-612a7e9e-f5f3-4f20-aacb-e35a26532f7e for instance with vm_state active and task_state deleting.
Oct 02 12:11:23 compute-0 sudo[265743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:23 compute-0 sudo[265743]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:23 compute-0 sudo[265768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:23 compute-0 sudo[265768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:23 compute-0 sudo[265768]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:23 compute-0 sudo[265793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:23 compute-0 sudo[265793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:23 compute-0 sudo[265793]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:23 compute-0 ceph-mon[73668]: pgmap v942: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:11:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:23 compute-0 sudo[265818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:11:23 compute-0 sudo[265818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:23 compute-0 nova_compute[256940]: 2025-10-02 12:11:23.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.039 2 DEBUG nova.network.neutron [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.056 2 INFO nova.compute.manager [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Took 2.05 seconds to deallocate network for instance.
Oct 02 12:11:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:24.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.092 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.092 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.150 2 DEBUG oslo_concurrency.processutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.262 2 DEBUG nova.compute.manager [req-d1c28509-2e57-49a4-acff-cb7081959556 req-542175e5-1a01-4288-97a2-96ecb9504a73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Received event network-vif-deleted-612a7e9e-f5f3-4f20-aacb-e35a26532f7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.285421993 +0000 UTC m=+0.068260771 container create 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:11:24 compute-0 systemd[1]: Started libpod-conmon-1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d.scope.
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.249164672 +0000 UTC m=+0.032003510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.3864062 +0000 UTC m=+0.169245008 container init 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.398239551 +0000 UTC m=+0.181078339 container start 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.401487926 +0000 UTC m=+0.184326714 container attach 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:24 compute-0 mystifying_jackson[265920]: 167 167
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.406784335 +0000 UTC m=+0.189623123 container died 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:11:24 compute-0 systemd[1]: libpod-1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d.scope: Deactivated successfully.
Oct 02 12:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce6fadabdffe07963fee27ae91ac7ee4280f1653fadb743b4d564dc547080b6-merged.mount: Deactivated successfully.
Oct 02 12:11:24 compute-0 podman[265885]: 2025-10-02 12:11:24.455900872 +0000 UTC m=+0.238739660 container remove 1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:11:24 compute-0 systemd[1]: libpod-conmon-1d15b7c19933511ffdc902b70e10c66f350d2293219330793f0bee7a9fcf462d.scope: Deactivated successfully.
Oct 02 12:11:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691398620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.636 2 DEBUG oslo_concurrency.processutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.643 2 DEBUG nova.compute.provider_tree [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:24 compute-0 podman[265940]: 2025-10-02 12:11:24.662792366 +0000 UTC m=+0.144003056 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.668 2 DEBUG nova.scheduler.client.report [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.691 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:24 compute-0 podman[265966]: 2025-10-02 12:11:24.719974055 +0000 UTC m=+0.074196596 container create ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.727 2 INFO nova.scheduler.client.report [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Deleted allocations for instance 30d82f4f-be7f-4957-a953-d4d82be0e42c
Oct 02 12:11:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3691398620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:24 compute-0 sudo[265973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:24 compute-0 sudo[265973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:24 compute-0 sudo[265973]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:24 compute-0 systemd[1]: Started libpod-conmon-ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13.scope.
Oct 02 12:11:24 compute-0 podman[265966]: 2025-10-02 12:11:24.693026019 +0000 UTC m=+0.047248600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:24 compute-0 nova_compute[256940]: 2025-10-02 12:11:24.821 2 DEBUG oslo_concurrency.lockutils [None req-21a2c277-b6f5-4b18-842b-058e0cb77cff 40989194be64450c86bad57eee45a7b5 3c05c3b0c35e4dab854814086f385daa - - default default] Lock "30d82f4f-be7f-4957-a953-d4d82be0e42c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:24 compute-0 podman[265966]: 2025-10-02 12:11:24.832880905 +0000 UTC m=+0.187103456 container init ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:11:24 compute-0 podman[265966]: 2025-10-02 12:11:24.845847485 +0000 UTC m=+0.200069996 container start ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:24 compute-0 podman[265966]: 2025-10-02 12:11:24.851598986 +0000 UTC m=+0.205821497 container attach ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:11:24 compute-0 sudo[266007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:24 compute-0 sudo[266007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:24 compute-0 sudo[266007]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:25.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:25 compute-0 nova_compute[256940]: 2025-10-02 12:11:25.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:25 compute-0 xenodochial_mccarthy[266010]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:11:25 compute-0 xenodochial_mccarthy[266010]: --> relative data size: 1.0
Oct 02 12:11:25 compute-0 xenodochial_mccarthy[266010]: --> All data devices are unavailable
Oct 02 12:11:25 compute-0 ceph-mon[73668]: pgmap v943: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:25 compute-0 systemd[1]: libpod-ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13.scope: Deactivated successfully.
Oct 02 12:11:25 compute-0 podman[265966]: 2025-10-02 12:11:25.807709972 +0000 UTC m=+1.161932513 container died ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55b87dc4dc219d74c850c9e54d2ce28e3151ea064af169d92866d7e5b43d69f-merged.mount: Deactivated successfully.
Oct 02 12:11:25 compute-0 podman[265966]: 2025-10-02 12:11:25.885394298 +0000 UTC m=+1.239616829 container remove ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:11:25 compute-0 systemd[1]: libpod-conmon-ee26acff6c1a4366098c330a8ce20994d2a109a5615d601f9fc99d8e176d5b13.scope: Deactivated successfully.
Oct 02 12:11:25 compute-0 sudo[265818]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:26 compute-0 sudo[266061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:26 compute-0 sudo[266061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:26 compute-0 sudo[266061]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:26.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:26 compute-0 sudo[266086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:26 compute-0 sudo[266086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:26 compute-0 sudo[266086]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:26 compute-0 sudo[266111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:26 compute-0 sudo[266111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:26 compute-0 sudo[266111]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:26 compute-0 sudo[266136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:11:26 compute-0 sudo[266136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:26 compute-0 nova_compute[256940]: 2025-10-02 12:11:26.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:26.448 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:26.448 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:26.448 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 63 KiB/s wr, 105 op/s
Oct 02 12:11:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.730521384 +0000 UTC m=+0.058606908 container create 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:26 compute-0 systemd[1]: Started libpod-conmon-02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79.scope.
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.702594912 +0000 UTC m=+0.030680486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.830951856 +0000 UTC m=+0.159037370 container init 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.841551594 +0000 UTC m=+0.169637108 container start 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.845929589 +0000 UTC m=+0.174015103 container attach 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:11:26 compute-0 clever_bose[266218]: 167 167
Oct 02 12:11:26 compute-0 systemd[1]: libpod-02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79.scope: Deactivated successfully.
Oct 02 12:11:26 compute-0 conmon[266218]: conmon 02c417c3c20181d2c42a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79.scope/container/memory.events
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.852481591 +0000 UTC m=+0.180567115 container died 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f835044c28aa38ffcd1d67c65d5d840f694e0c451727e99c1051e1e583e4ca9-merged.mount: Deactivated successfully.
Oct 02 12:11:26 compute-0 podman[266201]: 2025-10-02 12:11:26.904830363 +0000 UTC m=+0.232915847 container remove 02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:26 compute-0 systemd[1]: libpod-conmon-02c417c3c20181d2c42a7c39811444ebd9b6e325cdba0989221c179073d05a79.scope: Deactivated successfully.
Oct 02 12:11:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:27 compute-0 podman[266243]: 2025-10-02 12:11:27.164959763 +0000 UTC m=+0.072513842 container create f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:11:27 compute-0 systemd[1]: Started libpod-conmon-f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca.scope.
Oct 02 12:11:27 compute-0 podman[266243]: 2025-10-02 12:11:27.140611134 +0000 UTC m=+0.048165233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f91d37dcdad11af95579d2af3e67d54f3b36bca9e88ac48c9489837a9ffb037/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f91d37dcdad11af95579d2af3e67d54f3b36bca9e88ac48c9489837a9ffb037/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f91d37dcdad11af95579d2af3e67d54f3b36bca9e88ac48c9489837a9ffb037/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f91d37dcdad11af95579d2af3e67d54f3b36bca9e88ac48c9489837a9ffb037/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:27 compute-0 podman[266243]: 2025-10-02 12:11:27.281792136 +0000 UTC m=+0.189346255 container init f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:11:27 compute-0 podman[266243]: 2025-10-02 12:11:27.301246756 +0000 UTC m=+0.208800845 container start f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:11:27 compute-0 podman[266243]: 2025-10-02 12:11:27.332202757 +0000 UTC m=+0.239756846 container attach f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:27 compute-0 ceph-mon[73668]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 63 KiB/s wr, 105 op/s
Oct 02 12:11:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1743318663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:28 compute-0 epic_dewdney[266259]: {
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:     "1": [
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:         {
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "devices": [
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "/dev/loop3"
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             ],
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "lv_name": "ceph_lv0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "lv_size": "7511998464",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "name": "ceph_lv0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "tags": {
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.cluster_name": "ceph",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.crush_device_class": "",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.encrypted": "0",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.osd_id": "1",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.type": "block",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:                 "ceph.vdo": "0"
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             },
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "type": "block",
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:             "vg_name": "ceph_vg0"
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:         }
Oct 02 12:11:28 compute-0 epic_dewdney[266259]:     ]
Oct 02 12:11:28 compute-0 epic_dewdney[266259]: }
Oct 02 12:11:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:28.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:28 compute-0 systemd[1]: libpod-f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca.scope: Deactivated successfully.
Oct 02 12:11:28 compute-0 podman[266243]: 2025-10-02 12:11:28.093004743 +0000 UTC m=+1.000558822 container died f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:11:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f91d37dcdad11af95579d2af3e67d54f3b36bca9e88ac48c9489837a9ffb037-merged.mount: Deactivated successfully.
Oct 02 12:11:28 compute-0 podman[266243]: 2025-10-02 12:11:28.256331475 +0000 UTC m=+1.163885564 container remove f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:11:28 compute-0 systemd[1]: libpod-conmon-f910255c69d4e3414db6e10754900f845e7a90a18b9edd55c37c0fa94e6f7bca.scope: Deactivated successfully.
Oct 02 12:11:28 compute-0 sudo[266136]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:28 compute-0 sudo[266284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:28 compute-0 sudo[266284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:28 compute-0 sudo[266284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.491 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.494 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:28 compute-0 sudo[266310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.512 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:11:28 compute-0 sudo[266310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:28 compute-0 sudo[266310]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 5.8 KiB/s wr, 98 op/s
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.583 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.583 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.592 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.592 2 INFO nova.compute.claims [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:11:28 compute-0 sudo[266335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:28 compute-0 sudo[266335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:28 compute-0 sudo[266335]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:11:28
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.rgw.root']
Oct 02 12:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.681 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:28 compute-0 sudo[266360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:11:28 compute-0 sudo[266360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:28.719 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:28 compute-0 nova_compute[256940]: 2025-10-02 12:11:28.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658119017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.136 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.142 2 DEBUG nova.compute.provider_tree [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.160 2 DEBUG nova.scheduler.client.report [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.183613355 +0000 UTC m=+0.063103646 container create 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.184 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.185 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.158763093 +0000 UTC m=+0.038253484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:29 compute-0 systemd[1]: Started libpod-conmon-998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db.scope.
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.274 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.275 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.316 2 INFO nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.328485723 +0000 UTC m=+0.207976104 container init 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.336219895 +0000 UTC m=+0.215710216 container start 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.340947159 +0000 UTC m=+0.220437490 container attach 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:11:29 compute-0 clever_bhaskara[266465]: 167 167
Oct 02 12:11:29 compute-0 systemd[1]: libpod-998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db.scope: Deactivated successfully.
Oct 02 12:11:29 compute-0 conmon[266465]: conmon 998caee9d78864499c70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db.scope/container/memory.events
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.346811293 +0000 UTC m=+0.226301624 container died 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-88469fe908b59aa2e056218343ef847c49882aa6c4f66f4917d638f2a6a93bd4-merged.mount: Deactivated successfully.
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.396 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:11:29 compute-0 podman[266446]: 2025-10-02 12:11:29.403295934 +0000 UTC m=+0.282786255 container remove 998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:11:29 compute-0 systemd[1]: libpod-conmon-998caee9d78864499c7011bb1fe12a5c82a2cb8ff71e4cd3f770d8b933ae76db.scope: Deactivated successfully.
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.483 2 DEBUG nova.policy [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59e4926e6a8749a8a393630a15b70607', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f66f0e949a840d789f5980d83c5f225', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.553 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.555 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.556 2 INFO nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Creating image(s)
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.592 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.637 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:29 compute-0 podman[266490]: 2025-10-02 12:11:29.646294874 +0000 UTC m=+0.081406715 container create acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.688 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.695 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:29 compute-0 podman[266490]: 2025-10-02 12:11:29.616662738 +0000 UTC m=+0.051774619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:11:29 compute-0 systemd[1]: Started libpod-conmon-acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f.scope.
Oct 02 12:11:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5b89c2cf34e2d00fab9abdf94e46eadb2bf1ae99bd8a96da8148e29443e04f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5b89c2cf34e2d00fab9abdf94e46eadb2bf1ae99bd8a96da8148e29443e04f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5b89c2cf34e2d00fab9abdf94e46eadb2bf1ae99bd8a96da8148e29443e04f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5b89c2cf34e2d00fab9abdf94e46eadb2bf1ae99bd8a96da8148e29443e04f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:29 compute-0 podman[266490]: 2025-10-02 12:11:29.769633448 +0000 UTC m=+0.204745259 container init acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.778 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.779 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.780 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.780 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:29 compute-0 podman[266490]: 2025-10-02 12:11:29.78190669 +0000 UTC m=+0.217018501 container start acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:11:29 compute-0 podman[266490]: 2025-10-02 12:11:29.78650567 +0000 UTC m=+0.221617491 container attach acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.812 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:29 compute-0 nova_compute[256940]: 2025-10-02 12:11:29.817 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a80fe91d-8a49-4d15-9036-30c510b44f99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:29 compute-0 ceph-mon[73668]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 5.8 KiB/s wr, 98 op/s
Oct 02 12:11:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2720756083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1658119017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3626303909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:30.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.120 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a80fe91d-8a49-4d15-9036-30c510b44f99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.213 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] resizing rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.351 2 DEBUG nova.objects.instance [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lazy-loading 'migration_context' on Instance uuid a80fe91d-8a49-4d15-9036-30c510b44f99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.470 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.471 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Ensure instance console log exists: /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.471 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.472 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.472 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 91 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Oct 02 12:11:30 compute-0 nova_compute[256940]: 2025-10-02 12:11:30.670 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Successfully created port: d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:11:30 compute-0 cranky_hellman[266561]: {
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:         "osd_id": 1,
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:         "type": "bluestore"
Oct 02 12:11:30 compute-0 cranky_hellman[266561]:     }
Oct 02 12:11:30 compute-0 cranky_hellman[266561]: }
Oct 02 12:11:30 compute-0 systemd[1]: libpod-acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f.scope: Deactivated successfully.
Oct 02 12:11:30 compute-0 conmon[266561]: conmon acc436a856897308839d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f.scope/container/memory.events
Oct 02 12:11:30 compute-0 podman[266694]: 2025-10-02 12:11:30.78821657 +0000 UTC m=+0.036466347 container died acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d5b89c2cf34e2d00fab9abdf94e46eadb2bf1ae99bd8a96da8148e29443e04f-merged.mount: Deactivated successfully.
Oct 02 12:11:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/955745629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:30 compute-0 podman[266694]: 2025-10-02 12:11:30.87366485 +0000 UTC m=+0.121914558 container remove acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:11:30 compute-0 systemd[1]: libpod-conmon-acc436a856897308839db447eb02f0abc444c00c4dc6603fe9cd236799473e0f.scope: Deactivated successfully.
Oct 02 12:11:30 compute-0 sudo[266360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:11:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:11:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 12b52b28-d91e-46ad-a892-6bf5b5b90950 does not exist
Oct 02 12:11:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f3b1ef42-57aa-476f-af35-73d1b619d6dd does not exist
Oct 02 12:11:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ea780fa9-0e5f-47f0-afda-37500de12156 does not exist
Oct 02 12:11:31 compute-0 sudo[266709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:31 compute-0 sudo[266709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:31 compute-0 sudo[266709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:31 compute-0 sudo[266734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:11:31 compute-0 sudo[266734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:31 compute-0 sudo[266734]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:31 compute-0 nova_compute[256940]: 2025-10-02 12:11:31.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:31 compute-0 ceph-mon[73668]: pgmap v946: 305 pgs: 305 active+clean; 91 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Oct 02 12:11:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2081085421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.082 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Successfully updated port: d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:11:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.109 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.110 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquired lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.110 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:11:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 3.4 MiB/s wr, 65 op/s
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.766 2 DEBUG nova.compute.manager [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.766 2 DEBUG nova.compute.manager [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing instance network info cache due to event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:32 compute-0 nova_compute[256940]: 2025-10-02 12:11:32.766 2 DEBUG oslo_concurrency.lockutils [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2323166239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:33 compute-0 nova_compute[256940]: 2025-10-02 12:11:33.707 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:11:33 compute-0 nova_compute[256940]: 2025-10-02 12:11:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:34 compute-0 ceph-mon[73668]: pgmap v947: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 3.4 MiB/s wr, 65 op/s
Oct 02 12:11:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/51653874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Oct 02 12:11:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:35.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.728 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.729 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.851 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.851 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.851 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.988 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.989 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.991 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.991 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.991 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.992 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.992 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.992 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:35 compute-0 nova_compute[256940]: 2025-10-02 12:11:35.992 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:11:36 compute-0 ceph-mon[73668]: pgmap v948: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Oct 02 12:11:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:36.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.360 2 DEBUG nova.network.neutron [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updating instance_info_cache with network_info: [{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407081.3885815, 30d82f4f-be7f-4957-a953-d4d82be0e42c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.390 2 INFO nova.compute.manager [-] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] VM Stopped (Lifecycle Event)
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.446 2 DEBUG nova.compute.manager [None req-1ac257bc-c931-4a85-8c5d-b8fd2bccdafc - - - - - -] [instance: 30d82f4f-be7f-4957-a953-d4d82be0e42c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.447 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Releasing lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.447 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Instance network_info: |[{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.448 2 DEBUG oslo_concurrency.lockutils [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.448 2 DEBUG nova.network.neutron [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.453 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Start _get_guest_xml network_info=[{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.461 2 WARNING nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.466 2 DEBUG nova.virt.libvirt.host [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.468 2 DEBUG nova.virt.libvirt.host [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.471 2 DEBUG nova.virt.libvirt.host [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.471 2 DEBUG nova.virt.libvirt.host [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.473 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.473 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.474 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.474 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.475 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.475 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.475 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.476 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.476 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.476 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.477 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.477 2 DEBUG nova.virt.hardware [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.481 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 105 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:11:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548667372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.913 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.961 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:36 compute-0 nova_compute[256940]: 2025-10-02 12:11:36.968 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1510747912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1548667372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:37.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328603544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.459 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.461 2 DEBUG nova.virt.libvirt.vif [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-156317645',id=9,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f66f0e949a840d789f5980d83c5f225',ramdisk_id='',reservation_id='r-3abrnehi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:29Z,user_data=None,user_id='59e4926e6a8749a8a393630a15b70607',uuid=a80fe91d-8a49-4d15-9036-30c510b44f99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.461 2 DEBUG nova.network.os_vif_util [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converting VIF {"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.462 2 DEBUG nova.network.os_vif_util [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.463 2 DEBUG nova.objects.instance [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lazy-loading 'pci_devices' on Instance uuid a80fe91d-8a49-4d15-9036-30c510b44f99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.512 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <uuid>a80fe91d-8a49-4d15-9036-30c510b44f99</uuid>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <name>instance-00000009</name>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459</nova:name>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:11:36</nova:creationTime>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:user uuid="59e4926e6a8749a8a393630a15b70607">tempest-FloatingIPsAssociationNegativeTestJSON-1891605502-project-member</nova:user>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:project uuid="7f66f0e949a840d789f5980d83c5f225">tempest-FloatingIPsAssociationNegativeTestJSON-1891605502</nova:project>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <nova:port uuid="d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1">
Oct 02 12:11:37 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <system>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="serial">a80fe91d-8a49-4d15-9036-30c510b44f99</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="uuid">a80fe91d-8a49-4d15-9036-30c510b44f99</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </system>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <os>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </os>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <features>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </features>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a80fe91d-8a49-4d15-9036-30c510b44f99_disk">
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config">
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:11:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:31:a3:ca"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <target dev="tapd1df7f09-3c"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/console.log" append="off"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <video>
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </video>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:11:37 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:11:37 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:11:37 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:11:37 compute-0 nova_compute[256940]: </domain>
Oct 02 12:11:37 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.513 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Preparing to wait for external event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.513 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.514 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.514 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.515 2 DEBUG nova.virt.libvirt.vif [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-156317645',id=9,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f66f0e949a840d789f5980d83c5f225',ramdisk_id='',reservation_id='r-3abrnehi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:29Z,user_data=None,user_id='59e4926e6a8749a8a393630a15b70607',uuid=a80fe91d-8a49-4d15-9036-30c510b44f99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.515 2 DEBUG nova.network.os_vif_util [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converting VIF {"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.516 2 DEBUG nova.network.os_vif_util [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.517 2 DEBUG os_vif [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1df7f09-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1df7f09-3c, col_values=(('external_ids', {'iface-id': 'd1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:a3:ca', 'vm-uuid': 'a80fe91d-8a49-4d15-9036-30c510b44f99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-0 NetworkManager[44981]: <info>  [1759407097.5652] manager: (tapd1df7f09-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.577 2 INFO os_vif [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c')
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.667 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.667 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.668 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] No VIF found with MAC fa:16:3e:31:a3:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.669 2 INFO nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Using config drive
Oct 02 12:11:37 compute-0 nova_compute[256940]: 2025-10-02 12:11:37.708 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:38.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:38 compute-0 ceph-mon[73668]: pgmap v949: 305 pgs: 305 active+clean; 105 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:11:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/328603544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.245 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471206287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.728 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.867 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.868 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:11:38 compute-0 nova_compute[256940]: 2025-10-02 12:11:38.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.105 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.108 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4856MB free_disk=20.95922088623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.109 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.109 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1471206287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.283 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance a80fe91d-8a49-4d15-9036-30c510b44f99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.284 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.285 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.400 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.498 2 INFO nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Creating config drive at /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.508 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxn6szq33 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.653 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxn6szq33" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.702 2 DEBUG nova.storage.rbd_utils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] rbd image a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.708 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006113779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.899 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.904 2 DEBUG oslo_concurrency.processutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config a80fe91d-8a49-4d15-9036-30c510b44f99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.905 2 INFO nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Deleting local config drive /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99/disk.config because it was imported into RBD.
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.911 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:11:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.929 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:39 compute-0 kernel: tapd1df7f09-3c: entered promiscuous mode
Oct 02 12:11:39 compute-0 NetworkManager[44981]: <info>  [1759407099.9789] manager: (tapd1df7f09-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.994 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:11:39 compute-0 nova_compute[256940]: 2025-10-02 12:11:39.995 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:40 compute-0 systemd-udevd[266940]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 ovn_controller[148123]: 2025-10-02T12:11:40Z|00046|binding|INFO|Claiming lport d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 for this chassis.
Oct 02 12:11:40 compute-0 ovn_controller[148123]: 2025-10-02T12:11:40Z|00047|binding|INFO|d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1: Claiming fa:16:3e:31:a3:ca 10.100.0.12
Oct 02 12:11:40 compute-0 NetworkManager[44981]: <info>  [1759407100.0435] device (tapd1df7f09-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:11:40 compute-0 NetworkManager[44981]: <info>  [1759407100.0452] device (tapd1df7f09-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.044 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:a3:ca 10.100.0.12'], port_security=['fa:16:3e:31:a3:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a80fe91d-8a49-4d15-9036-30c510b44f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f66f0e949a840d789f5980d83c5f225', 'neutron:revision_number': '2', 'neutron:security_group_ids': '32b10fbd-cf4a-4d31-b068-19efd1a2ef75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c290c8-3e63-4fff-8d21-adb217745716, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.046 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 in datapath a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f bound to our chassis
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.048 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.062 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[498e21c7-1471-48fd-b0d2-f5544ad85bf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.063 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8ed8159-51 in ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:11:40 compute-0 systemd-machined[210927]: New machine qemu-3-instance-00000009.
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.068 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8ed8159-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.068 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[04b57938-d94d-4216-b87d-4077ab85bad7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.071 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b69623c6-95b2-45b4-ab13-b3e9923521c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.092 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb1950d-c096-4aaa-a233-3b36cd99e566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 12:11:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 12:11:40 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000009.
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 ovn_controller[148123]: 2025-10-02T12:11:40Z|00048|binding|INFO|Setting lport d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 ovn-installed in OVS
Oct 02 12:11:40 compute-0 ovn_controller[148123]: 2025-10-02T12:11:40Z|00049|binding|INFO|Setting lport d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 up in Southbound
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.121 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdacea0-9987-4bd3-ae12-9185be969626]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 ceph-mon[73668]: pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4006113779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.162 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c9d28-a222-4811-8167-14fc81141234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 systemd-udevd[266944]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[096169b4-ee5a-4eee-a91a-7c8d45c7135a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 NetworkManager[44981]: <info>  [1759407100.1725] manager: (tapa8ed8159-50): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.210 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d4afb7fe-80f2-4ff6-a40d-7511f2fa049a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.213 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb8aedb-1a18-4546-a688-b69dee5f2bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 NetworkManager[44981]: <info>  [1759407100.2372] device (tapa8ed8159-50): carrier: link connected
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.244 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[61233ecb-6ef1-4e8d-a814-aa7f5a1ca784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.265 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf474686-51ea-4a8a-ac7d-c3ed5a91419c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ed8159-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:fa:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499572, 'reachable_time': 44140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266976, 'error': None, 'target': 'ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.287 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fa9dba-9830-4dc6-b406-c6f47abec20c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:fa7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499572, 'tstamp': 499572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266977, 'error': None, 'target': 'ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.310 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1746325d-973f-41b4-9ae6-35836b96bf86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ed8159-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:fa:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499572, 'reachable_time': 44140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266978, 'error': None, 'target': 'ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.360 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ad4699-0eda-4150-a5b6-a6322abfb0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.453 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ed411933-904c-4b1b-a434-e0e0bd361020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.455 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ed8159-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.455 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.456 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8ed8159-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 kernel: tapa8ed8159-50: entered promiscuous mode
Oct 02 12:11:40 compute-0 NetworkManager[44981]: <info>  [1759407100.4598] manager: (tapa8ed8159-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.463 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8ed8159-50, col_values=(('external_ids', {'iface-id': '0302832e-d363-45d6-b1cd-d5dbda0568a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:40 compute-0 ovn_controller[148123]: 2025-10-02T12:11:40Z|00050|binding|INFO|Releasing lport 0302832e-d363-45d6-b1cd-d5dbda0568a7 from this chassis (sb_readonly=0)
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.485 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.486 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f2145a54-a935-4fbd-92f1-810a5e780266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.487 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f.pid.haproxy
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:11:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:11:40.489 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'env', 'PROCESS_TAG=haproxy-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.511 2 DEBUG nova.network.neutron [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updated VIF entry in instance network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.512 2 DEBUG nova.network.neutron [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updating instance_info_cache with network_info: [{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.570 2 DEBUG oslo_concurrency.lockutils [req-b71c7a9b-8d56-4064-989e-c2342375bb13 req-6b2a7ae1-bf8e-46b6-bb97-9e217d69e46e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.645 2 DEBUG nova.compute.manager [req-a2b988fc-390d-4d79-8ebe-ddae446c1415 req-7b0ce6b6-1310-43be-bca0-331b2546d2dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.645 2 DEBUG oslo_concurrency.lockutils [req-a2b988fc-390d-4d79-8ebe-ddae446c1415 req-7b0ce6b6-1310-43be-bca0-331b2546d2dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.646 2 DEBUG oslo_concurrency.lockutils [req-a2b988fc-390d-4d79-8ebe-ddae446c1415 req-7b0ce6b6-1310-43be-bca0-331b2546d2dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.646 2 DEBUG oslo_concurrency.lockutils [req-a2b988fc-390d-4d79-8ebe-ddae446c1415 req-7b0ce6b6-1310-43be-bca0-331b2546d2dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:40 compute-0 nova_compute[256940]: 2025-10-02 12:11:40.646 2 DEBUG nova.compute.manager [req-a2b988fc-390d-4d79-8ebe-ddae446c1415 req-7b0ce6b6-1310-43be-bca0-331b2546d2dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Processing event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:11:40 compute-0 podman[267053]: 2025-10-02 12:11:40.986367535 +0000 UTC m=+0.090215626 container create b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:11:41 compute-0 podman[267053]: 2025-10-02 12:11:40.940620066 +0000 UTC m=+0.044468217 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:11:41 compute-0 systemd[1]: Started libpod-conmon-b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267.scope.
Oct 02 12:11:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468678b84b97670deb1c7bcb3dfb804c966fee8719725d31482e7d88ac99df45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:11:41 compute-0 podman[267053]: 2025-10-02 12:11:41.085375401 +0000 UTC m=+0.189223512 container init b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:11:41 compute-0 podman[267053]: 2025-10-02 12:11:41.093256928 +0000 UTC m=+0.197105019 container start b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:11:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:41 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [NOTICE]   (267072) : New worker (267074) forked
Oct 02 12:11:41 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [NOTICE]   (267072) : Loading success.
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.228 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.230 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407101.2293353, a80fe91d-8a49-4d15-9036-30c510b44f99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.230 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] VM Started (Lifecycle Event)
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.233 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.237 2 INFO nova.virt.libvirt.driver [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Instance spawned successfully.
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.238 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.277 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.288 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.294 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.295 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.296 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.297 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.298 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.299 2 DEBUG nova.virt.libvirt.driver [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.390 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.390 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407101.2294943, a80fe91d-8a49-4d15-9036-30c510b44f99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.391 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] VM Paused (Lifecycle Event)
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.518 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.525 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407101.2333198, a80fe91d-8a49-4d15-9036-30c510b44f99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.526 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] VM Resumed (Lifecycle Event)
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.576 2 INFO nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Took 12.02 seconds to spawn the instance on the hypervisor.
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.577 2 DEBUG nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.589 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.593 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.646 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.685 2 INFO nova.compute.manager [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Took 13.13 seconds to build instance.
Oct 02 12:11:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:41 compute-0 nova_compute[256940]: 2025-10-02 12:11:41.742 2 DEBUG oslo_concurrency.lockutils [None req-b2fa491a-77cc-4d44-891f-195556e913ee 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:42.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:42 compute-0 ceph-mon[73668]: pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:42 compute-0 podman[267083]: 2025-10-02 12:11:42.423520813 +0000 UTC m=+0.086027957 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:11:42 compute-0 podman[267084]: 2025-10-02 12:11:42.519736985 +0000 UTC m=+0.173935501 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.904 2 DEBUG nova.compute.manager [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.905 2 DEBUG oslo_concurrency.lockutils [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.906 2 DEBUG oslo_concurrency.lockutils [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.906 2 DEBUG oslo_concurrency.lockutils [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.907 2 DEBUG nova.compute.manager [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] No waiting events found dispatching network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:42 compute-0 nova_compute[256940]: 2025-10-02 12:11:42.907 2 WARNING nova.compute.manager [req-34e4e910-41f3-4b8c-a7e6-d1e127b6a9a4 req-b6b92822-e491-41f6-befb-ee4e5690149a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received unexpected event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 for instance with vm_state active and task_state None.
Oct 02 12:11:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:43.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:43 compute-0 nova_compute[256940]: 2025-10-02 12:11:43.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:44 compute-0 ceph-mon[73668]: pgmap v952: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 02 12:11:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3263702750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 547 KiB/s wr, 122 op/s
Oct 02 12:11:44 compute-0 sudo[267131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:44 compute-0 sudo[267131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:44 compute-0 sudo[267131]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:45 compute-0 sudo[267156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:45 compute-0 sudo[267156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:45 compute-0 sudo[267156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:46 compute-0 ceph-mon[73668]: pgmap v953: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 547 KiB/s wr, 122 op/s
Oct 02 12:11:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4248775936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2283155084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 102 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 196 op/s
Oct 02 12:11:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:47 compute-0 nova_compute[256940]: 2025-10-02 12:11:47.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:48 compute-0 ceph-mon[73668]: pgmap v954: 305 pgs: 305 active+clean; 102 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 196 op/s
Oct 02 12:11:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 122 MiB data, 296 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Oct 02 12:11:48 compute-0 nova_compute[256940]: 2025-10-02 12:11:48.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:50 compute-0 ceph-mon[73668]: pgmap v955: 305 pgs: 305 active+clean; 122 MiB data, 296 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Oct 02 12:11:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:11:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:11:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:11:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-0 ovn_controller[148123]: 2025-10-02T12:11:52Z|00051|binding|INFO|Releasing lport 0302832e-d363-45d6-b1cd-d5dbda0568a7 from this chassis (sb_readonly=0)
Oct 02 12:11:52 compute-0 NetworkManager[44981]: <info>  [1759407112.0416] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 02 12:11:52 compute-0 NetworkManager[44981]: <info>  [1759407112.0430] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 02 12:11:52 compute-0 ovn_controller[148123]: 2025-10-02T12:11:52Z|00052|binding|INFO|Releasing lport 0302832e-d363-45d6-b1cd-d5dbda0568a7 from this chassis (sb_readonly=0)
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:52.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:52 compute-0 ceph-mon[73668]: pgmap v956: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:11:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.781 2 DEBUG nova.compute.manager [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.781 2 DEBUG nova.compute.manager [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing instance network info cache due to event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.782 2 DEBUG oslo_concurrency.lockutils [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.782 2 DEBUG oslo_concurrency.lockutils [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:52 compute-0 nova_compute[256940]: 2025-10-02 12:11:52.782 2 DEBUG nova.network.neutron [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:53 compute-0 podman[267187]: 2025-10-02 12:11:53.427223857 +0000 UTC m=+0.083888260 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:11:53 compute-0 nova_compute[256940]: 2025-10-02 12:11:53.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:54 compute-0 ceph-mon[73668]: pgmap v957: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 02 12:11:54 compute-0 ovn_controller[148123]: 2025-10-02T12:11:54Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:a3:ca 10.100.0.12
Oct 02 12:11:54 compute-0 ovn_controller[148123]: 2025-10-02T12:11:54Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:a3:ca 10.100.0.12
Oct 02 12:11:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 02 12:11:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:55 compute-0 nova_compute[256940]: 2025-10-02 12:11:55.135 2 DEBUG nova.network.neutron [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updated VIF entry in instance network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:55 compute-0 nova_compute[256940]: 2025-10-02 12:11:55.136 2 DEBUG nova.network.neutron [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updating instance_info_cache with network_info: [{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:55 compute-0 nova_compute[256940]: 2025-10-02 12:11:55.177 2 DEBUG oslo_concurrency.lockutils [req-014a136e-7f6f-486e-98c9-4d4c75efc1ce req-c74df005-5bdb-4396-baf0-24fc90650c5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:55 compute-0 podman[267208]: 2025-10-02 12:11:55.43872834 +0000 UTC m=+0.091740466 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:11:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:11:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:56.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:11:56 compute-0 ceph-mon[73668]: pgmap v958: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 02 12:11:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 146 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 225 op/s
Oct 02 12:11:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2787303180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:57 compute-0 nova_compute[256940]: 2025-10-02 12:11:57.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:58.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:11:58 compute-0 ceph-mon[73668]: pgmap v959: 305 pgs: 305 active+clean; 146 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 225 op/s
Oct 02 12:11:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 132 MiB data, 309 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 181 op/s
Oct 02 12:11:58 compute-0 nova_compute[256940]: 2025-10-02 12:11:58.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:11:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:00.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:00 compute-0 ceph-mon[73668]: pgmap v960: 305 pgs: 305 active+clean; 132 MiB data, 309 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 181 op/s
Oct 02 12:12:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 174 op/s
Oct 02 12:12:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:01.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:02 compute-0 ceph-mon[73668]: pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 174 op/s
Oct 02 12:12:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 12:12:02 compute-0 nova_compute[256940]: 2025-10-02 12:12:02.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:02.913 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:02 compute-0 nova_compute[256940]: 2025-10-02 12:12:02.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:02.915 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:03.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:03 compute-0 nova_compute[256940]: 2025-10-02 12:12:03.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:04.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:04 compute-0 ceph-mon[73668]: pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 12:12:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:12:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:12:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:12:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:05.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:05 compute-0 sudo[267235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:05 compute-0 sudo[267235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:05 compute-0 sudo[267235]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:05 compute-0 sudo[267260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:05 compute-0 sudo[267260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:05 compute-0 sudo[267260]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:12:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:12:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:06 compute-0 nova_compute[256940]: 2025-10-02 12:12:06.537 2 DEBUG nova.compute.manager [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:06 compute-0 nova_compute[256940]: 2025-10-02 12:12:06.538 2 DEBUG nova.compute.manager [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing instance network info cache due to event network-changed-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:12:06 compute-0 nova_compute[256940]: 2025-10-02 12:12:06.538 2 DEBUG oslo_concurrency.lockutils [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:12:06 compute-0 nova_compute[256940]: 2025-10-02 12:12:06.539 2 DEBUG oslo_concurrency.lockutils [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:12:06 compute-0 nova_compute[256940]: 2025-10-02 12:12:06.539 2 DEBUG nova.network.neutron [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Refreshing network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:12:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:06 compute-0 ceph-mon[73668]: pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:06.919 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:07.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:07 compute-0 nova_compute[256940]: 2025-10-02 12:12:07.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:08 compute-0 ceph-mon[73668]: pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 143 KiB/s rd, 542 KiB/s wr, 37 op/s
Oct 02 12:12:08 compute-0 nova_compute[256940]: 2025-10-02 12:12:08.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:09.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:10 compute-0 nova_compute[256940]: 2025-10-02 12:12:10.072 2 DEBUG nova.network.neutron [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updated VIF entry in instance network info cache for port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:12:10 compute-0 nova_compute[256940]: 2025-10-02 12:12:10.073 2 DEBUG nova.network.neutron [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updating instance_info_cache with network_info: [{"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:10 compute-0 ceph-mon[73668]: pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 143 KiB/s rd, 542 KiB/s wr, 37 op/s
Oct 02 12:12:10 compute-0 nova_compute[256940]: 2025-10-02 12:12:10.108 2 DEBUG oslo_concurrency.lockutils [req-80ea88c4-d8a9-4d67-a93f-50407c779ef3 req-b901dd8e-ff94-41b8-8205-7181b976a8d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a80fe91d-8a49-4d15-9036-30c510b44f99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:12:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:10.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 12:12:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:11.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:12.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:12 compute-0 ceph-mon[73668]: pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 12:12:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:12 compute-0 nova_compute[256940]: 2025-10-02 12:12:12.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:13.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:13 compute-0 podman[267289]: 2025-10-02 12:12:13.428898082 +0000 UTC m=+0.093090322 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:12:13 compute-0 podman[267290]: 2025-10-02 12:12:13.473721717 +0000 UTC m=+0.133223684 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:12:13 compute-0 nova_compute[256940]: 2025-10-02 12:12:13.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:14.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:14 compute-0 ceph-mon[73668]: pgmap v967: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:15.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.342 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.343 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.343 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.344 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.344 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.346 2 INFO nova.compute.manager [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Terminating instance
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.348 2 DEBUG nova.compute.manager [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:12:15 compute-0 kernel: tapd1df7f09-3c (unregistering): left promiscuous mode
Oct 02 12:12:15 compute-0 NetworkManager[44981]: <info>  [1759407135.4576] device (tapd1df7f09-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 ovn_controller[148123]: 2025-10-02T12:12:15Z|00053|binding|INFO|Releasing lport d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 from this chassis (sb_readonly=0)
Oct 02 12:12:15 compute-0 ovn_controller[148123]: 2025-10-02T12:12:15Z|00054|binding|INFO|Setting lport d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 down in Southbound
Oct 02 12:12:15 compute-0 ovn_controller[148123]: 2025-10-02T12:12:15Z|00055|binding|INFO|Removing iface tapd1df7f09-3c ovn-installed in OVS
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.496 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:a3:ca 10.100.0.12'], port_security=['fa:16:3e:31:a3:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a80fe91d-8a49-4d15-9036-30c510b44f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f66f0e949a840d789f5980d83c5f225', 'neutron:revision_number': '4', 'neutron:security_group_ids': '32b10fbd-cf4a-4d31-b068-19efd1a2ef75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c290c8-3e63-4fff-8d21-adb217745716, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.498 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 in datapath a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f unbound from our chassis
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.500 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.502 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbf51bc-defe-4699-9515-389f1d9955a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.503 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f namespace which is not needed anymore
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 02 12:12:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Consumed 15.225s CPU time.
Oct 02 12:12:15 compute-0 systemd-machined[210927]: Machine qemu-3-instance-00000009 terminated.
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.628 2 INFO nova.virt.libvirt.driver [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Instance destroyed successfully.
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.629 2 DEBUG nova.objects.instance [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lazy-loading 'resources' on Instance uuid a80fe91d-8a49-4d15-9036-30c510b44f99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.650 2 DEBUG nova.virt.libvirt.vif [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1563176459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-156317645',id=9,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:11:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f66f0e949a840d789f5980d83c5f225',ramdisk_id='',reservation_id='r-3abrnehi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='v
irtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1891605502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:11:41Z,user_data=None,user_id='59e4926e6a8749a8a393630a15b70607',uuid=a80fe91d-8a49-4d15-9036-30c510b44f99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.651 2 DEBUG nova.network.os_vif_util [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converting VIF {"id": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "address": "fa:16:3e:31:a3:ca", "network": {"id": "a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-169569517-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f66f0e949a840d789f5980d83c5f225", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1df7f09-3c", "ovs_interfaceid": "d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.652 2 DEBUG nova.network.os_vif_util [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.653 2 DEBUG os_vif [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1df7f09-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.659 2 INFO os_vif [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:a3:ca,bridge_name='br-int',has_traffic_filtering=True,id=d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1,network=Network(a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1df7f09-3c')
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [NOTICE]   (267072) : haproxy version is 2.8.14-c23fe91
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [NOTICE]   (267072) : path to executable is /usr/sbin/haproxy
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [WARNING]  (267072) : Exiting Master process...
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [WARNING]  (267072) : Exiting Master process...
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [ALERT]    (267072) : Current worker (267074) exited with code 143 (Terminated)
Oct 02 12:12:15 compute-0 neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f[267068]: [WARNING]  (267072) : All workers exited. Exiting... (0)
Oct 02 12:12:15 compute-0 systemd[1]: libpod-b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267.scope: Deactivated successfully.
Oct 02 12:12:15 compute-0 podman[267366]: 2025-10-02 12:12:15.680715056 +0000 UTC m=+0.048456941 container died b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267-userdata-shm.mount: Deactivated successfully.
Oct 02 12:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-468678b84b97670deb1c7bcb3dfb804c966fee8719725d31482e7d88ac99df45-merged.mount: Deactivated successfully.
Oct 02 12:12:15 compute-0 podman[267366]: 2025-10-02 12:12:15.722092871 +0000 UTC m=+0.089834766 container cleanup b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:12:15 compute-0 systemd[1]: libpod-conmon-b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267.scope: Deactivated successfully.
Oct 02 12:12:15 compute-0 podman[267416]: 2025-10-02 12:12:15.778477449 +0000 UTC m=+0.037413311 container remove b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.788 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48bfc2e9-1864-4b1b-948c-66f75c0a35aa]: (4, ('Thu Oct  2 12:12:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f (b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267)\nb753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267\nThu Oct  2 12:12:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f (b753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267)\nb753b865ac163bfd222a95aa09174d7d7dd1a7a61b93cd995cb2ffb5c7db2267\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.790 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6546fa33-cd3d-4f66-a1ae-36c8281404e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.792 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ed8159-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 kernel: tapa8ed8159-50: left promiscuous mode
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.803 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[80b2611a-1c67-4e48-b132-41b6b8c31100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 nova_compute[256940]: 2025-10-02 12:12:15.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.835 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0dbf79-a6f5-4cc0-a116-f7a4691d8965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.837 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e17603a9-0612-45d9-bc35-31b817827003]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.853 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82acb34f-22cd-4c69-881d-49a928240ad4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499563, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267431, 'error': None, 'target': 'ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.857 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8ed8159-51c1-4a5b-8464-9b1ae0a9f93f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:15.857 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ae317997-b040-403b-a5c5-3b1a7ab71330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:12:15 compute-0 systemd[1]: run-netns-ovnmeta\x2da8ed8159\x2d51c1\x2d4a5b\x2d8464\x2d9b1ae0a9f93f.mount: Deactivated successfully.
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.111 2 INFO nova.virt.libvirt.driver [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Deleting instance files /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99_del
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.112 2 INFO nova.virt.libvirt.driver [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Deletion of /var/lib/nova/instances/a80fe91d-8a49-4d15-9036-30c510b44f99_del complete
Oct 02 12:12:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.208 2 INFO nova.compute.manager [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.209 2 DEBUG oslo.service.loopingcall [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.210 2 DEBUG nova.compute.manager [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.210 2 DEBUG nova.network.neutron [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:12:16 compute-0 ceph-mon[73668]: pgmap v968: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.357 2 DEBUG nova.compute.manager [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-unplugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.358 2 DEBUG oslo_concurrency.lockutils [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.359 2 DEBUG oslo_concurrency.lockutils [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.359 2 DEBUG oslo_concurrency.lockutils [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.359 2 DEBUG nova.compute.manager [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] No waiting events found dispatching network-vif-unplugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:16 compute-0 nova_compute[256940]: 2025-10-02 12:12:16.360 2 DEBUG nova.compute.manager [req-c2b52921-e03d-44fa-a0f7-407bd70db492 req-f0631784-fad8-482e-b47c-050e88fdb213 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-unplugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:12:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:12:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:17 compute-0 nova_compute[256940]: 2025-10-02 12:12:17.702 2 DEBUG nova.network.neutron [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:12:17 compute-0 nova_compute[256940]: 2025-10-02 12:12:17.727 2 INFO nova.compute.manager [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Took 1.52 seconds to deallocate network for instance.
Oct 02 12:12:17 compute-0 nova_compute[256940]: 2025-10-02 12:12:17.845 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:17 compute-0 nova_compute[256940]: 2025-10-02 12:12:17.846 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:17 compute-0 nova_compute[256940]: 2025-10-02 12:12:17.926 2 DEBUG oslo_concurrency.processutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:18 compute-0 ceph-mon[73668]: pgmap v969: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:12:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934180314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.411 2 DEBUG oslo_concurrency.processutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.418 2 DEBUG nova.compute.provider_tree [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.442 2 DEBUG nova.scheduler.client.report [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.469 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.560 2 DEBUG nova.compute.manager [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.560 2 DEBUG oslo_concurrency.lockutils [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.560 2 DEBUG oslo_concurrency.lockutils [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.561 2 DEBUG oslo_concurrency.lockutils [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.561 2 DEBUG nova.compute.manager [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] No waiting events found dispatching network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.561 2 WARNING nova.compute.manager [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received unexpected event network-vif-plugged-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 for instance with vm_state deleted and task_state None.
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.561 2 DEBUG nova.compute.manager [req-6811c78a-f1df-4b13-b5f5-c45d4603bda7 req-315bb161-1cca-4707-abaf-27d78a10ffae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Received event network-vif-deleted-d1df7f09-3cd6-4f5c-8c63-0a6eb44c19b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.594 2 INFO nova.scheduler.client.report [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Deleted allocations for instance a80fe91d-8a49-4d15-9036-30c510b44f99
Oct 02 12:12:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 105 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.737 2 DEBUG oslo_concurrency.lockutils [None req-dec5483d-6337-4220-af04-b7e3d7ace19b 59e4926e6a8749a8a393630a15b70607 7f66f0e949a840d789f5980d83c5f225 - - default default] Lock "a80fe91d-8a49-4d15-9036-30c510b44f99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:18 compute-0 nova_compute[256940]: 2025-10-02 12:12:18.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:19.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1934180314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:20.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:20 compute-0 ceph-mon[73668]: pgmap v970: 305 pgs: 305 active+clean; 105 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Oct 02 12:12:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:20 compute-0 nova_compute[256940]: 2025-10-02 12:12:20.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:22.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:22 compute-0 ceph-mon[73668]: pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:23 compute-0 nova_compute[256940]: 2025-10-02 12:12:23.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:23 compute-0 nova_compute[256940]: 2025-10-02 12:12:23.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:23 compute-0 nova_compute[256940]: 2025-10-02 12:12:23.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:24.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:24 compute-0 podman[267460]: 2025-10-02 12:12:24.428818976 +0000 UTC m=+0.088067249 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 12:12:24 compute-0 ceph-mon[73668]: pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:25 compute-0 sudo[267481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:25 compute-0 sudo[267481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:25 compute-0 sudo[267481]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:25 compute-0 sudo[267506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:25 compute-0 sudo[267506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:25 compute-0 sudo[267506]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:25 compute-0 podman[267530]: 2025-10-02 12:12:25.594298361 +0000 UTC m=+0.094777005 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:12:25 compute-0 nova_compute[256940]: 2025-10-02 12:12:25.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:26.449 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:26.451 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:26.451 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:26 compute-0 ceph-mon[73668]: pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:28 compute-0 ceph-mon[73668]: pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:12:28
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 12:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:12:28 compute-0 nova_compute[256940]: 2025-10-02 12:12:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:29 compute-0 ceph-mon[73668]: pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:12:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 24 op/s
Oct 02 12:12:30 compute-0 nova_compute[256940]: 2025-10-02 12:12:30.627 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407135.6257784, a80fe91d-8a49-4d15-9036-30c510b44f99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:30 compute-0 nova_compute[256940]: 2025-10-02 12:12:30.627 2 INFO nova.compute.manager [-] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] VM Stopped (Lifecycle Event)
Oct 02 12:12:30 compute-0 nova_compute[256940]: 2025-10-02 12:12:30.660 2 DEBUG nova.compute.manager [None req-0f0c3816-6669-4231-870d-69a816036bed - - - - - -] [instance: a80fe91d-8a49-4d15-9036-30c510b44f99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:30 compute-0 nova_compute[256940]: 2025-10-02 12:12:30.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:31 compute-0 sudo[267556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:31 compute-0 sudo[267556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-0 sudo[267556]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-0 sudo[267581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:31 compute-0 sudo[267581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-0 sudo[267581]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-0 sudo[267606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:31 compute-0 sudo[267606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-0 sudo[267606]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:31 compute-0 ceph-mon[73668]: pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 24 op/s
Oct 02 12:12:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2165228842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:31 compute-0 sudo[267631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:12:31 compute-0 sudo[267631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:32 compute-0 sudo[267631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cdc99a8b-84f4-4f62-a6b5-1943a4fed55e does not exist
Oct 02 12:12:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c7abc7aa-75a7-4d99-922c-15e02490f885 does not exist
Oct 02 12:12:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4a721e40-f64a-41a6-ba98-8b8db5cd4064 does not exist
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:32 compute-0 sudo[267688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:32 compute-0 sudo[267688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:32 compute-0 sudo[267688]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:32 compute-0 sudo[267713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:32 compute-0 sudo[267713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:32 compute-0 sudo[267713]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:32 compute-0 sudo[267738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:32 compute-0 sudo[267738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:32 compute-0 sudo[267738]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:32 compute-0 sudo[267763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:12:32 compute-0 sudo[267763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1978288928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:12:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.091292903 +0000 UTC m=+0.050392452 container create 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:12:33 compute-0 systemd[1]: Started libpod-conmon-902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260.scope.
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.06599963 +0000 UTC m=+0.025099189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.191593412 +0000 UTC m=+0.150692951 container init 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.200915547 +0000 UTC m=+0.160015086 container start 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:12:33 compute-0 ecstatic_cerf[267847]: 167 167
Oct 02 12:12:33 compute-0 systemd[1]: libpod-902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260.scope: Deactivated successfully.
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.228680495 +0000 UTC m=+0.187780034 container attach 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.229951138 +0000 UTC m=+0.189050667 container died 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94bd3fb7c9be1255c9939c79211f6f1fe4389f6d1401e421e616854a2aa29d3-merged.mount: Deactivated successfully.
Oct 02 12:12:33 compute-0 podman[267830]: 2025-10-02 12:12:33.302642054 +0000 UTC m=+0.261741583 container remove 902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:12:33 compute-0 systemd[1]: libpod-conmon-902000743ec054e757be2d9b9ecdbca505cc32b64fe63d42cd0364897c94e260.scope: Deactivated successfully.
Oct 02 12:12:33 compute-0 podman[267871]: 2025-10-02 12:12:33.528345381 +0000 UTC m=+0.057357575 container create 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:33 compute-0 podman[267871]: 2025-10-02 12:12:33.500645885 +0000 UTC m=+0.029658099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:33 compute-0 systemd[1]: Started libpod-conmon-4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7.scope.
Oct 02 12:12:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:33 compute-0 podman[267871]: 2025-10-02 12:12:33.658456842 +0000 UTC m=+0.187469086 container init 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:12:33 compute-0 podman[267871]: 2025-10-02 12:12:33.674524183 +0000 UTC m=+0.203536377 container start 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:12:33 compute-0 podman[267871]: 2025-10-02 12:12:33.677805269 +0000 UTC m=+0.206817553 container attach 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:33 compute-0 nova_compute[256940]: 2025-10-02 12:12:33.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:33 compute-0 nova_compute[256940]: 2025-10-02 12:12:33.997 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:33 compute-0 nova_compute[256940]: 2025-10-02 12:12:33.997 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:33 compute-0 nova_compute[256940]: 2025-10-02 12:12:33.997 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:34 compute-0 nova_compute[256940]: 2025-10-02 12:12:34.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:34 compute-0 nova_compute[256940]: 2025-10-02 12:12:34.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:12:34 compute-0 nova_compute[256940]: 2025-10-02 12:12:34.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:12:34 compute-0 nova_compute[256940]: 2025-10-02 12:12:34.354 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:12:34 compute-0 ceph-mon[73668]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:34 compute-0 unruffled_wing[267887]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:12:34 compute-0 unruffled_wing[267887]: --> relative data size: 1.0
Oct 02 12:12:34 compute-0 unruffled_wing[267887]: --> All data devices are unavailable
Oct 02 12:12:34 compute-0 systemd[1]: libpod-4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7.scope: Deactivated successfully.
Oct 02 12:12:34 compute-0 podman[267871]: 2025-10-02 12:12:34.524341621 +0000 UTC m=+1.053353825 container died 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ee78e3a4e8a091ce24bf782c3722bca3182ccb35dfbc885af4dad5d2975ed7-merged.mount: Deactivated successfully.
Oct 02 12:12:34 compute-0 podman[267871]: 2025-10-02 12:12:34.581594892 +0000 UTC m=+1.110607086 container remove 4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:12:34 compute-0 systemd[1]: libpod-conmon-4d6adbaef0d9d47c9cc1867292c177d3680503a74155410b58a721915fd13ee7.scope: Deactivated successfully.
Oct 02 12:12:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:34 compute-0 sudo[267763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:34 compute-0 sudo[267918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:34 compute-0 sudo[267918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:34 compute-0 sudo[267918]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:34 compute-0 sudo[267943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:34 compute-0 sudo[267943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:34 compute-0 sudo[267943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:34 compute-0 sudo[267968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:34 compute-0 sudo[267968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:34 compute-0 sudo[267968]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:34 compute-0 sudo[267993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:12:34 compute-0 sudo[267993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.333858274 +0000 UTC m=+0.055516847 container create c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:12:35 compute-0 systemd[1]: Started libpod-conmon-c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7.scope.
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.307352559 +0000 UTC m=+0.029011172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/398630855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1190854149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-0 nova_compute[256940]: 2025-10-02 12:12:35.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.781091529 +0000 UTC m=+0.502750112 container init c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.788705699 +0000 UTC m=+0.510364272 container start c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.793159975 +0000 UTC m=+0.514818578 container attach c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:12:35 compute-0 strange_meitner[268075]: 167 167
Oct 02 12:12:35 compute-0 systemd[1]: libpod-c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7.scope: Deactivated successfully.
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.795867676 +0000 UTC m=+0.517526269 container died c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d76c3d6eeb248f5490267c799fa318daa97dfe2e2bc66090cc7f8c2ba854aa7-merged.mount: Deactivated successfully.
Oct 02 12:12:35 compute-0 podman[268058]: 2025-10-02 12:12:35.839368187 +0000 UTC m=+0.561026790 container remove c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:12:35 compute-0 systemd[1]: libpod-conmon-c4bcfe64e3b379d8aaa3765befd834417f11ffa4442646203a34fb4498d453a7.scope: Deactivated successfully.
Oct 02 12:12:36 compute-0 podman[268098]: 2025-10-02 12:12:36.093254003 +0000 UTC m=+0.067507841 container create 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:12:36 compute-0 podman[268098]: 2025-10-02 12:12:36.061925601 +0000 UTC m=+0.036179479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:36 compute-0 systemd[1]: Started libpod-conmon-10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c.scope.
Oct 02 12:12:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4c636932bcea46b77970807b8bee0a9c20bff57b62927f140fd0fc0e3daf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4c636932bcea46b77970807b8bee0a9c20bff57b62927f140fd0fc0e3daf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4c636932bcea46b77970807b8bee0a9c20bff57b62927f140fd0fc0e3daf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee4c636932bcea46b77970807b8bee0a9c20bff57b62927f140fd0fc0e3daf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:36 compute-0 podman[268098]: 2025-10-02 12:12:36.218212689 +0000 UTC m=+0.192466597 container init 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:12:36 compute-0 podman[268098]: 2025-10-02 12:12:36.233988212 +0000 UTC m=+0.208242080 container start 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:12:36 compute-0 podman[268098]: 2025-10-02 12:12:36.239719852 +0000 UTC m=+0.213973780 container attach 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:12:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:36 compute-0 ceph-mon[73668]: pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:37 compute-0 trusting_robinson[268115]: {
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:     "1": [
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:         {
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "devices": [
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "/dev/loop3"
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             ],
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "lv_name": "ceph_lv0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "lv_size": "7511998464",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "name": "ceph_lv0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "tags": {
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.cluster_name": "ceph",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.crush_device_class": "",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.encrypted": "0",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.osd_id": "1",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.type": "block",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:                 "ceph.vdo": "0"
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             },
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "type": "block",
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:             "vg_name": "ceph_vg0"
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:         }
Oct 02 12:12:37 compute-0 trusting_robinson[268115]:     ]
Oct 02 12:12:37 compute-0 trusting_robinson[268115]: }
Oct 02 12:12:37 compute-0 systemd[1]: libpod-10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c.scope: Deactivated successfully.
Oct 02 12:12:37 compute-0 podman[268098]: 2025-10-02 12:12:37.042001115 +0000 UTC m=+1.016254943 container died 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-aee4c636932bcea46b77970807b8bee0a9c20bff57b62927f140fd0fc0e3daf4-merged.mount: Deactivated successfully.
Oct 02 12:12:37 compute-0 podman[268098]: 2025-10-02 12:12:37.11540751 +0000 UTC m=+1.089661348 container remove 10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:12:37 compute-0 systemd[1]: libpod-conmon-10b3b237d0fbcb09cdb06785bde1fe3675101f70f775daace78d33672e33f50c.scope: Deactivated successfully.
Oct 02 12:12:37 compute-0 sudo[267993]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:37 compute-0 sudo[268139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:37 compute-0 sudo[268139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:37 compute-0 sudo[268139]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:37 compute-0 sudo[268164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:37 compute-0 sudo[268164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:37 compute-0 sudo[268164]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:37 compute-0 sudo[268189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:37 compute-0 sudo[268189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:37 compute-0 sudo[268189]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:37 compute-0 sudo[268214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:12:37 compute-0 sudo[268214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:37 compute-0 ceph-mon[73668]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:37 compute-0 podman[268281]: 2025-10-02 12:12:37.90043165 +0000 UTC m=+0.042498905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:38 compute-0 podman[268281]: 2025-10-02 12:12:38.027613695 +0000 UTC m=+0.169680940 container create 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 02 12:12:38 compute-0 systemd[1]: Started libpod-conmon-46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6.scope.
Oct 02 12:12:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:38 compute-0 podman[268281]: 2025-10-02 12:12:38.37115755 +0000 UTC m=+0.513224845 container init 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:12:38 compute-0 podman[268281]: 2025-10-02 12:12:38.380776322 +0000 UTC m=+0.522843547 container start 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:38 compute-0 gracious_ramanujan[268297]: 167 167
Oct 02 12:12:38 compute-0 systemd[1]: libpod-46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6.scope: Deactivated successfully.
Oct 02 12:12:38 compute-0 podman[268281]: 2025-10-02 12:12:38.504782353 +0000 UTC m=+0.646849618 container attach 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:38 compute-0 podman[268281]: 2025-10-02 12:12:38.50544563 +0000 UTC m=+0.647512865 container died 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:12:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:38 compute-0 nova_compute[256940]: 2025-10-02 12:12:38.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-959793b7d9479029023572ae90f4dcd25f58ce4d0a72e541b5740f6bd43f884c-merged.mount: Deactivated successfully.
Oct 02 12:12:39 compute-0 podman[268281]: 2025-10-02 12:12:39.575236257 +0000 UTC m=+1.717303492 container remove 46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:12:39 compute-0 systemd[1]: libpod-conmon-46ccfe919bfcbe8acfe9d02f104bbfb97290f766ae903a8fb1d8cb21272126b6.scope: Deactivated successfully.
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.796 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.798 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.798 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.799 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:12:39 compute-0 nova_compute[256940]: 2025-10-02 12:12:39.799 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:39 compute-0 podman[268325]: 2025-10-02 12:12:39.827291355 +0000 UTC m=+0.062564941 container create f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:12:39 compute-0 podman[268325]: 2025-10-02 12:12:39.796770505 +0000 UTC m=+0.032044151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:12:39 compute-0 systemd[1]: Started libpod-conmon-f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4.scope.
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:12:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:12:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b5e5927bd14beca561ea3d6aacd61fbd3ec676191270fac4fd9a2effd2f9f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b5e5927bd14beca561ea3d6aacd61fbd3ec676191270fac4fd9a2effd2f9f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b5e5927bd14beca561ea3d6aacd61fbd3ec676191270fac4fd9a2effd2f9f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b5e5927bd14beca561ea3d6aacd61fbd3ec676191270fac4fd9a2effd2f9f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:12:39 compute-0 podman[268325]: 2025-10-02 12:12:39.955911227 +0000 UTC m=+0.191184873 container init f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:12:39 compute-0 podman[268325]: 2025-10-02 12:12:39.963431974 +0000 UTC m=+0.198705570 container start f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:12:39 compute-0 podman[268325]: 2025-10-02 12:12:39.97893963 +0000 UTC m=+0.214213286 container attach f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:12:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078166621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.331 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:40 compute-0 ceph-mon[73668]: pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2078166621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.530 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.531 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4852MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.532 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.532 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:40 compute-0 nova_compute[256940]: 2025-10-02 12:12:40.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]: {
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:         "osd_id": 1,
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:         "type": "bluestore"
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]:     }
Oct 02 12:12:40 compute-0 romantic_sutherland[268342]: }
Oct 02 12:12:40 compute-0 systemd[1]: libpod-f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4.scope: Deactivated successfully.
Oct 02 12:12:40 compute-0 podman[268325]: 2025-10-02 12:12:40.885256581 +0000 UTC m=+1.120530177 container died f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9b5e5927bd14beca561ea3d6aacd61fbd3ec676191270fac4fd9a2effd2f9f4-merged.mount: Deactivated successfully.
Oct 02 12:12:41 compute-0 podman[268325]: 2025-10-02 12:12:41.174860413 +0000 UTC m=+1.410133969 container remove f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:12:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:41.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:41 compute-0 systemd[1]: libpod-conmon-f6c955248f30be8ab3a1e14cb81315a4e4b4000224e6ec653ba8b83a72c622a4.scope: Deactivated successfully.
Oct 02 12:12:41 compute-0 sudo[268214]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:12:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:12:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0db64d53-eb79-4b46-8bbd-23129e3e8fdd does not exist
Oct 02 12:12:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 09f6ba9e-780c-4a87-bb94-59dba7ff6946 does not exist
Oct 02 12:12:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 280fabbf-048c-4a04-b315-e162f544ef5b does not exist
Oct 02 12:12:41 compute-0 sudo[268399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:41 compute-0 sudo[268399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:41 compute-0 sudo[268399]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:41 compute-0 sudo[268424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:12:41 compute-0 sudo[268424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:41 compute-0 sudo[268424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.005 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.005 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:12:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.279 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:42 compute-0 ceph-mon[73668]: pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636622389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.776 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.785 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.892 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.978 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:12:42 compute-0 nova_compute[256940]: 2025-10-02 12:12:42.978 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:43.368 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:43 compute-0 nova_compute[256940]: 2025-10-02 12:12:43.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:43.370 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1636622389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:44 compute-0 nova_compute[256940]: 2025-10-02 12:12:44.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:44 compute-0 podman[268472]: 2025-10-02 12:12:44.433172144 +0000 UTC m=+0.087707931 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:12:44 compute-0 podman[268473]: 2025-10-02 12:12:44.478072601 +0000 UTC m=+0.131825477 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:12:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:45 compute-0 ceph-mon[73668]: pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:45 compute-0 sudo[268521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:45 compute-0 sudo[268521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:45 compute-0 sudo[268521]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:45 compute-0 nova_compute[256940]: 2025-10-02 12:12:45.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:45 compute-0 sudo[268546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:45 compute-0 sudo[268546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:45 compute-0 sudo[268546]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:46 compute-0 ceph-mon[73668]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:12:46.373 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:47 compute-0 ceph-mon[73668]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:49 compute-0 nova_compute[256940]: 2025-10-02 12:12:49.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:50 compute-0 ceph-mon[73668]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:50 compute-0 nova_compute[256940]: 2025-10-02 12:12:50.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:52 compute-0 ceph-mon[73668]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:54 compute-0 nova_compute[256940]: 2025-10-02 12:12:54.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:54 compute-0 ceph-mon[73668]: pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:12:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:12:55 compute-0 podman[268576]: 2025-10-02 12:12:55.431472635 +0000 UTC m=+0.099190131 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 12:12:55 compute-0 nova_compute[256940]: 2025-10-02 12:12:55.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:56 compute-0 podman[268597]: 2025-10-02 12:12:56.426748947 +0000 UTC m=+0.093096622 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:12:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:56 compute-0 ceph-mon[73668]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.019 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.020 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.069 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:12:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.184 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.185 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.198 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.198 2 INFO nova.compute.claims [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:12:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:12:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.392 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748525350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.882 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.893 2 DEBUG nova.compute.provider_tree [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.910 2 DEBUG nova.scheduler.client.report [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.952 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:57 compute-0 nova_compute[256940]: 2025-10-02 12:12:57.953 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.010 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.011 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.039 2 INFO nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.061 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.231 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.235 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.236 2 INFO nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating image(s)
Oct 02 12:12:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.273 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.314 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.351 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.357 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.444 2 DEBUG nova.policy [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a80f833255046e7b62d34c1c6066073', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39ca581fbb054c959d26096ca39fef05', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.447 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.448 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.448 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.449 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.475 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:12:58 compute-0 nova_compute[256940]: 2025-10-02 12:12:58.478 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:58 compute-0 ceph-mon[73668]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3748525350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:59 compute-0 nova_compute[256940]: 2025-10-02 12:12:59.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:12:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:59 compute-0 ceph-mon[73668]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:59 compute-0 nova_compute[256940]: 2025-10-02 12:12:59.947 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.195 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Successfully created port: 2747f0c7-f3c3-4402-aedd-266862caf435 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.260 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] resizing rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.393 2 DEBUG nova.objects.instance [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'migration_context' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.408 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.409 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Ensure instance console log exists: /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.409 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.409 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.410 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 65 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 1003 KiB/s wr, 10 op/s
Oct 02 12:13:00 compute-0 nova_compute[256940]: 2025-10-02 12:13:00.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.398 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Successfully updated port: 2747f0c7-f3c3-4402-aedd-266862caf435 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.423 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.424 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquired lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.424 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:01 compute-0 ovn_controller[148123]: 2025-10-02T12:13:01Z|00056|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.544 2 DEBUG nova.compute.manager [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-changed-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.544 2 DEBUG nova.compute.manager [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Refreshing instance network info cache due to event network-changed-2747f0c7-f3c3-4402-aedd-266862caf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.545 2 DEBUG oslo_concurrency.lockutils [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:01 compute-0 nova_compute[256940]: 2025-10-02 12:13:01.573 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:02 compute-0 ceph-mon[73668]: pgmap v991: 305 pgs: 305 active+clean; 65 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 1003 KiB/s wr, 10 op/s
Oct 02 12:13:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3565044961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 88 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.160 2 DEBUG nova.network.neutron [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updating instance_info_cache with network_info: [{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.186 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Releasing lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.187 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance network_info: |[{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.187 2 DEBUG oslo_concurrency.lockutils [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.188 2 DEBUG nova.network.neutron [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Refreshing network info cache for port 2747f0c7-f3c3-4402-aedd-266862caf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.191 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start _get_guest_xml network_info=[{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.197 2 WARNING nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.206 2 DEBUG nova.virt.libvirt.host [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.207 2 DEBUG nova.virt.libvirt.host [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.211 2 DEBUG nova.virt.libvirt.host [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.212 2 DEBUG nova.virt.libvirt.host [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.214 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.214 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.214 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.214 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.215 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.215 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.215 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.215 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.216 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.216 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.216 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.216 2 DEBUG nova.virt.hardware [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.219 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868232652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.705 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.742 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:03 compute-0 nova_compute[256940]: 2025-10-02 12:13:03.748 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995955371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.195 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.197 2 DEBUG nova.virt.libvirt.vif [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:12:58Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.198 2 DEBUG nova.network.os_vif_util [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.199 2 DEBUG nova.network.os_vif_util [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.201 2 DEBUG nova.objects.instance [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_devices' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.220 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <uuid>c704d1db-8e20-432e-900d-4f4267059d9f</uuid>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <name>instance-0000000b</name>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersAdminTestJSON-server-1406080749</nova:name>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:13:03</nova:creationTime>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:user uuid="7a80f833255046e7b62d34c1c6066073">tempest-ServersAdminTestJSON-1879159697-project-member</nova:user>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:project uuid="39ca581fbb054c959d26096ca39fef05">tempest-ServersAdminTestJSON-1879159697</nova:project>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <nova:port uuid="2747f0c7-f3c3-4402-aedd-266862caf435">
Oct 02 12:13:04 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <system>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="serial">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="uuid">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </system>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <os>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </os>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <features>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </features>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk">
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </source>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk.config">
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </source>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:13:04 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:87:d3:15"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <target dev="tap2747f0c7-f3"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log" append="off"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <video>
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </video>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:13:04 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:13:04 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:13:04 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:13:04 compute-0 nova_compute[256940]: </domain>
Oct 02 12:13:04 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.222 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Preparing to wait for external event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.223 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.223 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.224 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.225 2 DEBUG nova.virt.libvirt.vif [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:12:58Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.226 2 DEBUG nova.network.os_vif_util [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.227 2 DEBUG nova.network.os_vif_util [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.228 2 DEBUG os_vif [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.230 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2747f0c7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2747f0c7-f3, col_values=(('external_ids', {'iface-id': '2747f0c7-f3c3-4402-aedd-266862caf435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:d3:15', 'vm-uuid': 'c704d1db-8e20-432e-900d-4f4267059d9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:04 compute-0 NetworkManager[44981]: <info>  [1759407184.2392] manager: (tap2747f0c7-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.247 2 INFO os_vif [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:13:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:04.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.292 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.292 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.292 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No VIF found with MAC fa:16:3e:87:d3:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.293 2 INFO nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Using config drive
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.323 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:04 compute-0 ceph-mon[73668]: pgmap v992: 305 pgs: 305 active+clean; 88 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:13:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2868232652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3995955371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 107 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.716 2 INFO nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating config drive at /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.724 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrb7qmwi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.849 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrb7qmwi" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.895 2 DEBUG nova.storage.rbd_utils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.900 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.946 2 DEBUG nova.network.neutron [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updated VIF entry in instance network info cache for port 2747f0c7-f3c3-4402-aedd-266862caf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.947 2 DEBUG nova.network.neutron [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updating instance_info_cache with network_info: [{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:04 compute-0 nova_compute[256940]: 2025-10-02 12:13:04.970 2 DEBUG oslo_concurrency.lockutils [req-f52859b3-5f1a-4f46-9cc7-3f42db5e3cf4 req-848ed543-2e1e-45d2-a7d5-6a831105ef27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.096 2 DEBUG oslo_concurrency.processutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.097 2 INFO nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting local config drive /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config because it was imported into RBD.
Oct 02 12:13:05 compute-0 kernel: tap2747f0c7-f3: entered promiscuous mode
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.1636] manager: (tap2747f0c7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_controller[148123]: 2025-10-02T12:13:05Z|00057|binding|INFO|Claiming lport 2747f0c7-f3c3-4402-aedd-266862caf435 for this chassis.
Oct 02 12:13:05 compute-0 ovn_controller[148123]: 2025-10-02T12:13:05Z|00058|binding|INFO|2747f0c7-f3c3-4402-aedd-266862caf435: Claiming fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:13:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:05.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.247 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.248 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f bound to our chassis
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.251 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:05 compute-0 systemd-machined[210927]: New machine qemu-4-instance-0000000b.
Oct 02 12:13:05 compute-0 systemd-udevd[268946]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.266 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cc29868e-0ad4-4978-bc4d-d096997962ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.266 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85ed78eb-41 in ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.268 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85ed78eb-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.268 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cad3e540-baf9-4813-b13e-427a86bfc95b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.271 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78da34c7-3a4b-4798-94c5-ea6e5cf1c446]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.285 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[10063582-0273-4837-aaef-6d562ae88d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.2943] device (tap2747f0c7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.2959] device (tap2747f0c7-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:05 compute-0 ovn_controller[148123]: 2025-10-02T12:13:05Z|00059|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 ovn-installed in OVS
Oct 02 12:13:05 compute-0 ovn_controller[148123]: 2025-10-02T12:13:05Z|00060|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 up in Southbound
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.317 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5c5079-cb5d-4769-8de7-99072c8425bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.354 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b87193-924d-41ee-b9d0-1b52f696ee90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.3759] manager: (tap85ed78eb-40): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.374 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26d90d8c-1eeb-4eb6-bf9e-503d05ef2838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.419 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bfba6dd8-7c90-4bd3-a76d-631c194dc51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.422 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[61ddb336-8f0a-422f-a0a5-c583da3965e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.4584] device (tap85ed78eb-40): carrier: link connected
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.464 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[93a103af-0604-4a57-a095-37b860ddffa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.489 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2e96fe26-b49d-4cfb-9e34-c77129660e5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 31338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268980, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.513 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6bad8e-ecf7-48cf-8027-75aec5917181]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:5ea8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508094, 'tstamp': 508094}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268981, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.534 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d4a1f3-86fa-4697-8769-ddfa7eaf9ce5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 31338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268982, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.571 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32ee408b-0056-4d46-acd5-43ae77c68f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f59d64c1-7d31-4b45-974e-17ee5879d1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.643 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:05 compute-0 kernel: tap85ed78eb-40: entered promiscuous mode
Oct 02 12:13:05 compute-0 NetworkManager[44981]: <info>  [1759407185.6495] manager: (tap85ed78eb-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.652 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_controller[148123]: 2025-10-02T12:13:05Z|00061|binding|INFO|Releasing lport 3021b3c7-b1d0-44e1-b22e-fbf6a4a79654 from this chassis (sb_readonly=0)
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.657 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.658 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5f466d-e7f6-4a99-ad4e-1e6126aa5161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.659 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:05.661 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'env', 'PROCESS_TAG=haproxy-85ed78eb-4003-42a7-9312-f47c5830131f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85ed78eb-4003-42a7-9312-f47c5830131f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:05 compute-0 nova_compute[256940]: 2025-10-02 12:13:05.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:05 compute-0 sudo[268999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:05 compute-0 sudo[268999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:05 compute-0 sudo[268999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:05 compute-0 sudo[269042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:05 compute-0 sudo[269042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:05 compute-0 sudo[269042]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:05 compute-0 ceph-mon[73668]: pgmap v993: 305 pgs: 305 active+clean; 107 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:13:06 compute-0 podman[269106]: 2025-10-02 12:13:06.070905559 +0000 UTC m=+0.026562537 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:06.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:06 compute-0 podman[269106]: 2025-10-02 12:13:06.417208388 +0000 UTC m=+0.372865326 container create a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:06 compute-0 systemd[1]: Started libpod-conmon-a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c.scope.
Oct 02 12:13:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c34339826d8d3e3d995d9cd5600042e53048893151d91c87d0a167a22b823e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.609 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407186.6083696, c704d1db-8e20-432e-900d-4f4267059d9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.611 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Started (Lifecycle Event)
Oct 02 12:13:06 compute-0 podman[269106]: 2025-10-02 12:13:06.634559226 +0000 UTC m=+0.590216184 container init a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:13:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 119 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Oct 02 12:13:06 compute-0 podman[269106]: 2025-10-02 12:13:06.640937263 +0000 UTC m=+0.596594221 container start a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:13:06 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [NOTICE]   (269126) : New worker (269128) forked
Oct 02 12:13:06 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [NOTICE]   (269126) : Loading success.
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.688 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.696 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407186.608699, c704d1db-8e20-432e-900d-4f4267059d9f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.697 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Paused (Lifecycle Event)
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.745 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.749 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:06 compute-0 nova_compute[256940]: 2025-10-02 12:13:06.842 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3095058566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:07.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:08.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:08 compute-0 ceph-mon[73668]: pgmap v994: 305 pgs: 305 active+clean; 119 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Oct 02 12:13:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3419774605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.448 2 DEBUG nova.compute.manager [req-6529afd9-8b0e-4a40-9385-cc13ff4cd76f req-468573c8-80f2-43a9-89fb-576d950b4837 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.449 2 DEBUG oslo_concurrency.lockutils [req-6529afd9-8b0e-4a40-9385-cc13ff4cd76f req-468573c8-80f2-43a9-89fb-576d950b4837 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.449 2 DEBUG oslo_concurrency.lockutils [req-6529afd9-8b0e-4a40-9385-cc13ff4cd76f req-468573c8-80f2-43a9-89fb-576d950b4837 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.449 2 DEBUG oslo_concurrency.lockutils [req-6529afd9-8b0e-4a40-9385-cc13ff4cd76f req-468573c8-80f2-43a9-89fb-576d950b4837 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.450 2 DEBUG nova.compute.manager [req-6529afd9-8b0e-4a40-9385-cc13ff4cd76f req-468573c8-80f2-43a9-89fb-576d950b4837 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Processing event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.451 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.460 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407188.4594004, c704d1db-8e20-432e-900d-4f4267059d9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.460 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Resumed (Lifecycle Event)
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.464 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.471 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance spawned successfully.
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.472 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.480 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.487 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.533 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.533 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.534 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.534 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.535 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.535 2 DEBUG nova.virt.libvirt.driver [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.540 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.599 2 INFO nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Took 10.37 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.599 2 DEBUG nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.663 2 INFO nova.compute.manager [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Took 11.53 seconds to build instance.
Oct 02 12:13:08 compute-0 nova_compute[256940]: 2025-10-02 12:13:08.680 2 DEBUG oslo_concurrency.lockutils [None req-4fec7f29-5eca-4e85-b9c6-e502784b8f44 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:09 compute-0 nova_compute[256940]: 2025-10-02 12:13:09.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:09 compute-0 nova_compute[256940]: 2025-10-02 12:13:09.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:13:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:09.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:13:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.552 2 DEBUG nova.compute.manager [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.552 2 DEBUG oslo_concurrency.lockutils [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.553 2 DEBUG oslo_concurrency.lockutils [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.553 2 DEBUG oslo_concurrency.lockutils [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.553 2 DEBUG nova.compute.manager [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:10 compute-0 nova_compute[256940]: 2025-10-02 12:13:10.553 2 WARNING nova.compute.manager [req-fd56d970-725d-41dd-b524-04d3a68214cd req-aec93b1a-d8b4-4ce8-90ca-3e442258c5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state None.
Oct 02 12:13:10 compute-0 ceph-mon[73668]: pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 02 12:13:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 736 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Oct 02 12:13:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:11.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:12 compute-0 ceph-mon[73668]: pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 736 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Oct 02 12:13:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 114 op/s
Oct 02 12:13:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:13:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:13.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:13:14 compute-0 nova_compute[256940]: 2025-10-02 12:13:14.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:14 compute-0 nova_compute[256940]: 2025-10-02 12:13:14.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:14 compute-0 ceph-mon[73668]: pgmap v997: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 114 op/s
Oct 02 12:13:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1201019618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:13:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:15 compute-0 podman[269141]: 2025-10-02 12:13:15.440105643 +0000 UTC m=+0.100129017 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:13:15 compute-0 podman[269142]: 2025-10-02 12:13:15.490395941 +0000 UTC m=+0.150375143 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 12:13:15 compute-0 ceph-mon[73668]: pgmap v998: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:13:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 147 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 02 12:13:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 02 12:13:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 02 12:13:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 02 12:13:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:17.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:17 compute-0 nova_compute[256940]: 2025-10-02 12:13:17.811 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:17 compute-0 nova_compute[256940]: 2025-10-02 12:13:17.813 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:17 compute-0 nova_compute[256940]: 2025-10-02 12:13:17.916 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:13:18 compute-0 nova_compute[256940]: 2025-10-02 12:13:18.103 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:18 compute-0 nova_compute[256940]: 2025-10-02 12:13:18.104 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:18 compute-0 nova_compute[256940]: 2025-10-02 12:13:18.130 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:13:18 compute-0 nova_compute[256940]: 2025-10-02 12:13:18.131 2 INFO nova.compute.claims [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:13:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:18 compute-0 nova_compute[256940]: 2025-10-02 12:13:18.396 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:18 compute-0 ceph-mon[73668]: pgmap v999: 305 pgs: 305 active+clean; 147 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 02 12:13:18 compute-0 ceph-mon[73668]: osdmap e143: 3 total, 3 up, 3 in
Oct 02 12:13:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/261862872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 171 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 191 op/s
Oct 02 12:13:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932998566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.003 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.012 2 DEBUG nova.compute.provider_tree [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.038 2 DEBUG nova.scheduler.client.report [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.114 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.115 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:19.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.294 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.294 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.427 2 INFO nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.553 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.701 2 DEBUG nova.policy [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a80f833255046e7b62d34c1c6066073', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39ca581fbb054c959d26096ca39fef05', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.813 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.815 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:13:19 compute-0 nova_compute[256940]: 2025-10-02 12:13:19.816 2 INFO nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Creating image(s)
Oct 02 12:13:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/932998566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.144 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.199 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.248 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.257 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.345 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.346 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.347 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.348 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.394 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.400 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 02 12:13:20 compute-0 nova_compute[256940]: 2025-10-02 12:13:20.709 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Successfully created port: 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:21 compute-0 ceph-mon[73668]: pgmap v1001: 305 pgs: 305 active+clean; 171 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 191 op/s
Oct 02 12:13:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3914675815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1702175459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.125 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.379 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] resizing rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.530 2 DEBUG nova.objects.instance [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'migration_context' on Instance uuid 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.553 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.553 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Ensure instance console log exists: /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.554 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.554 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.554 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:21.728 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:21.730 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.732 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Successfully updated port: 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.790 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.791 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquired lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.791 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:21 compute-0 ovn_controller[148123]: 2025-10-02T12:13:21Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:13:21 compute-0 ovn_controller[148123]: 2025-10-02T12:13:21Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.977 2 DEBUG nova.compute.manager [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-changed-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.978 2 DEBUG nova.compute.manager [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Refreshing instance network info cache due to event network-changed-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:21 compute-0 nova_compute[256940]: 2025-10-02 12:13:21.978 2 DEBUG oslo_concurrency.lockutils [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:22 compute-0 nova_compute[256940]: 2025-10-02 12:13:22.033 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 220 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:13:22 compute-0 ceph-mon[73668]: pgmap v1002: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 02 12:13:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2915731104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:22 compute-0 nova_compute[256940]: 2025-10-02 12:13:22.915 2 DEBUG nova.network.neutron [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updating instance_info_cache with network_info: [{"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.037 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Releasing lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.039 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Instance network_info: |[{"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.040 2 DEBUG oslo_concurrency.lockutils [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.041 2 DEBUG nova.network.neutron [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Refreshing network info cache for port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.046 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Start _get_guest_xml network_info=[{"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.055 2 WARNING nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.061 2 DEBUG nova.virt.libvirt.host [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.062 2 DEBUG nova.virt.libvirt.host [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.066 2 DEBUG nova.virt.libvirt.host [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.067 2 DEBUG nova.virt.libvirt.host [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.068 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.069 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.070 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.070 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.071 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.071 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.072 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.072 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.073 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.073 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.074 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.074 2 DEBUG nova.virt.hardware [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.079 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.005000131s ======
Oct 02 12:13:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000131s
Oct 02 12:13:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997894256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.554 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.597 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:23 compute-0 nova_compute[256940]: 2025-10-02 12:13:23.603 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:24 compute-0 ceph-mon[73668]: pgmap v1003: 305 pgs: 305 active+clean; 220 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:13:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2071537996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2997894256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950564086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.225 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.228 2 DEBUG nova.virt.libvirt.vif [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-434032293',display_name='tempest-ServersAdminTestJSON-server-434032293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-434032293',id=15,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-ut4y1d90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:19Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=38f4ff47-ece3-4bba-ad08-bb8e4d1391a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.229 2 DEBUG nova.network.os_vif_util [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.231 2 DEBUG nova.network.os_vif_util [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.233 2 DEBUG nova.objects.instance [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.260 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <uuid>38f4ff47-ece3-4bba-ad08-bb8e4d1391a4</uuid>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <name>instance-0000000f</name>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersAdminTestJSON-server-434032293</nova:name>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:13:23</nova:creationTime>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:user uuid="7a80f833255046e7b62d34c1c6066073">tempest-ServersAdminTestJSON-1879159697-project-member</nova:user>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:project uuid="39ca581fbb054c959d26096ca39fef05">tempest-ServersAdminTestJSON-1879159697</nova:project>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <nova:port uuid="7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3">
Oct 02 12:13:24 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <system>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="serial">38f4ff47-ece3-4bba-ad08-bb8e4d1391a4</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="uuid">38f4ff47-ece3-4bba-ad08-bb8e4d1391a4</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </system>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <os>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </os>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <features>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </features>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:13:24 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk">
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </source>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config">
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </source>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:13:24 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:13:a3:84"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <target dev="tap7f08a0f7-6a"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/console.log" append="off"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <video>
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </video>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:13:24 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:13:24 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:13:24 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:13:24 compute-0 nova_compute[256940]: </domain>
Oct 02 12:13:24 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.262 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Preparing to wait for external event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.263 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.264 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.264 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.266 2 DEBUG nova.virt.libvirt.vif [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-434032293',display_name='tempest-ServersAdminTestJSON-server-434032293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-434032293',id=15,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-ut4y1d90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:19Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=38f4ff47-ece3-4bba-ad08-bb8e4d1391a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.266 2 DEBUG nova.network.os_vif_util [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.268 2 DEBUG nova.network.os_vif_util [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.268 2 DEBUG os_vif [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f08a0f7-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f08a0f7-6a, col_values=(('external_ids', {'iface-id': '7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:a3:84', 'vm-uuid': '38f4ff47-ece3-4bba-ad08-bb8e4d1391a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:24 compute-0 NetworkManager[44981]: <info>  [1759407204.2851] manager: (tap7f08a0f7-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.299 2 INFO os_vif [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a')
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.308 2 DEBUG nova.network.neutron [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updated VIF entry in instance network info cache for port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.310 2 DEBUG nova.network.neutron [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updating instance_info_cache with network_info: [{"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.351 2 DEBUG oslo_concurrency.lockutils [req-3cd2228f-ae9f-4395-99ca-af71a97cb47e req-3b3f2172-aa86-45c8-8e55-0aded6eb252a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.644 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.645 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.645 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No VIF found with MAC fa:16:3e:13:a3:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.646 2 INFO nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Using config drive
Oct 02 12:13:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 292 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 226 op/s
Oct 02 12:13:24 compute-0 nova_compute[256940]: 2025-10-02 12:13:24.685 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:24.734 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:25 compute-0 nova_compute[256940]: 2025-10-02 12:13:25.117 2 INFO nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Creating config drive at /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config
Oct 02 12:13:25 compute-0 nova_compute[256940]: 2025-10-02 12:13:25.125 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp65it0zga execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3950564086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:25.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:25 compute-0 nova_compute[256940]: 2025-10-02 12:13:25.271 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp65it0zga" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:25 compute-0 nova_compute[256940]: 2025-10-02 12:13:25.584 2 DEBUG nova.storage.rbd_utils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:25 compute-0 nova_compute[256940]: 2025-10-02 12:13:25.590 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:26 compute-0 sudo[269499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:26 compute-0 sudo[269499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:26 compute-0 sudo[269499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:26 compute-0 sudo[269533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:26 compute-0 sudo[269533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:26 compute-0 sudo[269533]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:26 compute-0 podman[269526]: 2025-10-02 12:13:26.169083114 +0000 UTC m=+0.103832993 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:13:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:26.450 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:26.450 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:26.451 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:26 compute-0 ceph-mon[73668]: pgmap v1004: 305 pgs: 305 active+clean; 292 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 226 op/s
Oct 02 12:13:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 318 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 9.7 MiB/s wr, 238 op/s
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.196 2 DEBUG oslo_concurrency.processutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.197 2 INFO nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Deleting local config drive /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4/disk.config because it was imported into RBD.
Oct 02 12:13:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:27.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:27 compute-0 kernel: tap7f08a0f7-6a: entered promiscuous mode
Oct 02 12:13:27 compute-0 NetworkManager[44981]: <info>  [1759407207.2857] manager: (tap7f08a0f7-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 02 12:13:27 compute-0 ovn_controller[148123]: 2025-10-02T12:13:27Z|00062|binding|INFO|Claiming lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 for this chassis.
Oct 02 12:13:27 compute-0 ovn_controller[148123]: 2025-10-02T12:13:27Z|00063|binding|INFO|7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3: Claiming fa:16:3e:13:a3:84 10.100.0.14
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.297 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:a3:84 10.100.0.14'], port_security=['fa:16:3e:13:a3:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '38f4ff47-ece3-4bba-ad08-bb8e4d1391a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.300 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f bound to our chassis
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.305 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:27 compute-0 ovn_controller[148123]: 2025-10-02T12:13:27Z|00064|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 ovn-installed in OVS
Oct 02 12:13:27 compute-0 ovn_controller[148123]: 2025-10-02T12:13:27Z|00065|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 up in Southbound
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.336 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5040a7-7f63-44fd-8f27-9dcc71b2ace5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 systemd-machined[210927]: New machine qemu-5-instance-0000000f.
Oct 02 12:13:27 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000f.
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.386 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[658568aa-9662-4b99-9942-bb66298ff9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.391 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0082625b-f195-4317-8645-a947de482033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 systemd-udevd[269602]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:27 compute-0 NetworkManager[44981]: <info>  [1759407207.4208] device (tap7f08a0f7-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:27 compute-0 NetworkManager[44981]: <info>  [1759407207.4222] device (tap7f08a0f7-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.436 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c48de145-307d-416d-9a3f-9409d8ce0c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 podman[269584]: 2025-10-02 12:13:27.463884699 +0000 UTC m=+0.112003848 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.465 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[183342da-4f14-4f49-95e7-bb31c2cbf948]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 6, 'rx_bytes': 874, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 6, 'rx_bytes': 874, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 31338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269617, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.497 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fc0452-4a14-492b-b2f7-a5158a4cbf35]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269619, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269619, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.501 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.506 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.507 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.508 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:13:27.508 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.584 2 DEBUG nova.compute.manager [req-ba60885a-6f46-409a-9a69-1a10d5084509 req-a6efc8b2-c118-47c5-a564-df40d9c8bcdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.585 2 DEBUG oslo_concurrency.lockutils [req-ba60885a-6f46-409a-9a69-1a10d5084509 req-a6efc8b2-c118-47c5-a564-df40d9c8bcdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.586 2 DEBUG oslo_concurrency.lockutils [req-ba60885a-6f46-409a-9a69-1a10d5084509 req-a6efc8b2-c118-47c5-a564-df40d9c8bcdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.587 2 DEBUG oslo_concurrency.lockutils [req-ba60885a-6f46-409a-9a69-1a10d5084509 req-a6efc8b2-c118-47c5-a564-df40d9c8bcdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:27 compute-0 nova_compute[256940]: 2025-10-02 12:13:27.587 2 DEBUG nova.compute.manager [req-ba60885a-6f46-409a-9a69-1a10d5084509 req-a6efc8b2-c118-47c5-a564-df40d9c8bcdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Processing event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:13:28 compute-0 ceph-mon[73668]: pgmap v1005: 305 pgs: 305 active+clean; 318 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 9.7 MiB/s wr, 238 op/s
Oct 02 12:13:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:13:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 331 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 1014 KiB/s rd, 8.6 MiB/s wr, 221 op/s
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:13:28
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'backups']
Oct 02 12:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.936 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.938 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407208.9377959, 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.939 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] VM Started (Lifecycle Event)
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.944 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.948 2 INFO nova.virt.libvirt.driver [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Instance spawned successfully.
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.949 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.973 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.981 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.985 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.986 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.987 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.988 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.988 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:28 compute-0 nova_compute[256940]: 2025-10-02 12:13:28.989 2 DEBUG nova.virt.libvirt.driver [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.024 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.025 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407208.9384277, 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.025 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] VM Paused (Lifecycle Event)
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.054 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.060 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407208.9429379, 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.060 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] VM Resumed (Lifecycle Event)
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.067 2 INFO nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Took 9.25 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.068 2 DEBUG nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.081 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.085 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.123 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.153 2 INFO nova.compute.manager [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Took 11.08 seconds to build instance.
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.185 2 DEBUG oslo_concurrency.lockutils [None req-89d8628e-37de-4243-a555-99f7022d8025 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:29.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.867 2 DEBUG nova.compute.manager [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.868 2 DEBUG oslo_concurrency.lockutils [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.868 2 DEBUG oslo_concurrency.lockutils [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.869 2 DEBUG oslo_concurrency.lockutils [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.869 2 DEBUG nova.compute.manager [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] No waiting events found dispatching network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:29 compute-0 nova_compute[256940]: 2025-10-02 12:13:29.870 2 WARNING nova.compute.manager [req-b5a219e7-97fc-4a48-b1b4-c0e9450237ec req-ccf6d698-a865-4576-a339-ad45aecf1a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received unexpected event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 for instance with vm_state active and task_state None.
Oct 02 12:13:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:30 compute-0 ceph-mon[73668]: pgmap v1006: 305 pgs: 305 active+clean; 331 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 1014 KiB/s rd, 8.6 MiB/s wr, 221 op/s
Oct 02 12:13:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 337 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Oct 02 12:13:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:13:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:13:32 compute-0 ceph-mon[73668]: pgmap v1007: 305 pgs: 305 active+clean; 337 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Oct 02 12:13:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 355 op/s
Oct 02 12:13:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/812512747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:34 compute-0 nova_compute[256940]: 2025-10-02 12:13:34.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:34.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:34 compute-0 nova_compute[256940]: 2025-10-02 12:13:34.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 362 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 361 op/s
Oct 02 12:13:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:35.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:35 compute-0 ceph-mon[73668]: pgmap v1008: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 355 op/s
Oct 02 12:13:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3184662851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/56871212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2647583963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:36.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 02 12:13:36 compute-0 ceph-mon[73668]: pgmap v1009: 305 pgs: 305 active+clean; 362 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 361 op/s
Oct 02 12:13:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1645490694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/103446974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 374 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 322 op/s
Oct 02 12:13:36 compute-0 nova_compute[256940]: 2025-10-02 12:13:36.980 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:36 compute-0 nova_compute[256940]: 2025-10-02 12:13:36.980 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.004 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.005 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.005 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.152 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.153 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.153 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:13:37 compute-0 nova_compute[256940]: 2025-10-02 12:13:37.154 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 02 12:13:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 02 12:13:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:37.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:37 compute-0 ceph-mon[73668]: pgmap v1010: 305 pgs: 305 active+clean; 374 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 322 op/s
Oct 02 12:13:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3594061908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:37 compute-0 ceph-mon[73668]: osdmap e144: 3 total, 3 up, 3 in
Oct 02 12:13:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 386 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.2 MiB/s wr, 296 op/s
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:39.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:13:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5339 writes, 23K keys, 5339 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5339 writes, 5339 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1531 writes, 6772 keys, 1531 commit groups, 1.0 writes per commit group, ingest: 10.19 MB, 0.02 MB/s
                                           Interval WAL: 1531 writes, 1531 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.6      0.34              0.08        13    0.026       0      0       0.0       0.0
                                             L6      1/0    7.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7    131.6    109.4      0.92              0.33        12    0.076     55K   6357       0.0       0.0
                                            Sum      1/0    7.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     96.4    102.0      1.25              0.41        25    0.050     55K   6357       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9     74.6     73.9      0.79              0.20        12    0.065     29K   3048       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    131.6    109.4      0.92              0.33        12    0.076     55K   6357       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.4      0.33              0.08        12    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 1.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 10.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(593,9.68 MB,3.18278%) FilterBlock(26,161.55 KB,0.0518949%) IndexBlock(26,304.08 KB,0.0976813%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.312 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updating instance_info_cache with network_info: [{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.344 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-c704d1db-8e20-432e-900d-4f4267059d9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.345 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.345 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.346 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.346 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.346 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.347 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.347 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.347 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.348 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.371 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.372 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.372 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.372 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.373 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:39 compute-0 nova_compute[256940]: 2025-10-02 12:13:39.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3683659444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008310141564302997 of space, bias 1.0, pg target 2.493042469290899 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5672990240554625 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:13:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:13:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4182321119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.317 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.944s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.409 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.409 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.413 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.414 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 416 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.672 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.673 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4509MB free_disk=20.817787170410156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.673 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.673 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.753 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c704d1db-8e20-432e-900d-4f4267059d9f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.753 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.753 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.753 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:13:40 compute-0 nova_compute[256940]: 2025-10-02 12:13:40.822 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:40 compute-0 ceph-mon[73668]: pgmap v1012: 305 pgs: 305 active+clean; 386 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.2 MiB/s wr, 296 op/s
Oct 02 12:13:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4182321119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:41.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057945220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:41 compute-0 nova_compute[256940]: 2025-10-02 12:13:41.735 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.912s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:41 compute-0 nova_compute[256940]: 2025-10-02 12:13:41.746 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:41 compute-0 nova_compute[256940]: 2025-10-02 12:13:41.784 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:41 compute-0 nova_compute[256940]: 2025-10-02 12:13:41.818 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:13:41 compute-0 nova_compute[256940]: 2025-10-02 12:13:41.818 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:41 compute-0 sudo[269716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:41 compute-0 sudo[269716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:41 compute-0 sudo[269716]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:41 compute-0 sudo[269741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:41 compute-0 sudo[269741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:41 compute-0 sudo[269741]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:42 compute-0 sudo[269766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:42 compute-0 sudo[269766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:42 compute-0 sudo[269766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:42 compute-0 ceph-mon[73668]: pgmap v1013: 305 pgs: 305 active+clean; 416 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:13:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2859817929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2102508875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2057945220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-0 sudo[269791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:13:42 compute-0 sudo[269791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 463 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.4 MiB/s wr, 247 op/s
Oct 02 12:13:42 compute-0 sudo[269791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:13:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:13:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-0 nova_compute[256940]: 2025-10-02 12:13:44.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:44 compute-0 nova_compute[256940]: 2025-10-02 12:13:44.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 465 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.6 MiB/s wr, 206 op/s
Oct 02 12:13:44 compute-0 ceph-mon[73668]: pgmap v1014: 305 pgs: 305 active+clean; 463 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.4 MiB/s wr, 247 op/s
Oct 02 12:13:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:13:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:13:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:13:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:13:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev aa963ef2-8fba-4f3b-8778-b14bb35693ef does not exist
Oct 02 12:13:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5535e760-c10e-48e4-a227-b705a31d2bea does not exist
Oct 02 12:13:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7e86d485-6fe8-423a-9213-11bcf3fead80 does not exist
Oct 02 12:13:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:13:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:13:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:13:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:45 compute-0 sudo[269849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:45 compute-0 sudo[269849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:45 compute-0 sudo[269849]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:45 compute-0 sudo[269874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:45 compute-0 sudo[269874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:45 compute-0 sudo[269874]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:45.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:45 compute-0 sudo[269899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:45 compute-0 sudo[269899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:45 compute-0 sudo[269899]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:45 compute-0 sudo[269924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:13:45 compute-0 sudo[269924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:45 compute-0 podman[269948]: 2025-10-02 12:13:45.557480752 +0000 UTC m=+0.064500322 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:13:45 compute-0 podman[269966]: 2025-10-02 12:13:45.735851858 +0000 UTC m=+0.129900626 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:13:45 compute-0 ceph-mon[73668]: pgmap v1015: 305 pgs: 305 active+clean; 465 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.6 MiB/s wr, 206 op/s
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:13:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:45 compute-0 podman[270030]: 2025-10-02 12:13:45.847848545 +0000 UTC m=+0.026666231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:46 compute-0 podman[270030]: 2025-10-02 12:13:46.056460343 +0000 UTC m=+0.235278019 container create 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:13:46 compute-0 systemd[1]: Started libpod-conmon-0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53.scope.
Oct 02 12:13:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:46 compute-0 sudo[270044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:46 compute-0 sudo[270044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:46 compute-0 sudo[270044]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:46 compute-0 sudo[270074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:46 compute-0 sudo[270074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:46 compute-0 sudo[270074]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:46 compute-0 podman[270030]: 2025-10-02 12:13:46.61631118 +0000 UTC m=+0.795128896 container init 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:13:46 compute-0 podman[270030]: 2025-10-02 12:13:46.630180803 +0000 UTC m=+0.808998429 container start 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:46 compute-0 vigorous_mendel[270069]: 167 167
Oct 02 12:13:46 compute-0 systemd[1]: libpod-0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53.scope: Deactivated successfully.
Oct 02 12:13:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 462 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.0 MiB/s wr, 245 op/s
Oct 02 12:13:46 compute-0 ovn_controller[148123]: 2025-10-02T12:13:46Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:a3:84 10.100.0.14
Oct 02 12:13:46 compute-0 ovn_controller[148123]: 2025-10-02T12:13:46Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:a3:84 10.100.0.14
Oct 02 12:13:46 compute-0 podman[270030]: 2025-10-02 12:13:46.798329162 +0000 UTC m=+0.977146888 container attach 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:13:46 compute-0 podman[270030]: 2025-10-02 12:13:46.798852145 +0000 UTC m=+0.977669821 container died 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2f3eb531c4cbf06853faafeac658344dba81cfbd9d66f3e879a6eb7b53a34b-merged.mount: Deactivated successfully.
Oct 02 12:13:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:47.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 02 12:13:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 02 12:13:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 02 12:13:47 compute-0 podman[270030]: 2025-10-02 12:13:47.721447952 +0000 UTC m=+1.900265588 container remove 0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendel, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:13:47 compute-0 systemd[1]: libpod-conmon-0f28df4da663342f04d2fe0677cef2e5bb024277a244659c76d3cf6421339f53.scope: Deactivated successfully.
Oct 02 12:13:47 compute-0 podman[270121]: 2025-10-02 12:13:47.956224217 +0000 UTC m=+0.063310280 container create 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:13:48 compute-0 systemd[1]: Started libpod-conmon-1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf.scope.
Oct 02 12:13:48 compute-0 podman[270121]: 2025-10-02 12:13:47.927176456 +0000 UTC m=+0.034262609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:48 compute-0 podman[270121]: 2025-10-02 12:13:48.062994697 +0000 UTC m=+0.170080800 container init 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:13:48 compute-0 podman[270121]: 2025-10-02 12:13:48.071078658 +0000 UTC m=+0.178164721 container start 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:13:48 compute-0 podman[270121]: 2025-10-02 12:13:48.076339006 +0000 UTC m=+0.183425109 container attach 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:13:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:48.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:48 compute-0 ceph-mon[73668]: pgmap v1016: 305 pgs: 305 active+clean; 462 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.0 MiB/s wr, 245 op/s
Oct 02 12:13:48 compute-0 ceph-mon[73668]: osdmap e145: 3 total, 3 up, 3 in
Oct 02 12:13:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 471 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.7 MiB/s wr, 319 op/s
Oct 02 12:13:48 compute-0 adoring_wu[270137]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:13:48 compute-0 adoring_wu[270137]: --> relative data size: 1.0
Oct 02 12:13:48 compute-0 adoring_wu[270137]: --> All data devices are unavailable
Oct 02 12:13:48 compute-0 systemd[1]: libpod-1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf.scope: Deactivated successfully.
Oct 02 12:13:48 compute-0 podman[270121]: 2025-10-02 12:13:48.940026659 +0000 UTC m=+1.047112762 container died 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2253ef8cefd0e509f70893a1cec7925e47e39ef10dcff74ce19976b0892a22d-merged.mount: Deactivated successfully.
Oct 02 12:13:49 compute-0 podman[270121]: 2025-10-02 12:13:49.056204545 +0000 UTC m=+1.163290618 container remove 1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:13:49 compute-0 nova_compute[256940]: 2025-10-02 12:13:49.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:49 compute-0 systemd[1]: libpod-conmon-1ead47e0c85ae500242383ddf86d610498664bd5e39da73514cdf7729d451ccf.scope: Deactivated successfully.
Oct 02 12:13:49 compute-0 sudo[269924]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:49 compute-0 sudo[270165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:49 compute-0 sudo[270165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:49 compute-0 sudo[270165]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:49 compute-0 sudo[270190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:49 compute-0 sudo[270190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:49 compute-0 sudo[270190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:49.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:49 compute-0 sudo[270215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:49 compute-0 sudo[270215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:49 compute-0 sudo[270215]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:49 compute-0 nova_compute[256940]: 2025-10-02 12:13:49.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:49 compute-0 sudo[270240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:13:49 compute-0 sudo[270240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2261448251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:49 compute-0 podman[270306]: 2025-10-02 12:13:49.989714897 +0000 UTC m=+0.137159887 container create 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:13:49 compute-0 podman[270306]: 2025-10-02 12:13:49.89751436 +0000 UTC m=+0.044959370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:50 compute-0 systemd[1]: Started libpod-conmon-5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80.scope.
Oct 02 12:13:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:50 compute-0 podman[270306]: 2025-10-02 12:13:50.295997627 +0000 UTC m=+0.443442697 container init 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:50 compute-0 podman[270306]: 2025-10-02 12:13:50.311091193 +0000 UTC m=+0.458536223 container start 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:13:50 compute-0 great_meitner[270322]: 167 167
Oct 02 12:13:50 compute-0 systemd[1]: libpod-5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80.scope: Deactivated successfully.
Oct 02 12:13:50 compute-0 conmon[270322]: conmon 5de7be959dfcdf61fc63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80.scope/container/memory.events
Oct 02 12:13:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:50.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:50 compute-0 podman[270306]: 2025-10-02 12:13:50.439727965 +0000 UTC m=+0.587173035 container attach 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:13:50 compute-0 podman[270306]: 2025-10-02 12:13:50.441870981 +0000 UTC m=+0.589315981 container died 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:13:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 481 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:13:51 compute-0 ceph-mon[73668]: pgmap v1018: 305 pgs: 305 active+clean; 471 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.7 MiB/s wr, 319 op/s
Oct 02 12:13:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:51.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f94de0aff099e90a5ceac5dec6d1cdfa5a74efed29af57a00ff01af66aefd4-merged.mount: Deactivated successfully.
Oct 02 12:13:52 compute-0 podman[270306]: 2025-10-02 12:13:52.088320895 +0000 UTC m=+2.235765925 container remove 5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:13:52 compute-0 systemd[1]: libpod-conmon-5de7be959dfcdf61fc63fd4a496b574839da3d06585c11e7c5cfff9c926cff80.scope: Deactivated successfully.
Oct 02 12:13:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:52 compute-0 podman[270351]: 2025-10-02 12:13:52.334725325 +0000 UTC m=+0.044976841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:52 compute-0 ceph-mon[73668]: pgmap v1019: 305 pgs: 305 active+clean; 481 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:13:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3601294168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:52 compute-0 podman[270351]: 2025-10-02 12:13:52.656252494 +0000 UTC m=+0.366504010 container create 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:13:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 484 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:13:52 compute-0 systemd[1]: Started libpod-conmon-4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6.scope.
Oct 02 12:13:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d854ee99806b9aa8c8b638da263f3f5134cde2c9f75cf1a4ff700b7543a0cde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d854ee99806b9aa8c8b638da263f3f5134cde2c9f75cf1a4ff700b7543a0cde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d854ee99806b9aa8c8b638da263f3f5134cde2c9f75cf1a4ff700b7543a0cde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d854ee99806b9aa8c8b638da263f3f5134cde2c9f75cf1a4ff700b7543a0cde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:52 compute-0 podman[270351]: 2025-10-02 12:13:52.935838754 +0000 UTC m=+0.646090270 container init 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:13:52 compute-0 podman[270351]: 2025-10-02 12:13:52.951907025 +0000 UTC m=+0.662158531 container start 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:53 compute-0 podman[270351]: 2025-10-02 12:13:53.064229049 +0000 UTC m=+0.774480545 container attach 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:13:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:53.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3097560303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:53 compute-0 friendly_wing[270369]: {
Oct 02 12:13:53 compute-0 friendly_wing[270369]:     "1": [
Oct 02 12:13:53 compute-0 friendly_wing[270369]:         {
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "devices": [
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "/dev/loop3"
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             ],
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "lv_name": "ceph_lv0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "lv_size": "7511998464",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "name": "ceph_lv0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "tags": {
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.cluster_name": "ceph",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.crush_device_class": "",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.encrypted": "0",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.osd_id": "1",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.type": "block",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:                 "ceph.vdo": "0"
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             },
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "type": "block",
Oct 02 12:13:53 compute-0 friendly_wing[270369]:             "vg_name": "ceph_vg0"
Oct 02 12:13:53 compute-0 friendly_wing[270369]:         }
Oct 02 12:13:53 compute-0 friendly_wing[270369]:     ]
Oct 02 12:13:53 compute-0 friendly_wing[270369]: }
Oct 02 12:13:53 compute-0 systemd[1]: libpod-4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6.scope: Deactivated successfully.
Oct 02 12:13:53 compute-0 podman[270351]: 2025-10-02 12:13:53.865290789 +0000 UTC m=+1.575542295 container died 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d854ee99806b9aa8c8b638da263f3f5134cde2c9f75cf1a4ff700b7543a0cde-merged.mount: Deactivated successfully.
Oct 02 12:13:54 compute-0 nova_compute[256940]: 2025-10-02 12:13:54.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:54 compute-0 podman[270351]: 2025-10-02 12:13:54.187445775 +0000 UTC m=+1.897697241 container remove 4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:13:54 compute-0 systemd[1]: libpod-conmon-4663bff89f9e6d5844faf3c72ae9e119f2a97b16fc0b72b7641b95eaf81bcfa6.scope: Deactivated successfully.
Oct 02 12:13:54 compute-0 sudo[270240]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:54 compute-0 sudo[270393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:54 compute-0 sudo[270393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:54 compute-0 sudo[270393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:54.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:54 compute-0 nova_compute[256940]: 2025-10-02 12:13:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:54 compute-0 sudo[270418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:54 compute-0 sudo[270418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:54 compute-0 sudo[270418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:54 compute-0 sudo[270444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:54 compute-0 sudo[270444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:54 compute-0 sudo[270444]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:54 compute-0 sudo[270469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:13:54 compute-0 sudo[270469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 441 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 247 op/s
Oct 02 12:13:54 compute-0 ceph-mon[73668]: pgmap v1020: 305 pgs: 305 active+clean; 484 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.089583346 +0000 UTC m=+0.031360133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.227226154 +0000 UTC m=+0.169002911 container create c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:13:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:55 compute-0 systemd[1]: Started libpod-conmon-c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861.scope.
Oct 02 12:13:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.718522334 +0000 UTC m=+0.660299171 container init c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.731079323 +0000 UTC m=+0.672856100 container start c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:13:55 compute-0 sweet_burnell[270552]: 167 167
Oct 02 12:13:55 compute-0 systemd[1]: libpod-c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861.scope: Deactivated successfully.
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.884957438 +0000 UTC m=+0.826734225 container attach c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:13:55 compute-0 podman[270535]: 2025-10-02 12:13:55.886660422 +0000 UTC m=+0.828437209 container died c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:13:55 compute-0 ceph-mon[73668]: pgmap v1021: 305 pgs: 305 active+clean; 441 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 247 op/s
Oct 02 12:13:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/513098378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 425 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Oct 02 12:13:56 compute-0 nova_compute[256940]: 2025-10-02 12:13:56.696 2 INFO nova.compute.manager [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Rebuilding instance
Oct 02 12:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-17399ee1ce6d325431361e2b6db9aad4650376333768d144be0f1d508961d2f6-merged.mount: Deactivated successfully.
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.075 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.093 2 DEBUG nova.compute.manager [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.139 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_requests' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.150 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_devices' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.160 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.170 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'migration_context' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.178 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:13:57 compute-0 nova_compute[256940]: 2025-10-02 12:13:57.195 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:13:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:13:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:57.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:13:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:58 compute-0 podman[270535]: 2025-10-02 12:13:58.11842879 +0000 UTC m=+3.060205537 container remove c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:13:58 compute-0 podman[270567]: 2025-10-02 12:13:58.213420851 +0000 UTC m=+1.876787203 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:13:58 compute-0 systemd[1]: libpod-conmon-c99afbb1ab4713b6e3331aa4ad489d5bb40beacf34e8c8c8ca2280f1c6637861.scope: Deactivated successfully.
Oct 02 12:13:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:58 compute-0 podman[270591]: 2025-10-02 12:13:58.408820093 +0000 UTC m=+0.137101975 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:13:58 compute-0 podman[270609]: 2025-10-02 12:13:58.390839652 +0000 UTC m=+0.042117315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:13:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 405 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 131 KiB/s wr, 137 op/s
Oct 02 12:13:58 compute-0 ceph-mon[73668]: pgmap v1022: 305 pgs: 305 active+clean; 425 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Oct 02 12:13:59 compute-0 podman[270609]: 2025-10-02 12:13:59.040461533 +0000 UTC m=+0.691739186 container create 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:13:59 compute-0 nova_compute[256940]: 2025-10-02 12:13:59.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:13:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:59.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:59 compute-0 systemd[1]: Started libpod-conmon-4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233.scope.
Oct 02 12:13:59 compute-0 nova_compute[256940]: 2025-10-02 12:13:59.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036a8c3dd5a7370f5d51738a1f26d0e5adf4825d29d75fd20d7db1bbc42f2761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036a8c3dd5a7370f5d51738a1f26d0e5adf4825d29d75fd20d7db1bbc42f2761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036a8c3dd5a7370f5d51738a1f26d0e5adf4825d29d75fd20d7db1bbc42f2761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036a8c3dd5a7370f5d51738a1f26d0e5adf4825d29d75fd20d7db1bbc42f2761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:59 compute-0 podman[270609]: 2025-10-02 12:13:59.788899524 +0000 UTC m=+1.440177177 container init 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:13:59 compute-0 podman[270609]: 2025-10-02 12:13:59.80286176 +0000 UTC m=+1.454139413 container start 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:14:00 compute-0 podman[270609]: 2025-10-02 12:14:00.011620443 +0000 UTC m=+1.662898136 container attach 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:14:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:00.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:00 compute-0 ceph-mon[73668]: pgmap v1023: 305 pgs: 305 active+clean; 405 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 131 KiB/s wr, 137 op/s
Oct 02 12:14:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 417 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Oct 02 12:14:00 compute-0 nice_haslett[270633]: {
Oct 02 12:14:00 compute-0 nice_haslett[270633]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:14:00 compute-0 nice_haslett[270633]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:14:00 compute-0 nice_haslett[270633]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:14:00 compute-0 nice_haslett[270633]:         "osd_id": 1,
Oct 02 12:14:00 compute-0 nice_haslett[270633]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:14:00 compute-0 nice_haslett[270633]:         "type": "bluestore"
Oct 02 12:14:00 compute-0 nice_haslett[270633]:     }
Oct 02 12:14:00 compute-0 nice_haslett[270633]: }
Oct 02 12:14:00 compute-0 systemd[1]: libpod-4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233.scope: Deactivated successfully.
Oct 02 12:14:00 compute-0 systemd[1]: libpod-4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233.scope: Consumed 1.045s CPU time.
Oct 02 12:14:00 compute-0 podman[270655]: 2025-10-02 12:14:00.915859168 +0000 UTC m=+0.032115263 container died 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:14:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:01 compute-0 kernel: tap2747f0c7-f3 (unregistering): left promiscuous mode
Oct 02 12:14:01 compute-0 NetworkManager[44981]: <info>  [1759407241.4550] device (tap2747f0c7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 ovn_controller[148123]: 2025-10-02T12:14:01Z|00066|binding|INFO|Releasing lport 2747f0c7-f3c3-4402-aedd-266862caf435 from this chassis (sb_readonly=0)
Oct 02 12:14:01 compute-0 ovn_controller[148123]: 2025-10-02T12:14:01Z|00067|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 down in Southbound
Oct 02 12:14:01 compute-0 ovn_controller[148123]: 2025-10-02T12:14:01Z|00068|binding|INFO|Removing iface tap2747f0c7-f3 ovn-installed in OVS
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.482 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.485 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.489 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.516 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7ac93b-2112-4937-a771-cb1e4531b6f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 12:14:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 16.565s CPU time.
Oct 02 12:14:01 compute-0 systemd-machined[210927]: Machine qemu-4-instance-0000000b terminated.
Oct 02 12:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-036a8c3dd5a7370f5d51738a1f26d0e5adf4825d29d75fd20d7db1bbc42f2761-merged.mount: Deactivated successfully.
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.553 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dd078c0b-0680-4cf1-bb1d-c44073bf9a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.556 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6d21c655-9059-4155-a9d9-c60b5709ca62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.588 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d661dfaa-33ba-41f0-ba77-9b4d31c170fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.611 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f66729-25cc-414d-b851-2a16527dbc43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270681, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.636 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9b079b-6f63-4bb2-adc3-8d6f55741a13]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270682, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270682, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.638 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.643 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:01.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:01 compute-0 ceph-mon[73668]: pgmap v1024: 305 pgs: 305 active+clean; 417 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.882 2 DEBUG nova.compute.manager [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.884 2 DEBUG oslo_concurrency.lockutils [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.884 2 DEBUG oslo_concurrency.lockutils [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.885 2 DEBUG oslo_concurrency.lockutils [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.885 2 DEBUG nova.compute.manager [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:01 compute-0 nova_compute[256940]: 2025-10-02 12:14:01.886 2 WARNING nova.compute.manager [req-af6a1b05-69c3-4459-abc9-30aae288fe5d req-becde07c-74bc-4422-bbc4-8526f1ebacf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state error and task_state rebuilding.
Oct 02 12:14:02 compute-0 podman[270655]: 2025-10-02 12:14:02.114487861 +0000 UTC m=+1.230743926 container remove 4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:14:02 compute-0 systemd[1]: libpod-conmon-4fac9d8f7b54ee3f616db375c6ed383e8ca356557e7964c1fdc1541affc3a233.scope: Deactivated successfully.
Oct 02 12:14:02 compute-0 sudo[270469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.231 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance shutdown successfully after 5 seconds.
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.239 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance destroyed successfully.
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.245 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance destroyed successfully.
Oct 02 12:14:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.246 2 DEBUG nova.virt.libvirt.vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:55Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:14:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.247 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.248 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.248 2 DEBUG os_vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2747f0c7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:02 compute-0 nova_compute[256940]: 2025-10-02 12:14:02.257 2 INFO os_vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:14:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:02.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0a76d2e6-13fa-4f3f-bbf6-6ab03a947999 does not exist
Oct 02 12:14:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1488246d-957e-4536-ba79-c049756e28b7 does not exist
Oct 02 12:14:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 74eaa033-fc5b-4139-86d0-52cf5b21db3b does not exist
Oct 02 12:14:02 compute-0 sudo[270714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:02 compute-0 sudo[270714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:02 compute-0 sudo[270714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:02 compute-0 sudo[270739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:14:02 compute-0 sudo[270739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:02 compute-0 sudo[270739]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 430 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:14:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:03 compute-0 nova_compute[256940]: 2025-10-02 12:14:03.338 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting instance files /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del
Oct 02 12:14:03 compute-0 nova_compute[256940]: 2025-10-02 12:14:03.340 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deletion of /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del complete
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.190 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.191 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating image(s)
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.227 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.266 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.305 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.309 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.310 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.316 2 DEBUG nova.compute.manager [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.316 2 DEBUG oslo_concurrency.lockutils [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.317 2 DEBUG oslo_concurrency.lockutils [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.317 2 DEBUG oslo_concurrency.lockutils [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.317 2 DEBUG nova.compute.manager [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.317 2 WARNING nova.compute.manager [req-ab382dd3-b2d7-4bbe-a750-17f3ee47eb53 req-4f867a46-bccb-4514-a974-2fca77498ab7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state error and task_state rebuild_spawning.
Oct 02 12:14:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:04 compute-0 ceph-mon[73668]: pgmap v1025: 305 pgs: 305 active+clean; 430 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:14:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 408 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.715 2 DEBUG nova.virt.libvirt.imagebackend [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:14:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:04.846 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:04 compute-0 nova_compute[256940]: 2025-10-02 12:14:04.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:04.847 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:14:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:14:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:14:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:14:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:14:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:05.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:14:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:14:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:06 compute-0 sudo[270821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:06 compute-0 sudo[270821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:06 compute-0 sudo[270821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:06 compute-0 sudo[270846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:06 compute-0 sudo[270846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:06 compute-0 sudo[270846]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:06 compute-0 ceph-mon[73668]: pgmap v1026: 305 pgs: 305 active+clean; 408 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 12:14:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 382 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 12:14:06 compute-0 nova_compute[256940]: 2025-10-02 12:14:06.709 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:06 compute-0 nova_compute[256940]: 2025-10-02 12:14:06.811 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:06 compute-0 nova_compute[256940]: 2025-10-02 12:14:06.814 2 DEBUG nova.virt.images [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] 52ef509e-0e22-464e-93c9-3ddcf574cd64 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:14:06 compute-0 nova_compute[256940]: 2025-10-02 12:14:06.856 2 DEBUG nova.privsep.utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:14:06 compute-0 nova_compute[256940]: 2025-10-02 12:14:06.857 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:07 compute-0 nova_compute[256940]: 2025-10-02 12:14:07.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:07.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:07 compute-0 ceph-mon[73668]: pgmap v1027: 305 pgs: 305 active+clean; 382 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 12:14:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/333674804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:07.850 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:08.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.537 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted" returned: 0 in 1.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.541 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.625 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.627 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.666 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:08 compute-0 nova_compute[256940]: 2025-10-02 12:14:08.673 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 358 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.053 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.133 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] resizing rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.245 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.246 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Ensure instance console log exists: /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.247 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.248 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.248 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.252 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start _get_guest_xml network_info=[{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.257 2 WARNING nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.263 2 DEBUG nova.virt.libvirt.host [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.264 2 DEBUG nova.virt.libvirt.host [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.268 2 DEBUG nova.virt.libvirt.host [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.269 2 DEBUG nova.virt.libvirt.host [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.270 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.270 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.271 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.271 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.272 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.272 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.273 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.273 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.274 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.274 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.274 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.275 2 DEBUG nova.virt.hardware [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.275 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.487 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921988349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.917 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.952 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:09 compute-0 nova_compute[256940]: 2025-10-02 12:14:09.958 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:10.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:10 compute-0 ceph-mon[73668]: pgmap v1028: 305 pgs: 305 active+clean; 358 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 12:14:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3576362853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1635128892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1921988349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250033867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.421 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.425 2 DEBUG nova.virt.libvirt.vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:03Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.426 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.428 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.433 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <uuid>c704d1db-8e20-432e-900d-4f4267059d9f</uuid>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <name>instance-0000000b</name>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersAdminTestJSON-server-1406080749</nova:name>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:14:09</nova:creationTime>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:user uuid="7a80f833255046e7b62d34c1c6066073">tempest-ServersAdminTestJSON-1879159697-project-member</nova:user>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:project uuid="39ca581fbb054c959d26096ca39fef05">tempest-ServersAdminTestJSON-1879159697</nova:project>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <nova:port uuid="2747f0c7-f3c3-4402-aedd-266862caf435">
Oct 02 12:14:10 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <system>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="serial">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="uuid">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </system>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <os>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </os>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <features>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </features>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk">
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk.config">
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:87:d3:15"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <target dev="tap2747f0c7-f3"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log" append="off"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <video>
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </video>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:14:10 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:14:10 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:14:10 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:14:10 compute-0 nova_compute[256940]: </domain>
Oct 02 12:14:10 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.435 2 DEBUG nova.virt.libvirt.vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:03Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.436 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.437 2 DEBUG nova.network.os_vif_util [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.438 2 DEBUG os_vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2747f0c7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2747f0c7-f3, col_values=(('external_ids', {'iface-id': '2747f0c7-f3c3-4402-aedd-266862caf435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:d3:15', 'vm-uuid': 'c704d1db-8e20-432e-900d-4f4267059d9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:10 compute-0 NetworkManager[44981]: <info>  [1759407250.4485] manager: (tap2747f0c7-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:10 compute-0 nova_compute[256940]: 2025-10-02 12:14:10.454 2 INFO os_vif [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:14:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 401 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 130 op/s
Oct 02 12:14:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:11.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.454 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.454 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.455 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No VIF found with MAC fa:16:3e:87:d3:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.455 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Using config drive
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.486 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2250033867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.514 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:11 compute-0 nova_compute[256940]: 2025-10-02 12:14:11.633 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'keypairs' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:12 compute-0 ceph-mon[73668]: pgmap v1029: 305 pgs: 305 active+clean; 401 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 130 op/s
Oct 02 12:14:12 compute-0 nova_compute[256940]: 2025-10-02 12:14:12.625 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating config drive at /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config
Oct 02 12:14:12 compute-0 nova_compute[256940]: 2025-10-02 12:14:12.632 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnlccde8s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 451 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 142 op/s
Oct 02 12:14:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:12 compute-0 nova_compute[256940]: 2025-10-02 12:14:12.766 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnlccde8s" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:12 compute-0 nova_compute[256940]: 2025-10-02 12:14:12.806 2 DEBUG nova.storage.rbd_utils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:12 compute-0 nova_compute[256940]: 2025-10-02 12:14:12.811 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.013 2 DEBUG oslo_concurrency.processutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.015 2 INFO nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting local config drive /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config because it was imported into RBD.
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.064 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.065 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:13 compute-0 kernel: tap2747f0c7-f3: entered promiscuous mode
Oct 02 12:14:13 compute-0 NetworkManager[44981]: <info>  [1759407253.0821] manager: (tap2747f0c7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Oct 02 12:14:13 compute-0 ovn_controller[148123]: 2025-10-02T12:14:13Z|00069|binding|INFO|Claiming lport 2747f0c7-f3c3-4402-aedd-266862caf435 for this chassis.
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:13 compute-0 ovn_controller[148123]: 2025-10-02T12:14:13Z|00070|binding|INFO|2747f0c7-f3c3-4402-aedd-266862caf435: Claiming fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.102 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.104 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f bound to our chassis
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.107 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:14:13 compute-0 ovn_controller[148123]: 2025-10-02T12:14:13Z|00071|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 ovn-installed in OVS
Oct 02 12:14:13 compute-0 ovn_controller[148123]: 2025-10-02T12:14:13Z|00072|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 up in Southbound
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:13 compute-0 systemd-udevd[271128]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.113 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:13 compute-0 NetworkManager[44981]: <info>  [1759407253.1274] device (tap2747f0c7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:14:13 compute-0 NetworkManager[44981]: <info>  [1759407253.1325] device (tap2747f0c7-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:14:13 compute-0 systemd-machined[210927]: New machine qemu-6-instance-0000000b.
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.140 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcafe22-590e-41fd-803d-ea6edfb87db9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.181 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4195e38d-3e9a-4120-93a5-98ee4aeb2277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.184 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[028c65b7-c857-4771-828b-2c35300b86e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.216 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.217 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.229 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.229 2 INFO nova.compute.claims [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.234 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7adb6f1e-3ee8-4bb8-aa2b-7687970e3fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.262 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a36b8ec2-bc2f-4412-baea-b63ed28f7e25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271145, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.286 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9af259-0afb-4968-916d-b2ff030cbbef]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271146, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271146, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.294 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.299 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.299 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.300 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:13.301 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:13.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.411 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.443 2 DEBUG nova.compute.manager [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.444 2 DEBUG oslo_concurrency.lockutils [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.444 2 DEBUG oslo_concurrency.lockutils [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.444 2 DEBUG oslo_concurrency.lockutils [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.445 2 DEBUG nova.compute.manager [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.445 2 WARNING nova.compute.manager [req-dc028b95-20f3-4cea-a2f8-00cba7ce89ac req-926c3bde-3878-457f-88ff-194a65e75eec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state error and task_state rebuild_spawning.
Oct 02 12:14:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144243600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.873 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.880 2 DEBUG nova.compute.provider_tree [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.899 2 DEBUG nova.scheduler.client.report [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.928 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:13 compute-0 nova_compute[256940]: 2025-10-02 12:14:13.929 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.002 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.003 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.067 2 INFO nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.100 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.227 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for c704d1db-8e20-432e-900d-4f4267059d9f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.227 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407254.2264333, c704d1db-8e20-432e-900d-4f4267059d9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.228 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Resumed (Lifecycle Event)
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.231 2 DEBUG nova.compute.manager [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.231 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.239 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance spawned successfully.
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.239 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.278 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.285 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.288 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.288 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.289 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.290 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.290 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.291 2 DEBUG nova.virt.libvirt.driver [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.296 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.297 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.297 2 INFO nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Creating image(s)
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.331 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.369 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.409 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.415 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.441 2 DEBUG nova.policy [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b3b07fd71254446ca25c68ff383cbfaf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bbb79918378c4b758373ad37cb271097', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.449 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.450 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407254.2277477, c704d1db-8e20-432e-900d-4f4267059d9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.451 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Started (Lifecycle Event)
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.455 2 DEBUG nova.compute.manager [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.492 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.496 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.497 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.497 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.523 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.527 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.553 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.563 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.564 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.564 2 DEBUG nova.objects.instance [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.570 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:14 compute-0 ceph-mon[73668]: pgmap v1030: 305 pgs: 305 active+clean; 451 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 142 op/s
Oct 02 12:14:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3144243600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 462 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct 02 12:14:14 compute-0 nova_compute[256940]: 2025-10-02 12:14:14.684 2 DEBUG oslo_concurrency.lockutils [None req-f4af3e7b-8e13-407a-b6c6-b62672ff876a 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.218 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.322 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] resizing rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:14:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.393 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Successfully created port: 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.455 2 DEBUG nova.objects.instance [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9ad0a2-241c-412f-8c8e-487d18a310f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.523 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.526 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Ensure instance console log exists: /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.527 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.528 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.528 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.800 2 DEBUG nova.compute.manager [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.802 2 DEBUG oslo_concurrency.lockutils [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.803 2 DEBUG oslo_concurrency.lockutils [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.803 2 DEBUG oslo_concurrency.lockutils [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.804 2 DEBUG nova.compute.manager [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:15 compute-0 nova_compute[256940]: 2025-10-02 12:14:15.804 2 WARNING nova.compute.manager [req-f33a0283-c759-4c99-80a6-ffa6c60f02b8 req-02420c1a-e084-49c9-8948-fc8c5af40ffe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state None.
Oct 02 12:14:15 compute-0 ceph-mon[73668]: pgmap v1031: 305 pgs: 305 active+clean; 462 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct 02 12:14:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:16 compute-0 podman[271378]: 2025-10-02 12:14:16.405633809 +0000 UTC m=+0.071112335 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:14:16 compute-0 podman[271379]: 2025-10-02 12:14:16.461265277 +0000 UTC m=+0.126886177 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 12:14:16 compute-0 nova_compute[256940]: 2025-10-02 12:14:16.576 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Successfully updated port: 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:14:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 484 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.1 MiB/s wr, 164 op/s
Oct 02 12:14:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:17.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:17 compute-0 ceph-mon[73668]: pgmap v1032: 305 pgs: 305 active+clean; 484 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.1 MiB/s wr, 164 op/s
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.918 2 DEBUG nova.compute.manager [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-changed-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.920 2 DEBUG nova.compute.manager [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Refreshing instance network info cache due to event network-changed-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.920 2 DEBUG oslo_concurrency.lockutils [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.921 2 DEBUG oslo_concurrency.lockutils [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.921 2 DEBUG nova.network.neutron [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Refreshing network info cache for port 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:14:17 compute-0 nova_compute[256940]: 2025-10-02 12:14:17.926 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:18 compute-0 nova_compute[256940]: 2025-10-02 12:14:18.255 2 DEBUG nova.network.neutron [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:14:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 528 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.168 2 INFO nova.compute.manager [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Rebuilding instance
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.211 2 DEBUG nova.network.neutron [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.264 2 DEBUG oslo_concurrency.lockutils [req-de0d8a51-24f2-40dd-a0f0-f4f517af4d33 req-540fef00-b057-47ba-a677-8887fd8b6db4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.264 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquired lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.265 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:14:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:19.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.517 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.664 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.686 2 DEBUG nova.compute.manager [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.801 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_requests' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.819 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_devices' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.847 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.862 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'migration_context' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.873 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:14:19 compute-0 nova_compute[256940]: 2025-10-02 12:14:19.878 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:14:19 compute-0 ceph-mon[73668]: pgmap v1033: 305 pgs: 305 active+clean; 528 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:14:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1261907939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:20 compute-0 nova_compute[256940]: 2025-10-02 12:14:20.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 250 op/s
Oct 02 12:14:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2639501534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:21 compute-0 nova_compute[256940]: 2025-10-02 12:14:21.567 2 DEBUG nova.network.neutron [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Updating instance_info_cache with network_info: [{"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:21 compute-0 ceph-mon[73668]: pgmap v1034: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 250 op/s
Oct 02 12:14:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3593238018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.643 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Releasing lock "refresh_cache-4e9ad0a2-241c-412f-8c8e-487d18a310f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.643 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance network_info: |[{"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.647 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Start _get_guest_xml network_info=[{"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.653 2 WARNING nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.661 2 DEBUG nova.virt.libvirt.host [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.662 2 DEBUG nova.virt.libvirt.host [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.665 2 DEBUG nova.virt.libvirt.host [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.666 2 DEBUG nova.virt.libvirt.host [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.668 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.668 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.669 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.669 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.669 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.669 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.670 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.670 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.670 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.671 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.671 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.671 2 DEBUG nova.virt.hardware [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:22 compute-0 nova_compute[256940]: 2025-10-02 12:14:22.674 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Oct 02 12:14:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459332538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.148 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.180 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.185 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:23.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022665077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.659 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.661 2 DEBUG nova.virt.libvirt.vif [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-829412332',display_name='tempest-ImagesNegativeTestJSON-server-829412332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-829412332',id=19,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bbb79918378c4b758373ad37cb271097',ramdisk_id='',reservation_id='r-njftobam',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1691644921',owner_user_name='tempest-ImagesNegativeTestJSON-1691644921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:14Z,user_data=None,user_id='b3b07fd71254446ca25c68ff383cbfaf',uuid=4e9ad0a2-241c-412f-8c8e-487d18a310f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.662 2 DEBUG nova.network.os_vif_util [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converting VIF {"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.662 2 DEBUG nova.network.os_vif_util [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.664 2 DEBUG nova.objects.instance [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9ad0a2-241c-412f-8c8e-487d18a310f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.689 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <uuid>4e9ad0a2-241c-412f-8c8e-487d18a310f2</uuid>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <name>instance-00000013</name>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:name>tempest-ImagesNegativeTestJSON-server-829412332</nova:name>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:14:22</nova:creationTime>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:user uuid="b3b07fd71254446ca25c68ff383cbfaf">tempest-ImagesNegativeTestJSON-1691644921-project-member</nova:user>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:project uuid="bbb79918378c4b758373ad37cb271097">tempest-ImagesNegativeTestJSON-1691644921</nova:project>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <nova:port uuid="6f9b137a-5ac3-4a29-b7fe-58eceb3843d6">
Oct 02 12:14:23 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <system>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="serial">4e9ad0a2-241c-412f-8c8e-487d18a310f2</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="uuid">4e9ad0a2-241c-412f-8c8e-487d18a310f2</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </system>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <os>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </os>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <features>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </features>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk">
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config">
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:23 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:a6:80:14"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <target dev="tap6f9b137a-5a"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/console.log" append="off"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <video>
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </video>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:14:23 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:14:23 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:14:23 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:14:23 compute-0 nova_compute[256940]: </domain>
Oct 02 12:14:23 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.691 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Preparing to wait for external event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.692 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.693 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.693 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.694 2 DEBUG nova.virt.libvirt.vif [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-829412332',display_name='tempest-ImagesNegativeTestJSON-server-829412332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-829412332',id=19,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bbb79918378c4b758373ad37cb271097',ramdisk_id='',reservation_id='r-njftobam',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1691644921',owner_user_name='tempest-ImagesNegativeTestJSON-1691644921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:14Z,user_data=None,user_id='b3b07fd71254446ca25c68ff383cbfaf',uuid=4e9ad0a2-241c-412f-8c8e-487d18a310f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.695 2 DEBUG nova.network.os_vif_util [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converting VIF {"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.696 2 DEBUG nova.network.os_vif_util [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.697 2 DEBUG os_vif [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f9b137a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f9b137a-5a, col_values=(('external_ids', {'iface-id': '6f9b137a-5ac3-4a29-b7fe-58eceb3843d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:80:14', 'vm-uuid': '4e9ad0a2-241c-412f-8c8e-487d18a310f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:23 compute-0 NetworkManager[44981]: <info>  [1759407263.7093] manager: (tap6f9b137a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.717 2 INFO os_vif [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a')
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.794 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.794 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.795 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] No VIF found with MAC fa:16:3e:a6:80:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.795 2 INFO nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Using config drive
Oct 02 12:14:23 compute-0 nova_compute[256940]: 2025-10-02 12:14:23.830 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:24 compute-0 ceph-mon[73668]: pgmap v1035: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Oct 02 12:14:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3459332538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2022665077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.606 2 INFO nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Creating config drive at /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.618 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41g7rceb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.755 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41g7rceb" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.787 2 DEBUG nova.storage.rbd_utils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] rbd image 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:24 compute-0 nova_compute[256940]: 2025-10-02 12:14:24.791 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.031 2 DEBUG oslo_concurrency.processutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config 4e9ad0a2-241c-412f-8c8e-487d18a310f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.034 2 INFO nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Deleting local config drive /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2/disk.config because it was imported into RBD.
Oct 02 12:14:25 compute-0 kernel: tap6f9b137a-5a: entered promiscuous mode
Oct 02 12:14:25 compute-0 ovn_controller[148123]: 2025-10-02T12:14:25Z|00073|binding|INFO|Claiming lport 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 for this chassis.
Oct 02 12:14:25 compute-0 ovn_controller[148123]: 2025-10-02T12:14:25Z|00074|binding|INFO|6f9b137a-5ac3-4a29-b7fe-58eceb3843d6: Claiming fa:16:3e:a6:80:14 10.100.0.6
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.1192] manager: (tap6f9b137a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.126 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:80:14 10.100.0.6'], port_security=['fa:16:3e:a6:80:14 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e9ad0a2-241c-412f-8c8e-487d18a310f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80afb90a-f741-4862-b83f-c98daedcd1c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bbb79918378c4b758373ad37cb271097', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1321efe7-8991-4c5e-9cb0-74c9f9b1ff26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ee90733-aa61-4233-92ff-45971512cd71, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.128 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 in datapath 80afb90a-f741-4862-b83f-c98daedcd1c7 bound to our chassis
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.132 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80afb90a-f741-4862-b83f-c98daedcd1c7
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[156ec356-013e-4ea3-863a-07d41ffb723d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.173 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80afb90a-f1 in ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.176 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80afb90a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.176 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4c97d9e3-119f-48c3-b1fb-9432df5f19de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.177 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf04705-63fb-4cc5-8479-1caa725577f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 systemd-machined[210927]: New machine qemu-7-instance-00000013.
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.196 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[d12f5555-4114-4039-b06d-a48543f03fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000013.
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.230 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6e1188-4ed1-4e69-97a5-b8e172e4f374]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ovn_controller[148123]: 2025-10-02T12:14:25Z|00075|binding|INFO|Setting lport 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 ovn-installed in OVS
Oct 02 12:14:25 compute-0 ovn_controller[148123]: 2025-10-02T12:14:25Z|00076|binding|INFO|Setting lport 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 up in Southbound
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 systemd-udevd[271566]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.2663] device (tap6f9b137a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.2676] device (tap6f9b137a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.282 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2b53b8-9479-408c-b660-084804c722dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.289 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4467b1f6-5f95-4f57-a767-22ebe955f6ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 systemd-udevd[271569]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.2906] manager: (tap80afb90a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.331 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8f61f3f1-7011-4fae-955b-32b2521d2857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.334 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b60a7896-9ba2-4cc7-8825-92d26ff41e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:14:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.3621] device (tap80afb90a-f0): carrier: link connected
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.367 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcae27b-5d7d-49db-a7e0-e19db37d02fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.391 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78ca1b-3d9e-41ea-bf84-d79b8a5b1996]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80afb90a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:12:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516084, 'reachable_time': 28977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271595, 'error': None, 'target': 'ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.410 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6f0c65-098c-4a8d-b5a2-005f11d573aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:120d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516084, 'tstamp': 516084}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271596, 'error': None, 'target': 'ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.432 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61b7e120-bcc8-4133-833a-24d4c13c7d0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80afb90a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:12:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516084, 'reachable_time': 28977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271597, 'error': None, 'target': 'ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.465 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[881cbf11-72fb-4576-b8cb-b18fce52b68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.532 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3d03f4-bad4-4100-88ca-e37353dcd506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.533 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80afb90a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.533 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.533 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80afb90a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 kernel: tap80afb90a-f0: entered promiscuous mode
Oct 02 12:14:25 compute-0 NetworkManager[44981]: <info>  [1759407265.5363] manager: (tap80afb90a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.551 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80afb90a-f0, col_values=(('external_ids', {'iface-id': '386b910e-2b76-4eee-8b06-708f6060355d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ovn_controller[148123]: 2025-10-02T12:14:25Z|00077|binding|INFO|Releasing lport 386b910e-2b76-4eee-8b06-708f6060355d from this chassis (sb_readonly=0)
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 nova_compute[256940]: 2025-10-02 12:14:25.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.568 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80afb90a-f741-4862-b83f-c98daedcd1c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80afb90a-f741-4862-b83f-c98daedcd1c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.569 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f2137186-cc53-4cdd-bf26-368946826ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.569 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-80afb90a-f741-4862-b83f-c98daedcd1c7
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/80afb90a-f741-4862-b83f-c98daedcd1c7.pid.haproxy
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 80afb90a-f741-4862-b83f-c98daedcd1c7
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:14:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:25.570 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7', 'env', 'PROCESS_TAG=haproxy-80afb90a-f741-4862-b83f-c98daedcd1c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80afb90a-f741-4862-b83f-c98daedcd1c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:14:26 compute-0 podman[271629]: 2025-10-02 12:14:25.95250183 +0000 UTC m=+0.032759469 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:14:26 compute-0 nova_compute[256940]: 2025-10-02 12:14:26.327 2 DEBUG nova.compute.manager [req-da2a2106-f383-4adf-bb16-23b6cdf8adab req-a86dfe0b-6746-43be-b57e-bd9fa474736e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:26 compute-0 nova_compute[256940]: 2025-10-02 12:14:26.329 2 DEBUG oslo_concurrency.lockutils [req-da2a2106-f383-4adf-bb16-23b6cdf8adab req-a86dfe0b-6746-43be-b57e-bd9fa474736e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:26 compute-0 nova_compute[256940]: 2025-10-02 12:14:26.330 2 DEBUG oslo_concurrency.lockutils [req-da2a2106-f383-4adf-bb16-23b6cdf8adab req-a86dfe0b-6746-43be-b57e-bd9fa474736e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:26 compute-0 nova_compute[256940]: 2025-10-02 12:14:26.331 2 DEBUG oslo_concurrency.lockutils [req-da2a2106-f383-4adf-bb16-23b6cdf8adab req-a86dfe0b-6746-43be-b57e-bd9fa474736e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:26 compute-0 nova_compute[256940]: 2025-10-02 12:14:26.331 2 DEBUG nova.compute.manager [req-da2a2106-f383-4adf-bb16-23b6cdf8adab req-a86dfe0b-6746-43be-b57e-bd9fa474736e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Processing event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:14:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:26.451 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:26.453 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:26.454 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:26 compute-0 podman[271629]: 2025-10-02 12:14:26.533465991 +0000 UTC m=+0.613723560 container create d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:14:26 compute-0 ceph-mon[73668]: pgmap v1036: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 12:14:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3948354084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:26 compute-0 systemd[1]: Started libpod-conmon-d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef.scope.
Oct 02 12:14:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:14:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 545 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct 02 12:14:26 compute-0 sudo[271675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc48e6c1b890a0ed21b5d8e36470ef81545931e78f157a5916c0d63e9346cb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:14:26 compute-0 sudo[271675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:26 compute-0 sudo[271675]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:26 compute-0 sudo[271714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:26 compute-0 sudo[271714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:26 compute-0 sudo[271714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:26 compute-0 podman[271629]: 2025-10-02 12:14:26.918857355 +0000 UTC m=+0.999114924 container init d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:14:26 compute-0 podman[271629]: 2025-10-02 12:14:26.927683456 +0000 UTC m=+1.007941015 container start d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:14:26 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [NOTICE]   (271741) : New worker (271743) forked
Oct 02 12:14:26 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [NOTICE]   (271741) : Loading success.
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.240 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.241 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407267.2396739, 4e9ad0a2-241c-412f-8c8e-487d18a310f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.242 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] VM Started (Lifecycle Event)
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.246 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.251 2 INFO nova.virt.libvirt.driver [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance spawned successfully.
Oct 02 12:14:27 compute-0 nova_compute[256940]: 2025-10-02 12:14:27.251 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:14:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:27.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.198 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.200 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.201 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.202 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.202 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.203 2 DEBUG nova.virt.libvirt.driver [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.208 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.213 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:28 compute-0 ceph-mon[73668]: pgmap v1037: 305 pgs: 305 active+clean; 545 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.263 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.264 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407267.2400014, 4e9ad0a2-241c-412f-8c8e-487d18a310f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.264 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] VM Paused (Lifecycle Event)
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.309 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.315 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407267.2453566, 4e9ad0a2-241c-412f-8c8e-487d18a310f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.315 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] VM Resumed (Lifecycle Event)
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.320 2 INFO nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Took 14.02 seconds to spawn the instance on the hypervisor.
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.321 2 DEBUG nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.380 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.389 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.437 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.441 2 INFO nova.compute.manager [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Took 15.26 seconds to build instance.
Oct 02 12:14:28 compute-0 podman[271752]: 2025-10-02 12:14:28.450560789 +0000 UTC m=+0.116429032 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.477 2 DEBUG nova.compute.manager [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.477 2 DEBUG oslo_concurrency.lockutils [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.478 2 DEBUG oslo_concurrency.lockutils [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.478 2 DEBUG oslo_concurrency.lockutils [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.479 2 DEBUG nova.compute.manager [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] No waiting events found dispatching network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.479 2 WARNING nova.compute.manager [req-cb2a27f9-f9de-4d4c-9986-5f688280d483 req-6ce963c6-2e0d-4668-abfc-33eca094edd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received unexpected event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 for instance with vm_state active and task_state None.
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.490 2 DEBUG oslo_concurrency.lockutils [None req-e4660471-66d3-4557-9dde-fa75a688ca9e b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:28 compute-0 podman[271773]: 2025-10-02 12:14:28.58958279 +0000 UTC m=+0.107354495 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:14:28
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 555 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'default.rgw.control']
Oct 02 12:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:14:28 compute-0 nova_compute[256940]: 2025-10-02 12:14:28.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:29 compute-0 nova_compute[256940]: 2025-10-02 12:14:29.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:29 compute-0 ceph-mgr[73961]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:14:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:29 compute-0 nova_compute[256940]: 2025-10-02 12:14:29.938 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:14:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 574 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 155 op/s
Oct 02 12:14:30 compute-0 ceph-mon[73668]: pgmap v1038: 305 pgs: 305 active+clean; 555 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct 02 12:14:30 compute-0 ovn_controller[148123]: 2025-10-02T12:14:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:14:30 compute-0 ovn_controller[148123]: 2025-10-02T12:14:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.815 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.816 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.816 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.816 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.816 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.817 2 INFO nova.compute.manager [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Terminating instance
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.818 2 DEBUG nova.compute.manager [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:14:30 compute-0 kernel: tap6f9b137a-5a (unregistering): left promiscuous mode
Oct 02 12:14:30 compute-0 NetworkManager[44981]: <info>  [1759407270.8648] device (tap6f9b137a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:14:30 compute-0 ovn_controller[148123]: 2025-10-02T12:14:30Z|00078|binding|INFO|Releasing lport 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 from this chassis (sb_readonly=0)
Oct 02 12:14:30 compute-0 ovn_controller[148123]: 2025-10-02T12:14:30Z|00079|binding|INFO|Setting lport 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 down in Southbound
Oct 02 12:14:30 compute-0 ovn_controller[148123]: 2025-10-02T12:14:30Z|00080|binding|INFO|Removing iface tap6f9b137a-5a ovn-installed in OVS
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:30 compute-0 nova_compute[256940]: 2025-10-02 12:14:30.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:30.905 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:80:14 10.100.0.6'], port_security=['fa:16:3e:a6:80:14 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e9ad0a2-241c-412f-8c8e-487d18a310f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80afb90a-f741-4862-b83f-c98daedcd1c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bbb79918378c4b758373ad37cb271097', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1321efe7-8991-4c5e-9cb0-74c9f9b1ff26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ee90733-aa61-4233-92ff-45971512cd71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:30.908 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 in datapath 80afb90a-f741-4862-b83f-c98daedcd1c7 unbound from our chassis
Oct 02 12:14:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:30.910 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80afb90a-f741-4862-b83f-c98daedcd1c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:14:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:30.915 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ef391047-209a-441d-8791-4435a918c726]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:30.915 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7 namespace which is not needed anymore
Oct 02 12:14:30 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 02 12:14:30 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000013.scope: Consumed 5.556s CPU time.
Oct 02 12:14:30 compute-0 systemd-machined[210927]: Machine qemu-7-instance-00000013 terminated.
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.060 2 INFO nova.virt.libvirt.driver [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Instance destroyed successfully.
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.060 2 DEBUG nova.objects.instance [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lazy-loading 'resources' on Instance uuid 4e9ad0a2-241c-412f-8c8e-487d18a310f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.087 2 DEBUG nova.virt.libvirt.vif [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-829412332',display_name='tempest-ImagesNegativeTestJSON-server-829412332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-829412332',id=19,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bbb79918378c4b758373ad37cb271097',ramdisk_id='',reservation_id='r-njftobam',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1691644921',owner_user_name='tempest-ImagesNegativeTestJSON-1691644921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:28Z,user_data=None,user_id='b3b07fd71254446ca25c68ff383cbfaf',uuid=4e9ad0a2-241c-412f-8c8e-487d18a310f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.089 2 DEBUG nova.network.os_vif_util [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converting VIF {"id": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "address": "fa:16:3e:a6:80:14", "network": {"id": "80afb90a-f741-4862-b83f-c98daedcd1c7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1741165365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbb79918378c4b758373ad37cb271097", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f9b137a-5a", "ovs_interfaceid": "6f9b137a-5ac3-4a29-b7fe-58eceb3843d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.091 2 DEBUG nova.network.os_vif_util [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.092 2 DEBUG os_vif [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.096 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f9b137a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.104 2 INFO os_vif [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:80:14,bridge_name='br-int',has_traffic_filtering=True,id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6,network=Network(80afb90a-f741-4862-b83f-c98daedcd1c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f9b137a-5a')
Oct 02 12:14:31 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [NOTICE]   (271741) : haproxy version is 2.8.14-c23fe91
Oct 02 12:14:31 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [NOTICE]   (271741) : path to executable is /usr/sbin/haproxy
Oct 02 12:14:31 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [WARNING]  (271741) : Exiting Master process...
Oct 02 12:14:31 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [ALERT]    (271741) : Current worker (271743) exited with code 143 (Terminated)
Oct 02 12:14:31 compute-0 neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7[271699]: [WARNING]  (271741) : All workers exited. Exiting... (0)
Oct 02 12:14:31 compute-0 systemd[1]: libpod-d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef.scope: Deactivated successfully.
Oct 02 12:14:31 compute-0 podman[271819]: 2025-10-02 12:14:31.149217858 +0000 UTC m=+0.102620872 container died d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.261 2 DEBUG nova.compute.manager [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-unplugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.261 2 DEBUG oslo_concurrency.lockutils [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.264 2 DEBUG oslo_concurrency.lockutils [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.264 2 DEBUG oslo_concurrency.lockutils [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.265 2 DEBUG nova.compute.manager [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] No waiting events found dispatching network-vif-unplugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:31 compute-0 nova_compute[256940]: 2025-10-02 12:14:31.265 2 DEBUG nova.compute.manager [req-9e712358-c7df-458f-95aa-615b48b2e2d4 req-40f9a607-2a18-4934-a1d1-b2fed22bc9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-unplugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef-userdata-shm.mount: Deactivated successfully.
Oct 02 12:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc48e6c1b890a0ed21b5d8e36470ef81545931e78f157a5916c0d63e9346cb9-merged.mount: Deactivated successfully.
Oct 02 12:14:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:31 compute-0 podman[271819]: 2025-10-02 12:14:31.405908983 +0000 UTC m=+0.359311997 container cleanup d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:31 compute-0 systemd[1]: libpod-conmon-d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef.scope: Deactivated successfully.
Oct 02 12:14:31 compute-0 ceph-mon[73668]: pgmap v1039: 305 pgs: 305 active+clean; 574 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 155 op/s
Oct 02 12:14:32 compute-0 podman[271876]: 2025-10-02 12:14:32.226066076 +0000 UTC m=+0.793703003 container remove d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.235 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[610b7393-68e4-4b97-80a0-d5746bcedcc7]: (4, ('Thu Oct  2 12:14:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7 (d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef)\nd8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef\nThu Oct  2 12:14:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7 (d8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef)\nd8fad1933bcaac9b49cdc636cfac6d2d4ff65bd6203b7d11e6e93dadb35cddef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48686786-02c2-4243-b037-a7009466da85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.238 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80afb90a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:32 compute-0 kernel: tap80afb90a-f0: left promiscuous mode
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.245 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93a64998-a3e5-444c-a95a-e650584df875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.267 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1e783047-f38f-4cf5-becd-b88e5322dde2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.269 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[64e827fc-66ad-457e-8fa4-b964f4601654]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.284 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[748e61d0-d6fb-457d-abbe-30ecbcdd04f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516075, 'reachable_time': 24417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271893, 'error': None, 'target': 'ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d80afb90a\x2df741\x2d4862\x2db83f\x2dc98daedcd1c7.mount: Deactivated successfully.
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.286 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80afb90a-f741-4862-b83f-c98daedcd1c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:14:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:32.287 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9a257ee3-0838-4f09-befc-93740c9a1db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.488 2 INFO nova.virt.libvirt.driver [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Deleting instance files /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2_del
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.489 2 INFO nova.virt.libvirt.driver [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Deletion of /var/lib/nova/instances/4e9ad0a2-241c-412f-8c8e-487d18a310f2_del complete
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.620 2 INFO nova.compute.manager [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Took 1.80 seconds to destroy the instance on the hypervisor.
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.620 2 DEBUG oslo.service.loopingcall [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.621 2 DEBUG nova.compute.manager [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:14:32 compute-0 nova_compute[256940]: 2025-10-02 12:14:32.621 2 DEBUG nova.network.neutron [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:14:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 577 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Oct 02 12:14:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:14:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.382 2 DEBUG nova.compute.manager [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.383 2 DEBUG oslo_concurrency.lockutils [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.383 2 DEBUG oslo_concurrency.lockutils [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.383 2 DEBUG oslo_concurrency.lockutils [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.384 2 DEBUG nova.compute.manager [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] No waiting events found dispatching network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:33 compute-0 nova_compute[256940]: 2025-10-02 12:14:33.384 2 WARNING nova.compute.manager [req-c2ae9e57-a645-4ac9-bb4b-42dfcffa3d06 req-d81e0e43-1f18-41d6-883e-434b6ceec64b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received unexpected event network-vif-plugged-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 for instance with vm_state active and task_state deleting.
Oct 02 12:14:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1827930705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:34 compute-0 nova_compute[256940]: 2025-10-02 12:14:34.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:34 compute-0 ceph-mon[73668]: pgmap v1040: 305 pgs: 305 active+clean; 577 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Oct 02 12:14:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4119413179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 571 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 297 op/s
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.194 2 DEBUG nova.compute.manager [req-0451005a-469b-48b1-9367-44ca0d432b9e req-cb4d5940-9a41-40c1-a582-74e470862fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Received event network-vif-deleted-6f9b137a-5ac3-4a29-b7fe-58eceb3843d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.194 2 INFO nova.compute.manager [req-0451005a-469b-48b1-9367-44ca0d432b9e req-cb4d5940-9a41-40c1-a582-74e470862fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Neutron deleted interface 6f9b137a-5ac3-4a29-b7fe-58eceb3843d6; detaching it from the instance and deleting it from the info cache
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.194 2 DEBUG nova.network.neutron [req-0451005a-469b-48b1-9367-44ca0d432b9e req-cb4d5940-9a41-40c1-a582-74e470862fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.246 2 DEBUG nova.network.neutron [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.280 2 DEBUG nova.compute.manager [req-0451005a-469b-48b1-9367-44ca0d432b9e req-cb4d5940-9a41-40c1-a582-74e470862fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Detach interface failed, port_id=6f9b137a-5ac3-4a29-b7fe-58eceb3843d6, reason: Instance 4e9ad0a2-241c-412f-8c8e-487d18a310f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.282 2 INFO nova.compute.manager [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Took 2.66 seconds to deallocate network for instance.
Oct 02 12:14:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:35.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.363 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.363 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:35 compute-0 nova_compute[256940]: 2025-10-02 12:14:35.506 2 DEBUG oslo_concurrency.processutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:35 compute-0 ceph-mon[73668]: pgmap v1041: 305 pgs: 305 active+clean; 571 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 297 op/s
Oct 02 12:14:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965354879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.033 2 DEBUG oslo_concurrency.processutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.038 2 DEBUG nova.compute.provider_tree [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.083 2 DEBUG nova.scheduler.client.report [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.137 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.185 2 INFO nova.scheduler.client.report [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Deleted allocations for instance 4e9ad0a2-241c-412f-8c8e-487d18a310f2
Oct 02 12:14:36 compute-0 nova_compute[256940]: 2025-10-02 12:14:36.274 2 DEBUG oslo_concurrency.lockutils [None req-dfbc5f3d-5e23-435e-b665-b8ce31ad4abe b3b07fd71254446ca25c68ff383cbfaf bbb79918378c4b758373ad37cb271097 - - default default] Lock "4e9ad0a2-241c-412f-8c8e-487d18a310f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 563 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 296 op/s
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.734387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276734452, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2164, "num_deletes": 252, "total_data_size": 3789261, "memory_usage": 3836624, "flush_reason": "Manual Compaction"}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 02 12:14:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1965354879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276759538, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3722828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21981, "largest_seqno": 24144, "table_properties": {"data_size": 3713161, "index_size": 6033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20530, "raw_average_key_size": 20, "raw_value_size": 3693619, "raw_average_value_size": 3693, "num_data_blocks": 267, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407069, "oldest_key_time": 1759407069, "file_creation_time": 1759407276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 25197 microseconds, and 13424 cpu microseconds.
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.759591) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3722828 bytes OK
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.759614) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.761362) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.761375) EVENT_LOG_v1 {"time_micros": 1759407276761370, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.761390) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3780437, prev total WAL file size 3780437, number of live WAL files 2.
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.762204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3635KB)], [53(7170KB)]
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276762237, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11065683, "oldest_snapshot_seqno": -1}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4877 keys, 9054559 bytes, temperature: kUnknown
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276832909, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9054559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9020991, "index_size": 20274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123147, "raw_average_key_size": 25, "raw_value_size": 8931888, "raw_average_value_size": 1831, "num_data_blocks": 829, "num_entries": 4877, "num_filter_entries": 4877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.833349) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9054559 bytes
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.836299) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 5399, records dropped: 522 output_compression: NoCompression
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.836321) EVENT_LOG_v1 {"time_micros": 1759407276836310, "job": 28, "event": "compaction_finished", "compaction_time_micros": 70793, "compaction_time_cpu_micros": 18544, "output_level": 6, "num_output_files": 1, "total_output_size": 9054559, "num_input_records": 5399, "num_output_records": 4877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276837074, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276838404, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.762135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.838462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.838470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.838474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.838478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:14:36.838480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:37.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:37 compute-0 ceph-mon[73668]: pgmap v1042: 305 pgs: 305 active+clean; 563 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 296 op/s
Oct 02 12:14:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 294 op/s
Oct 02 12:14:39 compute-0 nova_compute[256940]: 2025-10-02 12:14:39.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:14:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013006474792946213 of space, bias 1.0, pg target 3.901942437883864 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.2937097597361809 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:14:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:14:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 02 12:14:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 02 12:14:39 compute-0 ceph-mon[73668]: pgmap v1043: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 294 op/s
Oct 02 12:14:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:40.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 255 op/s
Oct 02 12:14:40 compute-0 nova_compute[256940]: 2025-10-02 12:14:40.994 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:14:41 compute-0 ceph-mon[73668]: osdmap e146: 3 total, 3 up, 3 in
Oct 02 12:14:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2551709202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:41 compute-0 nova_compute[256940]: 2025-10-02 12:14:41.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:14:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:14:41 compute-0 nova_compute[256940]: 2025-10-02 12:14:41.821 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:41 compute-0 nova_compute[256940]: 2025-10-02 12:14:41.822 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:41 compute-0 nova_compute[256940]: 2025-10-02 12:14:41.822 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:14:42 compute-0 ceph-mon[73668]: pgmap v1045: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 255 op/s
Oct 02 12:14:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2725198995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2544534418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2864292701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:42.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:42 compute-0 nova_compute[256940]: 2025-10-02 12:14:42.588 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:42 compute-0 nova_compute[256940]: 2025-10-02 12:14:42.589 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:42 compute-0 nova_compute[256940]: 2025-10-02 12:14:42.589 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:14:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 583 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 1.8 MiB/s wr, 149 op/s
Oct 02 12:14:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2935015047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:43 compute-0 kernel: tap2747f0c7-f3 (unregistering): left promiscuous mode
Oct 02 12:14:43 compute-0 NetworkManager[44981]: <info>  [1759407283.3724] device (tap2747f0c7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:14:43 compute-0 nova_compute[256940]: 2025-10-02 12:14:43.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:43 compute-0 ovn_controller[148123]: 2025-10-02T12:14:43Z|00081|binding|INFO|Releasing lport 2747f0c7-f3c3-4402-aedd-266862caf435 from this chassis (sb_readonly=0)
Oct 02 12:14:43 compute-0 ovn_controller[148123]: 2025-10-02T12:14:43Z|00082|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 down in Southbound
Oct 02 12:14:43 compute-0 ovn_controller[148123]: 2025-10-02T12:14:43Z|00083|binding|INFO|Removing iface tap2747f0c7-f3 ovn-installed in OVS
Oct 02 12:14:43 compute-0 nova_compute[256940]: 2025-10-02 12:14:43.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:43 compute-0 nova_compute[256940]: 2025-10-02 12:14:43.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.442 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.444 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.447 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:14:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 12:14:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 15.035s CPU time.
Oct 02 12:14:43 compute-0 systemd-machined[210927]: Machine qemu-6-instance-0000000b terminated.
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.475 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23c015a6-441d-4f00-90a8-71ee82f91880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.530 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[77e96840-13c5-4641-856e-cf40a47d235f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.536 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[29229d94-3e43-4ac1-aa69-a8b5b94e7f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.579 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c91ef54a-4f25-4ee6-9215-6a7f6930ec5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.615 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[69ad9ec5-59c9-41ab-88bc-e804dfef49af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271933, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.640 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3be14c-ca7a-40d0-a9e1-85c1f75d8e5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271939, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271939, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.643 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:43 compute-0 nova_compute[256940]: 2025-10-02 12:14:43.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:43 compute-0 nova_compute[256940]: 2025-10-02 12:14:43.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.654 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.654 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.655 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:43.656 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.018 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance shutdown successfully after 24 seconds.
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.026 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance destroyed successfully.
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.035 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance destroyed successfully.
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.036 2 DEBUG nova.virt.libvirt.vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'}
,tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:18Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.037 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.038 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.039 2 DEBUG os_vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2747f0c7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.052 2 INFO os_vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:14:44 compute-0 ceph-mon[73668]: pgmap v1046: 305 pgs: 305 active+clean; 583 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 1.8 MiB/s wr, 149 op/s
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.520 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting instance files /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.521 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deletion of /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del complete
Oct 02 12:14:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 565 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.723 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.724 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating image(s)
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.766 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.810 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.861 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.868 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.938 2 DEBUG nova.compute.manager [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.940 2 DEBUG oslo_concurrency.lockutils [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.940 2 DEBUG oslo_concurrency.lockutils [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.941 2 DEBUG oslo_concurrency.lockutils [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.941 2 DEBUG nova.compute.manager [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.942 2 WARNING nova.compute.manager [req-0c6e217b-baf6-4e15-8378-8b1c2797f4cb req-2f2ab4ec-f8d8-42e5-a774-29dbb333faf4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-unplugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.961 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.962 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.964 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:44 compute-0 nova_compute[256940]: 2025-10-02 12:14:44.965 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.006 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.012 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.339 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c704d1db-8e20-432e-900d-4f4267059d9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.440 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] resizing rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.586 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.587 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Ensure instance console log exists: /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.587 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.588 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.588 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.590 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start _get_guest_xml network_info=[{"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.593 2 WARNING nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.604 2 DEBUG nova.virt.libvirt.host [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.604 2 DEBUG nova.virt.libvirt.host [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.608 2 DEBUG nova.virt.libvirt.host [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.608 2 DEBUG nova.virt.libvirt.host [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.609 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.610 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.610 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.610 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.611 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.611 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.611 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.611 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.612 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.612 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.612 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.612 2 DEBUG nova.virt.hardware [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.613 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:45 compute-0 nova_compute[256940]: 2025-10-02 12:14:45.640 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.058 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407271.055323, 4e9ad0a2-241c-412f-8c8e-487d18a310f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.059 2 INFO nova.compute.manager [-] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] VM Stopped (Lifecycle Event)
Oct 02 12:14:46 compute-0 ceph-mon[73668]: pgmap v1047: 305 pgs: 305 active+clean; 565 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Oct 02 12:14:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538923089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.118 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.153 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.158 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:46.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791117221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.583 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.587 2 DEBUG nova.virt.libvirt.vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON
-1879159697-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:44Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.587 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.589 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.593 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <uuid>c704d1db-8e20-432e-900d-4f4267059d9f</uuid>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <name>instance-0000000b</name>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersAdminTestJSON-server-1406080749</nova:name>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:14:45</nova:creationTime>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:user uuid="7a80f833255046e7b62d34c1c6066073">tempest-ServersAdminTestJSON-1879159697-project-member</nova:user>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:project uuid="39ca581fbb054c959d26096ca39fef05">tempest-ServersAdminTestJSON-1879159697</nova:project>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <nova:port uuid="2747f0c7-f3c3-4402-aedd-266862caf435">
Oct 02 12:14:46 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <system>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="serial">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="uuid">c704d1db-8e20-432e-900d-4f4267059d9f</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </system>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <os>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </os>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <features>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </features>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk">
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c704d1db-8e20-432e-900d-4f4267059d9f_disk.config">
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </source>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:14:46 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:87:d3:15"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <target dev="tap2747f0c7-f3"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/console.log" append="off"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <video>
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </video>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:14:46 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:14:46 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:14:46 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:14:46 compute-0 nova_compute[256940]: </domain>
Oct 02 12:14:46 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.595 2 DEBUG nova.virt.libvirt.vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON
-1879159697-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:44Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.596 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.597 2 DEBUG nova.network.os_vif_util [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.597 2 DEBUG os_vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2747f0c7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2747f0c7-f3, col_values=(('external_ids', {'iface-id': '2747f0c7-f3c3-4402-aedd-266862caf435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:d3:15', 'vm-uuid': 'c704d1db-8e20-432e-900d-4f4267059d9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-0 NetworkManager[44981]: <info>  [1759407286.6154] manager: (tap2747f0c7-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-0 nova_compute[256940]: 2025-10-02 12:14:46.620 2 INFO os_vif [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:14:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 553 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 193 op/s
Oct 02 12:14:46 compute-0 podman[272199]: 2025-10-02 12:14:46.772304617 +0000 UTC m=+0.108112005 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:14:46 compute-0 podman[272200]: 2025-10-02 12:14:46.84249177 +0000 UTC m=+0.170138565 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:14:46 compute-0 sudo[272240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:46 compute-0 sudo[272240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:46 compute-0 sudo[272240]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:46 compute-0 sudo[272268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:46 compute-0 sudo[272268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:46 compute-0 sudo[272268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1538923089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/791117221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:47.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.714 2 DEBUG nova.compute.manager [None req-ebba2eeb-936d-43be-979c-9c1d785e565f - - - - - -] [instance: 4e9ad0a2-241c-412f-8c8e-487d18a310f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.745 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.746 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.746 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No VIF found with MAC fa:16:3e:87:d3:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.748 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Using config drive
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.792 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.828 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:47 compute-0 nova_compute[256940]: 2025-10-02 12:14:47.893 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'keypairs' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:48 compute-0 ceph-mon[73668]: pgmap v1048: 305 pgs: 305 active+clean; 553 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 193 op/s
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.224 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updating instance_info_cache with network_info: [{"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.251 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.252 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.253 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.253 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.253 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.254 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.255 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.255 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.256 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.271 2 DEBUG nova.compute.manager [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.271 2 DEBUG oslo_concurrency.lockutils [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.272 2 DEBUG oslo_concurrency.lockutils [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.272 2 DEBUG oslo_concurrency.lockutils [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.272 2 DEBUG nova.compute.manager [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.272 2 WARNING nova.compute.manager [req-cacbfb71-a059-465e-a97e-764b5f9e2e62 req-fda3ee51-c282-4bc3-95a7-c24560bf0ae8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.306 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.306 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.307 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.307 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.308 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:48.411 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:48.413 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.665 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Creating config drive at /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.672 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gwvbmcv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 543 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 237 op/s
Oct 02 12:14:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165662720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.801 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.812 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gwvbmcv" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.879 2 DEBUG nova.storage.rbd_utils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image c704d1db-8e20-432e-900d-4f4267059d9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:48 compute-0 nova_compute[256940]: 2025-10-02 12:14:48.885 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.041 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.042 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.048 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.048 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.212 2 DEBUG oslo_concurrency.processutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config c704d1db-8e20-432e-900d-4f4267059d9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.213 2 INFO nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting local config drive /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f/disk.config because it was imported into RBD.
Oct 02 12:14:49 compute-0 kernel: tap2747f0c7-f3: entered promiscuous mode
Oct 02 12:14:49 compute-0 NetworkManager[44981]: <info>  [1759407289.2886] manager: (tap2747f0c7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:49 compute-0 ovn_controller[148123]: 2025-10-02T12:14:49Z|00084|binding|INFO|Claiming lport 2747f0c7-f3c3-4402-aedd-266862caf435 for this chassis.
Oct 02 12:14:49 compute-0 ovn_controller[148123]: 2025-10-02T12:14:49Z|00085|binding|INFO|2747f0c7-f3c3-4402-aedd-266862caf435: Claiming fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:14:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4165662720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:49 compute-0 systemd-udevd[272385]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:49 compute-0 ovn_controller[148123]: 2025-10-02T12:14:49Z|00086|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 ovn-installed in OVS
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:49 compute-0 NetworkManager[44981]: <info>  [1759407289.3428] device (tap2747f0c7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:14:49 compute-0 NetworkManager[44981]: <info>  [1759407289.3440] device (tap2747f0c7-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:14:49 compute-0 systemd-machined[210927]: New machine qemu-8-instance-0000000b.
Oct 02 12:14:49 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000b.
Oct 02 12:14:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.374 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.376 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4556MB free_disk=20.737041473388672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.376 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.376 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.438 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:49 compute-0 ovn_controller[148123]: 2025-10-02T12:14:49Z|00087|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 up in Southbound
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.440 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f bound to our chassis
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.443 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.472 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b09ee7dd-f554-4c0d-9334-f4262761aadd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.535 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb43405-fa62-4f90-87ec-c2ad526f0801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.540 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3bdc68-5b37-47b6-abd3-9924cfcd623d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.592 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5796669b-f5ef-4025-be43-f2c7e10a4a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.628 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2e8479-57c5-45b4-a586-90d211c1e1bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272417, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.650 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[720b816e-2802-43fd-aea6-7f7d6e59dda4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272421, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272421, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.652 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.656 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.656 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.656 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:49.657 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.858 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c704d1db-8e20-432e-900d-4f4267059d9f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.859 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.859 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.859 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:14:49 compute-0 nova_compute[256940]: 2025-10-02 12:14:49.930 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.270 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for c704d1db-8e20-432e-900d-4f4267059d9f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.271 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407290.2694285, c704d1db-8e20-432e-900d-4f4267059d9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.271 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Resumed (Lifecycle Event)
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.277 2 DEBUG nova.compute.manager [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.278 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.282 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance spawned successfully.
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.283 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:14:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 02 12:14:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883025651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.435 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:50 compute-0 nova_compute[256940]: 2025-10-02 12:14:50.441 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-0 ceph-mon[73668]: pgmap v1049: 305 pgs: 305 active+clean; 543 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 237 op/s
Oct 02 12:14:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/883025651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 563 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 257 op/s
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.251 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.254 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.362 2 DEBUG nova.compute.manager [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.362 2 DEBUG oslo_concurrency.lockutils [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.362 2 DEBUG oslo_concurrency.lockutils [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.362 2 DEBUG oslo_concurrency.lockutils [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.363 2 DEBUG nova.compute.manager [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.363 2 WARNING nova.compute.manager [req-241beed5-7db9-41a0-a75e-64720888c3b8 req-7f39fd6d-eba4-4f17-be27-c8ae9248d96f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:14:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.457 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.465 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.465 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.466 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.466 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.466 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.466 2 DEBUG nova.virt.libvirt.driver [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.515 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.515 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407290.2756536, c704d1db-8e20-432e-900d-4f4267059d9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.516 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Started (Lifecycle Event)
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.517 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.517 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.555 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.558 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.599 2 DEBUG nova.compute.manager [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.602 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.681 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.681 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.681 2 DEBUG nova.objects.instance [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:14:51 compute-0 ovn_controller[148123]: 2025-10-02T12:14:51Z|00088|binding|INFO|Releasing lport 3021b3c7-b1d0-44e1-b22e-fbf6a4a79654 from this chassis (sb_readonly=0)
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:51 compute-0 nova_compute[256940]: 2025-10-02 12:14:51.845 2 DEBUG oslo_concurrency.lockutils [None req-7527fa0a-b310-4a0d-9dd0-5995b85d99fb 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:52 compute-0 ceph-mon[73668]: osdmap e147: 3 total, 3 up, 3 in
Oct 02 12:14:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 204 op/s
Oct 02 12:14:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:53 compute-0 ceph-mon[73668]: pgmap v1051: 305 pgs: 305 active+clean; 563 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 257 op/s
Oct 02 12:14:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/826188560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.715 2 DEBUG nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.716 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.716 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.716 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.717 2 DEBUG nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] No waiting events found dispatching network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:53 compute-0 nova_compute[256940]: 2025-10-02 12:14:53.717 2 WARNING nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received unexpected event network-vif-plugged-2747f0c7-f3c3-4402-aedd-266862caf435 for instance with vm_state active and task_state None.
Oct 02 12:14:54 compute-0 nova_compute[256940]: 2025-10-02 12:14:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:54 compute-0 ceph-mon[73668]: pgmap v1052: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 204 op/s
Oct 02 12:14:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:54.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Oct 02 12:14:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:55.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:56 compute-0 ceph-mon[73668]: pgmap v1053: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Oct 02 12:14:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:56.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:56 compute-0 nova_compute[256940]: 2025-10-02 12:14:56.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Oct 02 12:14:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:57.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 02 12:14:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 02 12:14:57 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 02 12:14:58 compute-0 ceph-mon[73668]: pgmap v1054: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Oct 02 12:14:58 compute-0 ceph-mon[73668]: osdmap e148: 3 total, 3 up, 3 in
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:14:58.416 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:14:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:14:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.570 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.571 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.663 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:14:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 54 KiB/s wr, 155 op/s
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.848 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.848 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.854 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:14:58 compute-0 nova_compute[256940]: 2025-10-02 12:14:58.854 2 INFO nova.compute.claims [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.145 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:14:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:59 compute-0 podman[272492]: 2025-10-02 12:14:59.436455516 +0000 UTC m=+0.092707313 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:14:59 compute-0 podman[272493]: 2025-10-02 12:14:59.444829755 +0000 UTC m=+0.095159097 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:14:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118389002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.669 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.675 2 DEBUG nova.compute.provider_tree [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.708 2 DEBUG nova.scheduler.client.report [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.763 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.764 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.844 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.846 2 DEBUG nova.network.neutron [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.871 2 INFO nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:14:59 compute-0 nova_compute[256940]: 2025-10-02 12:14:59.907 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.077 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.079 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.079 2 INFO nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Creating image(s)
Oct 02 12:15:00 compute-0 ceph-mon[73668]: pgmap v1056: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 54 KiB/s wr, 155 op/s
Oct 02 12:15:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/118389002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/798080806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.125 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.166 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.204 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.210 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.298 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.299 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.299 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.300 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.332 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.339 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.437 2 DEBUG nova.network.neutron [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.438 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.694 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 530 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 46 KiB/s wr, 157 op/s
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.795 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] resizing rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.931 2 DEBUG nova.objects.instance [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.970 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.971 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Ensure instance console log exists: /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.972 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.973 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.974 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.977 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.984 2 WARNING nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.990 2 DEBUG nova.virt.libvirt.host [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.991 2 DEBUG nova.virt.libvirt.host [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.995 2 DEBUG nova.virt.libvirt.host [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:00 compute-0 nova_compute[256940]: 2025-10-02 12:15:00.997 2 DEBUG nova.virt.libvirt.host [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.000 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.000 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.001 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.001 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.001 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.001 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.002 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.002 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.002 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.003 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.003 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.003 2 DEBUG nova.virt.hardware [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.007 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2851670682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.529 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.568 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.574 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.678 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.682 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.683 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.683 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.684 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.687 2 INFO nova.compute.manager [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Terminating instance
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.689 2 DEBUG nova.compute.manager [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 kernel: tap7f08a0f7-6a (unregistering): left promiscuous mode
Oct 02 12:15:01 compute-0 NetworkManager[44981]: <info>  [1759407301.7559] device (tap7f08a0f7-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00089|binding|INFO|Releasing lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 from this chassis (sb_readonly=0)
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00090|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 down in Southbound
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00091|binding|INFO|Removing iface tap7f08a0f7-6a ovn-installed in OVS
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.800 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:a3:84 10.100.0.14'], port_security=['fa:16:3e:13:a3:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '38f4ff47-ece3-4bba-ad08-bb8e4d1391a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.802 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.806 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.829 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3efe7b16-c5d9-4dc4-a3ec-4cf9f64a061f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 02 12:15:01 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000f.scope: Consumed 19.502s CPU time.
Oct 02 12:15:01 compute-0 systemd-machined[210927]: Machine qemu-5-instance-0000000f terminated.
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.864 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[15669d18-1b70-405f-98b5-ad3ba7d6bd2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.867 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ade978d6-8de9-4baf-a7ec-afc12858d4b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.905 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3d27cb9c-5724-4624-9fb2-ec80a12b540d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 kernel: tap7f08a0f7-6a: entered promiscuous mode
Oct 02 12:15:01 compute-0 systemd-udevd[272762]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00092|binding|INFO|Claiming lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 for this chassis.
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00093|binding|INFO|7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3: Claiming fa:16:3e:13:a3:84 10.100.0.14
Oct 02 12:15:01 compute-0 NetworkManager[44981]: <info>  [1759407301.9208] manager: (tap7f08a0f7-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 kernel: tap7f08a0f7-6a (unregistering): left promiscuous mode
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.931 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:a3:84 10.100.0.14'], port_security=['fa:16:3e:13:a3:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '38f4ff47-ece3-4bba-ad08-bb8e4d1391a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00094|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 ovn-installed in OVS
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00095|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 up in Southbound
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00096|binding|INFO|Releasing lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 from this chassis (sb_readonly=1)
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00097|if_status|INFO|Not setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 down as sb is readonly
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00098|binding|INFO|Removing iface tap7f08a0f7-6a ovn-installed in OVS
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.947 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[87039ac9-d516-47ec-bad6-0eec890484c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272772, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00099|binding|INFO|Releasing lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 from this chassis (sb_readonly=0)
Oct 02 12:15:01 compute-0 ovn_controller[148123]: 2025-10-02T12:15:01Z|00100|binding|INFO|Setting lport 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 down in Southbound
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.965 2 INFO nova.virt.libvirt.driver [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Instance destroyed successfully.
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.966 2 DEBUG nova.objects.instance [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.970 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:a3:84 10.100.0.14'], port_security=['fa:16:3e:13:a3:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '38f4ff47-ece3-4bba-ad08-bb8e4d1391a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.976 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b60fe4-6642-4291-b2fd-7de0e30584a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272775, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272775, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.978 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:01 compute-0 nova_compute[256940]: 2025-10-02 12:15:01.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.985 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.986 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.986 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.987 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.989 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:15:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:01.992 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.008 2 DEBUG nova.virt.libvirt.vif [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-434032293',display_name='tempest-ServersAdminTestJSON-server-434032293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-434032293',id=15,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-ut4y1d90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:29Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=38f4ff47-ece3-4bba-ad08-bb8e4d1391a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.009 2 DEBUG nova.network.os_vif_util [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "address": "fa:16:3e:13:a3:84", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f08a0f7-6a", "ovs_interfaceid": "7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.010 2 DEBUG nova.network.os_vif_util [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.011 2 DEBUG os_vif [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f08a0f7-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.012 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1591e9-f673-4c08-afd4-a5440b1660ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.023 2 INFO os_vif [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:a3:84,bridge_name='br-int',has_traffic_filtering=True,id=7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f08a0f7-6a')
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.058 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f19c0590-43d8-4e8c-956a-8dac396ae377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.063 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1cda8add-9187-4c47-a0ae-85a8f68b8ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011153037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.107 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.108 2 DEBUG nova.objects.instance [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.109 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6025e4-8064-4389-84ce-204093ffd0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.133 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[039b15ea-ed3a-458d-b0e1-6499e5b7b028]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 1084, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 1084, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272803, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.141 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <uuid>bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</uuid>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <name>instance-00000015</name>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:name>tempest-MigrationsAdminTest-server-399340879</nova:name>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:15:00</nova:creationTime>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <system>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="serial">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="uuid">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </system>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <os>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </os>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <features>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </features>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk">
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config">
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log" append="off"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <video>
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </video>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:15:02 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:15:02 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:15:02 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:15:02 compute-0 nova_compute[256940]: </domain>
Oct 02 12:15:02 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:02 compute-0 ceph-mon[73668]: pgmap v1057: 305 pgs: 305 active+clean; 530 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 46 KiB/s wr, 157 op/s
Oct 02 12:15:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2851670682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2011153037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.165 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e01b77dd-ca58-4ab7-aace-22f7b886ca88]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272804, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272804, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.168 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.171 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.171 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.174 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.176 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.195 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e47db9b1-2976-4360-8719-5cf08b829375]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.230 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.231 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.232 2 INFO nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Using config drive
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.241 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe9352b-23f8-4fe0-912c-42db8f59393b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.246 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ac60dbb1-dbbd-402e-93b4-ce3c39a3b103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.274 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.292 2 DEBUG nova.compute.manager [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-unplugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.293 2 DEBUG oslo_concurrency.lockutils [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.293 2 DEBUG oslo_concurrency.lockutils [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.294 2 DEBUG oslo_concurrency.lockutils [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.293 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[60cc62a4-83ce-4437-abc3-65f30274166a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.295 2 DEBUG nova.compute.manager [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] No waiting events found dispatching network-vif-unplugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.295 2 DEBUG nova.compute.manager [req-24f1d387-1522-427c-b6ad-157fb20a16d2 req-5d6700e2-6769-4711-8403-5e0c8fb8f54c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-unplugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.326 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[845765d2-784d-4275-81a1-7ad45df692e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 1084, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 1084, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508094, 'reachable_time': 20476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272830, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.352 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[abb7d80b-6c1e-4632-9f6c-a951f7853b1f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508108, 'tstamp': 508108}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272831, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85ed78eb-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508111, 'tstamp': 508111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272831, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.354 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.357 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.358 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.359 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:02.359 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.535 2 INFO nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Creating config drive at /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.545 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq1syxslm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.591 2 INFO nova.virt.libvirt.driver [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Deleting instance files /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_del
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.593 2 INFO nova.virt.libvirt.driver [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Deletion of /var/lib/nova/instances/38f4ff47-ece3-4bba-ad08-bb8e4d1391a4_del complete
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.679 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq1syxslm" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 509 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.729 2 DEBUG nova.storage.rbd_utils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.739 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.778 2 INFO nova.compute.manager [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Took 1.09 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.780 2 DEBUG oslo.service.loopingcall [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.781 2 DEBUG nova.compute.manager [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.781 2 DEBUG nova.network.neutron [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.946 2 DEBUG oslo_concurrency.processutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:02 compute-0 nova_compute[256940]: 2025-10-02 12:15:02.949 2 INFO nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deleting local config drive /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/disk.config because it was imported into RBD.
Oct 02 12:15:03 compute-0 sudo[272873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:03 compute-0 sudo[272873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-0 sudo[272873]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-0 systemd-machined[210927]: New machine qemu-9-instance-00000015.
Oct 02 12:15:03 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000015.
Oct 02 12:15:03 compute-0 sudo[272907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:03 compute-0 sudo[272907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-0 sudo[272907]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3998498089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:03 compute-0 sudo[272937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:03 compute-0 sudo[272937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-0 sudo[272937]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-0 sudo[272963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:15:03 compute-0 sudo[272963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:03 compute-0 sudo[272963]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-0 sudo[273060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:03 compute-0 nova_compute[256940]: 2025-10-02 12:15:03.932 2 DEBUG nova.network.neutron [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:03 compute-0 sudo[273060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-0 sudo[273060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.004 2 INFO nova.compute.manager [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Took 1.22 seconds to deallocate network for instance.
Oct 02 12:15:04 compute-0 sudo[273085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:04 compute-0 sudo[273085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 sudo[273085]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.070 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.071 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:04 compute-0 sudo[273110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:04 compute-0 sudo[273110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 sudo[273110]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 ovn_controller[148123]: 2025-10-02T12:15:04Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:15:04 compute-0 ovn_controller[148123]: 2025-10-02T12:15:04Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:d3:15 10.100.0.7
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.185 2 DEBUG oslo_concurrency.processutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:04 compute-0 sudo[273135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:15:04 compute-0 sudo[273135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 ceph-mon[73668]: pgmap v1058: 305 pgs: 305 active+clean; 509 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Oct 02 12:15:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3902082590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.346 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407304.346255, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.347 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Resumed (Lifecycle Event)
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.352 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.353 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.361 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance spawned successfully.
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.362 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.373 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.377 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.397 2 DEBUG nova.compute.manager [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.398 2 DEBUG oslo_concurrency.lockutils [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.398 2 DEBUG oslo_concurrency.lockutils [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.399 2 DEBUG oslo_concurrency.lockutils [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.399 2 DEBUG nova.compute.manager [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] No waiting events found dispatching network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.399 2 WARNING nova.compute.manager [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received unexpected event network-vif-plugged-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 for instance with vm_state deleted and task_state None.
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.400 2 DEBUG nova.compute.manager [req-2afec840-3fef-4d79-a464-51fa9db565fc req-b4dcc544-8ab3-4090-a10c-123e539de88d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Received event network-vif-deleted-7f08a0f7-6ac2-48f0-a660-4156e0e8f4a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.407 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.407 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.408 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.409 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.410 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.411 2 DEBUG nova.virt.libvirt.driver [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:04 compute-0 sudo[273135]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:15:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.465 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.465 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407304.347561, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.466 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Started (Lifecycle Event)
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 891af3c1-e8fb-4a4e-8d56-0c7a9a8ab2ac does not exist
Oct 02 12:15:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ccee288a-4cf1-4bc7-ae8b-946d31bf0ae2 does not exist
Oct 02 12:15:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d83c1205-e218-4fdd-88b4-edd84e104595 does not exist
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:04 compute-0 sudo[273198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.543 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:04 compute-0 sudo[273198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.546 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:04 compute-0 sudo[273198]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.566 2 INFO nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 4.49 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.566 2 DEBUG nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.588 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:04 compute-0 sudo[273223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:04 compute-0 sudo[273223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 sudo[273223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377956880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:04 compute-0 sudo[273248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:04 compute-0 sudo[273248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.690 2 INFO nova.compute.manager [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 5.88 seconds to build instance.
Oct 02 12:15:04 compute-0 sudo[273248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.693 2 DEBUG oslo_concurrency.processutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.697 2 DEBUG nova.compute.provider_tree [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 511 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 02 12:15:04 compute-0 sudo[273275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:15:04 compute-0 sudo[273275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.745 2 DEBUG nova.scheduler.client.report [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.764 2 DEBUG oslo_concurrency.lockutils [None req-1487fd3b-92e4-4583-b80d-35f306927326 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.861 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:04 compute-0 nova_compute[256940]: 2025-10-02 12:15:04.959 2 INFO nova.scheduler.client.report [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Deleted allocations for instance 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4
Oct 02 12:15:05 compute-0 nova_compute[256940]: 2025-10-02 12:15:05.190 2 DEBUG oslo_concurrency.lockutils [None req-1fa0e095-554c-44a8-8200-892a543df9f7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "38f4ff47-ece3-4bba-ad08-bb8e4d1391a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:05 compute-0 podman[273342]: 2025-10-02 12:15:05.200698188 +0000 UTC m=+0.068565252 container create 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:05 compute-0 podman[273342]: 2025-10-02 12:15:05.159007989 +0000 UTC m=+0.026875133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:05 compute-0 systemd[1]: Started libpod-conmon-5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5.scope.
Oct 02 12:15:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1377956880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:05 compute-0 podman[273342]: 2025-10-02 12:15:05.345984433 +0000 UTC m=+0.213851507 container init 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:15:05 compute-0 podman[273342]: 2025-10-02 12:15:05.359615019 +0000 UTC m=+0.227482113 container start 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:15:05 compute-0 podman[273342]: 2025-10-02 12:15:05.36347067 +0000 UTC m=+0.231337744 container attach 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:05 compute-0 systemd[1]: libpod-5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5.scope: Deactivated successfully.
Oct 02 12:15:05 compute-0 jovial_kilby[273358]: 167 167
Oct 02 12:15:05 compute-0 conmon[273358]: conmon 5042a90e7d42b69228ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5.scope/container/memory.events
Oct 02 12:15:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:05.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:05 compute-0 podman[273363]: 2025-10-02 12:15:05.43964106 +0000 UTC m=+0.042627305 container died 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 12:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-166bf131ea6f4daa4162ddea420be83916890e49743c7e9b01e1607991eb772b-merged.mount: Deactivated successfully.
Oct 02 12:15:05 compute-0 podman[273363]: 2025-10-02 12:15:05.494553954 +0000 UTC m=+0.097540179 container remove 5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:15:05 compute-0 systemd[1]: libpod-conmon-5042a90e7d42b69228ce6a1a935706753f9e774f4b3259cc4d5bac415797e9e5.scope: Deactivated successfully.
Oct 02 12:15:05 compute-0 podman[273385]: 2025-10-02 12:15:05.690124372 +0000 UTC m=+0.049317739 container create 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:15:05 compute-0 systemd[1]: Started libpod-conmon-7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8.scope.
Oct 02 12:15:05 compute-0 podman[273385]: 2025-10-02 12:15:05.672692817 +0000 UTC m=+0.031886204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:05 compute-0 podman[273385]: 2025-10-02 12:15:05.809946802 +0000 UTC m=+0.169140189 container init 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:05 compute-0 podman[273385]: 2025-10-02 12:15:05.820034386 +0000 UTC m=+0.179227753 container start 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:05 compute-0 podman[273385]: 2025-10-02 12:15:05.824008799 +0000 UTC m=+0.183202196 container attach 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:15:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:06.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:06 compute-0 ceph-mon[73668]: pgmap v1059: 305 pgs: 305 active+clean; 511 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 02 12:15:06 compute-0 kind_hugle[273401]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:15:06 compute-0 kind_hugle[273401]: --> relative data size: 1.0
Oct 02 12:15:06 compute-0 kind_hugle[273401]: --> All data devices are unavailable
Oct 02 12:15:06 compute-0 systemd[1]: libpod-7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8.scope: Deactivated successfully.
Oct 02 12:15:06 compute-0 podman[273385]: 2025-10-02 12:15:06.665713485 +0000 UTC m=+1.024906902 container died 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 508 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Oct 02 12:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba89041d98c413b1e641d15985f3a23665049e256bf6e925001a594380badde-merged.mount: Deactivated successfully.
Oct 02 12:15:06 compute-0 podman[273385]: 2025-10-02 12:15:06.75970349 +0000 UTC m=+1.118896857 container remove 7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hugle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:15:06 compute-0 systemd[1]: libpod-conmon-7f52427a1a10cd623e44ed0e943e5b6c5457fa0088178772021f0b3b114064b8.scope: Deactivated successfully.
Oct 02 12:15:06 compute-0 sudo[273275]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:06 compute-0 sudo[273428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:06 compute-0 sudo[273428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:06 compute-0 sudo[273428]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:06 compute-0 sudo[273453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:06 compute-0 sudo[273453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:06 compute-0 sudo[273453]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:07 compute-0 nova_compute[256940]: 2025-10-02 12:15:07.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:07 compute-0 sudo[273478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:07 compute-0 sudo[273478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:07 compute-0 sudo[273478]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:07 compute-0 sudo[273495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:07 compute-0 sudo[273495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:07 compute-0 sudo[273495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:07 compute-0 sudo[273526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:15:07 compute-0 sudo[273526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:07 compute-0 sudo[273535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:07 compute-0 sudo[273535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:07 compute-0 sudo[273535]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:07 compute-0 podman[273615]: 2025-10-02 12:15:07.49512664 +0000 UTC m=+0.053792986 container create 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:07 compute-0 systemd[1]: Started libpod-conmon-3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d.scope.
Oct 02 12:15:07 compute-0 podman[273615]: 2025-10-02 12:15:07.47255563 +0000 UTC m=+0.031221996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:07 compute-0 podman[273615]: 2025-10-02 12:15:07.592505303 +0000 UTC m=+0.151171719 container init 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:15:07 compute-0 podman[273615]: 2025-10-02 12:15:07.601432576 +0000 UTC m=+0.160098922 container start 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:15:07 compute-0 podman[273615]: 2025-10-02 12:15:07.609138268 +0000 UTC m=+0.167804704 container attach 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:15:07 compute-0 competent_mayer[273632]: 167 167
Oct 02 12:15:07 compute-0 systemd[1]: libpod-3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d.scope: Deactivated successfully.
Oct 02 12:15:07 compute-0 podman[273637]: 2025-10-02 12:15:07.65784667 +0000 UTC m=+0.028166837 container died 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a79bd038c4fb23526043aa67d9cde42f0f35b55008f5570d9f22d737875aa440-merged.mount: Deactivated successfully.
Oct 02 12:15:07 compute-0 podman[273637]: 2025-10-02 12:15:07.724688766 +0000 UTC m=+0.095008923 container remove 3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:15:07 compute-0 systemd[1]: libpod-conmon-3230cbe42ee5402d64e318f118e5d16ed998686e00df3ce0ca017cd50971c38d.scope: Deactivated successfully.
Oct 02 12:15:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:07 compute-0 podman[273660]: 2025-10-02 12:15:07.953134733 +0000 UTC m=+0.065108192 container create d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:15:08 compute-0 systemd[1]: Started libpod-conmon-d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4.scope.
Oct 02 12:15:08 compute-0 podman[273660]: 2025-10-02 12:15:07.922838982 +0000 UTC m=+0.034812501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bca49b85e5daa5ad9e14769f227a0b651024d2dcce3a093930e5ab47955cda4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bca49b85e5daa5ad9e14769f227a0b651024d2dcce3a093930e5ab47955cda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bca49b85e5daa5ad9e14769f227a0b651024d2dcce3a093930e5ab47955cda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bca49b85e5daa5ad9e14769f227a0b651024d2dcce3a093930e5ab47955cda4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:08 compute-0 podman[273660]: 2025-10-02 12:15:08.164302199 +0000 UTC m=+0.276275668 container init d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:15:08 compute-0 podman[273660]: 2025-10-02 12:15:08.176078016 +0000 UTC m=+0.288051475 container start d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:08 compute-0 podman[273660]: 2025-10-02 12:15:08.233483626 +0000 UTC m=+0.345457095 container attach d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:15:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:08 compute-0 ceph-mon[73668]: pgmap v1060: 305 pgs: 305 active+clean; 508 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Oct 02 12:15:08 compute-0 nova_compute[256940]: 2025-10-02 12:15:08.677 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:08 compute-0 nova_compute[256940]: 2025-10-02 12:15:08.679 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:08 compute-0 nova_compute[256940]: 2025-10-02 12:15:08.679 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 486 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct 02 12:15:08 compute-0 hungry_hawking[273677]: {
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:     "1": [
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:         {
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "devices": [
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "/dev/loop3"
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             ],
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "lv_name": "ceph_lv0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "lv_size": "7511998464",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "name": "ceph_lv0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "tags": {
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.cluster_name": "ceph",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.crush_device_class": "",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.encrypted": "0",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.osd_id": "1",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.type": "block",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:                 "ceph.vdo": "0"
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             },
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "type": "block",
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:             "vg_name": "ceph_vg0"
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:         }
Oct 02 12:15:08 compute-0 hungry_hawking[273677]:     ]
Oct 02 12:15:08 compute-0 hungry_hawking[273677]: }
Oct 02 12:15:09 compute-0 systemd[1]: libpod-d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4.scope: Deactivated successfully.
Oct 02 12:15:09 compute-0 conmon[273677]: conmon d9685f71929484efd98e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4.scope/container/memory.events
Oct 02 12:15:09 compute-0 podman[273660]: 2025-10-02 12:15:09.011464436 +0000 UTC m=+1.123437855 container died d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.050 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bca49b85e5daa5ad9e14769f227a0b651024d2dcce3a093930e5ab47955cda4-merged.mount: Deactivated successfully.
Oct 02 12:15:09 compute-0 podman[273660]: 2025-10-02 12:15:09.567732196 +0000 UTC m=+1.679705665 container remove d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.595 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:09 compute-0 systemd[1]: libpod-conmon-d9685f71929484efd98e91c8ecc0c9ef729e843f267015f4273f77cf1cca27b4.scope: Deactivated successfully.
Oct 02 12:15:09 compute-0 sudo[273526]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.644 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:09 compute-0 sudo[273699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2752457260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:09 compute-0 sudo[273699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:09 compute-0 sudo[273699]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:09 compute-0 sudo[273724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:09 compute-0 sudo[273724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:09 compute-0 sudo[273724]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.867 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.869 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating file /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/5b3bed29f5e64a24849ad0807cc7c241.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:15:09 compute-0 nova_compute[256940]: 2025-10-02 12:15:09.870 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/5b3bed29f5e64a24849ad0807cc7c241.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:09 compute-0 sudo[273749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:09 compute-0 sudo[273749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:09 compute-0 sudo[273749]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:09 compute-0 sudo[273775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:15:09 compute-0 sudo[273775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.434 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/5b3bed29f5e64a24849ad0807cc7c241.tmp" returned: 1 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.436 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/5b3bed29f5e64a24849ad0807cc7c241.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.436 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating directory /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.437 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:10.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:10 compute-0 podman[273844]: 2025-10-02 12:15:10.454877709 +0000 UTC m=+0.031065753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:10 compute-0 podman[273844]: 2025-10-02 12:15:10.573931308 +0000 UTC m=+0.150119262 container create cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.683 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.690 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:15:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 455 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:10 compute-0 systemd[1]: Started libpod-conmon-cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007.scope.
Oct 02 12:15:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:10 compute-0 ceph-mon[73668]: pgmap v1061: 305 pgs: 305 active+clean; 486 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct 02 12:15:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1446795426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.903 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.904 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.905 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.906 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.907 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.909 2 INFO nova.compute.manager [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Terminating instance
Oct 02 12:15:10 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.912 2 DEBUG nova.compute.manager [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:10 compute-0 kernel: tap2747f0c7-f3 (unregistering): left promiscuous mode
Oct 02 12:15:10 compute-0 NetworkManager[44981]: <info>  [1759407310.9777] device (tap2747f0c7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:10 compute-0 podman[273844]: 2025-10-02 12:15:10.992311127 +0000 UTC m=+0.568499171 container init cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:10.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 ovn_controller[148123]: 2025-10-02T12:15:10Z|00101|binding|INFO|Releasing lport 2747f0c7-f3c3-4402-aedd-266862caf435 from this chassis (sb_readonly=0)
Oct 02 12:15:11 compute-0 ovn_controller[148123]: 2025-10-02T12:15:11Z|00102|binding|INFO|Setting lport 2747f0c7-f3c3-4402-aedd-266862caf435 down in Southbound
Oct 02 12:15:11 compute-0 ovn_controller[148123]: 2025-10-02T12:15:11Z|00103|binding|INFO|Removing iface tap2747f0c7-f3 ovn-installed in OVS
Oct 02 12:15:11 compute-0 podman[273844]: 2025-10-02 12:15:11.008722545 +0000 UTC m=+0.584910539 container start cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:15:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:11.011 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:d3:15 10.100.0.7'], port_security=['fa:16:3e:87:d3:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c704d1db-8e20-432e-900d-4f4267059d9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2747f0c7-f3c3-4402-aedd-266862caf435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:11.013 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2747f0c7-f3c3-4402-aedd-266862caf435 in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:11.016 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85ed78eb-4003-42a7-9312-f47c5830131f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:11.018 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[24f804fb-0736-4b3e-a52e-6c41307bb3c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:11.019 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f namespace which is not needed anymore
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 kind_chebyshev[273861]: 167 167
Oct 02 12:15:11 compute-0 systemd[1]: libpod-cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007.scope: Deactivated successfully.
Oct 02 12:15:11 compute-0 conmon[273861]: conmon cfe0f917be9d1067bb24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007.scope/container/memory.events
Oct 02 12:15:11 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 12:15:11 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Consumed 13.756s CPU time.
Oct 02 12:15:11 compute-0 systemd-machined[210927]: Machine qemu-8-instance-0000000b terminated.
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 podman[273844]: 2025-10-02 12:15:11.143499216 +0000 UTC m=+0.719687200 container attach cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:15:11 compute-0 podman[273844]: 2025-10-02 12:15:11.144254155 +0000 UTC m=+0.720442149 container died cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.155 2 INFO nova.virt.libvirt.driver [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Instance destroyed successfully.
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.156 2 DEBUG nova.objects.instance [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid c704d1db-8e20-432e-900d-4f4267059d9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.260 2 DEBUG nova.virt.libvirt.vif [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:12:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1406080749',display_name='tempest-ServersAdminTestJSON-server-1406080749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1406080749',id=11,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-eqz307f9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:55Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=c704d1db-8e20-432e-900d-4f4267059d9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.261 2 DEBUG nova.network.os_vif_util [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "2747f0c7-f3c3-4402-aedd-266862caf435", "address": "fa:16:3e:87:d3:15", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2747f0c7-f3", "ovs_interfaceid": "2747f0c7-f3c3-4402-aedd-266862caf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.262 2 DEBUG nova.network.os_vif_util [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.262 2 DEBUG os_vif [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2747f0c7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-0 nova_compute[256940]: 2025-10-02 12:15:11.272 2 INFO os_vif [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:d3:15,bridge_name='br-int',has_traffic_filtering=True,id=2747f0c7-f3c3-4402-aedd-266862caf435,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2747f0c7-f3')
Oct 02 12:15:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-64eb4a374808ec0fa362cec1ce14142b0985bb160d483c393e4a76c76e9dbf22-merged.mount: Deactivated successfully.
Oct 02 12:15:12 compute-0 ceph-mon[73668]: pgmap v1062: 305 pgs: 305 active+clean; 455 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/879596293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:12.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 387 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:12 compute-0 podman[273844]: 2025-10-02 12:15:12.818144677 +0000 UTC m=+2.394332671 container remove cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:15:12 compute-0 systemd[1]: libpod-conmon-cfe0f917be9d1067bb24cb13919e5e2171327fbb355949a875ca8745c2a55007.scope: Deactivated successfully.
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.088 2 INFO nova.virt.libvirt.driver [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deleting instance files /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del
Oct 02 12:15:13 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [NOTICE]   (269126) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:13 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [NOTICE]   (269126) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:13 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [WARNING]  (269126) : Exiting Master process...
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.090 2 INFO nova.virt.libvirt.driver [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deletion of /var/lib/nova/instances/c704d1db-8e20-432e-900d-4f4267059d9f_del complete
Oct 02 12:15:13 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [ALERT]    (269126) : Current worker (269128) exited with code 143 (Terminated)
Oct 02 12:15:13 compute-0 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[269122]: [WARNING]  (269126) : All workers exited. Exiting... (0)
Oct 02 12:15:13 compute-0 systemd[1]: libpod-a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c.scope: Deactivated successfully.
Oct 02 12:15:13 compute-0 podman[273932]: 2025-10-02 12:15:13.102624248 +0000 UTC m=+0.206038673 container died a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.158 2 INFO nova.compute.manager [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Took 2.25 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.159 2 DEBUG oslo.service.loopingcall [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.159 2 DEBUG nova.compute.manager [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.160 2 DEBUG nova.network.neutron [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:13 compute-0 podman[273951]: 2025-10-02 12:15:13.099153777 +0000 UTC m=+0.085701129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c34339826d8d3e3d995d9cd5600042e53048893151d91c87d0a167a22b823e-merged.mount: Deactivated successfully.
Oct 02 12:15:13 compute-0 podman[273951]: 2025-10-02 12:15:13.207966599 +0000 UTC m=+0.194513881 container create 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:15:13 compute-0 podman[273932]: 2025-10-02 12:15:13.215066375 +0000 UTC m=+0.318480730 container cleanup a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:13 compute-0 systemd[1]: libpod-conmon-a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c.scope: Deactivated successfully.
Oct 02 12:15:13 compute-0 systemd[1]: Started libpod-conmon-03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a.scope.
Oct 02 12:15:13 compute-0 podman[273981]: 2025-10-02 12:15:13.315374295 +0000 UTC m=+0.053836437 container remove a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:15:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ccc80db88b29e17759cad4adf5ebe8a8f76af08dfa4ee7524d5692875c32c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ccc80db88b29e17759cad4adf5ebe8a8f76af08dfa4ee7524d5692875c32c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ccc80db88b29e17759cad4adf5ebe8a8f76af08dfa4ee7524d5692875c32c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ccc80db88b29e17759cad4adf5ebe8a8f76af08dfa4ee7524d5692875c32c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.326 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4691971a-bd98-4f94-aa28-e976ad78f9d0]: (4, ('Thu Oct  2 12:15:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c)\na8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c\nThu Oct  2 12:15:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (a8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c)\na8e67889c30774c5fd9c91377ee8ef09692654cad30bf5bdabccc158df6c0d9c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.331 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[55ba0306-c32d-4abd-b49f-bf1812025f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.332 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:13 compute-0 kernel: tap85ed78eb-40: left promiscuous mode
Oct 02 12:15:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:13 compute-0 podman[273951]: 2025-10-02 12:15:13.399885732 +0000 UTC m=+0.386433004 container init 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.414 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[43cee1b9-866b-4e93-bedf-debccbfe766a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 podman[273951]: 2025-10-02 12:15:13.41856851 +0000 UTC m=+0.405115762 container start 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:15:13 compute-0 nova_compute[256940]: 2025-10-02 12:15:13.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:13 compute-0 podman[273951]: 2025-10-02 12:15:13.423201921 +0000 UTC m=+0.409749193 container attach 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.437 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[33a66e86-dc9a-4fa0-a0f6-7602b107233d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.439 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3b326614-9169-4933-b784-2eea027f6062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.463 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f5ae58-8a2d-4423-ad89-027935ad08b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508083, 'reachable_time': 43646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274006, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d85ed78eb\x2d4003\x2d42a7\x2d9312\x2df47c5830131f.mount: Deactivated successfully.
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.471 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:13.471 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[28a9cf08-47fd-4336-8967-b36ea55f55db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.144 2 DEBUG nova.network.neutron [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.168 2 INFO nova.compute.manager [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Took 1.01 seconds to deallocate network for instance.
Oct 02 12:15:14 compute-0 agitated_cannon[273996]: {
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:         "osd_id": 1,
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:         "type": "bluestore"
Oct 02 12:15:14 compute-0 agitated_cannon[273996]:     }
Oct 02 12:15:14 compute-0 agitated_cannon[273996]: }
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.242 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.242 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:14 compute-0 ceph-mon[73668]: pgmap v1063: 305 pgs: 305 active+clean; 387 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:14 compute-0 systemd[1]: libpod-03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a.scope: Deactivated successfully.
Oct 02 12:15:14 compute-0 podman[273951]: 2025-10-02 12:15:14.269300892 +0000 UTC m=+1.255848174 container died 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.278 2 DEBUG nova.compute.manager [req-3ccbfb9a-f903-4b29-b233-494d4a599b06 req-0d9259ab-08d8-4d90-acf4-5f5ed3a7650e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Received event network-vif-deleted-2747f0c7-f3c3-4402-aedd-266862caf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-06ccc80db88b29e17759cad4adf5ebe8a8f76af08dfa4ee7524d5692875c32c0-merged.mount: Deactivated successfully.
Oct 02 12:15:14 compute-0 podman[273951]: 2025-10-02 12:15:14.346805276 +0000 UTC m=+1.333352558 container remove 03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.352 2 DEBUG oslo_concurrency.processutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:14 compute-0 systemd[1]: libpod-conmon-03c610b997d13553c54b9343d01f9454d53af34ff26655bf4f70960edf0fb92a.scope: Deactivated successfully.
Oct 02 12:15:14 compute-0 sudo[273775]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:15:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:15:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7b3ce5dc-6fe6-4d73-a3f1-4d2f76ee3996 does not exist
Oct 02 12:15:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 546fcc69-bd1c-4d23-a1ef-c296cc2b9a93 does not exist
Oct 02 12:15:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 14da0189-21f7-48ae-ac19-ab9deb88bb5b does not exist
Oct 02 12:15:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:14.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:14 compute-0 sudo[274038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:14 compute-0 sudo[274038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:14 compute-0 sudo[274038]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:14 compute-0 sudo[274082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:15:14 compute-0 sudo[274082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:14 compute-0 sudo[274082]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 348 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Oct 02 12:15:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635107998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.832 2 DEBUG oslo_concurrency.processutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.841 2 DEBUG nova.compute.provider_tree [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.859 2 DEBUG nova.scheduler.client.report [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.897 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:14 compute-0 nova_compute[256940]: 2025-10-02 12:15:14.925 2 INFO nova.scheduler.client.report [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Deleted allocations for instance c704d1db-8e20-432e-900d-4f4267059d9f
Oct 02 12:15:15 compute-0 nova_compute[256940]: 2025-10-02 12:15:15.030 2 DEBUG oslo_concurrency.lockutils [None req-10947a45-6648-4835-a68e-0bcc8d4c366e 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "c704d1db-8e20-432e-900d-4f4267059d9f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:15.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2635107998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:16 compute-0 nova_compute[256940]: 2025-10-02 12:15:16.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:16 compute-0 ceph-mon[73668]: pgmap v1064: 305 pgs: 305 active+clean; 348 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Oct 02 12:15:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:16.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 327 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 826 KiB/s wr, 167 op/s
Oct 02 12:15:16 compute-0 nova_compute[256940]: 2025-10-02 12:15:16.956 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407301.9505522, 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:16 compute-0 nova_compute[256940]: 2025-10-02 12:15:16.957 2 INFO nova.compute.manager [-] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] VM Stopped (Lifecycle Event)
Oct 02 12:15:17 compute-0 nova_compute[256940]: 2025-10-02 12:15:17.003 2 DEBUG nova.compute.manager [None req-e6e0849f-602f-4ac5-ba5a-a7fc65f429c1 - - - - - -] [instance: 38f4ff47-ece3-4bba-ad08-bb8e4d1391a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:17 compute-0 podman[274110]: 2025-10-02 12:15:17.447404743 +0000 UTC m=+0.100316521 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 12:15:17 compute-0 podman[274111]: 2025-10-02 12:15:17.509626189 +0000 UTC m=+0.157809673 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3930749028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:18 compute-0 ceph-mon[73668]: pgmap v1065: 305 pgs: 305 active+clean; 327 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 826 KiB/s wr, 167 op/s
Oct 02 12:15:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 336 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 825 KiB/s wr, 146 op/s
Oct 02 12:15:19 compute-0 nova_compute[256940]: 2025-10-02 12:15:19.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:20 compute-0 nova_compute[256940]: 2025-10-02 12:15:20.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:20 compute-0 ceph-mon[73668]: pgmap v1066: 305 pgs: 305 active+clean; 336 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 825 KiB/s wr, 146 op/s
Oct 02 12:15:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 359 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:15:20 compute-0 nova_compute[256940]: 2025-10-02 12:15:20.750 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:15:21 compute-0 nova_compute[256940]: 2025-10-02 12:15:21.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:22 compute-0 ceph-mon[73668]: pgmap v1067: 305 pgs: 305 active+clean; 359 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:15:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2104195057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1515999479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 407 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 02 12:15:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:23 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 02 12:15:23 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Consumed 14.325s CPU time.
Oct 02 12:15:23 compute-0 systemd-machined[210927]: Machine qemu-9-instance-00000015 terminated.
Oct 02 12:15:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:23.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:23 compute-0 nova_compute[256940]: 2025-10-02 12:15:23.765 2 INFO nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance shutdown successfully after 13 seconds.
Oct 02 12:15:23 compute-0 nova_compute[256940]: 2025-10-02 12:15:23.774 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance destroyed successfully.
Oct 02 12:15:23 compute-0 nova_compute[256940]: 2025-10-02 12:15:23.778 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:23 compute-0 nova_compute[256940]: 2025-10-02 12:15:23.778 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.194 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.194 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.270 2 INFO nova.compute.rpcapi [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.272 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.432 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.432 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:24 compute-0 nova_compute[256940]: 2025-10-02 12:15:24.433 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:15:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:15:24 compute-0 ceph-mon[73668]: pgmap v1068: 305 pgs: 305 active+clean; 407 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 02 12:15:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 02 12:15:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:26 compute-0 nova_compute[256940]: 2025-10-02 12:15:26.154 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407311.1530304, c704d1db-8e20-432e-900d-4f4267059d9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:26 compute-0 nova_compute[256940]: 2025-10-02 12:15:26.155 2 INFO nova.compute.manager [-] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] VM Stopped (Lifecycle Event)
Oct 02 12:15:26 compute-0 nova_compute[256940]: 2025-10-02 12:15:26.204 2 DEBUG nova.compute.manager [None req-7cb169b5-f39f-444d-a5bd-7749e2db6c29 - - - - - -] [instance: c704d1db-8e20-432e-900d-4f4267059d9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:26 compute-0 nova_compute[256940]: 2025-10-02 12:15:26.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:26.452 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:26.452 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:26.453 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:26 compute-0 ceph-mon[73668]: pgmap v1069: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 02 12:15:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/521728697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 12:15:27 compute-0 sudo[274160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:27 compute-0 sudo[274160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:27 compute-0 sudo[274160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:27 compute-0 sudo[274185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:27 compute-0 sudo[274185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:27 compute-0 sudo[274185]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 02 12:15:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 02 12:15:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 02 12:15:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1554869641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:15:28
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr']
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:15:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 157 op/s
Oct 02 12:15:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 02 12:15:29 compute-0 ceph-mon[73668]: pgmap v1070: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 12:15:29 compute-0 ceph-mon[73668]: osdmap e149: 3 total, 3 up, 3 in
Oct 02 12:15:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 02 12:15:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 02 12:15:29 compute-0 nova_compute[256940]: 2025-10-02 12:15:29.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 02 12:15:30 compute-0 ceph-mon[73668]: pgmap v1072: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 157 op/s
Oct 02 12:15:30 compute-0 ceph-mon[73668]: osdmap e150: 3 total, 3 up, 3 in
Oct 02 12:15:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2568182976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/917251262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 02 12:15:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 02 12:15:30 compute-0 podman[274212]: 2025-10-02 12:15:30.398302743 +0000 UTC m=+0.070038660 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:15:30 compute-0 podman[274213]: 2025-10-02 12:15:30.443403371 +0000 UTC m=+0.101504192 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:15:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 412 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 686 KiB/s wr, 209 op/s
Oct 02 12:15:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 02 12:15:31 compute-0 ceph-mon[73668]: osdmap e151: 3 total, 3 up, 3 in
Oct 02 12:15:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 02 12:15:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 02 12:15:31 compute-0 nova_compute[256940]: 2025-10-02 12:15:31.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:31 compute-0 nova_compute[256940]: 2025-10-02 12:15:31.825 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:31 compute-0 nova_compute[256940]: 2025-10-02 12:15:31.825 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:31 compute-0 nova_compute[256940]: 2025-10-02 12:15:31.825 2 DEBUG nova.compute.manager [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Going to confirm migration 5 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.158 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.158 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.158 2 DEBUG nova.network.neutron [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.159 2 DEBUG nova.objects.instance [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'info_cache' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:32 compute-0 ceph-mon[73668]: pgmap v1075: 305 pgs: 305 active+clean; 412 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 686 KiB/s wr, 209 op/s
Oct 02 12:15:32 compute-0 ceph-mon[73668]: osdmap e152: 3 total, 3 up, 3 in
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.296 2 DEBUG nova.network.neutron [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 393 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.774 2 DEBUG nova.network.neutron [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.789 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.789 2 DEBUG nova.objects.instance [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:32 compute-0 nova_compute[256940]: 2025-10-02 12:15:32.903 2 DEBUG nova.storage.rbd_utils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] removing snapshot(nova-resize) on rbd image(bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:15:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 02 12:15:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 02 12:15:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.254 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.255 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.453 2 DEBUG nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.475 2 DEBUG nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.475 2 DEBUG nova.compute.provider_tree [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.487 2 DEBUG nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.508 2 DEBUG nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:15:33 compute-0 nova_compute[256940]: 2025-10-02 12:15:33.544 2 DEBUG oslo_concurrency.processutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606645617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.007 2 DEBUG oslo_concurrency.processutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.016 2 DEBUG nova.compute.provider_tree [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.080 2 DEBUG nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.145 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:34 compute-0 ceph-mon[73668]: pgmap v1077: 305 pgs: 305 active+clean; 393 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Oct 02 12:15:34 compute-0 ceph-mon[73668]: osdmap e153: 3 total, 3 up, 3 in
Oct 02 12:15:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3606645617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/442933021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.310 2 INFO nova.scheduler.client.report [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocation for migration f91708f3-2f55-4a30-9404-01310d275f98
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:34 compute-0 nova_compute[256940]: 2025-10-02 12:15:34.418 2 DEBUG oslo_concurrency.lockutils [None req-6cee629c-8b87-4e35-adc1-f2e3431b2f5e ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 2.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 349 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.8 MiB/s wr, 411 op/s
Oct 02 12:15:35 compute-0 nova_compute[256940]: 2025-10-02 12:15:35.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:35 compute-0 nova_compute[256940]: 2025-10-02 12:15:35.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:35 compute-0 nova_compute[256940]: 2025-10-02 12:15:35.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:35 compute-0 nova_compute[256940]: 2025-10-02 12:15:35.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:15:35 compute-0 nova_compute[256940]: 2025-10-02 12:15:35.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:15:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 02 12:15:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2688368381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3853434744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 02 12:15:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 02 12:15:36 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:15:36 compute-0 nova_compute[256940]: 2025-10-02 12:15:36.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:36 compute-0 ceph-mon[73668]: pgmap v1079: 305 pgs: 305 active+clean; 349 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.8 MiB/s wr, 411 op/s
Oct 02 12:15:36 compute-0 ceph-mon[73668]: osdmap e154: 3 total, 3 up, 3 in
Oct 02 12:15:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 417 op/s
Oct 02 12:15:37 compute-0 nova_compute[256940]: 2025-10-02 12:15:37.271 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:37 compute-0 nova_compute[256940]: 2025-10-02 12:15:37.272 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:15:37 compute-0 nova_compute[256940]: 2025-10-02 12:15:37.272 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:15:37 compute-0 nova_compute[256940]: 2025-10-02 12:15:37.292 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:15:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:37.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/513948928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 02 12:15:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.245 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.246 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.459 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407323.4582875, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.460 2 INFO nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Stopped (Lifecycle Event)
Oct 02 12:15:38 compute-0 nova_compute[256940]: 2025-10-02 12:15:38.475 2 DEBUG nova.compute.manager [None req-71d43587-d178-4750-abee-ececb5ae68e5 - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 295 op/s
Oct 02 12:15:38 compute-0 ceph-mon[73668]: pgmap v1081: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 417 op/s
Oct 02 12:15:38 compute-0 ceph-mon[73668]: osdmap e155: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3304084597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 02 12:15:39 compute-0 nova_compute[256940]: 2025-10-02 12:15:39.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:15:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 52K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3405 syncs, 3.85 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4561 writes, 18K keys, 4561 commit groups, 1.0 writes per commit group, ingest: 21.67 MB, 0.04 MB/s
                                           Interval WAL: 4561 writes, 1603 syncs, 2.85 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:15:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 02 12:15:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0057872187907128236 of space, bias 1.0, pg target 1.736165637213847 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0020808978340777964 of space, bias 1.0, pg target 0.6221884523892611 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:15:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.263 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.263 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.263 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:15:40 compute-0 nova_compute[256940]: 2025-10-02 12:15:40.264 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:40 compute-0 ceph-mon[73668]: pgmap v1083: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 295 op/s
Oct 02 12:15:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/565957413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2642791850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-0 ceph-mon[73668]: osdmap e156: 3 total, 3 up, 3 in
Oct 02 12:15:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2178282675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 282 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:15:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717745806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.055 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.309 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.310 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=20.866817474365234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.310 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.311 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.403 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.403 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:15:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:41 compute-0 nova_compute[256940]: 2025-10-02 12:15:41.466 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4150019923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3717745806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234252399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.210 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.744s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.219 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.237 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.258 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.259 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:42 compute-0 nova_compute[256940]: 2025-10-02 12:15:42.260 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 323 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Oct 02 12:15:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:43.043 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:43.044 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:15:43 compute-0 nova_compute[256940]: 2025-10-02 12:15:43.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 02 12:15:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:43.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:43 compute-0 ceph-mon[73668]: pgmap v1085: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 282 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:15:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2234252399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1971211061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:44 compute-0 nova_compute[256940]: 2025-10-02 12:15:44.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 02 12:15:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 02 12:15:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 361 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 273 op/s
Oct 02 12:15:45 compute-0 ceph-mon[73668]: pgmap v1086: 305 pgs: 305 active+clean; 323 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Oct 02 12:15:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/623290968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:45 compute-0 ceph-mon[73668]: osdmap e157: 3 total, 3 up, 3 in
Oct 02 12:15:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:15:45.047 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:46 compute-0 ceph-mon[73668]: pgmap v1088: 305 pgs: 305 active+clean; 361 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 273 op/s
Oct 02 12:15:46 compute-0 nova_compute[256940]: 2025-10-02 12:15:46.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 372 op/s
Oct 02 12:15:46 compute-0 nova_compute[256940]: 2025-10-02 12:15:46.984 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:46 compute-0 nova_compute[256940]: 2025-10-02 12:15:46.985 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.005 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.036 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "0d416198-c85f-4560-8638-d57410e783c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.036 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.062 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.084 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.085 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.092 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.093 2 INFO nova.compute.claims [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.128 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.247 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:47 compute-0 sudo[274367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:47 compute-0 sudo[274367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:47 compute-0 sudo[274367]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:15:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:15:47 compute-0 sudo[274402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:47 compute-0 sudo[274402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.513 2 DEBUG nova.compute.manager [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:15:47 compute-0 sudo[274402]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:47 compute-0 podman[274435]: 2025-10-02 12:15:47.588085055 +0000 UTC m=+0.060685546 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.616 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:47 compute-0 podman[274456]: 2025-10-02 12:15:47.74989051 +0000 UTC m=+0.132362767 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:15:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599039566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.861 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.870 2 DEBUG nova.compute.provider_tree [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.909 2 DEBUG nova.scheduler.client.report [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.956 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.957 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.967 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:47 compute-0 nova_compute[256940]: 2025-10-02 12:15:47.967 2 INFO nova.compute.claims [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.007 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.007 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.038 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.039 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.090 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.090 2 DEBUG nova.network.neutron [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.115 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:15:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.139 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:48 compute-0 ceph-mon[73668]: pgmap v1089: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 372 op/s
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.160 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.306 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.309 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.309 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Creating image(s)
Oct 02 12:15:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:15:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:15:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.914 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:48 compute-0 nova_compute[256940]: 2025-10-02 12:15:48.959 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.006 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.014 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.052 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.082 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.083 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.084 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.084 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 02 12:15:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.186 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.190 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.210 2 WARNING nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 0 instances on the hypervisor.
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid cfb8af05-9dc1-4736-8f2a-88aad55f1585 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid 0d416198-c85f-4560-8638-d57410e783c9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.214 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.215 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "0d416198-c85f-4560-8638-d57410e783c9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700940213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.241 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.247 2 DEBUG nova.compute.provider_tree [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.263 2 DEBUG nova.scheduler.client.report [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.281 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.283 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 1.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.301 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.302 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.315 2 DEBUG nova.objects.instance [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2599039566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.363 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "f9b68744-a7a8-495f-990f-f13bab2c1a40" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.365 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.399 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.400 2 INFO nova.compute.claims [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.400 2 DEBUG nova.objects.instance [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.427 2 DEBUG nova.objects.instance [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.462 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.463 2 DEBUG nova.network.neutron [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.502 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.510 2 INFO nova.compute.resource_tracker [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating resource usage from migration e3ae1389-09cd-481c-9d83-8b061ca8b765
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.511 2 DEBUG nova.compute.resource_tracker [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting to track incoming migration e3ae1389-09cd-481c-9d83-8b061ca8b765 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.522 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.600 2 DEBUG nova.network.neutron [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.601 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.649 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.680 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.683 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.683 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Creating image(s)
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.725 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.756 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.790 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.800 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.862 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.911 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.912 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.913 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.913 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.941 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.945 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0d416198-c85f-4560-8638-d57410e783c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.973 2 DEBUG nova.network.neutron [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:15:49 compute-0 nova_compute[256940]: 2025-10-02 12:15:49.974 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.034 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108361957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.123 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.177 2 DEBUG nova.objects.instance [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid cfb8af05-9dc1-4736-8f2a-88aad55f1585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.184 2 DEBUG nova.compute.provider_tree [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.195 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.196 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Ensure instance console log exists: /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.196 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.197 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.198 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.200 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.204 2 DEBUG nova.scheduler.client.report [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.214 2 WARNING nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.220 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.221 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.224 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.225 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.227 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.227 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.228 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.228 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.229 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.229 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.229 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.230 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.230 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.230 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.231 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.231 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.236 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.267 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.268 2 INFO nova.compute.manager [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Migrating
Oct 02 12:15:50 compute-0 ceph-mon[73668]: pgmap v1090: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Oct 02 12:15:50 compute-0 ceph-mon[73668]: osdmap e158: 3 total, 3 up, 3 in
Oct 02 12:15:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1700940213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4108361957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:50.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556109171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 432 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 371 op/s
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.746 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.789 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.794 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:50 compute-0 nova_compute[256940]: 2025-10-02 12:15:50.845 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0d416198-c85f-4560-8638-d57410e783c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.020 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456174084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.336 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.338 2 DEBUG nova.objects.instance [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfb8af05-9dc1-4736-8f2a-88aad55f1585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.357 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <uuid>cfb8af05-9dc1-4736-8f2a-88aad55f1585</uuid>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <name>instance-00000019</name>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersOnMultiNodesTest-server-2119323674-1</nova:name>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:15:50</nova:creationTime>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <system>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="serial">cfb8af05-9dc1-4736-8f2a-88aad55f1585</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="uuid">cfb8af05-9dc1-4736-8f2a-88aad55f1585</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </system>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <os>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </os>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <features>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </features>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk">
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config">
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:51 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/console.log" append="off"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <video>
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </video>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:15:51 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:15:51 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:15:51 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:15:51 compute-0 nova_compute[256940]: </domain>
Oct 02 12:15:51 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.551 2 DEBUG nova.objects.instance [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d416198-c85f-4560-8638-d57410e783c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.554 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.554 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.554 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Using config drive
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.587 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/556109171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/456174084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.618 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.619 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Ensure instance console log exists: /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.619 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.619 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.620 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.621 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.626 2 WARNING nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.632 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.632 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.635 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.636 2 DEBUG nova.virt.libvirt.host [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.637 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.637 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.637 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.638 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.639 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.639 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.639 2 DEBUG nova.virt.hardware [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:51 compute-0 nova_compute[256940]: 2025-10-02 12:15:51.641 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 02 12:15:51 compute-0 sshd-session[274942]: Accepted publickey for nova from 192.168.122.101 port 36002 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:15:51 compute-0 systemd-logind[820]: New session 54 of user nova.
Oct 02 12:15:51 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:15:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 02 12:15:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:15:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:15:52 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:15:52 compute-0 systemd[274965]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705301429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.137 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.184 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.188 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:52 compute-0 systemd[274965]: Queued start job for default target Main User Target.
Oct 02 12:15:52 compute-0 systemd[274965]: Created slice User Application Slice.
Oct 02 12:15:52 compute-0 systemd[274965]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:15:52 compute-0 systemd[274965]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:15:52 compute-0 systemd[274965]: Reached target Paths.
Oct 02 12:15:52 compute-0 systemd[274965]: Reached target Timers.
Oct 02 12:15:52 compute-0 systemd[274965]: Starting D-Bus User Message Bus Socket...
Oct 02 12:15:52 compute-0 systemd[274965]: Starting Create User's Volatile Files and Directories...
Oct 02 12:15:52 compute-0 systemd[274965]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:15:52 compute-0 systemd[274965]: Reached target Sockets.
Oct 02 12:15:52 compute-0 systemd[274965]: Finished Create User's Volatile Files and Directories.
Oct 02 12:15:52 compute-0 systemd[274965]: Reached target Basic System.
Oct 02 12:15:52 compute-0 systemd[274965]: Reached target Main User Target.
Oct 02 12:15:52 compute-0 systemd[274965]: Startup finished in 206ms.
Oct 02 12:15:52 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:15:52 compute-0 systemd[1]: Started Session 54 of User nova.
Oct 02 12:15:52 compute-0 sshd-session[274942]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:52 compute-0 sshd-session[275001]: Received disconnect from 192.168.122.101 port 36002:11: disconnected by user
Oct 02 12:15:52 compute-0 sshd-session[275001]: Disconnected from user nova 192.168.122.101 port 36002
Oct 02 12:15:52 compute-0 sshd-session[274942]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:15:52 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 02 12:15:52 compute-0 systemd-logind[820]: Session 54 logged out. Waiting for processes to exit.
Oct 02 12:15:52 compute-0 systemd-logind[820]: Removed session 54.
Oct 02 12:15:52 compute-0 sshd-session[275023]: Accepted publickey for nova from 192.168.122.101 port 36006 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:15:52 compute-0 systemd-logind[820]: New session 56 of user nova.
Oct 02 12:15:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:52 compute-0 systemd[1]: Started Session 56 of User nova.
Oct 02 12:15:52 compute-0 sshd-session[275023]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.605 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Creating config drive at /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config
Oct 02 12:15:52 compute-0 sshd-session[275026]: Received disconnect from 192.168.122.101 port 36006:11: disconnected by user
Oct 02 12:15:52 compute-0 sshd-session[275026]: Disconnected from user nova 192.168.122.101 port 36006
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.616 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcp3bftw0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:52 compute-0 sshd-session[275023]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:15:52 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 02 12:15:52 compute-0 systemd-logind[820]: Session 56 logged out. Waiting for processes to exit.
Oct 02 12:15:52 compute-0 systemd-logind[820]: Removed session 56.
Oct 02 12:15:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407417649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.702 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.704 2 DEBUG nova.objects.instance [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d416198-c85f-4560-8638-d57410e783c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 515 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 380 op/s
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.738 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <uuid>0d416198-c85f-4560-8638-d57410e783c9</uuid>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <name>instance-0000001a</name>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersOnMultiNodesTest-server-2119323674-2</nova:name>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:15:51</nova:creationTime>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <system>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="serial">0d416198-c85f-4560-8638-d57410e783c9</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="uuid">0d416198-c85f-4560-8638-d57410e783c9</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </system>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <os>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </os>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <features>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </features>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0d416198-c85f-4560-8638-d57410e783c9_disk">
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0d416198-c85f-4560-8638-d57410e783c9_disk.config">
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </source>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:15:52 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/console.log" append="off"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <video>
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </video>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:15:52 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:15:52 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:15:52 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:15:52 compute-0 nova_compute[256940]: </domain>
Oct 02 12:15:52 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.751 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcp3bftw0" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.791 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:52 compute-0 nova_compute[256940]: 2025-10-02 12:15:52.797 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:52 compute-0 ceph-mon[73668]: pgmap v1092: 305 pgs: 305 active+clean; 432 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 371 op/s
Oct 02 12:15:52 compute-0 ceph-mon[73668]: osdmap e159: 3 total, 3 up, 3 in
Oct 02 12:15:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2705301429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.067 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.068 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.070 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Using config drive
Oct 02 12:15:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.402 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.675 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Creating config drive at /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.680 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_os58iw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.834 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_os58iw" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.878 2 DEBUG nova.storage.rbd_utils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 0d416198-c85f-4560-8638-d57410e783c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:53 compute-0 nova_compute[256940]: 2025-10-02 12:15:53.883 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config 0d416198-c85f-4560-8638-d57410e783c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/407417649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:54 compute-0 ceph-mon[73668]: pgmap v1094: 305 pgs: 305 active+clean; 515 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 380 op/s
Oct 02 12:15:54 compute-0 nova_compute[256940]: 2025-10-02 12:15:54.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 02 12:15:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 531 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 254 op/s
Oct 02 12:15:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 02 12:15:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 02 12:15:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:55 compute-0 nova_compute[256940]: 2025-10-02 12:15:55.923 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config cfb8af05-9dc1-4736-8f2a-88aad55f1585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:55 compute-0 nova_compute[256940]: 2025-10-02 12:15:55.924 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Deleting local config drive /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585/disk.config because it was imported into RBD.
Oct 02 12:15:56 compute-0 systemd-machined[210927]: New machine qemu-10-instance-00000019.
Oct 02 12:15:56 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000019.
Oct 02 12:15:56 compute-0 ceph-mon[73668]: pgmap v1095: 305 pgs: 305 active+clean; 531 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 254 op/s
Oct 02 12:15:56 compute-0 ceph-mon[73668]: osdmap e160: 3 total, 3 up, 3 in
Oct 02 12:15:56 compute-0 nova_compute[256940]: 2025-10-02 12:15:56.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:56 compute-0 nova_compute[256940]: 2025-10-02 12:15:56.424 2 DEBUG oslo_concurrency.processutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config 0d416198-c85f-4560-8638-d57410e783c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:56 compute-0 nova_compute[256940]: 2025-10-02 12:15:56.425 2 INFO nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Deleting local config drive /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9/disk.config because it was imported into RBD.
Oct 02 12:15:56 compute-0 systemd-machined[210927]: New machine qemu-11-instance-0000001a.
Oct 02 12:15:56 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000001a.
Oct 02 12:15:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 14 MiB/s wr, 339 op/s
Oct 02 12:15:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.034 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407358.03374, cfb8af05-9dc1-4736-8f2a-88aad55f1585 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.036 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] VM Resumed (Lifecycle Event)
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.042 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.042 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.047 2 INFO nova.virt.libvirt.driver [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance spawned successfully.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.048 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:15:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 02 12:15:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.661 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.667 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.669 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.669 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.670 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.670 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.670 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.671 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:58 compute-0 ceph-mon[73668]: pgmap v1097: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 14 MiB/s wr, 339 op/s
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.719 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.720 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407358.0350266, cfb8af05-9dc1-4736-8f2a-88aad55f1585 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.720 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] VM Started (Lifecycle Event)
Oct 02 12:15:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 3.1 MiB/s wr, 100 op/s
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.746 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.750 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:58 compute-0 ovn_controller[148123]: 2025-10-02T12:15:58Z|00104|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.761 2 INFO nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Took 10.45 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.762 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.798 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.846 2 INFO nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Took 11.79 seconds to build instance.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.871 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.872 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.872 2 INFO nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:58 compute-0 nova_compute[256940]: 2025-10-02 12:15:58.873 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.098 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407359.0975518, 0d416198-c85f-4560-8638-d57410e783c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.099 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] VM Resumed (Lifecycle Event)
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.100 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.101 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.105 2 INFO nova.virt.libvirt.driver [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance spawned successfully.
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.105 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.132 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.142 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.143 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.144 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.145 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.146 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.146 2 DEBUG nova.virt.libvirt.driver [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.152 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.192 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.193 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407359.0986636, 0d416198-c85f-4560-8638-d57410e783c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.193 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] VM Started (Lifecycle Event)
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.221 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.225 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.237 2 INFO nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Took 9.56 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.237 2 DEBUG nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.268 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.289 2 INFO nova.compute.manager [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Took 12.19 seconds to build instance.
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.314 2 DEBUG oslo_concurrency.lockutils [None req-9a482a44-0353-4a7e-b0d1-fc73ff0f8437 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.315 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "0d416198-c85f-4560-8638-d57410e783c9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 10.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:59 compute-0 nova_compute[256940]: 2025-10-02 12:15:59.335 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "0d416198-c85f-4560-8638-d57410e783c9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:15:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:59.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:59 compute-0 ceph-mon[73668]: osdmap e161: 3 total, 3 up, 3 in
Oct 02 12:16:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 581 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Oct 02 12:16:01 compute-0 ceph-mon[73668]: pgmap v1099: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 3.1 MiB/s wr, 100 op/s
Oct 02 12:16:01 compute-0 nova_compute[256940]: 2025-10-02 12:16:01.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:01 compute-0 podman[275245]: 2025-10-02 12:16:01.419348259 +0000 UTC m=+0.079045866 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:01 compute-0 podman[275246]: 2025-10-02 12:16:01.456188091 +0000 UTC m=+0.112132540 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 12:16:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:16:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:16:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 592 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.7 MiB/s wr, 388 op/s
Oct 02 12:16:02 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:16:02 compute-0 systemd[274965]: Activating special unit Exit the Session...
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped target Main User Target.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped target Basic System.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped target Paths.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped target Sockets.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped target Timers.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:16:02 compute-0 systemd[274965]: Closed D-Bus User Message Bus Socket.
Oct 02 12:16:02 compute-0 systemd[274965]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:16:02 compute-0 systemd[274965]: Removed slice User Application Slice.
Oct 02 12:16:02 compute-0 systemd[274965]: Reached target Shutdown.
Oct 02 12:16:02 compute-0 systemd[274965]: Finished Exit the Session.
Oct 02 12:16:02 compute-0 systemd[274965]: Reached target Exit the Session.
Oct 02 12:16:02 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:16:02 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:16:02 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:16:02 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:16:02 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:16:02 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:16:02 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:16:03 compute-0 ceph-mon[73668]: pgmap v1100: 305 pgs: 305 active+clean; 581 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Oct 02 12:16:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:03.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:04 compute-0 nova_compute[256940]: 2025-10-02 12:16:04.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:04 compute-0 ceph-mon[73668]: pgmap v1101: 305 pgs: 305 active+clean; 592 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.7 MiB/s wr, 388 op/s
Oct 02 12:16:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.7 MiB/s wr, 388 op/s
Oct 02 12:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:16:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:16:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:16:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 02 12:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 02 12:16:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 02 12:16:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:16:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:16:06 compute-0 nova_compute[256940]: 2025-10-02 12:16:06.285 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:06 compute-0 nova_compute[256940]: 2025-10-02 12:16:06.285 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:06 compute-0 nova_compute[256940]: 2025-10-02 12:16:06.286 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:06 compute-0 nova_compute[256940]: 2025-10-02 12:16:06.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:06 compute-0 nova_compute[256940]: 2025-10-02 12:16:06.616 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 402 op/s
Oct 02 12:16:06 compute-0 ceph-mon[73668]: pgmap v1102: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.7 MiB/s wr, 388 op/s
Oct 02 12:16:06 compute-0 ceph-mon[73668]: osdmap e162: 3 total, 3 up, 3 in
Oct 02 12:16:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3103502843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/70985564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.186 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.203 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.414 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.417 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.417 2 INFO nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating image(s)
Oct 02 12:16:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:07 compute-0 nova_compute[256940]: 2025-10-02 12:16:07.476 2 DEBUG nova.storage.rbd_utils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] creating snapshot(nova-resize) on rbd image(80f9c3a4-aadc-4519-a451-8ce36d37b598_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:16:07 compute-0 sudo[275323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:07 compute-0 sudo[275323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:07 compute-0 sudo[275323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:07 compute-0 sudo[275348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:07 compute-0 sudo[275348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:07 compute-0 sudo[275348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 02 12:16:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 02 12:16:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 02 12:16:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:08 compute-0 ceph-mon[73668]: pgmap v1104: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 402 op/s
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.735 2 DEBUG nova.objects.instance [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 931 KiB/s wr, 236 op/s
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.976 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.977 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Ensure instance console log exists: /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.980 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.980 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.981 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.984 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:08 compute-0 nova_compute[256940]: 2025-10-02 12:16:08.991 2 WARNING nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.002 2 DEBUG nova.virt.libvirt.host [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.004 2 DEBUG nova.virt.libvirt.host [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.009 2 DEBUG nova.virt.libvirt.host [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.010 2 DEBUG nova.virt.libvirt.host [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.012 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.012 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.014 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.015 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.015 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.015 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.016 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.016 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.016 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.017 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.017 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.017 2 DEBUG nova.virt.hardware [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.018 2 DEBUG nova.objects.instance [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.048 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2208594399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.581 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:09 compute-0 nova_compute[256940]: 2025-10-02 12:16:09.639 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:09 compute-0 ceph-mon[73668]: osdmap e163: 3 total, 3 up, 3 in
Oct 02 12:16:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2208594399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134746537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 nova_compute[256940]: 2025-10-02 12:16:10.106 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:10 compute-0 nova_compute[256940]: 2025-10-02 12:16:10.112 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <uuid>80f9c3a4-aadc-4519-a451-8ce36d37b598</uuid>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <name>instance-00000018</name>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:name>tempest-MigrationsAdminTest-server-201463142</nova:name>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:16:08</nova:creationTime>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <system>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="serial">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="uuid">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </system>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <os>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </os>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <features>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </features>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk">
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config">
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log" append="off"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <video>
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </video>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:16:10 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:16:10 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:16:10 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:16:10 compute-0 nova_compute[256940]: </domain>
Oct 02 12:16:10 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:10 compute-0 nova_compute[256940]: 2025-10-02 12:16:10.218 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:10 compute-0 nova_compute[256940]: 2025-10-02 12:16:10.219 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:10 compute-0 nova_compute[256940]: 2025-10-02 12:16:10.219 2 INFO nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Using config drive
Oct 02 12:16:10 compute-0 systemd-machined[210927]: New machine qemu-12-instance-00000018.
Oct 02 12:16:10 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000018.
Oct 02 12:16:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:10 compute-0 ceph-mon[73668]: pgmap v1106: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 931 KiB/s wr, 236 op/s
Oct 02 12:16:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/249885067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2134746537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2701199019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1338855465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 661 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.583 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407371.5829835, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.585 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Resumed (Lifecycle Event)
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.589 2 DEBUG nova.compute.manager [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.594 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance running successfully.
Oct 02 12:16:11 compute-0 virtqemud[257589]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 12:16:11 compute-0 virtqemud[257589]: hostname: compute-0
Oct 02 12:16:11 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.598 2 DEBUG nova.virt.libvirt.guest [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.599 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.605 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.608 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.650 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.650 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407371.5854063, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.650 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Started (Lifecycle Event)
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.684 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:11 compute-0 nova_compute[256940]: 2025-10-02 12:16:11.688 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:11 compute-0 ceph-mon[73668]: pgmap v1107: 305 pgs: 305 active+clean; 661 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 02 12:16:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3250954769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 656 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 7.3 MiB/s wr, 190 op/s
Oct 02 12:16:12 compute-0 nova_compute[256940]: 2025-10-02 12:16:12.916 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:12 compute-0 nova_compute[256940]: 2025-10-02 12:16:12.917 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:12 compute-0 nova_compute[256940]: 2025-10-02 12:16:12.917 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.105 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.348 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.363 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:13 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 02 12:16:13 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Consumed 3.070s CPU time.
Oct 02 12:16:13 compute-0 systemd-machined[210927]: Machine qemu-12-instance-00000018 terminated.
Oct 02 12:16:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.608 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance destroyed successfully.
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.610 2 DEBUG nova.objects.instance [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.679 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.679 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.766 2 DEBUG nova.objects.instance [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 02 12:16:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 02 12:16:13 compute-0 nova_compute[256940]: 2025-10-02 12:16:13.911 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:13 compute-0 ceph-mon[73668]: pgmap v1108: 305 pgs: 305 active+clean; 656 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 7.3 MiB/s wr, 190 op/s
Oct 02 12:16:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972657979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:14 compute-0 nova_compute[256940]: 2025-10-02 12:16:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:14 compute-0 nova_compute[256940]: 2025-10-02 12:16:14.416 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:14 compute-0 nova_compute[256940]: 2025-10-02 12:16:14.426 2 DEBUG nova.compute.provider_tree [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:14 compute-0 nova_compute[256940]: 2025-10-02 12:16:14.495 2 DEBUG nova.scheduler.client.report [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:14 compute-0 nova_compute[256940]: 2025-10-02 12:16:14.576 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 648 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 10 MiB/s wr, 460 op/s
Oct 02 12:16:15 compute-0 sudo[275575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:15 compute-0 sudo[275575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-0 sudo[275575]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-0 ceph-mon[73668]: osdmap e164: 3 total, 3 up, 3 in
Oct 02 12:16:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2972657979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/540730844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:15 compute-0 sudo[275600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:15 compute-0 sudo[275600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-0 sudo[275600]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-0 sudo[275625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:15 compute-0 sudo[275625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-0 sudo[275625]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-0 sudo[275650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:16:15 compute-0 sudo[275650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:16:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:16:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:15.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:16:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:16:15 compute-0 sudo[275650]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:16:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:16:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:16:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 sudo[275694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:16 compute-0 sudo[275694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-0 sudo[275694]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-0 nova_compute[256940]: 2025-10-02 12:16:16.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:16 compute-0 sudo[275719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:16 compute-0 sudo[275719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-0 sudo[275719]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:16:16 compute-0 sudo[275745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:16 compute-0 sudo[275745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-0 sudo[275745]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:16.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:16 compute-0 sudo[275770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:16:16 compute-0 sudo[275770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 672 op/s
Oct 02 12:16:16 compute-0 ceph-mon[73668]: pgmap v1110: 305 pgs: 305 active+clean; 648 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 10 MiB/s wr, 460 op/s
Oct 02 12:16:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:17 compute-0 sudo[275770]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 sudo[275826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:17 compute-0 sudo[275826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:17 compute-0 sudo[275826]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 02 12:16:17 compute-0 sudo[275851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:17 compute-0 sudo[275851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:17 compute-0 sudo[275851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 sudo[275876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:17 compute-0 sudo[275876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:17 compute-0 sudo[275876]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 02 12:16:17 compute-0 sudo[275901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:16:17 compute-0 sudo[275901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:17.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 02 12:16:17 compute-0 podman[275967]: 2025-10-02 12:16:17.792826675 +0000 UTC m=+0.028651369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:18 compute-0 podman[275967]: 2025-10-02 12:16:18.375436173 +0000 UTC m=+0.611260847 container create 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:16:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:18 compute-0 ceph-mon[73668]: osdmap e165: 3 total, 3 up, 3 in
Oct 02 12:16:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:18 compute-0 podman[275983]: 2025-10-02 12:16:18.641693138 +0000 UTC m=+0.302732018 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:16:18 compute-0 systemd[1]: Started libpod-conmon-69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31.scope.
Oct 02 12:16:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:18 compute-0 podman[275982]: 2025-10-02 12:16:18.714687115 +0000 UTC m=+0.375629193 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 12:16:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 6.8 MiB/s wr, 634 op/s
Oct 02 12:16:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:16:19 compute-0 podman[275967]: 2025-10-02 12:16:19.336321402 +0000 UTC m=+1.572146146 container init 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:19 compute-0 podman[275967]: 2025-10-02 12:16:19.350196044 +0000 UTC m=+1.586020748 container start 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:16:19 compute-0 epic_brown[276034]: 167 167
Oct 02 12:16:19 compute-0 systemd[1]: libpod-69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31.scope: Deactivated successfully.
Oct 02 12:16:19 compute-0 nova_compute[256940]: 2025-10-02 12:16:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:19.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:19 compute-0 podman[275967]: 2025-10-02 12:16:19.59962942 +0000 UTC m=+1.835454174 container attach 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:16:19 compute-0 podman[275967]: 2025-10-02 12:16:19.601219931 +0000 UTC m=+1.837044625 container died 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:19 compute-0 ceph-mon[73668]: pgmap v1111: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 672 op/s
Oct 02 12:16:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/164602721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1953203193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc77e3da3d36c361f4290d897854433c9e5a7e2abf7c89d803210a8bac381dd3-merged.mount: Deactivated successfully.
Oct 02 12:16:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 605 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.4 MiB/s wr, 543 op/s
Oct 02 12:16:21 compute-0 ceph-mon[73668]: pgmap v1113: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 6.8 MiB/s wr, 634 op/s
Oct 02 12:16:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:21 compute-0 podman[275967]: 2025-10-02 12:16:21.389250124 +0000 UTC m=+3.625074828 container remove 69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:16:21 compute-0 nova_compute[256940]: 2025-10-02 12:16:21.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:21 compute-0 systemd[1]: libpod-conmon-69618fdc40c86db0c42860a1a2f2367cda14219b3237e672e3fd79b6197bfc31.scope: Deactivated successfully.
Oct 02 12:16:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:21.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:21 compute-0 podman[276060]: 2025-10-02 12:16:21.656331291 +0000 UTC m=+0.040042277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:21 compute-0 podman[276060]: 2025-10-02 12:16:21.796636785 +0000 UTC m=+0.180347731 container create 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:16:21 compute-0 systemd[1]: Started libpod-conmon-53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc.scope.
Oct 02 12:16:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c31a96c98f7d8b7902696c84b7a789e17f3cb0ceba54f286610cd5ef8116/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c31a96c98f7d8b7902696c84b7a789e17f3cb0ceba54f286610cd5ef8116/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c31a96c98f7d8b7902696c84b7a789e17f3cb0ceba54f286610cd5ef8116/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c31a96c98f7d8b7902696c84b7a789e17f3cb0ceba54f286610cd5ef8116/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:22 compute-0 podman[276060]: 2025-10-02 12:16:22.134383198 +0000 UTC m=+0.518094164 container init 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:22 compute-0 podman[276060]: 2025-10-02 12:16:22.14368231 +0000 UTC m=+0.527393256 container start 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:16:22 compute-0 podman[276060]: 2025-10-02 12:16:22.16358148 +0000 UTC m=+0.547292456 container attach 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:16:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 592 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.0 MiB/s wr, 510 op/s
Oct 02 12:16:22 compute-0 ceph-mon[73668]: pgmap v1114: 305 pgs: 305 active+clean; 605 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.4 MiB/s wr, 543 op/s
Oct 02 12:16:23 compute-0 cool_bardeen[276077]: [
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:     {
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "available": false,
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "ceph_device": false,
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "lsm_data": {},
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "lvs": [],
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "path": "/dev/sr0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "rejected_reasons": [
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "Insufficient space (<5GB)",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "Has a FileSystem"
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         ],
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         "sys_api": {
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "actuators": null,
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "device_nodes": "sr0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "devname": "sr0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "human_readable_size": "482.00 KB",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "id_bus": "ata",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "model": "QEMU DVD-ROM",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "nr_requests": "2",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "parent": "/dev/sr0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "partitions": {},
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "path": "/dev/sr0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "removable": "1",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "rev": "2.5+",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "ro": "0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "rotational": "0",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "sas_address": "",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "sas_device_handle": "",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "scheduler_mode": "mq-deadline",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "sectors": 0,
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "sectorsize": "2048",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "size": 493568.0,
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "support_discard": "2048",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "type": "disk",
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:             "vendor": "QEMU"
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:         }
Oct 02 12:16:23 compute-0 cool_bardeen[276077]:     }
Oct 02 12:16:23 compute-0 cool_bardeen[276077]: ]
Oct 02 12:16:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:23 compute-0 systemd[1]: libpod-53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc.scope: Deactivated successfully.
Oct 02 12:16:23 compute-0 systemd[1]: libpod-53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc.scope: Consumed 1.286s CPU time.
Oct 02 12:16:23 compute-0 podman[277204]: 2025-10-02 12:16:23.570987081 +0000 UTC m=+0.041897124 container died 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 02 12:16:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 02 12:16:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 02 12:16:23 compute-0 nova_compute[256940]: 2025-10-02 12:16:23.976 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:23 compute-0 nova_compute[256940]: 2025-10-02 12:16:23.979 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee0c31a96c98f7d8b7902696c84b7a789e17f3cb0ceba54f286610cd5ef8116-merged.mount: Deactivated successfully.
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.106 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.331 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.332 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.343 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.343 2 INFO nova.compute.claims [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.422 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "cfe39611-f626-4dba-8730-190f423de8a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.423 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.577 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.659 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:24 compute-0 podman[277204]: 2025-10-02 12:16:24.705128035 +0000 UTC m=+1.176038048 container remove 53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:16:24 compute-0 nova_compute[256940]: 2025-10-02 12:16:24.710 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:24 compute-0 systemd[1]: libpod-conmon-53aba08b29d081ae923c511b1b400afbfe652c8ec9c8e9ec109a514ec4ac13cc.scope: Deactivated successfully.
Oct 02 12:16:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 542 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 141 op/s
Oct 02 12:16:24 compute-0 ceph-mon[73668]: pgmap v1115: 305 pgs: 305 active+clean; 592 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.0 MiB/s wr, 510 op/s
Oct 02 12:16:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3466060320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:24 compute-0 ceph-mon[73668]: osdmap e166: 3 total, 3 up, 3 in
Oct 02 12:16:24 compute-0 sudo[275901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:16:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183177585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.293 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.301 2 DEBUG nova.compute.provider_tree [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.336 2 DEBUG nova.scheduler.client.report [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.403 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.404 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.406 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.474 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.475 2 INFO nova.compute.claims [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:16:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.611 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.612 2 DEBUG nova.network.neutron [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b30a96ca-0329-4050-bd67-43c14816cb16 does not exist
Oct 02 12:16:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 15068ec8-73e7-4d26-b83f-67aba13b75e6 does not exist
Oct 02 12:16:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f1a823c8-7b92-41aa-93d8-a8b179a7c97e does not exist
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:16:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:16:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.686 2 INFO nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.706 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:25 compute-0 sudo[277242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:25 compute-0 sudo[277242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:25 compute-0 sudo[277242]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.814 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.817 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.817 2 INFO nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating image(s)
Oct 02 12:16:25 compute-0 sudo[277267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:25 compute-0 sudo[277267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:25 compute-0 sudo[277267]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.853 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.899 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:25 compute-0 sudo[277310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:25 compute-0 sudo[277310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:25 compute-0 sudo[277310]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.938 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.944 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:25 compute-0 nova_compute[256940]: 2025-10-02 12:16:25.978 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:26 compute-0 sudo[277371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:16:26 compute-0 sudo[277371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.031 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.033 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.033 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.034 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3110913110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: pgmap v1117: 305 pgs: 305 active+clean; 542 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 141 op/s
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/183177585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:16:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.264 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.274 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a114d722-ceac-442e-8b38-c2892fda526b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.314 2 DEBUG nova.policy [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0cdfd1473bd4963b4ded642a43c35f3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.352 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.353 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.353 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.354 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.354 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.356 2 INFO nova.compute.manager [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Terminating instance
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.358 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-cfb8af05-9dc1-4736-8f2a-88aad55f1585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.358 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-cfb8af05-9dc1-4736-8f2a-88aad55f1585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.359 2 DEBUG nova.network.neutron [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:26.453 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:26.453 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:26.454 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 podman[277493]: 2025-10-02 12:16:26.471617497 +0000 UTC m=+0.044201366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.587 2 DEBUG nova.network.neutron [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:26.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372069525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.608 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "0d416198-c85f-4560-8638-d57410e783c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.609 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.610 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "0d416198-c85f-4560-8638-d57410e783c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.610 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.611 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.612 2 INFO nova.compute.manager [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Terminating instance
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.613 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-0d416198-c85f-4560-8638-d57410e783c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.614 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-0d416198-c85f-4560-8638-d57410e783c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.614 2 DEBUG nova.network.neutron [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.621 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.628 2 DEBUG nova.compute.provider_tree [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.646 2 DEBUG nova.scheduler.client.report [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.672 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.673 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 34 KiB/s wr, 163 op/s
Oct 02 12:16:26 compute-0 podman[277493]: 2025-10-02 12:16:26.75351241 +0000 UTC m=+0.326096269 container create 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.776 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.777 2 DEBUG nova.network.neutron [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.813 2 INFO nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.836 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:26 compute-0 nova_compute[256940]: 2025-10-02 12:16:26.845 2 DEBUG nova.network.neutron [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.000 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.001 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.002 2 INFO nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Creating image(s)
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.031 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:27 compute-0 systemd[1]: Started libpod-conmon-67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3.scope.
Oct 02 12:16:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.315 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.348 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.354 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.391 2 DEBUG nova.network.neutron [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.412 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a114d722-ceac-442e-8b38-c2892fda526b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.450 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-cfb8af05-9dc1-4736-8f2a-88aad55f1585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.451 2 DEBUG nova.compute.manager [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.452 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.452 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:27 compute-0 podman[277493]: 2025-10-02 12:16:27.453471272 +0000 UTC m=+1.026055121 container init 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.453 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.454 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:27 compute-0 podman[277493]: 2025-10-02 12:16:27.467335194 +0000 UTC m=+1.039919003 container start 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:16:27 compute-0 sleepy_raman[277531]: 167 167
Oct 02 12:16:27 compute-0 systemd[1]: libpod-67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3.scope: Deactivated successfully.
Oct 02 12:16:27 compute-0 conmon[277531]: conmon 67d009466559ef0664a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3.scope/container/memory.events
Oct 02 12:16:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:27 compute-0 sudo[277618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:27 compute-0 sudo[277618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:27 compute-0 sudo[277618]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:27 compute-0 sudo[277643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:27 compute-0 sudo[277643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:27 compute-0 podman[277493]: 2025-10-02 12:16:27.931148289 +0000 UTC m=+1.503732148 container attach 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:27 compute-0 podman[277493]: 2025-10-02 12:16:27.933576373 +0000 UTC m=+1.506160232 container died 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:16:27 compute-0 sudo[277643]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.935 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.945 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cfe39611-f626-4dba-8730-190f423de8a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.977 2 DEBUG nova.network.neutron [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.980 2 DEBUG nova.network.neutron [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:16:27 compute-0 nova_compute[256940]: 2025-10-02 12:16:27.981 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.031 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-0d416198-c85f-4560-8638-d57410e783c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.032 2 DEBUG nova.compute.manager [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.038 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] resizing rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:16:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3372069525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.200 2 DEBUG nova.objects.instance [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'migration_context' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.224 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.225 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Ensure instance console log exists: /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.226 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.227 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.227 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 02 12:16:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000019.scope: Consumed 15.324s CPU time.
Oct 02 12:16:28 compute-0 systemd-machined[210927]: Machine qemu-10-instance-00000019 terminated.
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.467 2 INFO nova.virt.libvirt.driver [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance destroyed successfully.
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.470 2 DEBUG nova.objects.instance [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid cfb8af05-9dc1-4736-8f2a-88aad55f1585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.473 2 DEBUG nova.network.neutron [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Successfully updated port: 965edc3f-df96-430d-8b4b-4f3dbb19e9de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.516 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.516 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.516 2 DEBUG nova.network.neutron [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.607 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407373.6057935, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.608 2 INFO nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Stopped (Lifecycle Event)
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.634 2 DEBUG nova.compute.manager [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-changed-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.635 2 DEBUG nova.compute.manager [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Refreshing instance network info cache due to event network-changed-965edc3f-df96-430d-8b4b-4f3dbb19e9de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.635 2 DEBUG oslo_concurrency.lockutils [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.646 2 DEBUG nova.compute.manager [None req-e1376ae8-38b5-449a-9e52-c24645085f85 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:16:28
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'backups']
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.714 2 DEBUG nova.network.neutron [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 31 KiB/s wr, 148 op/s
Oct 02 12:16:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-629a233003a8e3f145646c6af70f33a69a7bbb2d5979fbadbb065cd70c75df4a-merged.mount: Deactivated successfully.
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.872 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cfe39611-f626-4dba-8730-190f423de8a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:28 compute-0 nova_compute[256940]: 2025-10-02 12:16:28.960 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] resizing rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:16:29 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 02 12:16:29 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001a.scope: Consumed 14.977s CPU time.
Oct 02 12:16:29 compute-0 systemd-machined[210927]: Machine qemu-11-instance-0000001a terminated.
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.105 2 DEBUG nova.objects.instance [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.247 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.247 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Ensure instance console log exists: /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.248 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.248 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.248 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.250 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.255 2 WARNING nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.270 2 DEBUG nova.virt.libvirt.host [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.270 2 DEBUG nova.virt.libvirt.host [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.273 2 DEBUG nova.virt.libvirt.host [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.274 2 DEBUG nova.virt.libvirt.host [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.275 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.275 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.275 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.276 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.277 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.277 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.277 2 DEBUG nova.virt.hardware [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.279 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.324 2 INFO nova.virt.libvirt.driver [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance destroyed successfully.
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.325 2 DEBUG nova.objects.instance [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid 0d416198-c85f-4560-8638-d57410e783c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:29 compute-0 nova_compute[256940]: 2025-10-02 12:16:29.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:29 compute-0 ceph-mon[73668]: pgmap v1118: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 34 KiB/s wr, 163 op/s
Oct 02 12:16:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:29.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:30.000 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:30.002 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:30.003 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667495230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.159 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.202 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.209 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:30 compute-0 podman[277493]: 2025-10-02 12:16:30.303745143 +0000 UTC m=+3.876329002 container remove 67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:16:30 compute-0 systemd[1]: libpod-conmon-67d009466559ef0664a81c853f4415c68a0341fb568ceb7422f1182fc3acaec3.scope: Deactivated successfully.
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.498 2 DEBUG nova.network.neutron [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.562 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.562 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance network_info: |[{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.563 2 DEBUG oslo_concurrency.lockutils [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.564 2 DEBUG nova.network.neutron [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Refreshing network info cache for port 965edc3f-df96-430d-8b4b-4f3dbb19e9de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.569 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start _get_guest_xml network_info=[{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:30 compute-0 ceph-mon[73668]: pgmap v1119: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 31 KiB/s wr, 148 op/s
Oct 02 12:16:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1667495230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.576 2 WARNING nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.585 2 DEBUG nova.virt.libvirt.host [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.586 2 DEBUG nova.virt.libvirt.host [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.589 2 DEBUG nova.virt.libvirt.host [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.589 2 DEBUG nova.virt.libvirt.host [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.591 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.591 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.591 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.592 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.592 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.592 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.592 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.593 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.593 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.593 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.593 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.593 2 DEBUG nova.virt.hardware [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.596 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:30 compute-0 podman[277929]: 2025-10-02 12:16:30.513642425 +0000 UTC m=+0.034294796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:30.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:30 compute-0 podman[277929]: 2025-10-02 12:16:30.731535126 +0000 UTC m=+0.252187417 container create f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:16:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 543 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:16:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481727496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.899 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.901 2 DEBUG nova.objects.instance [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:30 compute-0 systemd[1]: Started libpod-conmon-f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb.scope.
Oct 02 12:16:30 compute-0 nova_compute[256940]: 2025-10-02 12:16:30.947 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <uuid>cfe39611-f626-4dba-8730-190f423de8a1</uuid>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <name>instance-0000001e</name>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:name>tempest-MigrationsAdminTest-server-407394181</nova:name>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:16:29</nova:creationTime>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <system>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="serial">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="uuid">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </system>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <os>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </os>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <features>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </features>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk">
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk.config">
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:30 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/console.log" append="off"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <video>
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </video>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:16:30 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:16:30 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:16:30 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:16:30 compute-0 nova_compute[256940]: </domain>
Oct 02 12:16:30 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:31 compute-0 podman[277929]: 2025-10-02 12:16:31.077638856 +0000 UTC m=+0.598291157 container init f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:16:31 compute-0 podman[277929]: 2025-10-02 12:16:31.08698728 +0000 UTC m=+0.607639561 container start f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.114 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.114 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.116 2 INFO nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Using config drive
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.175 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561813357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.236 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.263 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.267 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:31 compute-0 podman[277929]: 2025-10-02 12:16:31.271729956 +0000 UTC m=+0.792382257 container attach f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:31.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697768296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.669 2 INFO nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Creating config drive at /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.675 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqkblndc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.720 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.726 2 DEBUG nova.virt.libvirt.vif [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:25Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.727 2 DEBUG nova.network.os_vif_util [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.727 2 DEBUG nova.network.os_vif_util [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.729 2 DEBUG nova.objects.instance [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.763 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <uuid>a114d722-ceac-442e-8b38-c2892fda526b</uuid>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <name>instance-0000001d</name>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:name>tempest-LiveMigrationTest-server-2023518062</nova:name>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:16:30</nova:creationTime>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:user uuid="e0cdfd1473bd4963b4ded642a43c35f3">tempest-LiveMigrationTest-1880928942-project-member</nova:user>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:project uuid="7f6188e258a04ea1a49e6b415bce3fc9">tempest-LiveMigrationTest-1880928942</nova:project>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <nova:port uuid="965edc3f-df96-430d-8b4b-4f3dbb19e9de">
Oct 02 12:16:31 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <system>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="serial">a114d722-ceac-442e-8b38-c2892fda526b</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="uuid">a114d722-ceac-442e-8b38-c2892fda526b</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </system>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <os>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a114d722-ceac-442e-8b38-c2892fda526b_disk">
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a114d722-ceac-442e-8b38-c2892fda526b_disk.config">
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </source>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:16:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:bf:7b:22"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <target dev="tap965edc3f-df"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/console.log" append="off"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <video>
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:16:31 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:16:31 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:16:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:16:31 compute-0 nova_compute[256940]: </domain>
Oct 02 12:16:31 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.764 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Preparing to wait for external event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.764 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.764 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.764 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.765 2 DEBUG nova.virt.libvirt.vif [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:25Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.766 2 DEBUG nova.network.os_vif_util [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.766 2 DEBUG nova.network.os_vif_util [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.767 2 DEBUG os_vif [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.768 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap965edc3f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap965edc3f-df, col_values=(('external_ids', {'iface-id': '965edc3f-df96-430d-8b4b-4f3dbb19e9de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:7b:22', 'vm-uuid': 'a114d722-ceac-442e-8b38-c2892fda526b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 NetworkManager[44981]: <info>  [1759407391.7770] manager: (tap965edc3f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.785 2 INFO os_vif [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.822 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqkblndc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:31 compute-0 inspiring_shamir[277967]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:16:31 compute-0 inspiring_shamir[277967]: --> relative data size: 1.0
Oct 02 12:16:31 compute-0 inspiring_shamir[277967]: --> All data devices are unavailable
Oct 02 12:16:31 compute-0 systemd[1]: libpod-f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb.scope: Deactivated successfully.
Oct 02 12:16:31 compute-0 podman[277929]: 2025-10-02 12:16:31.948566115 +0000 UTC m=+1.469218406 container died f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.962 2 DEBUG nova.storage.rbd_utils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image cfe39611-f626-4dba-8730-190f423de8a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:31 compute-0 nova_compute[256940]: 2025-10-02 12:16:31.974 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config cfe39611-f626-4dba-8730-190f423de8a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1481727496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1561813357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.248 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.249 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.249 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No VIF found with MAC fa:16:3e:bf:7b:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.250 2 INFO nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Using config drive
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.492 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:32.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a8b6073cf1305e9856497ded3202fa2e778f6741b6d83305f4b75bc350dc00-merged.mount: Deactivated successfully.
Oct 02 12:16:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 534 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Oct 02 12:16:32 compute-0 podman[277929]: 2025-10-02 12:16:32.888426355 +0000 UTC m=+2.409078646 container remove f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:32 compute-0 podman[278074]: 2025-10-02 12:16:32.889963425 +0000 UTC m=+0.905808641 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:32 compute-0 sudo[277371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:32 compute-0 systemd[1]: libpod-conmon-f3d990fdd6ec0d866fd4c176855d354e1d67a9d4b7f38da63af47d0425f162bb.scope: Deactivated successfully.
Oct 02 12:16:32 compute-0 podman[278066]: 2025-10-02 12:16:32.948680729 +0000 UTC m=+0.965647775 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.996 2 DEBUG oslo_concurrency.processutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config cfe39611-f626-4dba-8730-190f423de8a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:32 compute-0 nova_compute[256940]: 2025-10-02 12:16:32.998 2 INFO nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Deleting local config drive /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/disk.config because it was imported into RBD.
Oct 02 12:16:33 compute-0 sudo[278155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:33 compute-0 sudo[278155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:33 compute-0 sudo[278155]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:33 compute-0 sudo[278182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:33 compute-0 systemd-machined[210927]: New machine qemu-13-instance-0000001e.
Oct 02 12:16:33 compute-0 sudo[278182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:33 compute-0 sudo[278182]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:33 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000001e.
Oct 02 12:16:33 compute-0 sudo[278215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:33 compute-0 sudo[278215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:33 compute-0 sudo[278215]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.173 2 INFO nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating config drive at /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.180 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3kq230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:33 compute-0 sudo[278245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:16:33 compute-0 sudo[278245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:33 compute-0 ceph-mon[73668]: pgmap v1120: 305 pgs: 305 active+clean; 543 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:16:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/697768296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.318 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3kq230s" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.374 2 DEBUG nova.storage.rbd_utils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image a114d722-ceac-442e-8b38-c2892fda526b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.382 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config a114d722-ceac-442e-8b38-c2892fda526b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:33.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.584 2 DEBUG nova.network.neutron [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updated VIF entry in instance network info cache for port 965edc3f-df96-430d-8b4b-4f3dbb19e9de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.586 2 DEBUG nova.network.neutron [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:33 compute-0 nova_compute[256940]: 2025-10-02 12:16:33.619 2 DEBUG oslo_concurrency.lockutils [req-2a4f1af4-501c-4d93-ba89-e85e03dbabc4 req-ab112aa6-d7ce-47ed-9749-17b8e07174de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:33 compute-0 podman[278346]: 2025-10-02 12:16:33.595831512 +0000 UTC m=+0.028661730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:33 compute-0 podman[278346]: 2025-10-02 12:16:33.774479229 +0000 UTC m=+0.207309416 container create 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:34 compute-0 systemd[1]: Started libpod-conmon-9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb.scope.
Oct 02 12:16:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.139 2 DEBUG oslo_concurrency.processutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config a114d722-ceac-442e-8b38-c2892fda526b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.140 2 INFO nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deleting local config drive /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/disk.config because it was imported into RBD.
Oct 02 12:16:34 compute-0 kernel: tap965edc3f-df: entered promiscuous mode
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.2117] manager: (tap965edc3f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00105|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this chassis.
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00106|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00107|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this chassis.
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00108|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 systemd-udevd[278398]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.2350] device (tap965edc3f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.2365] device (tap965edc3f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:16:34 compute-0 systemd-machined[210927]: New machine qemu-14-instance-0000001d.
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.259 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.261 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.263 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.264 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:34 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000001d.
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.286 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[688d0f05-4a0f-4223-9a0f-da89687dd09b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 podman[278346]: 2025-10-02 12:16:34.288276498 +0000 UTC m=+0.721106715 container init 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.288 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5989958f-c1 in ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.290 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5989958f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.290 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1feec4e3-ad82-4aa6-973d-3236aecaf78d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.292 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c177c4a-8c49-41fb-bf19-3888dbc4655e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 podman[278346]: 2025-10-02 12:16:34.301069972 +0000 UTC m=+0.733900149 container start 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.310 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[1344de5e-ddaf-44a5-891b-6d4d80348c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 systemd[1]: libpod-9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb.scope: Deactivated successfully.
Oct 02 12:16:34 compute-0 cranky_wilbur[278408]: 167 167
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.331 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61497d52-a906-485d-a07f-c97d0d81e8bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 podman[278346]: 2025-10-02 12:16:34.344265041 +0000 UTC m=+0.777095238 container attach 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:16:34 compute-0 podman[278346]: 2025-10-02 12:16:34.345521174 +0000 UTC m=+0.778351361 container died 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.350 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407394.3493035, cfe39611-f626-4dba-8730-190f423de8a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.350 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Resumed (Lifecycle Event)
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.354 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.354 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.362 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance spawned successfully.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.363 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00109|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de ovn-installed in OVS
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00110|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de up in Southbound
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00111|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 up in Southbound
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.370 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[14f1114e-d04b-461e-82b6-5c073e123cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.376 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6dabfb9f-e6c2-475c-80ba-aeda3d1678ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.3772] manager: (tap5989958f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Oct 02 12:16:34 compute-0 systemd-udevd[278424]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.420 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.422 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4283e2b5-39b8-41fe-9b24-de050349f0ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.426 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[188ad331-d7cd-4ff1-a437-03bfb062b576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.428 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.428 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.428 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.429 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.429 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.430 2 DEBUG nova.virt.libvirt.driver [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.433 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:34 compute-0 ceph-mon[73668]: pgmap v1121: 305 pgs: 305 active+clean; 534 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.4565] device (tap5989958f-c0): carrier: link connected
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.469 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5d52807b-084e-426a-a167-a71ca5a5ef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.492 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[389015a9-2d11-421f-8b84-c317cf54d455]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528994, 'reachable_time': 23824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278470, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.498 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.499 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407394.3494573, cfe39611-f626-4dba-8730-190f423de8a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.499 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Started (Lifecycle Event)
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.519 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6d22494d-b9c6-433a-8c9a-a6c86405ff27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:d212'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528994, 'tstamp': 528994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278472, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-271be1360c6c2e60f5cf21aeb9771f40cd2c7b0ea1c1dd8634af21dee9b686ef-merged.mount: Deactivated successfully.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.532 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.538 2 INFO nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Took 7.54 seconds to spawn the instance on the hypervisor.
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.539 2 DEBUG nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.540 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.557 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c464fed-23e0-40a3-b605-f889c4adaba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528994, 'reachable_time': 23824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278473, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.589 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.596 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3fe040-d020-487c-a529-bfe491d4b089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:34.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.650 2 INFO nova.compute.manager [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Took 9.96 seconds to build instance.
Oct 02 12:16:34 compute-0 podman[278346]: 2025-10-02 12:16:34.654372111 +0000 UTC m=+1.087202308 container remove 9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:16:34 compute-0 systemd[1]: libpod-conmon-9b354ca75cf366389a7cfcce8e0898bc3dc2ca56a155572ac8ceb27211b2cadb.scope: Deactivated successfully.
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.683 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dfea7907-5254-449f-90d8-41578bfc8c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.685 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.686 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.687 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:34 compute-0 NetworkManager[44981]: <info>  [1759407394.6899] manager: (tap5989958f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 02 12:16:34 compute-0 kernel: tap5989958f-c0: entered promiscuous mode
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.694 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:34 compute-0 ovn_controller[148123]: 2025-10-02T12:16:34Z|00112|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.699 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.709 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e61e6a7-f8b8-41b3-a2e8-c09d44287977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.711 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:16:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:34.712 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'env', 'PROCESS_TAG=haproxy-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5989958f-ccbb-4db4-8dcb-18563aa2418e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 499 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Oct 02 12:16:34 compute-0 nova_compute[256940]: 2025-10-02 12:16:34.803 2 DEBUG oslo_concurrency.lockutils [None req-969d0263-ce55-4bb7-a577-3b8195837587 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:34 compute-0 podman[278491]: 2025-10-02 12:16:34.895373586 +0000 UTC m=+0.085385001 container create d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:16:34 compute-0 podman[278491]: 2025-10-02 12:16:34.843936782 +0000 UTC m=+0.033948227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:35 compute-0 systemd[1]: Started libpod-conmon-d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d.scope.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.044 2 INFO nova.virt.libvirt.driver [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Deleting instance files /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585_del
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.046 2 INFO nova.virt.libvirt.driver [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Deletion of /var/lib/nova/instances/cfb8af05-9dc1-4736-8f2a-88aad55f1585_del complete
Oct 02 12:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2899d17fa6b704cb99cc3e63c4644a3a52a48e9b467b9ac66c66c7a205291a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2899d17fa6b704cb99cc3e63c4644a3a52a48e9b467b9ac66c66c7a205291a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2899d17fa6b704cb99cc3e63c4644a3a52a48e9b467b9ac66c66c7a205291a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2899d17fa6b704cb99cc3e63c4644a3a52a48e9b467b9ac66c66c7a205291a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:35 compute-0 podman[278491]: 2025-10-02 12:16:35.158398096 +0000 UTC m=+0.348409531 container init d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:16:35 compute-0 podman[278491]: 2025-10-02 12:16:35.172039873 +0000 UTC m=+0.362051288 container start d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.212 2 DEBUG nova.compute.manager [req-c23286a8-7060-4c16-b5e6-6a9dd9be2c9d req-7d8c9782-c799-4d13-9fbd-e2ef98492ad1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.213 2 DEBUG oslo_concurrency.lockutils [req-c23286a8-7060-4c16-b5e6-6a9dd9be2c9d req-7d8c9782-c799-4d13-9fbd-e2ef98492ad1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.213 2 DEBUG oslo_concurrency.lockutils [req-c23286a8-7060-4c16-b5e6-6a9dd9be2c9d req-7d8c9782-c799-4d13-9fbd-e2ef98492ad1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.213 2 DEBUG oslo_concurrency.lockutils [req-c23286a8-7060-4c16-b5e6-6a9dd9be2c9d req-7d8c9782-c799-4d13-9fbd-e2ef98492ad1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:35 compute-0 podman[278491]: 2025-10-02 12:16:35.213815024 +0000 UTC m=+0.403826439 container attach d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.214 2 DEBUG nova.compute.manager [req-c23286a8-7060-4c16-b5e6-6a9dd9be2c9d req-7d8c9782-c799-4d13-9fbd-e2ef98492ad1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Processing event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.218 2 INFO nova.compute.manager [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Took 7.77 seconds to destroy the instance on the hypervisor.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.218 2 DEBUG oslo.service.loopingcall [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.219 2 DEBUG nova.compute.manager [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.219 2 DEBUG nova.network.neutron [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:16:35 compute-0 podman[278573]: 2025-10-02 12:16:35.170297087 +0000 UTC m=+0.082502526 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.304 2 INFO nova.virt.libvirt.driver [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Deleting instance files /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9_del
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.305 2 INFO nova.virt.libvirt.driver [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Deletion of /var/lib/nova/instances/0d416198-c85f-4560-8638-d57410e783c9_del complete
Oct 02 12:16:35 compute-0 podman[278573]: 2025-10-02 12:16:35.3522735 +0000 UTC m=+0.264478949 container create 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.419 2 INFO nova.compute.manager [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Took 7.39 seconds to destroy the instance on the hypervisor.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.419 2 DEBUG oslo.service.loopingcall [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.419 2 DEBUG nova.compute.manager [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.420 2 DEBUG nova.network.neutron [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:16:35 compute-0 systemd[1]: Started libpod-conmon-62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398.scope.
Oct 02 12:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67f7afe0cc30c68e5945f6b63797cc74028734d6dd2c386a0628953c2d74472/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.643 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407395.6432803, a114d722-ceac-442e-8b38-c2892fda526b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.644 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Started (Lifecycle Event)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.647 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.651 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.656 2 INFO nova.virt.libvirt.driver [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance spawned successfully.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.657 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.701 2 DEBUG nova.network.neutron [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.745 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.751 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.754 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.754 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.755 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.755 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.755 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.756 2 DEBUG nova.virt.libvirt.driver [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.776 2 DEBUG nova.network.neutron [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:35 compute-0 podman[278573]: 2025-10-02 12:16:35.79444653 +0000 UTC m=+0.706651999 container init 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:16:35 compute-0 podman[278573]: 2025-10-02 12:16:35.802010938 +0000 UTC m=+0.714216377 container start 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:16:35 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [NOTICE]   (278596) : New worker (278600) forked
Oct 02 12:16:35 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [NOTICE]   (278596) : Loading success.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.844 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.845 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407395.6433728, a114d722-ceac-442e-8b38-c2892fda526b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.845 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Paused (Lifecycle Event)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.847 2 DEBUG nova.network.neutron [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.849 2 INFO nova.compute.manager [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Took 0.63 seconds to deallocate network for instance.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.911 2 INFO nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 10.10 seconds to spawn the instance on the hypervisor.
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.912 2 DEBUG nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.940 2 DEBUG nova.network.neutron [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]: {
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:     "1": [
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:         {
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "devices": [
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "/dev/loop3"
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             ],
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "lv_name": "ceph_lv0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "lv_size": "7511998464",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "name": "ceph_lv0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "tags": {
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.cluster_name": "ceph",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.crush_device_class": "",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.encrypted": "0",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.osd_id": "1",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.type": "block",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:                 "ceph.vdo": "0"
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             },
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "type": "block",
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:             "vg_name": "ceph_vg0"
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:         }
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]:     ]
Oct 02 12:16:35 compute-0 suspicious_jackson[278565]: }
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.959 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.964 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407395.6505089, a114d722-ceac-442e-8b38-c2892fda526b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.965 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Resumed (Lifecycle Event)
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.975 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:35 compute-0 nova_compute[256940]: 2025-10-02 12:16:35.976 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:35 compute-0 systemd[1]: libpod-d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d.scope: Deactivated successfully.
Oct 02 12:16:35 compute-0 podman[278491]: 2025-10-02 12:16:35.991572869 +0000 UTC m=+1.181584284 container died d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.038 2 INFO nova.compute.manager [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Took 0.62 seconds to deallocate network for instance.
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.042 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.044 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.063 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8c177f-1577-44e6-b57b-9f9e15ba02f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.064 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdc4336bf-61 in ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.067 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdc4336bf-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.068 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e43f2372-a421-44c0-b406-b79275c79370]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.069 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60475203-c167-4016-b951-5fe34f110a3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.083 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2205121b-f609-4dd8-a7ef-07c762fd5832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.097 2 DEBUG oslo_concurrency.processutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.101 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9510ca0-bb4f-4f3e-8bbc-74e7781c9591]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.149 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7449df9f-6767-491d-9374-1bae73883340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 NetworkManager[44981]: <info>  [1759407396.1655] manager: (tapdc4336bf-60): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.164 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e23d8ec2-0a18-4b2c-ab1d-0d8616c999b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 systemd-udevd[278454]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2899d17fa6b704cb99cc3e63c4644a3a52a48e9b467b9ac66c66c7a205291a-merged.mount: Deactivated successfully.
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.226 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3529bb-fdd2-4fde-8164-50ceb9565a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.229 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a4792000-a6c7-480a-8440-0d7d11779cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.245 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.255 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:36 compute-0 NetworkManager[44981]: <info>  [1759407396.2648] device (tapdc4336bf-60): carrier: link connected
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.281 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b831e041-9d79-4f63-8ebc-259b92f27dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.290 2 INFO nova.compute.manager [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 12.01 seconds to build instance.
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.306 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.307 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[55e56c65-91b9-4e53-a14d-d24e7b139b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529174, 'reachable_time': 25911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278655, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.329 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[659004b6-6fdc-42ce-a4ec-d3eb22d5d977]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:708b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529174, 'tstamp': 529174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278656, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 podman[278491]: 2025-10-02 12:16:36.329685301 +0000 UTC m=+1.519696716 container remove d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jackson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:16:36 compute-0 systemd[1]: libpod-conmon-d96007d4d499a8fba07ce3a58f71b8e904652d2aa37ebab8f0ede9e3f0066d6d.scope: Deactivated successfully.
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.359 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfe4011-8050-41ea-b0c0-32d5778ef4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529174, 'reachable_time': 25911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278657, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 sudo[278245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.375 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.401 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2088a1e5-a786-4584-8ae6-7885d229dcd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.420 2 DEBUG oslo_concurrency.lockutils [None req-f46169ab-cfdf-4faf-a025-f2d5ea677b77 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:36 compute-0 sudo[278660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:36 compute-0 sudo[278660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:36 compute-0 sudo[278660]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.493 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4f94d8-419a-41bf-88ed-ef887e5388b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.494 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.494 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.495 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc4336bf-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:36 compute-0 kernel: tapdc4336bf-60: entered promiscuous mode
Oct 02 12:16:36 compute-0 NetworkManager[44981]: <info>  [1759407396.4982] manager: (tapdc4336bf-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.501 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdc4336bf-60, col_values=(('external_ids', {'iface-id': 'c67f345b-5542-4cd7-a60b-7617c8d1414e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:36 compute-0 ovn_controller[148123]: 2025-10-02T12:16:36Z|00113|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.507 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.509 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aa820b32-da4c-49df-a0e2-07858d99c472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.520 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:16:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:36.521 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'env', 'PROCESS_TAG=haproxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:36 compute-0 sudo[278690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:36 compute-0 sudo[278690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:36 compute-0 sudo[278690]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:36 compute-0 sudo[278717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:36 compute-0 sudo[278717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:36 compute-0 sudo[278717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:36 compute-0 sudo[278744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:16:36 compute-0 sudo[278744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703150610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.740 2 DEBUG oslo_concurrency.processutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.748 2 DEBUG nova.compute.provider_tree [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:16:36 compute-0 ceph-mon[73668]: pgmap v1122: 305 pgs: 305 active+clean; 499 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Oct 02 12:16:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3171481446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.775 2 DEBUG nova.scheduler.client.report [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.824 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.832 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.924 2 INFO nova.scheduler.client.report [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance cfb8af05-9dc1-4736-8f2a-88aad55f1585
Oct 02 12:16:36 compute-0 nova_compute[256940]: 2025-10-02 12:16:36.970 2 DEBUG oslo_concurrency.processutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.049 2 DEBUG oslo_concurrency.lockutils [None req-6ca94492-6915-4a0f-9a6a-f1055ebededd 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "cfb8af05-9dc1-4736-8f2a-88aad55f1585" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:37 compute-0 podman[278822]: 2025-10-02 12:16:37.017486967 +0000 UTC m=+0.028125876 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.380259712 +0000 UTC m=+0.365845567 container create ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.293983519 +0000 UTC m=+0.279569464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512597358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.427 2 DEBUG oslo_concurrency.processutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.432 2 DEBUG nova.compute.manager [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.432 2 DEBUG oslo_concurrency.lockutils [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.432 2 DEBUG oslo_concurrency.lockutils [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.433 2 DEBUG oslo_concurrency.lockutils [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.433 2 DEBUG nova.compute.manager [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.433 2 WARNING nova.compute.manager [req-7c291c97-4037-43dd-9e55-570ee7fd572f req-b7ed29c4-e907-49a7-8bbc-158e2c147853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state None.
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.439 2 DEBUG nova.compute.provider_tree [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:37 compute-0 podman[278822]: 2025-10-02 12:16:37.441259826 +0000 UTC m=+0.451898705 container create e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:16:37 compute-0 systemd[1]: Started libpod-conmon-ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051.scope.
Oct 02 12:16:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:37 compute-0 systemd[1]: Started libpod-conmon-e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa.scope.
Oct 02 12:16:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb39b190e4f707976f904bbf4327c409659d371d786203c275e463c3af13b146/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.658 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.660 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.661 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:16:37 compute-0 nova_compute[256940]: 2025-10-02 12:16:37.661 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.662010212 +0000 UTC m=+0.647596087 container init ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.672832335 +0000 UTC m=+0.658418190 container start ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:37 compute-0 wizardly_swirles[278878]: 167 167
Oct 02 12:16:37 compute-0 systemd[1]: libpod-ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051.scope: Deactivated successfully.
Oct 02 12:16:37 compute-0 podman[278822]: 2025-10-02 12:16:37.790815766 +0000 UTC m=+0.801454665 container init e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:16:37 compute-0 podman[278822]: 2025-10-02 12:16:37.797722347 +0000 UTC m=+0.808361226 container start e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/703150610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-0 ceph-mon[73668]: pgmap v1123: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:16:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2193579931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2512597358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [NOTICE]   (278900) : New worker (278902) forked
Oct 02 12:16:37 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [NOTICE]   (278900) : Loading success.
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.9517975 +0000 UTC m=+0.937383355 container attach ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:16:37 compute-0 podman[278834]: 2025-10-02 12:16:37.954808819 +0000 UTC m=+0.940394674 container died ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:16:38 compute-0 nova_compute[256940]: 2025-10-02 12:16:38.060 2 DEBUG nova.scheduler.client.report [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:38 compute-0 nova_compute[256940]: 2025-10-02 12:16:38.169 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:38 compute-0 nova_compute[256940]: 2025-10-02 12:16:38.229 2 INFO nova.scheduler.client.report [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance 0d416198-c85f-4560-8638-d57410e783c9
Oct 02 12:16:38 compute-0 nova_compute[256940]: 2025-10-02 12:16:38.366 2 DEBUG oslo_concurrency.lockutils [None req-dea66092-de0a-4bac-80cb-958425e2e14e 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "0d416198-c85f-4560-8638-d57410e783c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a57f3108fb2e4a3bf533448d1cdba9d945c397214ee458cd992ee1a04aaceb-merged.mount: Deactivated successfully.
Oct 02 12:16:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 205 op/s
Oct 02 12:16:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:39 compute-0 podman[278834]: 2025-10-02 12:16:39.077322929 +0000 UTC m=+2.062908784 container remove ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:39 compute-0 systemd[1]: libpod-conmon-ecbff84ec1d42c0aae165525e0064013c3ebf6d78597034a0567ec63bdf5b051.scope: Deactivated successfully.
Oct 02 12:16:39 compute-0 podman[278922]: 2025-10-02 12:16:39.32964631 +0000 UTC m=+0.090563576 container create a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:16:39 compute-0 podman[278922]: 2025-10-02 12:16:39.276482601 +0000 UTC m=+0.037399887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:39 compute-0 systemd[1]: Started libpod-conmon-a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392.scope.
Oct 02 12:16:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94a384698832fca5f95a592884c4e878489fc875ee7a45f4268015f122f3242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94a384698832fca5f95a592884c4e878489fc875ee7a45f4268015f122f3242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94a384698832fca5f95a592884c4e878489fc875ee7a45f4268015f122f3242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94a384698832fca5f95a592884c4e878489fc875ee7a45f4268015f122f3242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:39 compute-0 podman[278922]: 2025-10-02 12:16:39.526826491 +0000 UTC m=+0.287743777 container init a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:39 compute-0 podman[278922]: 2025-10-02 12:16:39.536457452 +0000 UTC m=+0.297374718 container start a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:16:39 compute-0 podman[278922]: 2025-10-02 12:16:39.679853708 +0000 UTC m=+0.440770974 container attach a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.688 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.896 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.897 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.898 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.898 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.898 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.898 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.899 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-0 nova_compute[256940]: 2025-10-02 12:16:39.899 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:16:39 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010670940059091754 of space, bias 1.0, pg target 3.2012820177275265 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.252 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.252 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-0 ceph-mon[73668]: pgmap v1124: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 205 op/s
Oct 02 12:16:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/762298545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:40 compute-0 quizzical_austin[278938]: {
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:         "osd_id": 1,
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:         "type": "bluestore"
Oct 02 12:16:40 compute-0 quizzical_austin[278938]:     }
Oct 02 12:16:40 compute-0 quizzical_austin[278938]: }
Oct 02 12:16:40 compute-0 systemd[1]: libpod-a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392.scope: Deactivated successfully.
Oct 02 12:16:40 compute-0 podman[278922]: 2025-10-02 12:16:40.484143976 +0000 UTC m=+1.245061252 container died a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.622 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Check if temp file /var/lib/nova/instances/tmpgk1qnbid exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:16:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:40.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.623 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.701 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.701 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquired lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.702 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 278 op/s
Oct 02 12:16:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954817087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.848 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-0 nova_compute[256940]: 2025-10-02 12:16:40.974 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94a384698832fca5f95a592884c4e878489fc875ee7a45f4268015f122f3242-merged.mount: Deactivated successfully.
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.009 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.010 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.013 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.013 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.186 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.187 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4366MB free_disk=20.76431655883789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.188 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.188 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.255 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.328 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating resource usage from migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.331 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Releasing lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.348 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.387 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 06c4e4dd-b6b1-4cca-a8b1-62285a0afe3d has allocations against this compute host but is not found in the database.
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.387 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.388 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.436 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:41 compute-0 podman[278922]: 2025-10-02 12:16:41.451725409 +0000 UTC m=+2.212642675 container remove a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:16:41 compute-0 systemd[1]: libpod-conmon-a09e8b1d8a7480e7a5ea589013cec989e4d4aeac72f979957c131663e39b1392.scope: Deactivated successfully.
Oct 02 12:16:41 compute-0 sudo[278744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:16:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/142292739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1954817087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1031066231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.584 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.585 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Creating file /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/edafe1d21bd34d6eb69125a28ecacfb3.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.586 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/edafe1d21bd34d6eb69125a28ecacfb3.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:16:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274575128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-0 nova_compute[256940]: 2025-10-02 12:16:41.996 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.003 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 315d5da6-8904-4b1b-8ac8-b5ba40a637ce does not exist
Oct 02 12:16:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 28332d9b-76a1-4a43-967f-8c1269dc22e0 does not exist
Oct 02 12:16:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f099d930-a9dd-49b1-9abd-f3f3614fd1a1 does not exist
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.042 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/edafe1d21bd34d6eb69125a28ecacfb3.tmp" returned: 1 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.042 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/edafe1d21bd34d6eb69125a28ecacfb3.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.044 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Creating directory /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.045 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:42 compute-0 sudo[279019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:42 compute-0 sudo[279019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:42 compute-0 sudo[279019]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.095 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:42 compute-0 sudo[279045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:16:42 compute-0 sudo[279045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:42 compute-0 sudo[279045]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.181 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.181 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.253 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:42 compute-0 nova_compute[256940]: 2025-10-02 12:16:42.258 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:16:42 compute-0 ceph-mon[73668]: pgmap v1125: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 278 op/s
Oct 02 12:16:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4274575128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:42.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 435 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 262 op/s
Oct 02 12:16:43 compute-0 nova_compute[256940]: 2025-10-02 12:16:43.465 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407388.4622903, cfb8af05-9dc1-4736-8f2a-88aad55f1585 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:43 compute-0 nova_compute[256940]: 2025-10-02 12:16:43.465 2 INFO nova.compute.manager [-] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] VM Stopped (Lifecycle Event)
Oct 02 12:16:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:43.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:43 compute-0 nova_compute[256940]: 2025-10-02 12:16:43.550 2 DEBUG nova.compute.manager [None req-b5a1cd39-a49f-4c02-a869-0ebd5fc63170 - - - - - -] [instance: cfb8af05-9dc1-4736-8f2a-88aad55f1585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3362255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.318 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407389.3072836, 0d416198-c85f-4560-8638-d57410e783c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.319 2 INFO nova.compute.manager [-] [instance: 0d416198-c85f-4560-8638-d57410e783c9] VM Stopped (Lifecycle Event)
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.431 2 DEBUG nova.compute.manager [None req-c1c21450-71ab-4a67-bbdb-fc71ee9a4d95 - - - - - -] [instance: 0d416198-c85f-4560-8638-d57410e783c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.516 2 DEBUG nova.compute.manager [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.517 2 DEBUG oslo_concurrency.lockutils [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.517 2 DEBUG oslo_concurrency.lockutils [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.517 2 DEBUG oslo_concurrency.lockutils [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.518 2 DEBUG nova.compute.manager [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:44 compute-0 nova_compute[256940]: 2025-10-02 12:16:44.518 2 DEBUG nova.compute.manager [req-96cf6997-660c-42ad-b40c-771402cac234 req-de6b7582-dbbc-47d5-bd20-aa1baf308038 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:16:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:16:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:44.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:16:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 403 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 49 KiB/s wr, 249 op/s
Oct 02 12:16:44 compute-0 ceph-mon[73668]: pgmap v1126: 305 pgs: 305 active+clean; 435 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 262 op/s
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.479 2 INFO nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 3.69 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.480 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:45.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.532 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(06c4e4dd-b6b1-4cca-a8b1-62285a0afe3d),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.538 2 DEBUG nova.objects.instance [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lazy-loading 'migration_context' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.540 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.543 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.543 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.589 2 DEBUG nova.virt.libvirt.vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:36Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.590 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.591 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.591 2 DEBUG nova.virt.libvirt.migration [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:16:45 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:bf:7b:22"/>
Oct 02 12:16:45 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:16:45 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:16:45 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:16:45 compute-0 nova_compute[256940]:   <target dev="tap965edc3f-df"/>
Oct 02 12:16:45 compute-0 nova_compute[256940]: </interface>
Oct 02 12:16:45 compute-0 nova_compute[256940]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:16:45 compute-0 nova_compute[256940]: 2025-10-02 12:16:45.592 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.047 2 DEBUG nova.virt.libvirt.migration [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.048 2 INFO nova.virt.libvirt.migration [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:16:46 compute-0 ceph-mon[73668]: pgmap v1127: 305 pgs: 305 active+clean; 403 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 49 KiB/s wr, 249 op/s
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.467 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.624 2 DEBUG nova.compute.manager [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.626 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.626 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.626 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.627 2 DEBUG nova.compute.manager [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.627 2 WARNING nova.compute.manager [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state migrating.
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.629 2 DEBUG nova.compute.manager [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-changed-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.629 2 DEBUG nova.compute.manager [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Refreshing instance network info cache due to event network-changed-965edc3f-df96-430d-8b4b-4f3dbb19e9de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.629 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.630 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.632 2 DEBUG nova.network.neutron [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Refreshing network info cache for port 965edc3f-df96-430d-8b4b-4f3dbb19e9de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:16:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 223 op/s
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.972 2 DEBUG nova.virt.libvirt.migration [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:16:46 compute-0 nova_compute[256940]: 2025-10-02 12:16:46.974 2 DEBUG nova.virt.libvirt.migration [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.348 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407407.3482866, a114d722-ceac-442e-8b38-c2892fda526b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.349 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Paused (Lifecycle Event)
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.454 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:47 compute-0 kernel: tap965edc3f-df (unregistering): left promiscuous mode
Oct 02 12:16:47 compute-0 NetworkManager[44981]: <info>  [1759407407.4892] device (tap965edc3f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00114|binding|INFO|Releasing lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de from this chassis (sb_readonly=0)
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00115|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de down in Southbound
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00116|binding|INFO|Releasing lport 92466114-86f5-4a18-ad64-93c2127fe0d3 from this chassis (sb_readonly=0)
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00117|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 down in Southbound
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00118|binding|INFO|Removing iface tap965edc3f-df ovn-installed in OVS
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00119|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct 02 12:16:47 compute-0 ovn_controller[148123]: 2025-10-02T12:16:47Z|00120|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.537 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'db222192-8da1-4f7c-972d-dc680c3e6630'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.539 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.539 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.541 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.542 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[17ae7701-5811-429e-b597-c7cf489544ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:47.543 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace which is not needed anymore
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:47 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 02 12:16:47 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001d.scope: Consumed 13.305s CPU time.
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:47 compute-0 systemd-machined[210927]: Machine qemu-14-instance-0000001d terminated.
Oct 02 12:16:47 compute-0 virtqemud[257589]: Unable to get XATTR trusted.libvirt.security.ref_selinux on a114d722-ceac-442e-8b38-c2892fda526b_disk: No such file or directory
Oct 02 12:16:47 compute-0 virtqemud[257589]: Unable to get XATTR trusted.libvirt.security.ref_dac on a114d722-ceac-442e-8b38-c2892fda526b_disk: No such file or directory
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.684 2 DEBUG nova.virt.libvirt.guest [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.686 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migration operation has completed
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.687 2 INFO nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] _post_live_migration() is started..
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.694 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.694 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:16:47 compute-0 nova_compute[256940]: 2025-10-02 12:16:47.694 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:16:47 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [NOTICE]   (278596) : haproxy version is 2.8.14-c23fe91
Oct 02 12:16:47 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [NOTICE]   (278596) : path to executable is /usr/sbin/haproxy
Oct 02 12:16:47 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [WARNING]  (278596) : Exiting Master process...
Oct 02 12:16:47 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [ALERT]    (278596) : Current worker (278600) exited with code 143 (Terminated)
Oct 02 12:16:47 compute-0 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[278592]: [WARNING]  (278596) : All workers exited. Exiting... (0)
Oct 02 12:16:47 compute-0 systemd[1]: libpod-62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398.scope: Deactivated successfully.
Oct 02 12:16:47 compute-0 podman[279100]: 2025-10-02 12:16:47.896347294 +0000 UTC m=+0.208773744 container died 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:16:48 compute-0 sudo[279135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:48 compute-0 sudo[279135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:48 compute-0 sudo[279135]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:48 compute-0 sudo[279160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:48 compute-0 sudo[279160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:48 compute-0 sudo[279160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.175 2 DEBUG nova.compute.manager [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.176 2 DEBUG oslo_concurrency.lockutils [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.179 2 DEBUG oslo_concurrency.lockutils [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.179 2 DEBUG oslo_concurrency.lockutils [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.179 2 DEBUG nova.compute.manager [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.180 2 DEBUG nova.compute.manager [req-56e62021-3172-475a-b5e9-8d4afeb722a9 req-382a9744-5acc-44c9-9f01-9f5f643b83e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398-userdata-shm.mount: Deactivated successfully.
Oct 02 12:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67f7afe0cc30c68e5945f6b63797cc74028734d6dd2c386a0628953c2d74472-merged.mount: Deactivated successfully.
Oct 02 12:16:48 compute-0 podman[279100]: 2025-10-02 12:16:48.469640328 +0000 UTC m=+0.782066808 container cleanup 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:16:48 compute-0 ceph-mon[73668]: pgmap v1128: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 223 op/s
Oct 02 12:16:48 compute-0 systemd[1]: libpod-conmon-62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398.scope: Deactivated successfully.
Oct 02 12:16:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 15 KiB/s wr, 132 op/s
Oct 02 12:16:48 compute-0 podman[279191]: 2025-10-02 12:16:48.898951531 +0000 UTC m=+0.391978629 container remove 62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.907 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbf08b0-999c-40d1-94fa-5a12ffb07dce]: (4, ('Thu Oct  2 12:16:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398)\n62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398\nThu Oct  2 12:16:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398)\n62e6d019391775ff66fafe888faa724303a446ce70a724634c7f789995d7c398\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.909 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1d1bf1-7c0e-400e-9b0f-2b4397b3f2ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.910 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:48 compute-0 kernel: tap5989958f-c0: left promiscuous mode
Oct 02 12:16:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:48 compute-0 nova_compute[256940]: 2025-10-02 12:16:48.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.955 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9591e4-9a14-4159-bc74-1e4c3189c1cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.988 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0a72ae74-3c37-455d-b75c-f5036365fe76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:48.990 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[49183f4a-61d1-4574-bae5-8c52e88023a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.011 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f48107e-0509-4b2e-9af8-7447b90adfa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528984, 'reachable_time': 19279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279233, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d5989958f\x2dccbb\x2d4db4\x2d8dcb\x2d18563aa2418e.mount: Deactivated successfully.
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.014 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.014 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdf6173-e50b-4420-99b3-7268c1d5c4b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.016 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.017 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.018 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a9374968-5853-4d33-bbbc-327d60a65995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:49.018 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace which is not needed anymore
Oct 02 12:16:49 compute-0 podman[279207]: 2025-10-02 12:16:49.035074007 +0000 UTC m=+0.066605421 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:16:49 compute-0 podman[279209]: 2025-10-02 12:16:49.073006948 +0000 UTC m=+0.104320156 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.272 2 DEBUG nova.network.neutron [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updated VIF entry in instance network info cache for port 965edc3f-df96-430d-8b4b-4f3dbb19e9de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.273 2 DEBUG nova.network.neutron [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:49 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [NOTICE]   (278900) : haproxy version is 2.8.14-c23fe91
Oct 02 12:16:49 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [NOTICE]   (278900) : path to executable is /usr/sbin/haproxy
Oct 02 12:16:49 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [WARNING]  (278900) : Exiting Master process...
Oct 02 12:16:49 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [ALERT]    (278900) : Current worker (278902) exited with code 143 (Terminated)
Oct 02 12:16:49 compute-0 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[278883]: [WARNING]  (278900) : All workers exited. Exiting... (0)
Oct 02 12:16:49 compute-0 systemd[1]: libpod-e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa.scope: Deactivated successfully.
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.348 2 DEBUG oslo_concurrency.lockutils [req-3fcf933f-8d64-41e8-8d13-95eb8b58e32e req-aea2b196-10a2-48e6-a5ab-7dc4d1e28a99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:49 compute-0 podman[279274]: 2025-10-02 12:16:49.350459585 +0000 UTC m=+0.232973177 container died e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:49.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.683 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Activated binding for port 965edc3f-df96-430d-8b4b-4f3dbb19e9de and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.684 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.685 2 DEBUG nova.virt.libvirt.vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:39Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.685 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.686 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.686 2 DEBUG os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap965edc3f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.695 2 INFO os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.695 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.696 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.696 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.696 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.697 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deleting instance files /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del
Oct 02 12:16:49 compute-0 nova_compute[256940]: 2025-10-02 12:16:49.697 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deletion of /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del complete
Oct 02 12:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa-userdata-shm.mount: Deactivated successfully.
Oct 02 12:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb39b190e4f707976f904bbf4327c409659d371d786203c275e463c3af13b146-merged.mount: Deactivated successfully.
Oct 02 12:16:50 compute-0 podman[279274]: 2025-10-02 12:16:50.087020204 +0000 UTC m=+0.969533786 container cleanup e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:16:50 compute-0 systemd[1]: libpod-conmon-e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa.scope: Deactivated successfully.
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.432 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.433 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.433 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.433 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.433 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.434 2 WARNING nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state migrating.
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.434 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.434 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.434 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.435 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.435 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.435 2 WARNING nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state migrating.
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.435 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.435 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.436 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.436 2 DEBUG oslo_concurrency.lockutils [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.436 2 DEBUG nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.436 2 WARNING nova.compute.manager [req-51d717b1-c902-4ef2-bad5-1ad766ae8408 req-f4ca7be5-ba3a-4806-ab3b-b715dc3d13d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state migrating.
Oct 02 12:16:50 compute-0 podman[279306]: 2025-10-02 12:16:50.573026999 +0000 UTC m=+0.450260672 container remove e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.584 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b07d21-27ba-4f80-9a28-f65476df8949]: (4, ('Thu Oct  2 12:16:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa)\ne7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa\nThu Oct  2 12:16:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (e7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa)\ne7a6288a742f9ad5812bf69346572765d152bec831850379a04405298e8064aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.588 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[050c420d-fe51-401a-a217-bc084ea0387f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.593 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:50 compute-0 kernel: tapdc4336bf-60: left promiscuous mode
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.603 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c97141a-5a90-4637-b7a2-a6bf6a0fed06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 nova_compute[256940]: 2025-10-02 12:16:50.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.638 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[67bafd13-678b-4da5-8124-4a374945720a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.640 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a5af180e-faad-4b7b-bc07-09f82733f57a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.665 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11830de4-cb1a-40b0-9bce-613bd81949a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529162, 'reachable_time': 35926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279321, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.669 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:16:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:16:50.669 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[af9003bb-2fe1-4fb7-9f07-dc85020e86f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-0 systemd[1]: run-netns-ovnmeta\x2ddc4336bf\x2d639d\x2d45a4\x2d88f2\x2d32f0af1b9dbe.mount: Deactivated successfully.
Oct 02 12:16:50 compute-0 ceph-mon[73668]: pgmap v1129: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 15 KiB/s wr, 132 op/s
Oct 02 12:16:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 403 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:16:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:51.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:52 compute-0 ceph-mon[73668]: pgmap v1130: 305 pgs: 305 active+clean; 403 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:16:52 compute-0 nova_compute[256940]: 2025-10-02 12:16:52.324 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:16:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 420 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 975 KiB/s rd, 4.0 MiB/s wr, 115 op/s
Oct 02 12:16:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:54 compute-0 ceph-mon[73668]: pgmap v1131: 305 pgs: 305 active+clean; 420 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 975 KiB/s rd, 4.0 MiB/s wr, 115 op/s
Oct 02 12:16:54 compute-0 nova_compute[256940]: 2025-10-02 12:16:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:54.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:54 compute-0 nova_compute[256940]: 2025-10-02 12:16:54.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 429 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Oct 02 12:16:54 compute-0 nova_compute[256940]: 2025-10-02 12:16:54.985 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:54 compute-0 nova_compute[256940]: 2025-10-02 12:16:54.985 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:54 compute-0 nova_compute[256940]: 2025-10-02 12:16:54.985 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.018 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.019 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.019 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.019 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.019 2 DEBUG oslo_concurrency.processutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905352423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.517 2 DEBUG oslo_concurrency.processutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:55.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.621 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.621 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.794 2 WARNING nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.795 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=20.762325286865234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.795 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.796 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.857 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration for instance a114d722-ceac-442e-8b38-c2892fda526b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.889 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.890 2 INFO nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating resource usage from migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.934 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.935 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration 06c4e4dd-b6b1-4cca-a8b1-62285a0afe3d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.935 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:16:55 compute-0 nova_compute[256940]: 2025-10-02 12:16:55.935 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.017 2 DEBUG oslo_concurrency.processutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:56 compute-0 ceph-mon[73668]: pgmap v1132: 305 pgs: 305 active+clean; 429 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Oct 02 12:16:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2905352423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680354693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.501 2 DEBUG oslo_concurrency.processutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.509 2 DEBUG nova.compute.provider_tree [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.534 2 DEBUG nova.scheduler.client.report [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.584 2 DEBUG nova.compute.resource_tracker [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.585 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.593 2 INFO nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 02 12:16:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.685 2 INFO nova.scheduler.client.report [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Deleted allocation for migration 06c4e4dd-b6b1-4cca-a8b1-62285a0afe3d
Oct 02 12:16:56 compute-0 nova_compute[256940]: 2025-10-02 12:16:56.686 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:16:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 571 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.356 2 INFO nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance shutdown successfully after 15 seconds.
Oct 02 12:16:57 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 02 12:16:57 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001e.scope: Consumed 14.613s CPU time.
Oct 02 12:16:57 compute-0 systemd-machined[210927]: Machine qemu-13-instance-0000001e terminated.
Oct 02 12:16:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:57.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3680354693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.586 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance destroyed successfully.
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.591 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.591 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.697 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "cfe39611-f626-4dba-8730-190f423de8a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.697 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:57 compute-0 nova_compute[256940]: 2025-10-02 12:16:57.698 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:16:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:58 compute-0 ceph-mon[73668]: pgmap v1133: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 571 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 02 12:16:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 02 12:16:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:59 compute-0 nova_compute[256940]: 2025-10-02 12:16:59.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:16:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:16:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:59.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:16:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 02 12:16:59 compute-0 nova_compute[256940]: 2025-10-02 12:16:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 02 12:16:59 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 02 12:17:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 12:17:00 compute-0 ceph-mon[73668]: pgmap v1134: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 02 12:17:00 compute-0 ceph-mon[73668]: osdmap e167: 3 total, 3 up, 3 in
Oct 02 12:17:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4085572415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:02 compute-0 ceph-mon[73668]: pgmap v1136: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 12:17:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3244676548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:02 compute-0 nova_compute[256940]: 2025-10-02 12:17:02.696 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407407.683886, a114d722-ceac-442e-8b38-c2892fda526b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:02 compute-0 nova_compute[256940]: 2025-10-02 12:17:02.697 2 INFO nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Stopped (Lifecycle Event)
Oct 02 12:17:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 332 KiB/s wr, 94 op/s
Oct 02 12:17:02 compute-0 nova_compute[256940]: 2025-10-02 12:17:02.782 2 DEBUG nova.compute.manager [None req-2260fd6b-e3cd-41b7-9c0c-3fc368f4375d - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:03 compute-0 podman[279380]: 2025-10-02 12:17:03.460764388 +0000 UTC m=+0.113342121 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:17:03 compute-0 podman[279381]: 2025-10-02 12:17:03.464717512 +0000 UTC m=+0.112291694 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:17:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:03.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:04 compute-0 nova_compute[256940]: 2025-10-02 12:17:04.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-0 ceph-mon[73668]: pgmap v1137: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 332 KiB/s wr, 94 op/s
Oct 02 12:17:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 217 KiB/s wr, 101 op/s
Oct 02 12:17:04 compute-0 nova_compute[256940]: 2025-10-02 12:17:04.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:05.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:17:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:17:06 compute-0 nova_compute[256940]: 2025-10-02 12:17:06.579 2 INFO nova.compute.manager [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Swapping old allocation on dict_keys(['8733289a-aa77-4139-9e88-bac686174c8d']) held by migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4 for instance
Oct 02 12:17:06 compute-0 nova_compute[256940]: 2025-10-02 12:17:06.619 2 DEBUG nova.scheduler.client.report [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Overwriting current allocation {'allocations': {'f694d536-1dcd-4bb3-8516-534a40cdf6d7': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 20}}, 'project_id': '3d306048f2854052ba5317253b834aa7', 'user_id': 'ac1b39d94ed94e2490ad953afb3c225f', 'consumer_generation': 1} on consumer cfe39611-f626-4dba-8730-190f423de8a1 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:17:06 compute-0 ceph-mon[73668]: pgmap v1138: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 217 KiB/s wr, 101 op/s
Oct 02 12:17:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3288555429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:07 compute-0 nova_compute[256940]: 2025-10-02 12:17:07.214 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:07 compute-0 nova_compute[256940]: 2025-10-02 12:17:07.215 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:07 compute-0 nova_compute[256940]: 2025-10-02 12:17:07.216 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:07 compute-0 nova_compute[256940]: 2025-10-02 12:17:07.667 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:07 compute-0 ceph-mon[73668]: pgmap v1139: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:08 compute-0 sudo[279423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:08 compute-0 sudo[279423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:08 compute-0 sudo[279423]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:08 compute-0 sudo[279448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:08 compute-0 sudo[279448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:08 compute-0 sudo[279448]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:08.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.058 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.087 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.088 2 DEBUG nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.181 2 DEBUG nova.storage.rbd_utils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rolling back rbd image(cfe39611-f626-4dba-8730-190f423de8a1_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.383 2 DEBUG nova.storage.rbd_utils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] removing snapshot(nova-resize) on rbd image(cfe39611-f626-4dba-8730-190f423de8a1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:09.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:09 compute-0 nova_compute[256940]: 2025-10-02 12:17:09.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 02 12:17:10 compute-0 ceph-mon[73668]: pgmap v1140: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 02 12:17:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.407 2 DEBUG nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.412 2 WARNING nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.437 2 DEBUG nova.virt.libvirt.host [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.438 2 DEBUG nova.virt.libvirt.host [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.442 2 DEBUG nova.virt.libvirt.host [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.443 2 DEBUG nova.virt.libvirt.host [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.445 2 DEBUG nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.445 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.446 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.447 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.447 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.448 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.448 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.448 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.449 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.449 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.450 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.450 2 DEBUG nova.virt.hardware [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.451 2 DEBUG nova.objects.instance [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.512 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 130 op/s
Oct 02 12:17:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914838924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:10 compute-0 nova_compute[256940]: 2025-10-02 12:17:10.996 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:11 compute-0 nova_compute[256940]: 2025-10-02 12:17:11.063 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:11.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003313724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:11 compute-0 nova_compute[256940]: 2025-10-02 12:17:11.618 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:11 compute-0 nova_compute[256940]: 2025-10-02 12:17:11.622 2 DEBUG nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <uuid>cfe39611-f626-4dba-8730-190f423de8a1</uuid>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <name>instance-0000001e</name>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:name>tempest-MigrationsAdminTest-server-407394181</nova:name>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:17:10</nova:creationTime>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <system>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="serial">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="uuid">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </system>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <os>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </os>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <features>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </features>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk">
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk.config">
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:17:11 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/console.log" append="off"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <video>
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </video>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:17:11 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:17:11 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:17:11 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:17:11 compute-0 nova_compute[256940]: </domain>
Oct 02 12:17:11 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:17:11 compute-0 ceph-mon[73668]: osdmap e168: 3 total, 3 up, 3 in
Oct 02 12:17:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3914838924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:11 compute-0 systemd-machined[210927]: New machine qemu-15-instance-0000001e.
Oct 02 12:17:11 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000001e.
Oct 02 12:17:12 compute-0 nova_compute[256940]: 2025-10-02 12:17:12.587 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407417.5843995, cfe39611-f626-4dba-8730-190f423de8a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:12 compute-0 nova_compute[256940]: 2025-10-02 12:17:12.588 2 INFO nova.compute.manager [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Stopped (Lifecycle Event)
Oct 02 12:17:12 compute-0 nova_compute[256940]: 2025-10-02 12:17:12.644 2 DEBUG nova.compute.manager [None req-c3acf167-3793-434d-8ec7-9c1571d3043c - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 451 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 618 KiB/s wr, 129 op/s
Oct 02 12:17:12 compute-0 ceph-mon[73668]: pgmap v1142: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 130 op/s
Oct 02 12:17:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4003313724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:13.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.583 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407433.5826483, cfe39611-f626-4dba-8730-190f423de8a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.584 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Resumed (Lifecycle Event)
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.587 2 DEBUG nova.compute.manager [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.591 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance running successfully.
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.591 2 DEBUG nova.virt.libvirt.driver [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.625 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.640 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.714 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.715 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407433.585933, cfe39611-f626-4dba-8730-190f423de8a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.715 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Started (Lifecycle Event)
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.731 2 INFO nova.compute.manager [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance to original state: 'active'
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.797 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.801 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:13 compute-0 nova_compute[256940]: 2025-10-02 12:17:13.837 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:17:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:13 compute-0 ceph-mon[73668]: pgmap v1143: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 451 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 618 KiB/s wr, 129 op/s
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 12:17:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 12:17:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.942 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "cfe39611-f626-4dba-8730-190f423de8a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.943 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.944 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "cfe39611-f626-4dba-8730-190f423de8a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.944 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.945 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.947 2 INFO nova.compute.manager [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Terminating instance
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.949 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.949 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:14 compute-0 nova_compute[256940]: 2025-10-02 12:17:14.950 2 DEBUG nova.network.neutron [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.194 2 DEBUG nova.network.neutron [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.444 2 DEBUG nova.network.neutron [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.464 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.465 2 DEBUG nova.compute.manager [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 12:17:15 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 02 12:17:15 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Consumed 3.786s CPU time.
Oct 02 12:17:15 compute-0 systemd-machined[210927]: Machine qemu-15-instance-0000001e terminated.
Oct 02 12:17:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.696 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance destroyed successfully.
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.697 2 DEBUG nova.objects.instance [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:15.778 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:15 compute-0 nova_compute[256940]: 2025-10-02 12:17:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:15.780 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:17:16 compute-0 ceph-mon[73668]: pgmap v1144: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct 02 12:17:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.057 2 INFO nova.virt.libvirt.driver [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Deleting instance files /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1_del
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.058 2 INFO nova.virt.libvirt.driver [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Deletion of /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1_del complete
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.137 2 INFO nova.compute.manager [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Took 1.67 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.138 2 DEBUG oslo.service.loopingcall [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.138 2 DEBUG nova.compute.manager [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.138 2 DEBUG nova.network.neutron [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.298 2 DEBUG nova.network.neutron [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.317 2 DEBUG nova.network.neutron [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2874766076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.423 2 INFO nova.compute.manager [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Took 0.28 seconds to deallocate network for instance.
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.465 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.465 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:17 compute-0 nova_compute[256940]: 2025-10-02 12:17:17.541 2 DEBUG oslo_concurrency.processutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071389943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.054 2 DEBUG oslo_concurrency.processutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.060 2 DEBUG nova.compute.provider_tree [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.084 2 DEBUG nova.scheduler.client.report [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.106 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.143 2 INFO nova.scheduler.client.report [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance cfe39611-f626-4dba-8730-190f423de8a1
Oct 02 12:17:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2879857347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-0 nova_compute[256940]: 2025-10-02 12:17:18.265 2 DEBUG oslo_concurrency.lockutils [None req-2ee9c92f-1165-4789-93a0-702db09ef9ec ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "cfe39611-f626-4dba-8730-190f423de8a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:18 compute-0 ceph-mon[73668]: pgmap v1145: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4071389943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2879857347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 02 12:17:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 02 12:17:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 02 12:17:19 compute-0 podman[279697]: 2025-10-02 12:17:19.409628684 +0000 UTC m=+0.069646281 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:17:19 compute-0 nova_compute[256940]: 2025-10-02 12:17:19.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:19 compute-0 podman[279698]: 2025-10-02 12:17:19.446316032 +0000 UTC m=+0.106488143 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 12:17:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:19 compute-0 nova_compute[256940]: 2025-10-02 12:17:19.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:20 compute-0 ceph-mon[73668]: pgmap v1146: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:20 compute-0 ceph-mon[73668]: osdmap e169: 3 total, 3 up, 3 in
Oct 02 12:17:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:20.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 410 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Oct 02 12:17:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:21.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:22 compute-0 ceph-mon[73668]: pgmap v1148: 305 pgs: 305 active+clean; 410 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Oct 02 12:17:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1496903658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3296994374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:22.782 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 396 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Oct 02 12:17:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:23.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:24 compute-0 nova_compute[256940]: 2025-10-02 12:17:24.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:24 compute-0 ceph-mon[73668]: pgmap v1149: 305 pgs: 305 active+clean; 396 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Oct 02 12:17:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:24.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:24 compute-0 nova_compute[256940]: 2025-10-02 12:17:24.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 346 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Oct 02 12:17:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:25.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.722 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.723 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.761 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.860 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.861 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.870 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:17:25 compute-0 nova_compute[256940]: 2025-10-02 12:17:25.871 2 INFO nova.compute.claims [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:17:25 compute-0 ceph-mon[73668]: pgmap v1150: 305 pgs: 305 active+clean; 346 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.235 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:26.455 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:26.455 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:26.455 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489528740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.704 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.713 2 DEBUG nova.compute.provider_tree [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.745 2 DEBUG nova.scheduler.client.report [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.776 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.778 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:17:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.849 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.850 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.872 2 INFO nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:17:26 compute-0 nova_compute[256940]: 2025-10-02 12:17:26.916 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.042 2 DEBUG nova.policy [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c53fbb5ed1e4cf380a90975be5dc249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:17:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1489528740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.114 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.116 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.116 2 INFO nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Creating image(s)
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.248 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.292 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.341 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.345 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.406 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.408 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.411 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.411 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.444 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.450 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b91a8089-73c5-41c6-88aa-7681bf70fcad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.796 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b91a8089-73c5-41c6-88aa-7681bf70fcad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:27 compute-0 nova_compute[256940]: 2025-10-02 12:17:27.946 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] resizing rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.081 2 DEBUG nova.objects.instance [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'migration_context' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.119 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.120 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Ensure instance console log exists: /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.121 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.121 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.122 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:28 compute-0 nova_compute[256940]: 2025-10-02 12:17:28.285 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Successfully created port: 8ca9e621-315f-431f-a0ec-ce3af08b1458 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:17:28 compute-0 sudo[279935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:28 compute-0 sudo[279935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:28 compute-0 sudo[279935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:28 compute-0 sudo[279961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:28 compute-0 sudo[279961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:28 compute-0 sudo[279961]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:28 compute-0 ceph-mon[73668]: pgmap v1151: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:17:28
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms']
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:17:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:28.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.020 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Successfully updated port: 8ca9e621-315f-431f-a0ec-ce3af08b1458 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.048 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.048 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquired lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.048 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.343 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.413 2 DEBUG nova.compute.manager [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-changed-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.413 2 DEBUG nova.compute.manager [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Refreshing instance network info cache due to event network-changed-8ca9e621-315f-431f-a0ec-ce3af08b1458. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.414 2 DEBUG oslo_concurrency.lockutils [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:29 compute-0 nova_compute[256940]: 2025-10-02 12:17:29.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.037 2 DEBUG nova.network.neutron [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updating instance_info_cache with network_info: [{"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.072 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Releasing lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.072 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Instance network_info: |[{"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.072 2 DEBUG oslo_concurrency.lockutils [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.073 2 DEBUG nova.network.neutron [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Refreshing network info cache for port 8ca9e621-315f-431f-a0ec-ce3af08b1458 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.075 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Start _get_guest_xml network_info=[{"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.079 2 WARNING nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.097 2 DEBUG nova.virt.libvirt.host [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.098 2 DEBUG nova.virt.libvirt.host [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.102 2 DEBUG nova.virt.libvirt.host [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.103 2 DEBUG nova.virt.libvirt.host [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.105 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.105 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.106 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.107 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.107 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.108 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.108 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.109 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.109 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.109 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.110 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.110 2 DEBUG nova.virt.hardware [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.114 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145820866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.597 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.627 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.633 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.693 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407435.6914766, cfe39611-f626-4dba-8730-190f423de8a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.694 2 INFO nova.compute.manager [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Stopped (Lifecycle Event)
Oct 02 12:17:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:30 compute-0 nova_compute[256940]: 2025-10-02 12:17:30.786 2 DEBUG nova.compute.manager [None req-1c550502-d01c-403c-bf39-229e381986c7 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 285 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 501 KiB/s wr, 321 op/s
Oct 02 12:17:30 compute-0 ceph-mon[73668]: pgmap v1152: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2145820866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038992732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.205 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.206 2 DEBUG nova.virt.libvirt.vif [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1839342492',display_name='tempest-VolumesAdminNegativeTest-server-1839342492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1839342492',id=32,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkcHJ7r58WDaGntSabDF80AZkiFaqk4NzjOxFh09aJiK9BnqTHRSLGUvMKPr1+T2iLEkowdQQSzTR7xaiz1uScY9nH1xdNHH+CcQ+kNDIaPJxNUfi2RZ30LusiITEEF8w==',key_name='tempest-keypair-424136697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-4s6bo0k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAdminNegativeTest-486286654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=b91a8089-73c5-41c6-88aa-7681bf70fcad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.207 2 DEBUG nova.network.os_vif_util [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.208 2 DEBUG nova.network.os_vif_util [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.209 2 DEBUG nova.objects.instance [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'pci_devices' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.234 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <uuid>b91a8089-73c5-41c6-88aa-7681bf70fcad</uuid>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <name>instance-00000020</name>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:name>tempest-VolumesAdminNegativeTest-server-1839342492</nova:name>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:17:30</nova:creationTime>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:user uuid="3c53fbb5ed1e4cf380a90975be5dc249">tempest-VolumesAdminNegativeTest-486286654-project-member</nova:user>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:project uuid="96eda2c1552247d8b6632dd9e7d1f6df">tempest-VolumesAdminNegativeTest-486286654</nova:project>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <nova:port uuid="8ca9e621-315f-431f-a0ec-ce3af08b1458">
Oct 02 12:17:31 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <system>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="serial">b91a8089-73c5-41c6-88aa-7681bf70fcad</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="uuid">b91a8089-73c5-41c6-88aa-7681bf70fcad</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </system>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <os>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </os>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <features>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </features>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b91a8089-73c5-41c6-88aa-7681bf70fcad_disk">
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </source>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config">
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </source>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:17:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:a1:81:30"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <target dev="tap8ca9e621-31"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/console.log" append="off"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <video>
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </video>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:17:31 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:17:31 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:17:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:17:31 compute-0 nova_compute[256940]: </domain>
Oct 02 12:17:31 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.235 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Preparing to wait for external event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.236 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.236 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.237 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.238 2 DEBUG nova.virt.libvirt.vif [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1839342492',display_name='tempest-VolumesAdminNegativeTest-server-1839342492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1839342492',id=32,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkcHJ7r58WDaGntSabDF80AZkiFaqk4NzjOxFh09aJiK9BnqTHRSLGUvMKPr1+T2iLEkowdQQSzTR7xaiz1uScY9nH1xdNHH+CcQ+kNDIaPJxNUfi2RZ30LusiITEEF8w==',key_name='tempest-keypair-424136697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-4s6bo0k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAdminNegativeTest-486286654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=b91a8089-73c5-41c6-88aa-7681bf70fcad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.238 2 DEBUG nova.network.os_vif_util [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.239 2 DEBUG nova.network.os_vif_util [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.239 2 DEBUG os_vif [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.248 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ca9e621-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.249 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ca9e621-31, col_values=(('external_ids', {'iface-id': '8ca9e621-315f-431f-a0ec-ce3af08b1458', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:81:30', 'vm-uuid': 'b91a8089-73c5-41c6-88aa-7681bf70fcad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:31 compute-0 NetworkManager[44981]: <info>  [1759407451.2519] manager: (tap8ca9e621-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.261 2 INFO os_vif [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31')
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.281 2 DEBUG nova.network.neutron [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updated VIF entry in instance network info cache for port 8ca9e621-315f-431f-a0ec-ce3af08b1458. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.282 2 DEBUG nova.network.neutron [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updating instance_info_cache with network_info: [{"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.317 2 DEBUG oslo_concurrency.lockutils [req-e822d990-5736-4506-b01e-55fc6499a238 req-fb3ef307-a768-4480-bd59-99ae3e053cdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.476 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.481 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.482 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No VIF found with MAC fa:16:3e:a1:81:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.482 2 INFO nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Using config drive
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.546 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.993 2 INFO nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Creating config drive at /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config
Oct 02 12:17:31 compute-0 nova_compute[256940]: 2025-10-02 12:17:31.999 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrz6og23 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:32 compute-0 nova_compute[256940]: 2025-10-02 12:17:32.151 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrz6og23" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:32 compute-0 ceph-mon[73668]: pgmap v1153: 305 pgs: 305 active+clean; 285 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 501 KiB/s wr, 321 op/s
Oct 02 12:17:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3038992732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3277705734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:32 compute-0 nova_compute[256940]: 2025-10-02 12:17:32.198 2 DEBUG nova.storage.rbd_utils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:17:32 compute-0 nova_compute[256940]: 2025-10-02 12:17:32.202 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 294 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 243 op/s
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.084 2 DEBUG oslo_concurrency.processutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config b91a8089-73c5-41c6-88aa-7681bf70fcad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.882s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.086 2 INFO nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Deleting local config drive /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad/disk.config because it was imported into RBD.
Oct 02 12:17:33 compute-0 kernel: tap8ca9e621-31: entered promiscuous mode
Oct 02 12:17:33 compute-0 ovn_controller[148123]: 2025-10-02T12:17:33Z|00121|binding|INFO|Claiming lport 8ca9e621-315f-431f-a0ec-ce3af08b1458 for this chassis.
Oct 02 12:17:33 compute-0 ovn_controller[148123]: 2025-10-02T12:17:33Z|00122|binding|INFO|8ca9e621-315f-431f-a0ec-ce3af08b1458: Claiming fa:16:3e:a1:81:30 10.100.0.10
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.1737] manager: (tap8ca9e621-31): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-0 systemd-udevd[280123]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1769959602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:33 compute-0 systemd-machined[210927]: New machine qemu-16-instance-00000020.
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.233 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:81:30 10.100.0.10'], port_security=['fa:16:3e:a1:81:30 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b91a8089-73c5-41c6-88aa-7681bf70fcad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4aab13b-6523-48c5-89b6-85da0c2187d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32c85910-7861-449c-a436-357586f391f8, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8ca9e621-315f-431f-a0ec-ce3af08b1458) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.234 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8ca9e621-315f-431f-a0ec-ce3af08b1458 in datapath 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab bound to our chassis
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.236 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.2419] device (tap8ca9e621-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.2433] device (tap8ca9e621-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:33 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000020.
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.249 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ad3f99-8b6b-43ee-a485-976c0ac36cc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.251 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1166a0a2-a1 in ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.253 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1166a0a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.253 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5957c887-39b7-4de2-91e0-5ee8315767bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.254 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[70b25278-7cb6-4d24-ad41-0e5939634372]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_controller[148123]: 2025-10-02T12:17:33Z|00123|binding|INFO|Setting lport 8ca9e621-315f-431f-a0ec-ce3af08b1458 ovn-installed in OVS
Oct 02 12:17:33 compute-0 ovn_controller[148123]: 2025-10-02T12:17:33Z|00124|binding|INFO|Setting lport 8ca9e621-315f-431f-a0ec-ce3af08b1458 up in Southbound
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.282 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[304da09e-cc05-48ac-81e5-e6d2496b9d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.340 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb768ce-6234-4c96-96b4-6cd4f1f0fd35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.385 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[182c6e01-46fd-4e9b-b2f2-978a6d1e3f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.3923] manager: (tap1166a0a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.391 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3dfcd0-9b61-4e93-8b72-36f3ce2fa9eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.437 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[66008356-5fc3-4053-b162-8269f7b89a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.441 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[06be40a4-86ec-4bac-9440-c65132a71610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.4764] device (tap1166a0a2-a0): carrier: link connected
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.483 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6867f2-f176-4576-8043-2729bbeed0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.506 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a6ae63-7f92-448a-b1aa-b03477cda269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1166a0a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534895, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280157, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.526 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b95e5917-1db8-483c-83ea-5d238187af28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4fef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534895, 'tstamp': 534895}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280158, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.556 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b91bb011-5bf4-4016-a432-5a76050fb12c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1166a0a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534895, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280159, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.598 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d6601c-3d2c-4ee4-8634-22da39729826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.665 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c70a92f7-fb73-4041-8430-62cc72d7d5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.667 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1166a0a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.667 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.668 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1166a0a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-0 NetworkManager[44981]: <info>  [1759407453.6717] manager: (tap1166a0a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 02 12:17:33 compute-0 kernel: tap1166a0a2-a0: entered promiscuous mode
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.675 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1166a0a2-a0, col_values=(('external_ids', {'iface-id': '97dd6f15-0e35-4e74-9928-1fab371ff2fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-0 ovn_controller[148123]: 2025-10-02T12:17:33Z|00125|binding|INFO|Releasing lport 97dd6f15-0e35-4e74-9928-1fab371ff2fc from this chassis (sb_readonly=0)
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.713 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1166a0a2-aca4-4278-bd35-d73d4e5bb2ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1166a0a2-aca4-4278-bd35-d73d4e5bb2ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.715 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[68e2ef06-1232-4e51-9c37-279aa6237be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.715 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/1166a0a2-aca4-4278-bd35-d73d4e5bb2ab.pid.haproxy
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:17:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:17:33.716 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'env', 'PROCESS_TAG=haproxy-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1166a0a2-aca4-4278-bd35-d73d4e5bb2ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.957 2 DEBUG nova.compute.manager [req-ae253b07-1be3-452a-b0d8-88d69bea5bec req-bb858e6e-51cc-486a-9d89-bfa53ef3d4b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.958 2 DEBUG oslo_concurrency.lockutils [req-ae253b07-1be3-452a-b0d8-88d69bea5bec req-bb858e6e-51cc-486a-9d89-bfa53ef3d4b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.958 2 DEBUG oslo_concurrency.lockutils [req-ae253b07-1be3-452a-b0d8-88d69bea5bec req-bb858e6e-51cc-486a-9d89-bfa53ef3d4b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.958 2 DEBUG oslo_concurrency.lockutils [req-ae253b07-1be3-452a-b0d8-88d69bea5bec req-bb858e6e-51cc-486a-9d89-bfa53ef3d4b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:33 compute-0 nova_compute[256940]: 2025-10-02 12:17:33.959 2 DEBUG nova.compute.manager [req-ae253b07-1be3-452a-b0d8-88d69bea5bec req-bb858e6e-51cc-486a-9d89-bfa53ef3d4b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Processing event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:17:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.051664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454051716, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2263, "num_deletes": 265, "total_data_size": 3802622, "memory_usage": 3862912, "flush_reason": "Manual Compaction"}
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454181967, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3692782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24145, "largest_seqno": 26407, "table_properties": {"data_size": 3682594, "index_size": 6426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21956, "raw_average_key_size": 20, "raw_value_size": 3661730, "raw_average_value_size": 3438, "num_data_blocks": 280, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407277, "oldest_key_time": 1759407277, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 130378 microseconds, and 15273 cpu microseconds.
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.182038) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3692782 bytes OK
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.182069) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.224880) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.224946) EVENT_LOG_v1 {"time_micros": 1759407454224932, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.224978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3793104, prev total WAL file size 3793104, number of live WAL files 2.
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.226694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3606KB)], [56(8842KB)]
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454226813, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12747341, "oldest_snapshot_seqno": -1}
Oct 02 12:17:34 compute-0 podman[280233]: 2025-10-02 12:17:34.148560088 +0000 UTC m=+0.039396610 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.352 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.353 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407454.3517504, b91a8089-73c5-41c6-88aa-7681bf70fcad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.354 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] VM Started (Lifecycle Event)
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.356 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.360 2 INFO nova.virt.libvirt.driver [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Instance spawned successfully.
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.361 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.373 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.382 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.383 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.383 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.384 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.384 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.385 2 DEBUG nova.virt.libvirt.driver [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.388 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.419 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.419 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407454.352969, b91a8089-73c5-41c6-88aa-7681bf70fcad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.419 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] VM Paused (Lifecycle Event)
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.436 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.439 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407454.356027, b91a8089-73c5-41c6-88aa-7681bf70fcad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.439 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] VM Resumed (Lifecycle Event)
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.443 2 INFO nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Took 7.33 seconds to spawn the instance on the hypervisor.
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.443 2 DEBUG nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.467 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.471 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.494 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.525 2 INFO nova.compute.manager [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Took 8.70 seconds to build instance.
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5397 keys, 12630587 bytes, temperature: kUnknown
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454536938, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 12630587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12589762, "index_size": 26251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 135610, "raw_average_key_size": 25, "raw_value_size": 12487895, "raw_average_value_size": 2313, "num_data_blocks": 1084, "num_entries": 5397, "num_filter_entries": 5397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:34 compute-0 nova_compute[256940]: 2025-10-02 12:17:34.550 2 DEBUG oslo_concurrency.lockutils [None req-3c2db17e-fb22-4ac7-a7de-df2f09a4303e 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.537354) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12630587 bytes
Oct 02 12:17:34 compute-0 ceph-mon[73668]: pgmap v1154: 305 pgs: 305 active+clean; 294 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 243 op/s
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703541) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.1 rd, 40.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 5942, records dropped: 545 output_compression: NoCompression
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703740) EVENT_LOG_v1 {"time_micros": 1759407454703582, "job": 30, "event": "compaction_finished", "compaction_time_micros": 310228, "compaction_time_cpu_micros": 37025, "output_level": 6, "num_output_files": 1, "total_output_size": 12630587, "num_input_records": 5942, "num_output_records": 5397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.226511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:34.703940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:17:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:34.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:17:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 294 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Oct 02 12:17:34 compute-0 podman[280233]: 2025-10-02 12:17:34.954945011 +0000 UTC m=+0.845781533 container create 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:17:35 compute-0 systemd[1]: Started libpod-conmon-36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd.scope.
Oct 02 12:17:35 compute-0 podman[280246]: 2025-10-02 12:17:35.026700005 +0000 UTC m=+0.683166235 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:17:35 compute-0 podman[280247]: 2025-10-02 12:17:35.026960492 +0000 UTC m=+0.692339195 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:17:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54aaf47816a71c121b06ee39a1fd236c2246d2334a0475ddc9190e7dbaebeddc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:35 compute-0 podman[280233]: 2025-10-02 12:17:35.172259106 +0000 UTC m=+1.063095628 container init 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:17:35 compute-0 podman[280233]: 2025-10-02 12:17:35.184249929 +0000 UTC m=+1.075086431 container start 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:35 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [NOTICE]   (280292) : New worker (280294) forked
Oct 02 12:17:35 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [NOTICE]   (280292) : Loading success.
Oct 02 12:17:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.109 2 DEBUG nova.compute.manager [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.109 2 DEBUG oslo_concurrency.lockutils [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.110 2 DEBUG oslo_concurrency.lockutils [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.110 2 DEBUG oslo_concurrency.lockutils [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.110 2 DEBUG nova.compute.manager [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] No waiting events found dispatching network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.111 2 WARNING nova.compute.manager [req-1eac7a5f-b08d-4b2c-988e-a6ee54bab464 req-96d8a1ca-b66a-4195-aa01-bf4ef02e7b0e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received unexpected event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 for instance with vm_state active and task_state None.
Oct 02 12:17:36 compute-0 nova_compute[256940]: 2025-10-02 12:17:36.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 277 op/s
Oct 02 12:17:36 compute-0 ceph-mon[73668]: pgmap v1155: 305 pgs: 305 active+clean; 294 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Oct 02 12:17:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:37.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:38 compute-0 ceph-mon[73668]: pgmap v1156: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 277 op/s
Oct 02 12:17:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2269807714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3581511965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.341860) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458341910, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 293, "num_deletes": 251, "total_data_size": 82200, "memory_usage": 87944, "flush_reason": "Manual Compaction"}
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458421477, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 81647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26408, "largest_seqno": 26700, "table_properties": {"data_size": 79720, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5039, "raw_average_key_size": 18, "raw_value_size": 75924, "raw_average_value_size": 277, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407454, "oldest_key_time": 1759407454, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 79707 microseconds, and 1676 cpu microseconds.
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:38 compute-0 NetworkManager[44981]: <info>  [1759407458.5843] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 02 12:17:38 compute-0 NetworkManager[44981]: <info>  [1759407458.5858] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.421560) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 81647 bytes OK
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.421591) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.778331) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.778376) EVENT_LOG_v1 {"time_micros": 1759407458778367, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.778401) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 80042, prev total WAL file size 80042, number of live WAL files 2.
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458779262, "job": 31, "event": "table_file_deletion", "file_number": 58}
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458780923, "job": 31, "event": "table_file_deletion", "file_number": 56}
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.781155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(79KB)], [59(12MB)]
Oct 02 12:17:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458781240, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 12712234, "oldest_snapshot_seqno": -1}
Oct 02 12:17:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 202 op/s
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-0 ovn_controller[148123]: 2025-10-02T12:17:38Z|00126|binding|INFO|Releasing lport 97dd6f15-0e35-4e74-9928-1fab371ff2fc from this chassis (sb_readonly=0)
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.955 2 DEBUG nova.compute.manager [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-changed-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.956 2 DEBUG nova.compute.manager [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Refreshing instance network info cache due to event network-changed-8ca9e621-315f-431f-a0ec-ce3af08b1458. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.956 2 DEBUG oslo_concurrency.lockutils [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.956 2 DEBUG oslo_concurrency.lockutils [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:38 compute-0 nova_compute[256940]: 2025-10-02 12:17:38.957 2 DEBUG nova.network.neutron [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Refreshing network info cache for port 8ca9e621-315f-431f-a0ec-ce3af08b1458 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5162 keys, 10806021 bytes, temperature: kUnknown
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459273149, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10806021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10768338, "index_size": 23685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 131430, "raw_average_key_size": 25, "raw_value_size": 10672108, "raw_average_value_size": 2067, "num_data_blocks": 968, "num_entries": 5162, "num_filter_entries": 5162, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.273478) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10806021 bytes
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.300595) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.8 rd, 22.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(288.0) write-amplify(132.4) OK, records in: 5671, records dropped: 509 output_compression: NoCompression
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.300628) EVENT_LOG_v1 {"time_micros": 1759407459300616, "job": 32, "event": "compaction_finished", "compaction_time_micros": 492011, "compaction_time_cpu_micros": 39691, "output_level": 6, "num_output_files": 1, "total_output_size": 10806021, "num_input_records": 5671, "num_output_records": 5162, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459300817, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407459302823, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:38.781005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.302868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.302873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.302875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.302876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:17:39.302878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1347014307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1591436883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:39 compute-0 nova_compute[256940]: 2025-10-02 12:17:39.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:39.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004504037839661269 of space, bias 1.0, pg target 1.3512113518983808 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001855523276102736 of space, bias 1.0, pg target 0.554801459554718 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.182 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.182 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.267 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.267 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.268 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.268 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.269 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.269 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.270 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.270 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.287 2 DEBUG nova.network.neutron [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updated VIF entry in instance network info cache for port 8ca9e621-315f-431f-a0ec-ce3af08b1458. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.288 2 DEBUG nova.network.neutron [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updating instance_info_cache with network_info: [{"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:40 compute-0 nova_compute[256940]: 2025-10-02 12:17:40.312 2 DEBUG oslo_concurrency.lockutils [req-01163696-7c02-4b39-89f4-8cacca1838bc req-8922bb76-ea9f-446a-bf16-eb4315dcaa0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b91a8089-73c5-41c6-88aa-7681bf70fcad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:40 compute-0 ceph-mon[73668]: pgmap v1157: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 202 op/s
Oct 02 12:17:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3073071398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 241 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291252393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.720 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.825 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:41 compute-0 nova_compute[256940]: 2025-10-02 12:17:41.826 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.027 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.028 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=20.921714782714844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.029 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.029 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.114 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance b91a8089-73c5-41c6-88aa-7681bf70fcad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.114 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.115 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.153 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:42 compute-0 ceph-mon[73668]: pgmap v1158: 305 pgs: 305 active+clean; 241 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Oct 02 12:17:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3291252393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:42 compute-0 sudo[280351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:42 compute-0 sudo[280351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-0 sudo[280351]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292213779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:42 compute-0 sudo[280376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:42 compute-0 sudo[280376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-0 sudo[280376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.647 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.656 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.672 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.694 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:17:42 compute-0 nova_compute[256940]: 2025-10-02 12:17:42.695 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:42 compute-0 sudo[280403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:42 compute-0 sudo[280403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-0 sudo[280403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:42 compute-0 sudo[280428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:17:42 compute-0 sudo[280428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 12:17:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:17:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:17:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:17:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-0 sudo[280428]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:17:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/292213779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:44 compute-0 nova_compute[256940]: 2025-10-02 12:17:44.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:44 compute-0 ceph-mon[73668]: pgmap v1159: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 12:17:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2688491066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 12:17:44 compute-0 nova_compute[256940]: 2025-10-02 12:17:44.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1617248047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:46 compute-0 nova_compute[256940]: 2025-10-02 12:17:46.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:46 compute-0 ceph-mon[73668]: pgmap v1160: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 12:17:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1e484b29-67ba-46dc-95a6-34d123e78ccf does not exist
Oct 02 12:17:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 442c846f-3544-43e3-b6a7-b5921fda000c does not exist
Oct 02 12:17:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 86345268-8c1e-4f20-a602-29767c2cef7d does not exist
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:17:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:46 compute-0 sudo[280487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:46 compute-0 sudo[280487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:46 compute-0 sudo[280487]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Oct 02 12:17:46 compute-0 sudo[280512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:46 compute-0 sudo[280512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:46 compute-0 sudo[280512]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:47 compute-0 sudo[280537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:47 compute-0 sudo[280537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:47 compute-0 sudo[280537]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:47 compute-0 sudo[280562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:17:47 compute-0 sudo[280562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.52952019 +0000 UTC m=+0.066772425 container create ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:17:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:47.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:47 compute-0 systemd[1]: Started libpod-conmon-ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83.scope.
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.506668813 +0000 UTC m=+0.043921078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.628238678 +0000 UTC m=+0.165490903 container init ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.63595294 +0000 UTC m=+0.173205165 container start ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.638712512 +0000 UTC m=+0.175964757 container attach ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:17:47 compute-0 systemd[1]: libpod-ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83.scope: Deactivated successfully.
Oct 02 12:17:47 compute-0 sad_turing[280644]: 167 167
Oct 02 12:17:47 compute-0 conmon[280644]: conmon ecdaba254503ddfb86dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83.scope/container/memory.events
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.646917656 +0000 UTC m=+0.184169901 container died ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:17:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-32d97ffd3203525ae3e11a65eb3cd9c84ada7f2d23d81d4f15e12064f03fbdbb-merged.mount: Deactivated successfully.
Oct 02 12:17:47 compute-0 podman[280627]: 2025-10-02 12:17:47.691521251 +0000 UTC m=+0.228773476 container remove ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:17:47 compute-0 systemd[1]: libpod-conmon-ecdaba254503ddfb86dceb4565c49c7b5c5d7102fa13b18892b2d8a5d0e06c83.scope: Deactivated successfully.
Oct 02 12:17:47 compute-0 ovn_controller[148123]: 2025-10-02T12:17:47Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:81:30 10.100.0.10
Oct 02 12:17:47 compute-0 ovn_controller[148123]: 2025-10-02T12:17:47Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:81:30 10.100.0.10
Oct 02 12:17:47 compute-0 podman[280666]: 2025-10-02 12:17:47.915687147 +0000 UTC m=+0.061078337 container create 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:17:47 compute-0 systemd[1]: Started libpod-conmon-5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8.scope.
Oct 02 12:17:47 compute-0 podman[280666]: 2025-10-02 12:17:47.892480601 +0000 UTC m=+0.037871841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:48 compute-0 podman[280666]: 2025-10-02 12:17:48.032929759 +0000 UTC m=+0.178321029 container init 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:48 compute-0 podman[280666]: 2025-10-02 12:17:48.048940387 +0000 UTC m=+0.194331577 container start 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:17:48 compute-0 podman[280666]: 2025-10-02 12:17:48.052663465 +0000 UTC m=+0.198054775 container attach 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:17:48 compute-0 sudo[280688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:48 compute-0 sudo[280688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:48 compute-0 sudo[280688]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:48 compute-0 sudo[280713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:48 compute-0 sudo[280713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:48 compute-0 sudo[280713]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:48.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 739 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Oct 02 12:17:48 compute-0 xenodochial_hertz[280682]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:17:48 compute-0 xenodochial_hertz[280682]: --> relative data size: 1.0
Oct 02 12:17:48 compute-0 xenodochial_hertz[280682]: --> All data devices are unavailable
Oct 02 12:17:48 compute-0 ceph-mon[73668]: pgmap v1161: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Oct 02 12:17:48 compute-0 systemd[1]: libpod-5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8.scope: Deactivated successfully.
Oct 02 12:17:48 compute-0 conmon[280682]: conmon 5fcc53e74ccaa968a9fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8.scope/container/memory.events
Oct 02 12:17:48 compute-0 podman[280666]: 2025-10-02 12:17:48.902229266 +0000 UTC m=+1.047620486 container died 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc6d2b3535c3deae2f9f86e921d4108f0cdd5831f46e3af9bb0a175bf8c8bf10-merged.mount: Deactivated successfully.
Oct 02 12:17:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:49 compute-0 nova_compute[256940]: 2025-10-02 12:17:49.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:49 compute-0 podman[280666]: 2025-10-02 12:17:49.50003197 +0000 UTC m=+1.645423170 container remove 5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:17:49 compute-0 sudo[280562]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:49.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:49 compute-0 systemd[1]: libpod-conmon-5fcc53e74ccaa968a9fcede053f74b74732fd4315c93bc6cc46d45b1153baeb8.scope: Deactivated successfully.
Oct 02 12:17:49 compute-0 sudo[280758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:49 compute-0 sudo[280758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:49 compute-0 sudo[280758]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:49 compute-0 podman[280776]: 2025-10-02 12:17:49.69682533 +0000 UTC m=+0.067467433 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:17:49 compute-0 sudo[280796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:49 compute-0 sudo[280796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:49 compute-0 sudo[280796]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:49 compute-0 podman[280783]: 2025-10-02 12:17:49.753596473 +0000 UTC m=+0.124835271 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:17:49 compute-0 sudo[280850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:49 compute-0 sudo[280850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:49 compute-0 sudo[280850]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:49 compute-0 sudo[280880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:17:49 compute-0 sudo[280880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:50 compute-0 ceph-mon[73668]: pgmap v1162: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 739 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.227254486 +0000 UTC m=+0.042734887 container create eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:17:50 compute-0 systemd[1]: Started libpod-conmon-eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4.scope.
Oct 02 12:17:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.207199432 +0000 UTC m=+0.022679863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.411462928 +0000 UTC m=+0.226943399 container init eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.421851199 +0000 UTC m=+0.237331590 container start eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:17:50 compute-0 priceless_ellis[280963]: 167 167
Oct 02 12:17:50 compute-0 systemd[1]: libpod-eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4.scope: Deactivated successfully.
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.517718503 +0000 UTC m=+0.333198994 container attach eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.51874075 +0000 UTC m=+0.334221191 container died eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86230a1eebdc5ca69f08ab97e0571eabc99a16f9f8a927bf7f47f8f0ff0b54d-merged.mount: Deactivated successfully.
Oct 02 12:17:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:50.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 269 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 02 12:17:50 compute-0 podman[280947]: 2025-10-02 12:17:50.962166582 +0000 UTC m=+0.777646993 container remove eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ellis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:17:51 compute-0 systemd[1]: libpod-conmon-eacdafb4fbb368c5bac0508ecb44b7f4b6e69f1156c3fba2470c36b92b9aaed4.scope: Deactivated successfully.
Oct 02 12:17:51 compute-0 podman[280989]: 2025-10-02 12:17:51.203849615 +0000 UTC m=+0.087547208 container create 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3030137041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1355568132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-0 systemd[1]: Started libpod-conmon-10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c.scope.
Oct 02 12:17:51 compute-0 podman[280989]: 2025-10-02 12:17:51.1734432 +0000 UTC m=+0.057140843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:51 compute-0 nova_compute[256940]: 2025-10-02 12:17:51.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ccf130a4ca7b1d05a172375923b084f462f1fdd75b1ee254912c4b7e28bdc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ccf130a4ca7b1d05a172375923b084f462f1fdd75b1ee254912c4b7e28bdc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ccf130a4ca7b1d05a172375923b084f462f1fdd75b1ee254912c4b7e28bdc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ccf130a4ca7b1d05a172375923b084f462f1fdd75b1ee254912c4b7e28bdc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:51 compute-0 podman[280989]: 2025-10-02 12:17:51.355059444 +0000 UTC m=+0.238757017 container init 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 12:17:51 compute-0 podman[280989]: 2025-10-02 12:17:51.366232586 +0000 UTC m=+0.249930139 container start 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:51 compute-0 podman[280989]: 2025-10-02 12:17:51.375272762 +0000 UTC m=+0.258970435 container attach 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:52 compute-0 nice_rosalind[281006]: {
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:     "1": [
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:         {
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "devices": [
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "/dev/loop3"
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             ],
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "lv_name": "ceph_lv0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "lv_size": "7511998464",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "name": "ceph_lv0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "tags": {
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.cluster_name": "ceph",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.crush_device_class": "",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.encrypted": "0",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.osd_id": "1",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.type": "block",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:                 "ceph.vdo": "0"
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             },
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "type": "block",
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:             "vg_name": "ceph_vg0"
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:         }
Oct 02 12:17:52 compute-0 nice_rosalind[281006]:     ]
Oct 02 12:17:52 compute-0 nice_rosalind[281006]: }
Oct 02 12:17:52 compute-0 systemd[1]: libpod-10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c.scope: Deactivated successfully.
Oct 02 12:17:52 compute-0 podman[280989]: 2025-10-02 12:17:52.207705076 +0000 UTC m=+1.091402649 container died 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-52ccf130a4ca7b1d05a172375923b084f462f1fdd75b1ee254912c4b7e28bdc3-merged.mount: Deactivated successfully.
Oct 02 12:17:52 compute-0 ceph-mon[73668]: pgmap v1163: 305 pgs: 305 active+clean; 269 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 02 12:17:52 compute-0 podman[280989]: 2025-10-02 12:17:52.398384807 +0000 UTC m=+1.282082360 container remove 10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:17:52 compute-0 systemd[1]: libpod-conmon-10e902f884754b2b8980479198de0670c50e3fdbc8b90b170575a90b0d53853c.scope: Deactivated successfully.
Oct 02 12:17:52 compute-0 sudo[280880]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:52 compute-0 sudo[281028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:52 compute-0 sudo[281028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:52 compute-0 sudo[281028]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:52 compute-0 sudo[281053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:52 compute-0 sudo[281053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:52 compute-0 sudo[281053]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:52 compute-0 sudo[281078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:52 compute-0 sudo[281078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:52 compute-0 sudo[281078]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:52 compute-0 sudo[281103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:17:52 compute-0 sudo[281103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:52.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.120440496 +0000 UTC m=+0.049754641 container create 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:53 compute-0 systemd[1]: Started libpod-conmon-613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1.scope.
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.098602466 +0000 UTC m=+0.027916631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.219062532 +0000 UTC m=+0.148376717 container init 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.228046097 +0000 UTC m=+0.157360232 container start 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.231151218 +0000 UTC m=+0.160465373 container attach 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:17:53 compute-0 exciting_joliot[281184]: 167 167
Oct 02 12:17:53 compute-0 systemd[1]: libpod-613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.235882011 +0000 UTC m=+0.165196156 container died 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce00681b764d8b9ad6630b2d3cd22cff73d12cba2260eadcf033e0848b4361c4-merged.mount: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[281168]: 2025-10-02 12:17:53.274451199 +0000 UTC m=+0.203765354 container remove 613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:17:53 compute-0 systemd[1]: libpod-conmon-613d1988aca2fc4163f0c8aa817444e9239e928667d51b1b85d28c51281178e1.scope: Deactivated successfully.
Oct 02 12:17:53 compute-0 podman[281208]: 2025-10-02 12:17:53.510464494 +0000 UTC m=+0.080986737 container create 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 12:17:53 compute-0 systemd[1]: Started libpod-conmon-54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43.scope.
Oct 02 12:17:53 compute-0 podman[281208]: 2025-10-02 12:17:53.480688656 +0000 UTC m=+0.051210979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:17:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9d7c88c2a663bcdb4e0c8bf0016f89096bd0a90d9457bfc4be54426f561d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9d7c88c2a663bcdb4e0c8bf0016f89096bd0a90d9457bfc4be54426f561d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9d7c88c2a663bcdb4e0c8bf0016f89096bd0a90d9457bfc4be54426f561d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9d7c88c2a663bcdb4e0c8bf0016f89096bd0a90d9457bfc4be54426f561d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:17:53 compute-0 podman[281208]: 2025-10-02 12:17:53.615946959 +0000 UTC m=+0.186469242 container init 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:17:53 compute-0 podman[281208]: 2025-10-02 12:17:53.632720187 +0000 UTC m=+0.203242450 container start 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:17:53 compute-0 podman[281208]: 2025-10-02 12:17:53.637024909 +0000 UTC m=+0.207547172 container attach 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:17:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:54 compute-0 ceph-mon[73668]: pgmap v1164: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Oct 02 12:17:54 compute-0 nova_compute[256940]: 2025-10-02 12:17:54.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:54 compute-0 sweet_rubin[281224]: {
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:         "osd_id": 1,
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:         "type": "bluestore"
Oct 02 12:17:54 compute-0 sweet_rubin[281224]:     }
Oct 02 12:17:54 compute-0 sweet_rubin[281224]: }
Oct 02 12:17:54 compute-0 systemd[1]: libpod-54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43.scope: Deactivated successfully.
Oct 02 12:17:54 compute-0 podman[281208]: 2025-10-02 12:17:54.595703941 +0000 UTC m=+1.166226194 container died 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9d7c88c2a663bcdb4e0c8bf0016f89096bd0a90d9457bfc4be54426f561d74-merged.mount: Deactivated successfully.
Oct 02 12:17:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:17:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 310 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 92 op/s
Oct 02 12:17:54 compute-0 podman[281208]: 2025-10-02 12:17:54.964388511 +0000 UTC m=+1.534910774 container remove 54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:17:54 compute-0 systemd[1]: libpod-conmon-54d7afc7d6e42d594c59f43a8d15a5795cc9afe2eb59457ecdbdad60e499ea43.scope: Deactivated successfully.
Oct 02 12:17:55 compute-0 sudo[281103]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 888e56fa-5c48-477d-8987-1be735e4036d does not exist
Oct 02 12:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 20ab9c08-a847-4a9b-a6ab-4ec63afe1602 does not exist
Oct 02 12:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 46bd8da9-403d-4335-9f71-e32fa1bc4fb8 does not exist
Oct 02 12:17:55 compute-0 sudo[281257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:55 compute-0 sudo[281257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:55 compute-0 sudo[281257]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:55 compute-0 sudo[281282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:17:55 compute-0 sudo[281282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:55 compute-0 sudo[281282]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1585577951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/796225185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:55.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:56 compute-0 nova_compute[256940]: 2025-10-02 12:17:56.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-0 ceph-mon[73668]: pgmap v1165: 305 pgs: 305 active+clean; 310 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 92 op/s
Oct 02 12:17:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.128 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.129 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.161 2 DEBUG nova.objects.instance [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'flavor' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.216 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.604 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.605 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.605 2 INFO nova.compute.manager [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Attaching volume 9dd4f9e5-e2b6-4aab-8485-5208997de3a8 to /dev/vdb
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.944 2 DEBUG os_brick.utils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.946 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.966 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.967 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[b1dd5d98-948d-4177-ae88-9994d5d9f06a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.968 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.983 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.983 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bb66ec-e491-42b7-a14c-e86f375fb5d8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:57 compute-0 nova_compute[256940]: 2025-10-02 12:17:57.985 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.001 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.002 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[da4c9190-0e37-4762-8f37-a97d853e9003]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.003 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaa7548-ac7f-4b9b-a046-3cc49176e4e5]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.004 2 DEBUG oslo_concurrency.processutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.040 2 DEBUG oslo_concurrency.processutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.044 2 DEBUG os_brick.initiator.connectors.lightos [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.044 2 DEBUG os_brick.initiator.connectors.lightos [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.045 2 DEBUG os_brick.initiator.connectors.lightos [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.045 2 DEBUG os_brick.utils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.046 2 DEBUG nova.virt.block_device [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updating existing volume attachment record: a0d2bc1f-aebf-4529-b1d9-1b20a27c6c04 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:17:58 compute-0 ceph-mon[73668]: pgmap v1166: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 02 12:17:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.823 2 DEBUG nova.objects.instance [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'flavor' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 3.3 MiB/s wr, 92 op/s
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.851 2 DEBUG nova.virt.libvirt.driver [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Attempting to attach volume 9dd4f9e5-e2b6-4aab-8485-5208997de3a8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:17:58 compute-0 nova_compute[256940]: 2025-10-02 12:17:58.856 2 DEBUG nova.virt.libvirt.guest [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:17:58 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9dd4f9e5-e2b6-4aab-8485-5208997de3a8">
Oct 02 12:17:58 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   </source>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:17:58 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:17:58 compute-0 nova_compute[256940]:   <serial>9dd4f9e5-e2b6-4aab-8485-5208997de3a8</serial>
Oct 02 12:17:58 compute-0 nova_compute[256940]: </disk>
Oct 02 12:17:58 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.004 2 DEBUG nova.virt.libvirt.driver [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.005 2 DEBUG nova.virt.libvirt.driver [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.005 2 DEBUG nova.virt.libvirt.driver [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.005 2 DEBUG nova.virt.libvirt.driver [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No VIF found with MAC fa:16:3e:a1:81:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:17:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.353 2 DEBUG oslo_concurrency.lockutils [None req-719ca371-1caa-4dbd-b6a7-8546501fe93f 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-0 nova_compute[256940]: 2025-10-02 12:17:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3679030386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:17:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:17:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:00.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:00 compute-0 ceph-mon[73668]: pgmap v1167: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 3.3 MiB/s wr, 92 op/s
Oct 02 12:18:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 159 op/s
Oct 02 12:18:01 compute-0 nova_compute[256940]: 2025-10-02 12:18:01.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:01.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:01 compute-0 ceph-mon[73668]: pgmap v1168: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 159 op/s
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.231 2 DEBUG oslo_concurrency.lockutils [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.232 2 DEBUG oslo_concurrency.lockutils [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.254 2 INFO nova.compute.manager [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Detaching volume 9dd4f9e5-e2b6-4aab-8485-5208997de3a8
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.749 2 INFO nova.virt.block_device [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Attempting to driver detach volume 9dd4f9e5-e2b6-4aab-8485-5208997de3a8 from mountpoint /dev/vdb
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.758 2 DEBUG nova.virt.libvirt.driver [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Attempting to detach device vdb from instance b91a8089-73c5-41c6-88aa-7681bf70fcad from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.758 2 DEBUG nova.virt.libvirt.guest [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9dd4f9e5-e2b6-4aab-8485-5208997de3a8">
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   </source>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <serial>9dd4f9e5-e2b6-4aab-8485-5208997de3a8</serial>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]: </disk>
Oct 02 12:18:02 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:18:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.805 2 INFO nova.virt.libvirt.driver [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully detached device vdb from instance b91a8089-73c5-41c6-88aa-7681bf70fcad from the persistent domain config.
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.806 2 DEBUG nova.virt.libvirt.driver [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b91a8089-73c5-41c6-88aa-7681bf70fcad from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.806 2 DEBUG nova.virt.libvirt.guest [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9dd4f9e5-e2b6-4aab-8485-5208997de3a8">
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   </source>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <serial>9dd4f9e5-e2b6-4aab-8485-5208997de3a8</serial>
Oct 02 12:18:02 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:18:02 compute-0 nova_compute[256940]: </disk>
Oct 02 12:18:02 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:18:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 125 op/s
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.917 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759407482.9166305, b91a8089-73c5-41c6-88aa-7681bf70fcad => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.920 2 DEBUG nova.virt.libvirt.driver [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b91a8089-73c5-41c6-88aa-7681bf70fcad _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:18:02 compute-0 nova_compute[256940]: 2025-10-02 12:18:02.923 2 INFO nova.virt.libvirt.driver [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully detached device vdb from instance b91a8089-73c5-41c6-88aa-7681bf70fcad from the live domain config.
Oct 02 12:18:03 compute-0 nova_compute[256940]: 2025-10-02 12:18:03.236 2 DEBUG nova.objects.instance [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'flavor' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:03 compute-0 nova_compute[256940]: 2025-10-02 12:18:03.278 2 DEBUG oslo_concurrency.lockutils [None req-5c660283-59ce-446d-b2e0-c59876190c8d 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:03.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:04 compute-0 ceph-mon[73668]: pgmap v1169: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 125 op/s
Oct 02 12:18:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:04 compute-0 nova_compute[256940]: 2025-10-02 12:18:04.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:04.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 12:18:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2412881761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:05 compute-0 podman[281342]: 2025-10-02 12:18:05.435459118 +0000 UTC m=+0.088320148 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 12:18:05 compute-0 podman[281341]: 2025-10-02 12:18:05.460223155 +0000 UTC m=+0.109483551 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:18:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:06 compute-0 ceph-mon[73668]: pgmap v1170: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 12:18:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1316808026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:06 compute-0 nova_compute[256940]: 2025-10-02 12:18:06.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 463 KiB/s wr, 93 op/s
Oct 02 12:18:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:07.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:07 compute-0 nova_compute[256940]: 2025-10-02 12:18:07.972 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:07 compute-0 nova_compute[256940]: 2025-10-02 12:18:07.973 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.003 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.088 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.089 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.103 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.103 2 INFO nova.compute.claims [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:18:08 compute-0 ceph-mon[73668]: pgmap v1171: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 463 KiB/s wr, 93 op/s
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.255 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976361597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.752 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.760 2 DEBUG nova.compute.provider_tree [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:08.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:08 compute-0 sudo[281403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:08 compute-0 sudo[281403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:08 compute-0 sudo[281403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 74 op/s
Oct 02 12:18:08 compute-0 sudo[281430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:08 compute-0 sudo[281430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:08 compute-0 sudo[281430]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.889 2 DEBUG nova.scheduler.client.report [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.940 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:08 compute-0 nova_compute[256940]: 2025-10-02 12:18:08.941 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.049 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.050 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.092 2 INFO nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.119 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.232 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.234 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.234 2 INFO nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Creating image(s)
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.273 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2976361597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.341 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.370 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.375 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.398 2 DEBUG nova.policy [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c53fbb5ed1e4cf380a90975be5dc249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.434 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.435 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.436 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.436 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.464 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.475 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27999ce2-bc13-4245-89b5-6f616ace394e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:09.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:09 compute-0 nova_compute[256940]: 2025-10-02 12:18:09.909 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27999ce2-bc13-4245-89b5-6f616ace394e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.007 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Successfully created port: d4581462-b5fd-4312-b4b8-681d197b5c22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.015 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] resizing rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.167 2 DEBUG nova.objects.instance [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'migration_context' on Instance uuid 27999ce2-bc13-4245-89b5-6f616ace394e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.184 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.185 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Ensure instance console log exists: /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.186 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.187 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.187 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:10 compute-0 ceph-mon[73668]: pgmap v1172: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 74 op/s
Oct 02 12:18:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 324 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 12:18:10 compute-0 nova_compute[256940]: 2025-10-02 12:18:10.985 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Successfully updated port: d4581462-b5fd-4312-b4b8-681d197b5c22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.012 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.012 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquired lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.012 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.331 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:18:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.457 2 DEBUG nova.compute.manager [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-changed-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.458 2 DEBUG nova.compute.manager [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Refreshing instance network info cache due to event network-changed-d4581462-b5fd-4312-b4b8-681d197b5c22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:11 compute-0 nova_compute[256940]: 2025-10-02 12:18:11.459 2 DEBUG oslo_concurrency.lockutils [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:11.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:12 compute-0 ceph-mon[73668]: pgmap v1173: 305 pgs: 305 active+clean; 324 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.540 2 DEBUG nova.network.neutron [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Updating instance_info_cache with network_info: [{"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.583 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Releasing lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.583 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Instance network_info: |[{"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.583 2 DEBUG oslo_concurrency.lockutils [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.584 2 DEBUG nova.network.neutron [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Refreshing network info cache for port d4581462-b5fd-4312-b4b8-681d197b5c22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.587 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Start _get_guest_xml network_info=[{"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.592 2 WARNING nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.598 2 DEBUG nova.virt.libvirt.host [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.598 2 DEBUG nova.virt.libvirt.host [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.604 2 DEBUG nova.virt.libvirt.host [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.605 2 DEBUG nova.virt.libvirt.host [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.606 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.606 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.607 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.607 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.607 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.608 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.608 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.608 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.609 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.609 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.609 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.609 2 DEBUG nova.virt.hardware [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:18:12 compute-0 nova_compute[256940]: 2025-10-02 12:18:12.613 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:12.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 313 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.9 MiB/s wr, 98 op/s
Oct 02 12:18:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180439530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.107 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.141 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.146 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2180439530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891112775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:13.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.618 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.620 2 DEBUG nova.virt.libvirt.vif [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-596850479',display_name='tempest-VolumesAdminNegativeTest-server-596850479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-596850479',id=34,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-p7ss6z0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAdminNegativ
eTest-486286654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:09Z,user_data=None,user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=27999ce2-bc13-4245-89b5-6f616ace394e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.620 2 DEBUG nova.network.os_vif_util [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.621 2 DEBUG nova.network.os_vif_util [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.622 2 DEBUG nova.objects.instance [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'pci_devices' on Instance uuid 27999ce2-bc13-4245-89b5-6f616ace394e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.638 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <uuid>27999ce2-bc13-4245-89b5-6f616ace394e</uuid>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <name>instance-00000022</name>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:name>tempest-VolumesAdminNegativeTest-server-596850479</nova:name>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:18:12</nova:creationTime>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:user uuid="3c53fbb5ed1e4cf380a90975be5dc249">tempest-VolumesAdminNegativeTest-486286654-project-member</nova:user>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:project uuid="96eda2c1552247d8b6632dd9e7d1f6df">tempest-VolumesAdminNegativeTest-486286654</nova:project>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <nova:port uuid="d4581462-b5fd-4312-b4b8-681d197b5c22">
Oct 02 12:18:13 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <system>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="serial">27999ce2-bc13-4245-89b5-6f616ace394e</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="uuid">27999ce2-bc13-4245-89b5-6f616ace394e</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </system>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <os>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </os>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <features>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </features>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27999ce2-bc13-4245-89b5-6f616ace394e_disk">
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </source>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27999ce2-bc13-4245-89b5-6f616ace394e_disk.config">
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </source>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:18:13 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:09:4a:e0"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <target dev="tapd4581462-b5"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/console.log" append="off"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <video>
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </video>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:18:13 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:18:13 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:18:13 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:18:13 compute-0 nova_compute[256940]: </domain>
Oct 02 12:18:13 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.640 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Preparing to wait for external event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.640 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.640 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.640 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.641 2 DEBUG nova.virt.libvirt.vif [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-596850479',display_name='tempest-VolumesAdminNegativeTest-server-596850479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-596850479',id=34,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-p7ss6z0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAd
minNegativeTest-486286654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:09Z,user_data=None,user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=27999ce2-bc13-4245-89b5-6f616ace394e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.641 2 DEBUG nova.network.os_vif_util [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.642 2 DEBUG nova.network.os_vif_util [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.643 2 DEBUG os_vif [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4581462-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4581462-b5, col_values=(('external_ids', {'iface-id': 'd4581462-b5fd-4312-b4b8-681d197b5c22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:4a:e0', 'vm-uuid': '27999ce2-bc13-4245-89b5-6f616ace394e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 NetworkManager[44981]: <info>  [1759407493.6958] manager: (tapd4581462-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.712 2 INFO os_vif [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5')
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.769 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.769 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.770 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] No VIF found with MAC fa:16:3e:09:4a:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.770 2 INFO nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Using config drive
Oct 02 12:18:13 compute-0 nova_compute[256940]: 2025-10-02 12:18:13.802 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:14 compute-0 ceph-mon[73668]: pgmap v1174: 305 pgs: 305 active+clean; 313 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.9 MiB/s wr, 98 op/s
Oct 02 12:18:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/891112775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:14 compute-0 nova_compute[256940]: 2025-10-02 12:18:14.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:18:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:18:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 311 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Oct 02 12:18:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:15.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.614 2 INFO nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Creating config drive at /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.621 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66ccohlb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:16.742 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:16.744 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.761 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66ccohlb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.790 2 DEBUG nova.storage.rbd_utils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] rbd image 27999ce2-bc13-4245-89b5-6f616ace394e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:16 compute-0 nova_compute[256940]: 2025-10-02 12:18:16.793 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config 27999ce2-bc13-4245-89b5-6f616ace394e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:16 compute-0 ceph-mon[73668]: pgmap v1175: 305 pgs: 305 active+clean; 311 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Oct 02 12:18:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.010 2 DEBUG oslo_concurrency.processutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config 27999ce2-bc13-4245-89b5-6f616ace394e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.012 2 INFO nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Deleting local config drive /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e/disk.config because it was imported into RBD.
Oct 02 12:18:17 compute-0 kernel: tapd4581462-b5: entered promiscuous mode
Oct 02 12:18:17 compute-0 ovn_controller[148123]: 2025-10-02T12:18:17Z|00127|binding|INFO|Claiming lport d4581462-b5fd-4312-b4b8-681d197b5c22 for this chassis.
Oct 02 12:18:17 compute-0 ovn_controller[148123]: 2025-10-02T12:18:17Z|00128|binding|INFO|d4581462-b5fd-4312-b4b8-681d197b5c22: Claiming fa:16:3e:09:4a:e0 10.100.0.9
Oct 02 12:18:17 compute-0 NetworkManager[44981]: <info>  [1759407497.0753] manager: (tapd4581462-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.080 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4a:e0 10.100.0.9'], port_security=['fa:16:3e:09:4a:e0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '27999ce2-bc13-4245-89b5-6f616ace394e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3dd7600-fc10-417d-9e5f-d6fa4632ea48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32c85910-7861-449c-a436-357586f391f8, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d4581462-b5fd-4312-b4b8-681d197b5c22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.081 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d4581462-b5fd-4312-b4b8-681d197b5c22 in datapath 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab bound to our chassis
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.083 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab
Oct 02 12:18:17 compute-0 ovn_controller[148123]: 2025-10-02T12:18:17Z|00129|binding|INFO|Setting lport d4581462-b5fd-4312-b4b8-681d197b5c22 ovn-installed in OVS
Oct 02 12:18:17 compute-0 ovn_controller[148123]: 2025-10-02T12:18:17Z|00130|binding|INFO|Setting lport d4581462-b5fd-4312-b4b8-681d197b5c22 up in Southbound
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.110 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6466ca3f-13d7-49b6-966f-376dd565d54b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 systemd-udevd[281761]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:17 compute-0 systemd-machined[210927]: New machine qemu-17-instance-00000022.
Oct 02 12:18:17 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000022.
Oct 02 12:18:17 compute-0 NetworkManager[44981]: <info>  [1759407497.1360] device (tapd4581462-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:18:17 compute-0 NetworkManager[44981]: <info>  [1759407497.1370] device (tapd4581462-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.150 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ef064cbd-f2ec-4781-8c39-cb1eeebff8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.155 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[74113055-b354-4bbf-beea-7e95ead18f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.184 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b5edb9-ed38-4546-9b21-222ae935ae53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.207 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7971a1-2314-4506-ab4c-f8dd2ed24a07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1166a0a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534895, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281773, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.227 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f79bdc7a-e823-47b9-8587-248a0e083401]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1166a0a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534910, 'tstamp': 534910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281775, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1166a0a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534914, 'tstamp': 534914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281775, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.230 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1166a0a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.234 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1166a0a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.234 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.235 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1166a0a2-a0, col_values=(('external_ids', {'iface-id': '97dd6f15-0e35-4e74-9928-1fab371ff2fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:17.236 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.434 2 DEBUG nova.compute.manager [req-2aaad255-7dea-4dd7-b890-53336a18c621 req-df10cc45-50fb-4bae-8844-a4d426027720 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.434 2 DEBUG oslo_concurrency.lockutils [req-2aaad255-7dea-4dd7-b890-53336a18c621 req-df10cc45-50fb-4bae-8844-a4d426027720 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.435 2 DEBUG oslo_concurrency.lockutils [req-2aaad255-7dea-4dd7-b890-53336a18c621 req-df10cc45-50fb-4bae-8844-a4d426027720 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.435 2 DEBUG oslo_concurrency.lockutils [req-2aaad255-7dea-4dd7-b890-53336a18c621 req-df10cc45-50fb-4bae-8844-a4d426027720 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.435 2 DEBUG nova.compute.manager [req-2aaad255-7dea-4dd7-b890-53336a18c621 req-df10cc45-50fb-4bae-8844-a4d426027720 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Processing event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.555 2 DEBUG nova.network.neutron [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Updated VIF entry in instance network info cache for port d4581462-b5fd-4312-b4b8-681d197b5c22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.556 2 DEBUG nova.network.neutron [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Updating instance_info_cache with network_info: [{"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:17 compute-0 nova_compute[256940]: 2025-10-02 12:18:17.579 2 DEBUG oslo_concurrency.lockutils [req-ba54c171-42d6-48be-b7d0-93a0b1aec099 req-a48c2374-daa7-4ff9-ac69-e7a7a8683b24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27999ce2-bc13-4245-89b5-6f616ace394e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:17.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:17 compute-0 ceph-mon[73668]: pgmap v1176: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.319 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407498.3188426, 27999ce2-bc13-4245-89b5-6f616ace394e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.322 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] VM Started (Lifecycle Event)
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.325 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.330 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.336 2 INFO nova.virt.libvirt.driver [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Instance spawned successfully.
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.336 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.427 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.438 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.446 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.447 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.448 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.449 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.450 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.451 2 DEBUG nova.virt.libvirt.driver [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.514 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.514 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407498.3190472, 27999ce2-bc13-4245-89b5-6f616ace394e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.515 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] VM Paused (Lifecycle Event)
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.621 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.626 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407498.3289506, 27999ce2-bc13-4245-89b5-6f616ace394e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.627 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] VM Resumed (Lifecycle Event)
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.670 2 INFO nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Took 9.44 seconds to spawn the instance on the hypervisor.
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.671 2 DEBUG nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:18.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.902 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:18 compute-0 nova_compute[256940]: 2025-10-02 12:18:18.907 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/914543213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.185 2 INFO nova.compute.manager [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Took 11.13 seconds to build instance.
Oct 02 12:18:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.343 2 DEBUG oslo_concurrency.lockutils [None req-4f4dfcf8-cfa5-4012-90cc-dc9f8a87ad68 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:19.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.855 2 DEBUG nova.compute.manager [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.856 2 DEBUG oslo_concurrency.lockutils [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.856 2 DEBUG oslo_concurrency.lockutils [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.856 2 DEBUG oslo_concurrency.lockutils [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.856 2 DEBUG nova.compute.manager [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] No waiting events found dispatching network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:19 compute-0 nova_compute[256940]: 2025-10-02 12:18:19.857 2 WARNING nova.compute.manager [req-f11259e0-0695-435a-980b-34ce43166537 req-20f7bbac-8ce3-4b87-8398-6a5830c8d782 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received unexpected event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 for instance with vm_state active and task_state None.
Oct 02 12:18:20 compute-0 ceph-mon[73668]: pgmap v1177: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:20 compute-0 podman[281819]: 2025-10-02 12:18:20.431495207 +0000 UTC m=+0.082315801 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:18:20 compute-0 podman[281820]: 2025-10-02 12:18:20.480004594 +0000 UTC m=+0.130266564 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:18:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:20.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 246 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:18:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:21.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:22 compute-0 ceph-mon[73668]: pgmap v1178: 305 pgs: 305 active+clean; 246 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:18:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:22.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Oct 02 12:18:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:23.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/967908571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:23 compute-0 nova_compute[256940]: 2025-10-02 12:18:23.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:24 compute-0 nova_compute[256940]: 2025-10-02 12:18:24.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:24 compute-0 ceph-mon[73668]: pgmap v1179: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Oct 02 12:18:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1013 KiB/s wr, 132 op/s
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.566 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.567 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.567 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.567 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.568 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.569 2 INFO nova.compute.manager [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Terminating instance
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.571 2 DEBUG nova.compute.manager [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:18:25 compute-0 kernel: tapd4581462-b5 (unregistering): left promiscuous mode
Oct 02 12:18:25 compute-0 NetworkManager[44981]: <info>  [1759407505.6212] device (tapd4581462-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:18:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:25.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:25 compute-0 ovn_controller[148123]: 2025-10-02T12:18:25Z|00131|binding|INFO|Releasing lport d4581462-b5fd-4312-b4b8-681d197b5c22 from this chassis (sb_readonly=0)
Oct 02 12:18:25 compute-0 ovn_controller[148123]: 2025-10-02T12:18:25Z|00132|binding|INFO|Setting lport d4581462-b5fd-4312-b4b8-681d197b5c22 down in Southbound
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 ovn_controller[148123]: 2025-10-02T12:18:25Z|00133|binding|INFO|Removing iface tapd4581462-b5 ovn-installed in OVS
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.667 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4a:e0 10.100.0.9'], port_security=['fa:16:3e:09:4a:e0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '27999ce2-bc13-4245-89b5-6f616ace394e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3dd7600-fc10-417d-9e5f-d6fa4632ea48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32c85910-7861-449c-a436-357586f391f8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d4581462-b5fd-4312-b4b8-681d197b5c22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.668 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d4581462-b5fd-4312-b4b8-681d197b5c22 in datapath 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab unbound from our chassis
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.671 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.687 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a52a38df-eb80-46da-8b8d-5c87e4603cf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 02 12:18:25 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000022.scope: Consumed 8.541s CPU time.
Oct 02 12:18:25 compute-0 systemd-machined[210927]: Machine qemu-17-instance-00000022 terminated.
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.722 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4a16db-dfef-4f6b-9059-37bb82c60181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.726 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c5802b-8144-4209-9aee-9ef03cbf885b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.768 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb7076b-edd0-48fe-8d04-d81429483032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.794 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0524c905-e39f-44a1-9fa7-46bede7a2e67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1166a0a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534895, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281881, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.818 2 INFO nova.virt.libvirt.driver [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Instance destroyed successfully.
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.818 2 DEBUG nova.objects.instance [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'resources' on Instance uuid 27999ce2-bc13-4245-89b5-6f616ace394e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.821 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[67630208-697d-41d9-9363-11a1988b0181]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1166a0a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534910, 'tstamp': 534910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281887, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1166a0a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534914, 'tstamp': 534914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281887, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.823 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1166a0a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.833 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1166a0a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.834 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.834 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1166a0a2-a0, col_values=(('external_ids', {'iface-id': '97dd6f15-0e35-4e74-9928-1fab371ff2fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:25.835 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.840 2 DEBUG nova.virt.libvirt.vif [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-596850479',display_name='tempest-VolumesAdminNegativeTest-server-596850479',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-596850479',id=34,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-p7ss6z0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAdminNegativeTest-486286654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:18Z,user_data=None,user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=27999ce2-bc13-4245-89b5-6f616ace394e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.841 2 DEBUG nova.network.os_vif_util [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "d4581462-b5fd-4312-b4b8-681d197b5c22", "address": "fa:16:3e:09:4a:e0", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4581462-b5", "ovs_interfaceid": "d4581462-b5fd-4312-b4b8-681d197b5c22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.842 2 DEBUG nova.network.os_vif_util [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.842 2 DEBUG os_vif [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4581462-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:25 compute-0 nova_compute[256940]: 2025-10-02 12:18:25.854 2 INFO os_vif [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4a:e0,bridge_name='br-int',has_traffic_filtering=True,id=d4581462-b5fd-4312-b4b8-681d197b5c22,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4581462-b5')
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.217 2 DEBUG nova.compute.manager [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-unplugged-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.219 2 DEBUG nova.compute.manager [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] No waiting events found dispatching network-vif-unplugged-d4581462-b5fd-4312-b4b8-681d197b5c22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.219 2 DEBUG nova.compute.manager [req-82d3b2a7-d732-4ab1-8f31-8c67f5b24316 req-2557ecd5-f580-4120-8ade-db230daa9232 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-unplugged-d4581462-b5fd-4312-b4b8-681d197b5c22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:26.456 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:26.457 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:26.458 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.620 2 INFO nova.virt.libvirt.driver [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Deleting instance files /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e_del
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.621 2 INFO nova.virt.libvirt.driver [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Deletion of /var/lib/nova/instances/27999ce2-bc13-4245-89b5-6f616ace394e_del complete
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.679 2 INFO nova.compute.manager [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Took 1.11 seconds to destroy the instance on the hypervisor.
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.680 2 DEBUG oslo.service.loopingcall [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.680 2 DEBUG nova.compute.manager [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:18:26 compute-0 nova_compute[256940]: 2025-10-02 12:18:26.680 2 DEBUG nova.network.neutron [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:26.747 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:26.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 133 op/s
Oct 02 12:18:26 compute-0 ceph-mon[73668]: pgmap v1180: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1013 KiB/s wr, 132 op/s
Oct 02 12:18:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:28 compute-0 ceph-mon[73668]: pgmap v1181: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 133 op/s
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.096 2 DEBUG nova.network.neutron [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.156 2 INFO nova.compute.manager [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Took 1.48 seconds to deallocate network for instance.
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.280 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.281 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.323 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.324 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.324 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.325 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.326 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] No waiting events found dispatching network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.326 2 WARNING nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received unexpected event network-vif-plugged-d4581462-b5fd-4312-b4b8-681d197b5c22 for instance with vm_state deleted and task_state None.
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.327 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Received event network-vif-deleted-d4581462-b5fd-4312-b4b8-681d197b5c22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.372 2 DEBUG oslo_concurrency.processutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:18:28
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'backups', 'volumes']
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:18:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Oct 02 12:18:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49774349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.897 2 DEBUG oslo_concurrency.processutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.905 2 DEBUG nova.compute.provider_tree [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.940 2 DEBUG nova.scheduler.client.report [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:28 compute-0 nova_compute[256940]: 2025-10-02 12:18:28.980 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:28 compute-0 sudo[281938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:28 compute-0 sudo[281938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:29 compute-0 sudo[281938]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:29 compute-0 nova_compute[256940]: 2025-10-02 12:18:29.049 2 INFO nova.scheduler.client.report [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Deleted allocations for instance 27999ce2-bc13-4245-89b5-6f616ace394e
Oct 02 12:18:29 compute-0 sudo[281963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:29 compute-0 sudo[281963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:29 compute-0 sudo[281963]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:29 compute-0 nova_compute[256940]: 2025-10-02 12:18:29.179 2 DEBUG oslo_concurrency.lockutils [None req-758c0071-7ca1-42c8-83b2-e0a2dfce8f35 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "27999ce2-bc13-4245-89b5-6f616ace394e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/49774349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:29 compute-0 nova_compute[256940]: 2025-10-02 12:18:29.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:29 compute-0 ovn_controller[148123]: 2025-10-02T12:18:29Z|00134|binding|INFO|Releasing lport 97dd6f15-0e35-4e74-9928-1fab371ff2fc from this chassis (sb_readonly=0)
Oct 02 12:18:29 compute-0 nova_compute[256940]: 2025-10-02 12:18:29.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:30 compute-0 ceph-mon[73668]: pgmap v1182: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Oct 02 12:18:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/885113874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:30.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:30 compute-0 nova_compute[256940]: 2025-10-02 12:18:30.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 246 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 02 12:18:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3202358119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:18:32 compute-0 ceph-mon[73668]: pgmap v1183: 305 pgs: 305 active+clean; 246 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 02 12:18:32 compute-0 nova_compute[256940]: 2025-10-02 12:18:32.936 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:32 compute-0 nova_compute[256940]: 2025-10-02 12:18:32.937 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:32 compute-0 nova_compute[256940]: 2025-10-02 12:18:32.991 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.101 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.101 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.109 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.110 2 INFO nova.compute.claims [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.237 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.587 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.587 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.588 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.588 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.588 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.590 2 INFO nova.compute.manager [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Terminating instance
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.591 2 DEBUG nova.compute.manager [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:18:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:33.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:33 compute-0 kernel: tap8ca9e621-31 (unregistering): left promiscuous mode
Oct 02 12:18:33 compute-0 NetworkManager[44981]: <info>  [1759407513.6961] device (tap8ca9e621-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:18:33 compute-0 ovn_controller[148123]: 2025-10-02T12:18:33Z|00135|binding|INFO|Releasing lport 8ca9e621-315f-431f-a0ec-ce3af08b1458 from this chassis (sb_readonly=0)
Oct 02 12:18:33 compute-0 ovn_controller[148123]: 2025-10-02T12:18:33Z|00136|binding|INFO|Setting lport 8ca9e621-315f-431f-a0ec-ce3af08b1458 down in Southbound
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 ovn_controller[148123]: 2025-10-02T12:18:33Z|00137|binding|INFO|Removing iface tap8ca9e621-31 ovn-installed in OVS
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:33.713 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:81:30 10.100.0.10'], port_security=['fa:16:3e:a1:81:30 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b91a8089-73c5-41c6-88aa-7681bf70fcad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96eda2c1552247d8b6632dd9e7d1f6df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4aab13b-6523-48c5-89b6-85da0c2187d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32c85910-7861-449c-a436-357586f391f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8ca9e621-315f-431f-a0ec-ce3af08b1458) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:33.715 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8ca9e621-315f-431f-a0ec-ce3af08b1458 in datapath 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab unbound from our chassis
Oct 02 12:18:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:33.717 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:33.719 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4d2a75-a057-4994-b7da-6ec7d69c1f49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:33.720 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab namespace which is not needed anymore
Oct 02 12:18:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564973710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.760 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:33 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Consumed 16.495s CPU time.
Oct 02 12:18:33 compute-0 systemd-machined[210927]: Machine qemu-16-instance-00000020 terminated.
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.770 2 DEBUG nova.compute.provider_tree [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.795 2 DEBUG nova.scheduler.client.report [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.821 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.823 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.831 2 INFO nova.virt.libvirt.driver [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Instance destroyed successfully.
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.832 2 DEBUG nova.objects.instance [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lazy-loading 'resources' on Instance uuid b91a8089-73c5-41c6-88aa-7681bf70fcad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.849 2 DEBUG nova.virt.libvirt.vif [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1839342492',display_name='tempest-VolumesAdminNegativeTest-server-1839342492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1839342492',id=32,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkcHJ7r58WDaGntSabDF80AZkiFaqk4NzjOxFh09aJiK9BnqTHRSLGUvMKPr1+T2iLEkowdQQSzTR7xaiz1uScY9nH1xdNHH+CcQ+kNDIaPJxNUfi2RZ30LusiITEEF8w==',key_name='tempest-keypair-424136697',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96eda2c1552247d8b6632dd9e7d1f6df',ramdisk_id='',reservation_id='r-4s6bo0k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-486286654',owner_user_name='tempest-VolumesAdminNegativeTest-486286654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3c53fbb5ed1e4cf380a90975be5dc249',uuid=b91a8089-73c5-41c6-88aa-7681bf70fcad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.850 2 DEBUG nova.network.os_vif_util [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converting VIF {"id": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "address": "fa:16:3e:a1:81:30", "network": {"id": "1166a0a2-aca4-4278-bd35-d73d4e5bb2ab", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-991424029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96eda2c1552247d8b6632dd9e7d1f6df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ca9e621-31", "ovs_interfaceid": "8ca9e621-315f-431f-a0ec-ce3af08b1458", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.851 2 DEBUG nova.network.os_vif_util [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.851 2 DEBUG os_vif [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca9e621-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.860 2 INFO os_vif [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:81:30,bridge_name='br-int',has_traffic_filtering=True,id=8ca9e621-315f-431f-a0ec-ce3af08b1458,network=Network(1166a0a2-aca4-4278-bd35-d73d4e5bb2ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ca9e621-31')
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.885 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.906 2 INFO nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.922 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.959 2 DEBUG nova.compute.manager [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-unplugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.960 2 DEBUG oslo_concurrency.lockutils [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.960 2 DEBUG oslo_concurrency.lockutils [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.960 2 DEBUG oslo_concurrency.lockutils [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.960 2 DEBUG nova.compute.manager [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] No waiting events found dispatching network-vif-unplugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:33 compute-0 nova_compute[256940]: 2025-10-02 12:18:33.961 2 DEBUG nova.compute.manager [req-9931ee13-2eee-4e2b-b9d8-80bae81408fe req-68d85a62-2b7b-4beb-9f4c-ff4a2a8d0f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-unplugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [NOTICE]   (280292) : haproxy version is 2.8.14-c23fe91
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [NOTICE]   (280292) : path to executable is /usr/sbin/haproxy
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [WARNING]  (280292) : Exiting Master process...
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [WARNING]  (280292) : Exiting Master process...
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [ALERT]    (280292) : Current worker (280294) exited with code 143 (Terminated)
Oct 02 12:18:34 compute-0 neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab[280288]: [WARNING]  (280292) : All workers exited. Exiting... (0)
Oct 02 12:18:34 compute-0 systemd[1]: libpod-36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd.scope: Deactivated successfully.
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.051 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.053 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.054 2 INFO nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Creating image(s)
Oct 02 12:18:34 compute-0 podman[282044]: 2025-10-02 12:18:34.063554549 +0000 UTC m=+0.208331812 container died 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:34 compute-0 ceph-mon[73668]: pgmap v1184: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:18:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/564973710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.204 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd-userdata-shm.mount: Deactivated successfully.
Oct 02 12:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-54aaf47816a71c121b06ee39a1fd236c2246d2334a0475ddc9190e7dbaebeddc-merged.mount: Deactivated successfully.
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.255 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.287 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.292 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.359 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.360 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.360 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.361 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:34 compute-0 podman[282044]: 2025-10-02 12:18:34.448070313 +0000 UTC m=+0.592847576 container cleanup 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.457 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:34 compute-0 systemd[1]: libpod-conmon-36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd.scope: Deactivated successfully.
Oct 02 12:18:34 compute-0 ovn_controller[148123]: 2025-10-02T12:18:34Z|00138|binding|INFO|Releasing lport 97dd6f15-0e35-4e74-9928-1fab371ff2fc from this chassis (sb_readonly=0)
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.472 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:34 compute-0 ovn_controller[148123]: 2025-10-02T12:18:34Z|00139|binding|INFO|Releasing lport 97dd6f15-0e35-4e74-9928-1fab371ff2fc from this chassis (sb_readonly=0)
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:34.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:34 compute-0 podman[282170]: 2025-10-02 12:18:34.817072532 +0000 UTC m=+0.339357946 container remove 36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.824 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[59f449c3-6490-408c-9928-8c35c4564b4a]: (4, ('Thu Oct  2 12:18:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab (36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd)\n36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd\nThu Oct  2 12:18:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab (36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd)\n36fc363f19b618d4937be01811e46f9297c96d8b94c1e766b19bb537d3339fbd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.827 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82269dbf-e143-4f3b-b6a6-4b1113e809a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.829 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1166a0a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:34 compute-0 kernel: tap1166a0a2-a0: left promiscuous mode
Oct 02 12:18:34 compute-0 nova_compute[256940]: 2025-10-02 12:18:34.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 225 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.852 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e12f9f0-2799-44b9-ada5-be79a565431d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaeadad-8161-4a01-a4c6-434904150fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.880 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[16ee48b8-1e20-4b73-bc83-5f11f8a3e1bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[53b11686-d2bd-4abd-8e6b-303fb14bdb92]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534886, 'reachable_time': 36446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282204, 'error': None, 'target': 'ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d1166a0a2\x2daca4\x2d4278\x2dbd35\x2dd73d4e5bb2ab.mount: Deactivated successfully.
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.909 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1166a0a2-aca4-4278-bd35-d73d4e5bb2ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:18:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:18:34.909 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0e93f36a-5dc6-4c8c-a60f-cf48d5ff1da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.110 2 DEBUG nova.compute.manager [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.110 2 DEBUG oslo_concurrency.lockutils [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.110 2 DEBUG oslo_concurrency.lockutils [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.111 2 DEBUG oslo_concurrency.lockutils [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.111 2 DEBUG nova.compute.manager [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] No waiting events found dispatching network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.111 2 WARNING nova.compute.manager [req-aab13fe9-ffed-4960-8e23-323c00287c3b req-d865cdbb-8f60-40d7-888e-41ba62669a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received unexpected event network-vif-plugged-8ca9e621-315f-431f-a0ec-ce3af08b1458 for instance with vm_state active and task_state deleting.
Oct 02 12:18:36 compute-0 podman[282206]: 2025-10-02 12:18:36.40354894 +0000 UTC m=+0.068163481 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:18:36 compute-0 podman[282205]: 2025-10-02 12:18:36.411031556 +0000 UTC m=+0.075646827 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.419 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:36 compute-0 nova_compute[256940]: 2025-10-02 12:18:36.514 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] resizing rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:18:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Oct 02 12:18:36 compute-0 ceph-mon[73668]: pgmap v1185: 305 pgs: 305 active+clean; 225 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:18:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:37.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.765 2 DEBUG nova.objects.instance [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lazy-loading 'migration_context' on Instance uuid e6d922cc-a5bb-44a7-93c3-a3704e7ada62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.787 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.787 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Ensure instance console log exists: /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.788 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.788 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.788 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.790 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.797 2 WARNING nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.803 2 DEBUG nova.virt.libvirt.host [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.804 2 DEBUG nova.virt.libvirt.host [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.813 2 DEBUG nova.virt.libvirt.host [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.814 2 DEBUG nova.virt.libvirt.host [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.815 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.815 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.816 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.816 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.816 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.816 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.816 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.817 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.817 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.817 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.817 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.817 2 DEBUG nova.virt.hardware [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.820 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.901 2 INFO nova.virt.libvirt.driver [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Deleting instance files /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad_del
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.903 2 INFO nova.virt.libvirt.driver [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Deletion of /var/lib/nova/instances/b91a8089-73c5-41c6-88aa-7681bf70fcad_del complete
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.983 2 INFO nova.compute.manager [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Took 4.39 seconds to destroy the instance on the hypervisor.
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.984 2 DEBUG oslo.service.loopingcall [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.984 2 DEBUG nova.compute.manager [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:18:37 compute-0 nova_compute[256940]: 2025-10-02 12:18:37.985 2 DEBUG nova.network.neutron [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:18:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388213123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:38 compute-0 ceph-mon[73668]: pgmap v1186: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.268 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.328 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.333 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319245846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.792 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.794 2 DEBUG nova.objects.instance [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6d922cc-a5bb-44a7-93c3-a3704e7ada62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.822 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <uuid>e6d922cc-a5bb-44a7-93c3-a3704e7ada62</uuid>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <name>instance-00000024</name>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1772471893</nova:name>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:18:37</nova:creationTime>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:user uuid="c39d93056d3e4b0b8c409a66d7308ffc">tempest-ServerDiagnosticsV248Test-742916682-project-member</nova:user>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <nova:project uuid="eb2c944f35b94efcaf91b43a8272cc75">tempest-ServerDiagnosticsV248Test-742916682</nova:project>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <system>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="serial">e6d922cc-a5bb-44a7-93c3-a3704e7ada62</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="uuid">e6d922cc-a5bb-44a7-93c3-a3704e7ada62</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </system>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <os>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </os>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <features>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </features>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk">
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config">
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:18:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/console.log" append="off"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <video>
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </video>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:18:38 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:18:38 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:18:38 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:18:38 compute-0 nova_compute[256940]: </domain>
Oct 02 12:18:38 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:18:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Oct 02 12:18:38 compute-0 nova_compute[256940]: 2025-10-02 12:18:38.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.207 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.208 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.209 2 INFO nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Using config drive
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.235 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/388213123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4179098919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/319245846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.528 2 INFO nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Creating config drive at /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.535 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm6jds0co execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.669 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm6jds0co" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.705 2 DEBUG nova.storage.rbd_utils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] rbd image e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.710 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.742 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.744 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.744 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.744 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.766 2 DEBUG nova.network.neutron [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.770 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.770 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.770 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.797 2 INFO nova.compute.manager [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Took 1.81 seconds to deallocate network for instance.
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.896 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.897 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.918 2 DEBUG oslo_concurrency.processutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config e6d922cc-a5bb-44a7-93c3-a3704e7ada62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.919 2 INFO nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Deleting local config drive /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62/disk.config because it was imported into RBD.
Oct 02 12:18:39 compute-0 nova_compute[256940]: 2025-10-02 12:18:39.969 2 DEBUG oslo_concurrency.processutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-0 systemd-machined[210927]: New machine qemu-18-instance-00000024.
Oct 02 12:18:40 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000024.
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0038488158384515168 of space, bias 1.0, pg target 1.154644751535455 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.083 2 DEBUG nova.compute.manager [req-666c52b2-80cf-45f1-a27f-f7080b9e5cda req-ddf1d914-7aeb-4fbe-8fea-aa78b544e58d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Received event network-vif-deleted-8ca9e621-315f-431f-a0ec-ce3af08b1458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:18:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073904783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.453 2 DEBUG oslo_concurrency.processutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.459 2 DEBUG nova.compute.provider_tree [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.519 2 DEBUG nova.scheduler.client.report [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.584 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.631 2 INFO nova.scheduler.client.report [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Deleted allocations for instance b91a8089-73c5-41c6-88aa-7681bf70fcad
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.815 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407505.8142583, 27999ce2-bc13-4245-89b5-6f616ace394e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.816 2 INFO nova.compute.manager [-] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] VM Stopped (Lifecycle Event)
Oct 02 12:18:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:40.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:40 compute-0 ceph-mon[73668]: pgmap v1187: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Oct 02 12:18:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/951820212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2073904783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Oct 02 12:18:40 compute-0 nova_compute[256940]: 2025-10-02 12:18:40.944 2 DEBUG nova.compute.manager [None req-9b503514-3a7e-4331-a776-57e684023887 - - - - - -] [instance: 27999ce2-bc13-4245-89b5-6f616ace394e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.013 2 DEBUG oslo_concurrency.lockutils [None req-c11099ee-9b80-46a1-82da-c13605c13c31 3c53fbb5ed1e4cf380a90975be5dc249 96eda2c1552247d8b6632dd9e7d1f6df - - default default] Lock "b91a8089-73c5-41c6-88aa-7681bf70fcad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.283 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.283 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.284 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.284 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.285 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.482 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.484 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.486 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407521.4861808, e6d922cc-a5bb-44a7-93c3-a3704e7ada62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.486 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] VM Resumed (Lifecycle Event)
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.489 2 INFO nova.virt.libvirt.driver [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Instance spawned successfully.
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.490 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.549 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.553 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.606 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.606 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.607 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.607 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.608 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.608 2 DEBUG nova.virt.libvirt.driver [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.645 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.646 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407521.4866753, e6d922cc-a5bb-44a7-93c3-a3704e7ada62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.646 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] VM Started (Lifecycle Event)
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.718 2 INFO nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Took 7.67 seconds to spawn the instance on the hypervisor.
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.719 2 DEBUG nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.732 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65087621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.742 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.762 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.815 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.931 2 INFO nova.compute.manager [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Took 8.85 seconds to build instance.
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.966 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:41 compute-0 nova_compute[256940]: 2025-10-02 12:18:41.966 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.051 2 DEBUG oslo_concurrency.lockutils [None req-15bbdf53-2dd3-4b5c-ad4c-da3981f47a4e c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1177830465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:42 compute-0 ceph-mon[73668]: pgmap v1188: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Oct 02 12:18:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/65087621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3657979213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.207 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.209 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4638MB free_disk=20.907501220703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.209 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.210 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.327 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e6d922cc-a5bb-44a7-93c3-a3704e7ada62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.330 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.330 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.369 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.513 2 DEBUG nova.compute.manager [None req-45139436-71c6-47ef-87f1-7f797a0ebc3f 9de4e861a27d499d80538643fa01d948 968f76078e4046fbae8f7230dd0b8147 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:42 compute-0 nova_compute[256940]: 2025-10-02 12:18:42.518 2 INFO nova.compute.manager [None req-45139436-71c6-47ef-87f1-7f797a0ebc3f 9de4e861a27d499d80538643fa01d948 968f76078e4046fbae8f7230dd0b8147 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Retrieving diagnostics
Oct 02 12:18:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:42.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:18:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963092672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.032 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.037 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.057 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.090 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.092 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:43.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:43 compute-0 nova_compute[256940]: 2025-10-02 12:18:43.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:44 compute-0 nova_compute[256940]: 2025-10-02 12:18:44.090 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:44 compute-0 ceph-mon[73668]: pgmap v1189: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:18:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2963092672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:44 compute-0 nova_compute[256940]: 2025-10-02 12:18:44.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:44.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 214 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:18:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:46 compute-0 ceph-mon[73668]: pgmap v1190: 305 pgs: 305 active+clean; 214 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:18:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:46.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:18:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:47.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/704874661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4246629214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:48 compute-0 nova_compute[256940]: 2025-10-02 12:18:48.830 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407513.8280075, b91a8089-73c5-41c6-88aa-7681bf70fcad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:48 compute-0 nova_compute[256940]: 2025-10-02 12:18:48.830 2 INFO nova.compute.manager [-] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] VM Stopped (Lifecycle Event)
Oct 02 12:18:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Oct 02 12:18:48 compute-0 nova_compute[256940]: 2025-10-02 12:18:48.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:48 compute-0 nova_compute[256940]: 2025-10-02 12:18:48.906 2 DEBUG nova.compute.manager [None req-2c8f4ff5-087a-4dd1-85ec-f2adfb5bf157 - - - - - -] [instance: b91a8089-73c5-41c6-88aa-7681bf70fcad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-0 sudo[282571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:49 compute-0 sudo[282571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:49 compute-0 sudo[282571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:49 compute-0 sudo[282596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:49 compute-0 sudo[282596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:49 compute-0 sudo[282596]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:49 compute-0 ceph-mon[73668]: pgmap v1191: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:18:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:49 compute-0 nova_compute[256940]: 2025-10-02 12:18:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:49.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:50 compute-0 ceph-mon[73668]: pgmap v1192: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Oct 02 12:18:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 133 op/s
Oct 02 12:18:51 compute-0 podman[282622]: 2025-10-02 12:18:51.401552704 +0000 UTC m=+0.062093009 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:18:51 compute-0 podman[282623]: 2025-10-02 12:18:51.448141548 +0000 UTC m=+0.100472279 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:51.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:52 compute-0 sshd-session[282668]: banner exchange: Connection from 52.248.40.89 port 39378: invalid format
Oct 02 12:18:52 compute-0 ceph-mon[73668]: pgmap v1193: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 133 op/s
Oct 02 12:18:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:52.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:52 compute-0 nova_compute[256940]: 2025-10-02 12:18:52.855 2 DEBUG nova.compute.manager [None req-c14871fa-8f20-4cf6-abb0-c8bfa8fe889a 9de4e861a27d499d80538643fa01d948 968f76078e4046fbae8f7230dd0b8147 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:52 compute-0 nova_compute[256940]: 2025-10-02 12:18:52.858 2 INFO nova.compute.manager [None req-c14871fa-8f20-4cf6-abb0-c8bfa8fe889a 9de4e861a27d499d80538643fa01d948 968f76078e4046fbae8f7230dd0b8147 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Retrieving diagnostics
Oct 02 12:18:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.543 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.544 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.544 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.545 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.545 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.547 2 INFO nova.compute.manager [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Terminating instance
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.548 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "refresh_cache-e6d922cc-a5bb-44a7-93c3-a3704e7ada62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.548 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquired lock "refresh_cache-e6d922cc-a5bb-44a7-93c3-a3704e7ada62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.549 2 DEBUG nova.network.neutron [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:18:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:53.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:53 compute-0 nova_compute[256940]: 2025-10-02 12:18:53.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:54 compute-0 nova_compute[256940]: 2025-10-02 12:18:54.109 2 DEBUG nova.network.neutron [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:18:54 compute-0 ceph-mon[73668]: pgmap v1194: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Oct 02 12:18:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:54 compute-0 nova_compute[256940]: 2025-10-02 12:18:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:54 compute-0 nova_compute[256940]: 2025-10-02 12:18:54.585 2 DEBUG nova.network.neutron [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:54 compute-0 nova_compute[256940]: 2025-10-02 12:18:54.601 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Releasing lock "refresh_cache-e6d922cc-a5bb-44a7-93c3-a3704e7ada62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:54 compute-0 nova_compute[256940]: 2025-10-02 12:18:54.602 2 DEBUG nova.compute.manager [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:18:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 242 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Oct 02 12:18:54 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 02 12:18:54 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Consumed 12.530s CPU time.
Oct 02 12:18:54 compute-0 systemd-machined[210927]: Machine qemu-18-instance-00000024 terminated.
Oct 02 12:18:55 compute-0 nova_compute[256940]: 2025-10-02 12:18:55.028 2 INFO nova.virt.libvirt.driver [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Instance destroyed successfully.
Oct 02 12:18:55 compute-0 nova_compute[256940]: 2025-10-02 12:18:55.028 2 DEBUG nova.objects.instance [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lazy-loading 'resources' on Instance uuid e6d922cc-a5bb-44a7-93c3-a3704e7ada62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:55 compute-0 sudo[282692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:55 compute-0 sudo[282692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-0 sudo[282692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-0 sudo[282717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:55 compute-0 sudo[282717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-0 sudo[282717]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-0 sudo[282742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:55 compute-0 sudo[282742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-0 sudo[282742]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-0 sudo[282767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:18:55 compute-0 sudo[282767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:56 compute-0 sudo[282767]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:56 compute-0 ceph-mon[73668]: pgmap v1195: 305 pgs: 305 active+clean; 242 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Oct 02 12:18:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:56.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 157 op/s
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:18:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev aecead52-177a-4578-b10d-df9edd3baa3c does not exist
Oct 02 12:18:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a48fe4a2-18a7-41df-b667-aa1313fcc0fa does not exist
Oct 02 12:18:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 508836af-84d0-46df-94b6-c51f2bf473f1 does not exist
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:18:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:18:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:57.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:57 compute-0 sudo[282826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:57 compute-0 sudo[282826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:57 compute-0 sudo[282826]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:57 compute-0 sudo[282851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:57 compute-0 sudo[282851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:57 compute-0 sudo[282851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:57 compute-0 sudo[282876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:57 compute-0 sudo[282876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:57 compute-0 sudo[282876]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:57 compute-0 sudo[282901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:18:57 compute-0 sudo[282901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:58 compute-0 ceph-mon[73668]: pgmap v1196: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 157 op/s
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:18:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.444023602 +0000 UTC m=+0.107406170 container create 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.36912585 +0000 UTC m=+0.032508438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:18:58 compute-0 systemd[1]: Started libpod-conmon-6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7.scope.
Oct 02 12:18:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.63350366 +0000 UTC m=+0.296886258 container init 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.643015528 +0000 UTC m=+0.306398106 container start 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:18:58 compute-0 recursing_buck[282985]: 167 167
Oct 02 12:18:58 compute-0 systemd[1]: libpod-6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7.scope: Deactivated successfully.
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.699842649 +0000 UTC m=+0.363225217 container attach 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:18:58 compute-0 podman[282967]: 2025-10-02 12:18:58.700416894 +0000 UTC m=+0.363799462 container died 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffcc2b8e21a5900fbed818238973477988843b5bad3c8d57f4af338ae4b4b0f7-merged.mount: Deactivated successfully.
Oct 02 12:18:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 100 op/s
Oct 02 12:18:58 compute-0 nova_compute[256940]: 2025-10-02 12:18:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:59 compute-0 nova_compute[256940]: 2025-10-02 12:18:59.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:18:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:18:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:18:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:59 compute-0 podman[282967]: 2025-10-02 12:18:59.966582743 +0000 UTC m=+1.629965311 container remove 6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:18:59 compute-0 systemd[1]: libpod-conmon-6fad6eb7de4cd906988b8ae27da7d4a0e97ea5db1f80541ff7f927b2252a99c7.scope: Deactivated successfully.
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.249 2 INFO nova.virt.libvirt.driver [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Deleting instance files /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62_del
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.252 2 INFO nova.virt.libvirt.driver [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Deletion of /var/lib/nova/instances/e6d922cc-a5bb-44a7-93c3-a3704e7ada62_del complete
Oct 02 12:19:00 compute-0 podman[283008]: 2025-10-02 12:19:00.18672096 +0000 UTC m=+0.030081855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.300 2 INFO nova.compute.manager [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Took 5.70 seconds to destroy the instance on the hypervisor.
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.302 2 DEBUG oslo.service.loopingcall [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.302 2 DEBUG nova.compute.manager [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.303 2 DEBUG nova.network.neutron [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:19:00 compute-0 ceph-mon[73668]: pgmap v1197: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 100 op/s
Oct 02 12:19:00 compute-0 podman[283008]: 2025-10-02 12:19:00.396388905 +0000 UTC m=+0.239749750 container create be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:19:00 compute-0 systemd[1]: Started libpod-conmon-be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134.scope.
Oct 02 12:19:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.647 2 DEBUG nova.network.neutron [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.662 2 DEBUG nova.network.neutron [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:00 compute-0 podman[283008]: 2025-10-02 12:19:00.678441365 +0000 UTC m=+0.521802190 container init be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:19:00 compute-0 podman[283008]: 2025-10-02 12:19:00.686912806 +0000 UTC m=+0.530273611 container start be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.695 2 INFO nova.compute.manager [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Took 0.39 seconds to deallocate network for instance.
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.774 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.776 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:00 compute-0 podman[283008]: 2025-10-02 12:19:00.822035358 +0000 UTC m=+0.665396183 container attach be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:19:00 compute-0 nova_compute[256940]: 2025-10-02 12:19:00.852 2 DEBUG oslo_concurrency.processutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 196 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 120 op/s
Oct 02 12:19:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:00.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790133365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.312 2 DEBUG oslo_concurrency.processutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.324 2 DEBUG nova.compute.provider_tree [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.344 2 DEBUG nova.scheduler.client.report [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.367 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.402 2 INFO nova.scheduler.client.report [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Deleted allocations for instance e6d922cc-a5bb-44a7-93c3-a3704e7ada62
Oct 02 12:19:01 compute-0 nova_compute[256940]: 2025-10-02 12:19:01.474 2 DEBUG oslo_concurrency.lockutils [None req-5db4309e-453f-4cff-b1a0-ee7b295ed5fe c39d93056d3e4b0b8c409a66d7308ffc eb2c944f35b94efcaf91b43a8272cc75 - - default default] Lock "e6d922cc-a5bb-44a7-93c3-a3704e7ada62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:01 compute-0 sharp_wu[283025]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:19:01 compute-0 sharp_wu[283025]: --> relative data size: 1.0
Oct 02 12:19:01 compute-0 sharp_wu[283025]: --> All data devices are unavailable
Oct 02 12:19:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2790133365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:01 compute-0 systemd[1]: libpod-be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134.scope: Deactivated successfully.
Oct 02 12:19:01 compute-0 podman[283008]: 2025-10-02 12:19:01.680142531 +0000 UTC m=+1.523503376 container died be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:19:01 compute-0 sshd-session[282666]: Connection closed by 52.248.40.89 port 39370 [preauth]
Oct 02 12:19:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2cd42793e8aa30db8073efa86e6535807b08ea6b23b4e3dbb0cc2a61b7e442-merged.mount: Deactivated successfully.
Oct 02 12:19:02 compute-0 podman[283008]: 2025-10-02 12:19:02.09257785 +0000 UTC m=+1.935938655 container remove be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:02 compute-0 systemd[1]: libpod-conmon-be358d3f6f5892f7785efcd5d5847c6125d0a392c7d6ffca85d44d2932ba8134.scope: Deactivated successfully.
Oct 02 12:19:02 compute-0 sudo[282901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:02 compute-0 sudo[283074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:02 compute-0 sudo[283074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:02 compute-0 sudo[283074]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:02 compute-0 sudo[283099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:02 compute-0 sudo[283099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:02 compute-0 sudo[283099]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:02 compute-0 sudo[283124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:02 compute-0 sudo[283124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:02 compute-0 sudo[283124]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:02 compute-0 sudo[283150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:19:02 compute-0 sudo[283150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:02 compute-0 ceph-mon[73668]: pgmap v1198: 305 pgs: 305 active+clean; 196 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 120 op/s
Oct 02 12:19:02 compute-0 podman[283216]: 2025-10-02 12:19:02.864395675 +0000 UTC m=+0.065648462 container create 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:19:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 165 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:19:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:02 compute-0 podman[283216]: 2025-10-02 12:19:02.83237356 +0000 UTC m=+0.033626387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:02 compute-0 systemd[1]: Started libpod-conmon-2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe.scope.
Oct 02 12:19:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:03 compute-0 podman[283216]: 2025-10-02 12:19:03.030384151 +0000 UTC m=+0.231636938 container init 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:19:03 compute-0 podman[283216]: 2025-10-02 12:19:03.04760409 +0000 UTC m=+0.248856837 container start 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:19:03 compute-0 hopeful_brattain[283233]: 167 167
Oct 02 12:19:03 compute-0 systemd[1]: libpod-2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe.scope: Deactivated successfully.
Oct 02 12:19:03 compute-0 podman[283216]: 2025-10-02 12:19:03.149828884 +0000 UTC m=+0.351081681 container attach 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:19:03 compute-0 podman[283216]: 2025-10-02 12:19:03.150643875 +0000 UTC m=+0.351896622 container died 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1172135f629666df406979eb023bd071f915c8e8097b21135a1967bc8e20d34b-merged.mount: Deactivated successfully.
Oct 02 12:19:03 compute-0 podman[283216]: 2025-10-02 12:19:03.634934397 +0000 UTC m=+0.836187154 container remove 2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:19:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:03 compute-0 systemd[1]: libpod-conmon-2557a80e82118cb874f68e66fbfa085fecc3d46f4b6980936c9c8e7539790ebe.scope: Deactivated successfully.
Oct 02 12:19:03 compute-0 podman[283258]: 2025-10-02 12:19:03.816431597 +0000 UTC m=+0.032001945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:03 compute-0 nova_compute[256940]: 2025-10-02 12:19:03.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:03 compute-0 podman[283258]: 2025-10-02 12:19:03.981656483 +0000 UTC m=+0.197226831 container create efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:19:04 compute-0 ceph-mon[73668]: pgmap v1199: 305 pgs: 305 active+clean; 165 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:19:04 compute-0 systemd[1]: Started libpod-conmon-efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526.scope.
Oct 02 12:19:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7f6a245e7a714944da2cf45633188edffd791fe7cef456a4c0b2069ec0e143/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7f6a245e7a714944da2cf45633188edffd791fe7cef456a4c0b2069ec0e143/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7f6a245e7a714944da2cf45633188edffd791fe7cef456a4c0b2069ec0e143/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7f6a245e7a714944da2cf45633188edffd791fe7cef456a4c0b2069ec0e143/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:04 compute-0 podman[283258]: 2025-10-02 12:19:04.249710199 +0000 UTC m=+0.465280547 container init efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:19:04 compute-0 podman[283258]: 2025-10-02 12:19:04.261339852 +0000 UTC m=+0.476910220 container start efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:19:04 compute-0 podman[283258]: 2025-10-02 12:19:04.307424294 +0000 UTC m=+0.522994712 container attach efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:19:04 compute-0 nova_compute[256940]: 2025-10-02 12:19:04.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 140 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Oct 02 12:19:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:04.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]: {
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:     "1": [
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:         {
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "devices": [
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "/dev/loop3"
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             ],
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "lv_name": "ceph_lv0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "lv_size": "7511998464",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "name": "ceph_lv0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "tags": {
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.cluster_name": "ceph",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.crush_device_class": "",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.encrypted": "0",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.osd_id": "1",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.type": "block",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:                 "ceph.vdo": "0"
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             },
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "type": "block",
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:             "vg_name": "ceph_vg0"
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:         }
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]:     ]
Oct 02 12:19:05 compute-0 festive_hofstadter[283274]: }
Oct 02 12:19:05 compute-0 systemd[1]: libpod-efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526.scope: Deactivated successfully.
Oct 02 12:19:05 compute-0 podman[283258]: 2025-10-02 12:19:05.154475358 +0000 UTC m=+1.370045686 container died efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/439886864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:05.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:05 compute-0 nova_compute[256940]: 2025-10-02 12:19:05.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:05.718 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:05.721 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c7f6a245e7a714944da2cf45633188edffd791fe7cef456a4c0b2069ec0e143-merged.mount: Deactivated successfully.
Oct 02 12:19:06 compute-0 podman[283258]: 2025-10-02 12:19:06.102621581 +0000 UTC m=+2.318191919 container remove efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:19:06 compute-0 systemd[1]: libpod-conmon-efb56b6e361cf76688147a5919b9152ee4295e2b5b8357ca200dfb2394c38526.scope: Deactivated successfully.
Oct 02 12:19:06 compute-0 sudo[283150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:06 compute-0 sudo[283297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:06 compute-0 sudo[283297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:06 compute-0 sudo[283297]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:06 compute-0 sudo[283322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:19:06 compute-0 sudo[283322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:06 compute-0 sudo[283322]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:06 compute-0 sudo[283347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:06 compute-0 sudo[283347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:06 compute-0 sudo[283347]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:06 compute-0 sudo[283372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:19:06 compute-0 sudo[283372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:06 compute-0 ceph-mon[73668]: pgmap v1200: 305 pgs: 305 active+clean; 140 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Oct 02 12:19:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:06 compute-0 podman[283398]: 2025-10-02 12:19:06.572602828 +0000 UTC m=+0.086775892 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:19:06 compute-0 podman[283397]: 2025-10-02 12:19:06.577200438 +0000 UTC m=+0.101527166 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:19:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 400 KiB/s wr, 71 op/s
Oct 02 12:19:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:06.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:06 compute-0 podman[283478]: 2025-10-02 12:19:06.848133509 +0000 UTC m=+0.041347828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:06 compute-0 podman[283478]: 2025-10-02 12:19:06.997736849 +0000 UTC m=+0.190951148 container create 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:19:07 compute-0 systemd[1]: Started libpod-conmon-44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2.scope.
Oct 02 12:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:07 compute-0 podman[283478]: 2025-10-02 12:19:07.434491881 +0000 UTC m=+0.627706210 container init 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:19:07 compute-0 podman[283478]: 2025-10-02 12:19:07.449328328 +0000 UTC m=+0.642542647 container start 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:19:07 compute-0 fervent_spence[283494]: 167 167
Oct 02 12:19:07 compute-0 systemd[1]: libpod-44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2.scope: Deactivated successfully.
Oct 02 12:19:07 compute-0 podman[283478]: 2025-10-02 12:19:07.555094374 +0000 UTC m=+0.748308773 container attach 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:19:07 compute-0 podman[283478]: 2025-10-02 12:19:07.556590193 +0000 UTC m=+0.749804522 container died 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:19:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b61c6097f8398bdf0099c094065df00eb3a169d05b9eed4c88a2f47a4230ec-merged.mount: Deactivated successfully.
Oct 02 12:19:08 compute-0 podman[283478]: 2025-10-02 12:19:08.324626439 +0000 UTC m=+1.517840738 container remove 44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:19:08 compute-0 systemd[1]: libpod-conmon-44fd7a38d307af467258bbb622711dc68aa07c5f87c5caa66e9ff093e5fff1b2.scope: Deactivated successfully.
Oct 02 12:19:08 compute-0 podman[283520]: 2025-10-02 12:19:08.517354431 +0000 UTC m=+0.027906198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:19:08 compute-0 podman[283520]: 2025-10-02 12:19:08.635694726 +0000 UTC m=+0.146246483 container create 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:19:08 compute-0 systemd[1]: Started libpod-conmon-3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7.scope.
Oct 02 12:19:08 compute-0 ceph-mon[73668]: pgmap v1201: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 400 KiB/s wr, 71 op/s
Oct 02 12:19:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e6a54a4d57cd88e1c9decb800ca1abfea1f7e19d76b4c3641c58467e5ef63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e6a54a4d57cd88e1c9decb800ca1abfea1f7e19d76b4c3641c58467e5ef63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e6a54a4d57cd88e1c9decb800ca1abfea1f7e19d76b4c3641c58467e5ef63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e6a54a4d57cd88e1c9decb800ca1abfea1f7e19d76b4c3641c58467e5ef63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:19:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:08 compute-0 podman[283520]: 2025-10-02 12:19:08.904800049 +0000 UTC m=+0.415351896 container init 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:19:08 compute-0 podman[283520]: 2025-10-02 12:19:08.921407952 +0000 UTC m=+0.431959709 container start 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:19:08 compute-0 nova_compute[256940]: 2025-10-02 12:19:08.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:09 compute-0 podman[283520]: 2025-10-02 12:19:09.085613681 +0000 UTC m=+0.596165448 container attach 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:19:09 compute-0 sudo[283541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:09 compute-0 sudo[283541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:09 compute-0 sudo[283541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:09 compute-0 sudo[283566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:09 compute-0 sudo[283566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:09 compute-0 sudo[283566]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:09 compute-0 nova_compute[256940]: 2025-10-02 12:19:09.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:09.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:09 compute-0 goofy_joliot[283536]: {
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:         "osd_id": 1,
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:         "type": "bluestore"
Oct 02 12:19:09 compute-0 goofy_joliot[283536]:     }
Oct 02 12:19:09 compute-0 goofy_joliot[283536]: }
Oct 02 12:19:09 compute-0 systemd[1]: libpod-3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7.scope: Deactivated successfully.
Oct 02 12:19:09 compute-0 podman[283520]: 2025-10-02 12:19:09.859924351 +0000 UTC m=+1.370476098 container died 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:19:10 compute-0 nova_compute[256940]: 2025-10-02 12:19:10.026 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407535.0243127, e6d922cc-a5bb-44a7-93c3-a3704e7ada62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:10 compute-0 nova_compute[256940]: 2025-10-02 12:19:10.026 2 INFO nova.compute.manager [-] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] VM Stopped (Lifecycle Event)
Oct 02 12:19:10 compute-0 nova_compute[256940]: 2025-10-02 12:19:10.057 2 DEBUG nova.compute.manager [None req-0040cbec-77c3-4d99-96f5-e954f13e9bd2 - - - - - -] [instance: e6d922cc-a5bb-44a7-93c3-a3704e7ada62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:10 compute-0 ceph-mon[73668]: pgmap v1202: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-190e6a54a4d57cd88e1c9decb800ca1abfea1f7e19d76b4c3641c58467e5ef63-merged.mount: Deactivated successfully.
Oct 02 12:19:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:11.723 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:11 compute-0 podman[283520]: 2025-10-02 12:19:11.879555246 +0000 UTC m=+3.390107003 container remove 3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:19:11 compute-0 sudo[283372]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:11 compute-0 systemd[1]: libpod-conmon-3c3a4f72a637b184e1bc27f8d39c12cecde776fb0f969e18b695a81449357db7.scope: Deactivated successfully.
Oct 02 12:19:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:19:12 compute-0 ceph-mon[73668]: pgmap v1203: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:19:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev eb8812f8-001e-4151-b95a-9aaf57b5c433 does not exist
Oct 02 12:19:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b6938a49-da92-4e6e-ab3a-a4233839777a does not exist
Oct 02 12:19:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 47ca5a29-8655-45aa-9ae4-37fee9a89237 does not exist
Oct 02 12:19:12 compute-0 sudo[283621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:12 compute-0 sudo[283621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:12 compute-0 sudo[283621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:12 compute-0 sudo[283646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:19:12 compute-0 sudo[283646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:12 compute-0 sudo[283646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:12 compute-0 nova_compute[256940]: 2025-10-02 12:19:12.816 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:12 compute-0 nova_compute[256940]: 2025-10-02 12:19:12.817 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 02 12:19:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:19:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.006 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.319 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.320 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.329 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.330 2 INFO nova.compute.claims [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:19:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.645 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:13.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:13 compute-0 nova_compute[256940]: 2025-10-02 12:19:13.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200331205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.303 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.310 2 DEBUG nova.compute.provider_tree [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.430 2 DEBUG nova.scheduler.client.report [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.688 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.688 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:14 compute-0 ceph-mon[73668]: pgmap v1204: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 02 12:19:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/200331205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 02 12:19:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:14.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.888 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:19:14 compute-0 nova_compute[256940]: 2025-10-02 12:19:14.889 2 DEBUG nova.network.neutron [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:19:15 compute-0 nova_compute[256940]: 2025-10-02 12:19:15.087 2 INFO nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:15 compute-0 nova_compute[256940]: 2025-10-02 12:19:15.289 2 DEBUG nova.network.neutron [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:19:15 compute-0 nova_compute[256940]: 2025-10-02 12:19:15.290 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:19:15 compute-0 nova_compute[256940]: 2025-10-02 12:19:15.390 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:15.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:15 compute-0 ceph-mon[73668]: pgmap v1205: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.431 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.433 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.434 2 INFO nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Creating image(s)
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.476 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.522 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.557 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.562 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.641 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.642 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.643 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.644 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.677 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:16 compute-0 nova_compute[256940]: 2025-10-02 12:19:16.683 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 852 B/s wr, 11 op/s
Oct 02 12:19:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:16.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.596 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.913s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:17.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.686 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] resizing rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.800 2 DEBUG nova.objects.instance [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lazy-loading 'migration_context' on Instance uuid 6dde9654-bcd6-42d0-96c5-bee346944cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.894 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.895 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Ensure instance console log exists: /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.896 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.896 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.896 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.898 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.903 2 WARNING nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.912 2 DEBUG nova.virt.libvirt.host [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.913 2 DEBUG nova.virt.libvirt.host [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.916 2 DEBUG nova.virt.libvirt.host [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.916 2 DEBUG nova.virt.libvirt.host [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.917 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.918 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.918 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.918 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.919 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.919 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.919 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.919 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.919 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.920 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.920 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.920 2 DEBUG nova.virt.hardware [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:19:17 compute-0 nova_compute[256940]: 2025-10-02 12:19:17.923 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:18 compute-0 ceph-mon[73668]: pgmap v1206: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 852 B/s wr, 11 op/s
Oct 02 12:19:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2889312449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402607599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.482 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.518 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.524 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 255 B/s wr, 7 op/s
Oct 02 12:19:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:18.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066289079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.987 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:18 compute-0 nova_compute[256940]: 2025-10-02 12:19:18.990 2 DEBUG nova.objects.instance [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6dde9654-bcd6-42d0-96c5-bee346944cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.015 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <uuid>6dde9654-bcd6-42d0-96c5-bee346944cc5</uuid>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <name>instance-00000025</name>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1779611683</nova:name>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:19:17</nova:creationTime>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:user uuid="6c116d69c3ca47b68fd27e8ebc3e8eda">tempest-ServerDiagnosticsNegativeTest-1399619561-project-member</nova:user>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <nova:project uuid="a6e9fb1b7d324521a431c2fd18366d1d">tempest-ServerDiagnosticsNegativeTest-1399619561</nova:project>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <system>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="serial">6dde9654-bcd6-42d0-96c5-bee346944cc5</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="uuid">6dde9654-bcd6-42d0-96c5-bee346944cc5</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </system>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <os>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </os>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <features>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </features>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6dde9654-bcd6-42d0-96c5-bee346944cc5_disk">
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config">
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:19:19 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/console.log" append="off"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <video>
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </video>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:19:19 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:19:19 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:19:19 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:19:19 compute-0 nova_compute[256940]: </domain>
Oct 02 12:19:19 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.287 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.288 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.289 2 INFO nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Using config drive
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.320 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3402607599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3066289079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.749 2 INFO nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Creating config drive at /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.760 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeiu5ce8g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.912 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeiu5ce8g" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.959 2 DEBUG nova.storage.rbd_utils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] rbd image 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-0 nova_compute[256940]: 2025-10-02 12:19:19.965 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 77 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.0 MiB/s wr, 40 op/s
Oct 02 12:19:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:20.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:20 compute-0 ceph-mon[73668]: pgmap v1207: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 255 B/s wr, 7 op/s
Oct 02 12:19:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2501839545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2621466651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3271792050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:21 compute-0 nova_compute[256940]: 2025-10-02 12:19:21.024 2 DEBUG oslo_concurrency.processutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config 6dde9654-bcd6-42d0-96c5-bee346944cc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:21 compute-0 nova_compute[256940]: 2025-10-02 12:19:21.025 2 INFO nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Deleting local config drive /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5/disk.config because it was imported into RBD.
Oct 02 12:19:21 compute-0 systemd-machined[210927]: New machine qemu-19-instance-00000025.
Oct 02 12:19:21 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000025.
Oct 02 12:19:21 compute-0 podman[284039]: 2025-10-02 12:19:21.680577866 +0000 UTC m=+0.077702756 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:19:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:21.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:21 compute-0 podman[284041]: 2025-10-02 12:19:21.725650171 +0000 UTC m=+0.121926469 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.148 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407562.1480823, 6dde9654-bcd6-42d0-96c5-bee346944cc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.149 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] VM Resumed (Lifecycle Event)
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.151 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.152 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.159 2 INFO nova.virt.libvirt.driver [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance spawned successfully.
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.159 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.258 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.263 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.264 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.265 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.266 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.267 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.268 2 DEBUG nova.virt.libvirt.driver [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.277 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:22 compute-0 ceph-mon[73668]: pgmap v1208: 305 pgs: 305 active+clean; 77 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.0 MiB/s wr, 40 op/s
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.437 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.437 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407562.1530056, 6dde9654-bcd6-42d0-96c5-bee346944cc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.437 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] VM Started (Lifecycle Event)
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.521 2 INFO nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Took 6.09 seconds to spawn the instance on the hypervisor.
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.522 2 DEBUG nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.543 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.559 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.612 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.632 2 INFO nova.compute.manager [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Took 9.34 seconds to build instance.
Oct 02 12:19:22 compute-0 nova_compute[256940]: 2025-10-02 12:19:22.670 2 DEBUG oslo_concurrency.lockutils [None req-25199aa3-de75-40c1-84c5-2cdd4ee43133 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 117 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.5 MiB/s wr, 76 op/s
Oct 02 12:19:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:22.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:23.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:23 compute-0 nova_compute[256940]: 2025-10-02 12:19:23.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:24 compute-0 nova_compute[256940]: 2025-10-02 12:19:24.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 407 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:19:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:24.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:24 compute-0 ceph-mon[73668]: pgmap v1209: 305 pgs: 305 active+clean; 117 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.5 MiB/s wr, 76 op/s
Oct 02 12:19:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:25.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:26 compute-0 ceph-mon[73668]: pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 407 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:26.456 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:26.457 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:26.457 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Oct 02 12:19:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:26.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:28 compute-0 ceph-mon[73668]: pgmap v1211: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:19:28
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:19:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Oct 02 12:19:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:28.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:28 compute-0 nova_compute[256940]: 2025-10-02 12:19:28.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.273 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.274 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.275 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "6dde9654-bcd6-42d0-96c5-bee346944cc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.275 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.275 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.277 2 INFO nova.compute.manager [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Terminating instance
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.279 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.279 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquired lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.280 2 DEBUG nova.network.neutron [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:29 compute-0 sudo[284089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:29 compute-0 sudo[284089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:29 compute-0 sudo[284089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:29 compute-0 sudo[284114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:29 compute-0 sudo[284114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:29 compute-0 sudo[284114]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:29 compute-0 nova_compute[256940]: 2025-10-02 12:19:29.755 2 DEBUG nova.network.neutron [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:30 compute-0 ceph-mon[73668]: pgmap v1212: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Oct 02 12:19:30 compute-0 nova_compute[256940]: 2025-10-02 12:19:30.238 2 DEBUG nova.network.neutron [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:30 compute-0 nova_compute[256940]: 2025-10-02 12:19:30.479 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Releasing lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:30 compute-0 nova_compute[256940]: 2025-10-02 12:19:30.480 2 DEBUG nova.compute.manager [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:19:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 02 12:19:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:30.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:30 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 02 12:19:30 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Consumed 9.453s CPU time.
Oct 02 12:19:30 compute-0 systemd-machined[210927]: Machine qemu-19-instance-00000025 terminated.
Oct 02 12:19:31 compute-0 nova_compute[256940]: 2025-10-02 12:19:31.111 2 INFO nova.virt.libvirt.driver [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance destroyed successfully.
Oct 02 12:19:31 compute-0 nova_compute[256940]: 2025-10-02 12:19:31.112 2 DEBUG nova.objects.instance [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lazy-loading 'resources' on Instance uuid 6dde9654-bcd6-42d0-96c5-bee346944cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:31.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Oct 02 12:19:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:32.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:33 compute-0 ceph-mon[73668]: pgmap v1213: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 02 12:19:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:33.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:33 compute-0 nova_compute[256940]: 2025-10-02 12:19:33.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:34 compute-0 ceph-mon[73668]: pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Oct 02 12:19:34 compute-0 nova_compute[256940]: 2025-10-02 12:19:34.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 106 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 175 op/s
Oct 02 12:19:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:34.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:35 compute-0 nova_compute[256940]: 2025-10-02 12:19:35.469 2 INFO nova.virt.libvirt.driver [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Deleting instance files /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5_del
Oct 02 12:19:35 compute-0 nova_compute[256940]: 2025-10-02 12:19:35.470 2 INFO nova.virt.libvirt.driver [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Deletion of /var/lib/nova/instances/6dde9654-bcd6-42d0-96c5-bee346944cc5_del complete
Oct 02 12:19:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:36 compute-0 ceph-mon[73668]: pgmap v1215: 305 pgs: 305 active+clean; 106 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 175 op/s
Oct 02 12:19:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 KiB/s wr, 172 op/s
Oct 02 12:19:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:36.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:37 compute-0 nova_compute[256940]: 2025-10-02 12:19:37.347 2 INFO nova.compute.manager [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Took 6.87 seconds to destroy the instance on the hypervisor.
Oct 02 12:19:37 compute-0 nova_compute[256940]: 2025-10-02 12:19:37.349 2 DEBUG oslo.service.loopingcall [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:19:37 compute-0 nova_compute[256940]: 2025-10-02 12:19:37.349 2 DEBUG nova.compute.manager [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:19:37 compute-0 nova_compute[256940]: 2025-10-02 12:19:37.349 2 DEBUG nova.network.neutron [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:19:37 compute-0 podman[284165]: 2025-10-02 12:19:37.414471007 +0000 UTC m=+0.073972509 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:19:37 compute-0 podman[284166]: 2025-10-02 12:19:37.425265528 +0000 UTC m=+0.074703838 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:19:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:38 compute-0 nova_compute[256940]: 2025-10-02 12:19:38.774 2 DEBUG nova.network.neutron [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:38 compute-0 nova_compute[256940]: 2025-10-02 12:19:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:38 compute-0 nova_compute[256940]: 2025-10-02 12:19:38.997 2 DEBUG nova.network.neutron [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.101 2 INFO nova.compute.manager [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Took 1.75 seconds to deallocate network for instance.
Oct 02 12:19:39 compute-0 ceph-mon[73668]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 KiB/s wr, 172 op/s
Oct 02 12:19:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1202750396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.197 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.198 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.297 2 DEBUG oslo_concurrency.processutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:39 compute-0 nova_compute[256940]: 2025-10-02 12:19:39.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:19:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.566 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.567 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.567 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.567 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6dde9654-bcd6-42d0-96c5-bee346944cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012743931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.736 2 DEBUG oslo_concurrency.processutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.745 2 DEBUG nova.compute.provider_tree [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:40 compute-0 ceph-mon[73668]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1347628948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3298273475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/528161419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.783 2 DEBUG nova.scheduler.client.report [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.892 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.986 2 INFO nova.scheduler.client.report [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Deleted allocations for instance 6dde9654-bcd6-42d0-96c5-bee346944cc5
Oct 02 12:19:40 compute-0 nova_compute[256940]: 2025-10-02 12:19:40.992 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:41 compute-0 nova_compute[256940]: 2025-10-02 12:19:41.150 2 DEBUG oslo_concurrency.lockutils [None req-be3a5c23-746a-432b-8016-575d0e20ca4e 6c116d69c3ca47b68fd27e8ebc3e8eda a6e9fb1b7d324521a431c2fd18366d1d - - default default] Lock "6dde9654-bcd6-42d0-96c5-bee346944cc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:41 compute-0 nova_compute[256940]: 2025-10-02 12:19:41.829 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:41 compute-0 nova_compute[256940]: 2025-10-02 12:19:41.918 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-6dde9654-bcd6-42d0-96c5-bee346944cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:41 compute-0 nova_compute[256940]: 2025-10-02 12:19:41.919 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:19:41 compute-0 nova_compute[256940]: 2025-10-02 12:19:41.919 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3478166263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2012743931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.254 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.254 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.255 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.255 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.255 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.339 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.340 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.340 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.341 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.341 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997822437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-0 nova_compute[256940]: 2025-10-02 12:19:42.818 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 12:19:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.029 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.031 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4698MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.031 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.031 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.133 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.133 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.146 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:43 compute-0 ceph-mon[73668]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/997822437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047511372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.575 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.584 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.623 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.650 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.650 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:43 compute-0 nova_compute[256940]: 2025-10-02 12:19:43.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 ceph-mon[73668]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 12:19:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3047511372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:44 compute-0 nova_compute[256940]: 2025-10-02 12:19:44.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:44 compute-0 nova_compute[256940]: 2025-10-02 12:19:44.606 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:44 compute-0 nova_compute[256940]: 2025-10-02 12:19:44.607 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Oct 02 12:19:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:45.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:46 compute-0 nova_compute[256940]: 2025-10-02 12:19:46.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407571.108881, 6dde9654-bcd6-42d0-96c5-bee346944cc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:46 compute-0 nova_compute[256940]: 2025-10-02 12:19:46.111 2 INFO nova.compute.manager [-] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] VM Stopped (Lifecycle Event)
Oct 02 12:19:46 compute-0 nova_compute[256940]: 2025-10-02 12:19:46.133 2 DEBUG nova.compute.manager [None req-ab28357b-b587-4ee5-b036-3e169416b436 - - - - - -] [instance: 6dde9654-bcd6-42d0-96c5-bee346944cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:46 compute-0 ceph-mon[73668]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Oct 02 12:19:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Oct 02 12:19:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:48 compute-0 ceph-mon[73668]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Oct 02 12:19:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:49 compute-0 nova_compute[256940]: 2025-10-02 12:19:48.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:49 compute-0 nova_compute[256940]: 2025-10-02 12:19:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:49 compute-0 sudo[284277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:49 compute-0 sudo[284277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:49 compute-0 sudo[284277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:49 compute-0 sudo[284302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:49 compute-0 sudo[284302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:49 compute-0 sudo[284302]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:49 compute-0 ceph-mon[73668]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1401379172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:51.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:52 compute-0 ceph-mon[73668]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2702662155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:52 compute-0 podman[284328]: 2025-10-02 12:19:52.405742104 +0000 UTC m=+0.061584026 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:19:52 compute-0 podman[284329]: 2025-10-02 12:19:52.452503112 +0000 UTC m=+0.107079661 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:19:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct 02 12:19:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3840002178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2080843043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:53.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:54.348 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:54.350 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.523 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.523 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.587 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:54 compute-0 ceph-mon[73668]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct 02 12:19:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/818748125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/442017125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.699 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.700 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.707 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.708 2 INFO nova.compute.claims [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:19:54 compute-0 nova_compute[256940]: 2025-10-02 12:19:54.819 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 79 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Oct 02 12:19:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773807802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.303 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.314 2 DEBUG nova.compute.provider_tree [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.336 2 DEBUG nova.scheduler.client.report [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.359 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.360 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.426 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.426 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.445 2 INFO nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.462 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.598 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.600 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.601 2 INFO nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Creating image(s)
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.629 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.660 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.703 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.707 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:55.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1773807802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.777 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.778 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.779 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.779 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.812 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:55 compute-0 nova_compute[256940]: 2025-10-02 12:19:55.817 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.080 2 DEBUG nova.policy [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7384c819e86a426d84c53eace7ec8594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a376d124dcaf49f2a6c0208610070573', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.193 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.298 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] resizing rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.419 2 DEBUG nova.objects.instance [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lazy-loading 'migration_context' on Instance uuid 87c6b2c0-d27f-46b9-811a-810a5fa87a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.439 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.440 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Ensure instance console log exists: /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.440 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.440 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.441 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:56 compute-0 ceph-mon[73668]: pgmap v1225: 305 pgs: 305 active+clean; 79 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Oct 02 12:19:56 compute-0 nova_compute[256940]: 2025-10-02 12:19:56.839 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Successfully created port: eacf5e24-e256-44f4-babc-b17f1293993a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:19:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:19:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:57.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:58 compute-0 ceph-mon[73668]: pgmap v1226: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.294 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Successfully updated port: eacf5e24-e256-44f4-babc-b17f1293993a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.323 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.323 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquired lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.323 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.459 2 DEBUG nova.compute.manager [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-changed-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.460 2 DEBUG nova.compute.manager [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Refreshing instance network info cache due to event network-changed-eacf5e24-e256-44f4-babc-b17f1293993a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.460 2 DEBUG oslo_concurrency.lockutils [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:58 compute-0 nova_compute[256940]: 2025-10-02 12:19:58.525 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:19:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:58.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.207 2 DEBUG nova.network.neutron [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.236 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Releasing lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.237 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Instance network_info: |[{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.238 2 DEBUG oslo_concurrency.lockutils [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.238 2 DEBUG nova.network.neutron [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Refreshing network info cache for port eacf5e24-e256-44f4-babc-b17f1293993a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.244 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Start _get_guest_xml network_info=[{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.252 2 WARNING nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3945813759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.261 2 DEBUG nova.virt.libvirt.host [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.262 2 DEBUG nova.virt.libvirt.host [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.265 2 DEBUG nova.virt.libvirt.host [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.266 2 DEBUG nova.virt.libvirt.host [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.267 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.268 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.268 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.268 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.269 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.269 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.269 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.269 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.269 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.270 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.270 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.270 2 DEBUG nova.virt.hardware [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.274 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:19:59.353 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:19:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:19:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:59.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:19:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12375301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.775 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.812 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:59 compute-0 nova_compute[256940]: 2025-10-02 12:19:59.817 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:20:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.294 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.297 2 DEBUG nova.virt.libvirt.vif [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:55Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.297 2 DEBUG nova.network.os_vif_util [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.298 2 DEBUG nova.network.os_vif_util [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.300 2 DEBUG nova.objects.instance [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87c6b2c0-d27f-46b9-811a-810a5fa87a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.322 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <uuid>87c6b2c0-d27f-46b9-811a-810a5fa87a19</uuid>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <name>instance-00000029</name>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachInterfacesV270Test-server-1961707445</nova:name>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:19:59</nova:creationTime>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:user uuid="7384c819e86a426d84c53eace7ec8594">tempest-AttachInterfacesV270Test-1922422412-project-member</nova:user>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:project uuid="a376d124dcaf49f2a6c0208610070573">tempest-AttachInterfacesV270Test-1922422412</nova:project>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <nova:port uuid="eacf5e24-e256-44f4-babc-b17f1293993a">
Oct 02 12:20:00 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <system>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="serial">87c6b2c0-d27f-46b9-811a-810a5fa87a19</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="uuid">87c6b2c0-d27f-46b9-811a-810a5fa87a19</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </system>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <os>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </os>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <features>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </features>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk">
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </source>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config">
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </source>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:20:00 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:44:c7:80"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <target dev="tapeacf5e24-e2"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/console.log" append="off"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <video>
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </video>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:20:00 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:20:00 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:20:00 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:20:00 compute-0 nova_compute[256940]: </domain>
Oct 02 12:20:00 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.324 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Preparing to wait for external event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.324 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.324 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.325 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.325 2 DEBUG nova.virt.libvirt.vif [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:55Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.326 2 DEBUG nova.network.os_vif_util [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.327 2 DEBUG nova.network.os_vif_util [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.327 2 DEBUG os_vif [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeacf5e24-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeacf5e24-e2, col_values=(('external_ids', {'iface-id': 'eacf5e24-e256-44f4-babc-b17f1293993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:c7:80', 'vm-uuid': '87c6b2c0-d27f-46b9-811a-810a5fa87a19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:00 compute-0 NetworkManager[44981]: <info>  [1759407600.3405] manager: (tapeacf5e24-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.348 2 INFO os_vif [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2')
Oct 02 12:20:00 compute-0 ceph-mon[73668]: pgmap v1227: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:20:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/12375301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3795866088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.448 2 DEBUG nova.network.neutron [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updated VIF entry in instance network info cache for port eacf5e24-e256-44f4-babc-b17f1293993a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.448 2 DEBUG nova.network.neutron [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.471 2 DEBUG oslo_concurrency.lockutils [req-3f0e2629-32e4-4f31-ae99-97bc6488afd2 req-97bb2b00-416b-429a-b35f-67b0e6053ceb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.501 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.501 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.502 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No VIF found with MAC fa:16:3e:44:c7:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.502 2 INFO nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Using config drive
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.533 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 155 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 224 op/s
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.915 2 INFO nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Creating config drive at /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config
Oct 02 12:20:00 compute-0 nova_compute[256940]: 2025-10-02 12:20:00.922 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptexxpujm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:00.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.055 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptexxpujm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.093 2 DEBUG nova.storage.rbd_utils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] rbd image 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.098 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1508859862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.623 2 DEBUG oslo_concurrency.processutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config 87c6b2c0-d27f-46b9-811a-810a5fa87a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.624 2 INFO nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Deleting local config drive /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19/disk.config because it was imported into RBD.
Oct 02 12:20:01 compute-0 kernel: tapeacf5e24-e2: entered promiscuous mode
Oct 02 12:20:01 compute-0 NetworkManager[44981]: <info>  [1759407601.6938] manager: (tapeacf5e24-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct 02 12:20:01 compute-0 ovn_controller[148123]: 2025-10-02T12:20:01Z|00140|binding|INFO|Claiming lport eacf5e24-e256-44f4-babc-b17f1293993a for this chassis.
Oct 02 12:20:01 compute-0 ovn_controller[148123]: 2025-10-02T12:20:01Z|00141|binding|INFO|eacf5e24-e256-44f4-babc-b17f1293993a: Claiming fa:16:3e:44:c7:80 10.100.0.8
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.717 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:c7:80 10.100.0.8'], port_security=['fa:16:3e:44:c7:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87c6b2c0-d27f-46b9-811a-810a5fa87a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a376d124dcaf49f2a6c0208610070573', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c9c59665-afdf-420f-9fcc-3cd28ff14c14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fd2b58e-7c06-43bb-9bce-7a21c3428ab3, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=eacf5e24-e256-44f4-babc-b17f1293993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.719 158104 INFO neutron.agent.ovn.metadata.agent [-] Port eacf5e24-e256-44f4-babc-b17f1293993a in datapath 9f5f41ee-bc9c-44cb-8d23-de1279b7943e bound to our chassis
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.720 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f5f41ee-bc9c-44cb-8d23-de1279b7943e
Oct 02 12:20:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:01.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.739 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b431f4fd-f90f-4ea3-9f5b-18ecaaa6f964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.740 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f5f41ee-b1 in ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.743 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f5f41ee-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.743 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d58af9b-58b8-4035-8a94-7093c83c7b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.744 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b10f39-910c-4076-a8fc-6ef5f3f5a797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 systemd-machined[210927]: New machine qemu-20-instance-00000029.
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.772 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b87cfb26-0288-4a46-ab20-bdeb243542ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000029.
Oct 02 12:20:01 compute-0 systemd-udevd[284705]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:01 compute-0 ovn_controller[148123]: 2025-10-02T12:20:01Z|00142|binding|INFO|Setting lport eacf5e24-e256-44f4-babc-b17f1293993a ovn-installed in OVS
Oct 02 12:20:01 compute-0 ovn_controller[148123]: 2025-10-02T12:20:01Z|00143|binding|INFO|Setting lport eacf5e24-e256-44f4-babc-b17f1293993a up in Southbound
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.808 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c0917bcf-b530-4f96-ba54-c49929fc407d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 nova_compute[256940]: 2025-10-02 12:20:01.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:01 compute-0 NetworkManager[44981]: <info>  [1759407601.8179] device (tapeacf5e24-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:01 compute-0 NetworkManager[44981]: <info>  [1759407601.8197] device (tapeacf5e24-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.847 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[388d1113-071c-493f-b58e-2a496754a637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.854 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82f88d58-de68-4dd2-963f-5d5e69d57572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 NetworkManager[44981]: <info>  [1759407601.8629] manager: (tap9f5f41ee-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.895 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[09e3f2b8-a7e0-4ab4-8baf-ef27daa647c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.899 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[38159621-bd04-4738-a5a9-6a9ddb3a5618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 NetworkManager[44981]: <info>  [1759407601.9281] device (tap9f5f41ee-b0): carrier: link connected
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.937 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bd6a9f-4066-42dc-9ac7-548e7550bce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.958 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[00aae586-966f-4956-88dd-de78147cc7bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f5f41ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5c:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549741, 'reachable_time': 16499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284735, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.974 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[21ad5942-fd7e-4b23-befb-30e4817b56db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:5c29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549741, 'tstamp': 549741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284736, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:01.992 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[799923af-702c-408d-b688-47f91c238f59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f5f41ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5c:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549741, 'reachable_time': 16499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284737, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.025 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[43b9cb23-a282-47a7-80d5-c26c4323fd25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.091 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[28ee057f-739d-479c-9a7b-25ee71b169b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.093 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f5f41ee-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.094 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.094 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f5f41ee-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:02 compute-0 kernel: tap9f5f41ee-b0: entered promiscuous mode
Oct 02 12:20:02 compute-0 NetworkManager[44981]: <info>  [1759407602.0972] manager: (tap9f5f41ee-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 02 12:20:02 compute-0 nova_compute[256940]: 2025-10-02 12:20:02.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-0 nova_compute[256940]: 2025-10-02 12:20:02.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.101 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f5f41ee-b0, col_values=(('external_ids', {'iface-id': 'bdef37d5-b724-496d-b1f1-3c004abe3020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:02 compute-0 ovn_controller[148123]: 2025-10-02T12:20:02Z|00144|binding|INFO|Releasing lport bdef37d5-b724-496d-b1f1-3c004abe3020 from this chassis (sb_readonly=0)
Oct 02 12:20:02 compute-0 nova_compute[256940]: 2025-10-02 12:20:02.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.105 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f5f41ee-bc9c-44cb-8d23-de1279b7943e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f5f41ee-bc9c-44cb-8d23-de1279b7943e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.106 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93b25efb-4c67-4a20-adcb-48499cf9ff23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.107 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-9f5f41ee-bc9c-44cb-8d23-de1279b7943e
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/9f5f41ee-bc9c-44cb-8d23-de1279b7943e.pid.haproxy
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 9f5f41ee-bc9c-44cb-8d23-de1279b7943e
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:02.108 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'env', 'PROCESS_TAG=haproxy-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f5f41ee-bc9c-44cb-8d23-de1279b7943e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:02 compute-0 nova_compute[256940]: 2025-10-02 12:20:02.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-0 podman[284769]: 2025-10-02 12:20:02.440424264 +0000 UTC m=+0.026593584 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:02 compute-0 podman[284769]: 2025-10-02 12:20:02.896849109 +0000 UTC m=+0.483018399 container create 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:20:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 169 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 12:20:02 compute-0 ceph-mon[73668]: pgmap v1228: 305 pgs: 305 active+clean; 155 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 224 op/s
Oct 02 12:20:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.110 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407603.1098254, 87c6b2c0-d27f-46b9-811a-810a5fa87a19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.111 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] VM Started (Lifecycle Event)
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.133 2 DEBUG nova.compute.manager [req-2923b648-533e-4514-aecb-f2dfc8dc929e req-a116d7e8-0954-494e-ad97-d8bb756b88ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.134 2 DEBUG oslo_concurrency.lockutils [req-2923b648-533e-4514-aecb-f2dfc8dc929e req-a116d7e8-0954-494e-ad97-d8bb756b88ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.135 2 DEBUG oslo_concurrency.lockutils [req-2923b648-533e-4514-aecb-f2dfc8dc929e req-a116d7e8-0954-494e-ad97-d8bb756b88ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.135 2 DEBUG oslo_concurrency.lockutils [req-2923b648-533e-4514-aecb-f2dfc8dc929e req-a116d7e8-0954-494e-ad97-d8bb756b88ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.135 2 DEBUG nova.compute.manager [req-2923b648-533e-4514-aecb-f2dfc8dc929e req-a116d7e8-0954-494e-ad97-d8bb756b88ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Processing event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.137 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.138 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.142 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.145 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:03 compute-0 systemd[1]: Started libpod-conmon-124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77.scope.
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.149 2 INFO nova.virt.libvirt.driver [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Instance spawned successfully.
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.151 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.175 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.176 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407603.110017, 87c6b2c0-d27f-46b9-811a-810a5fa87a19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.177 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] VM Paused (Lifecycle Event)
Oct 02 12:20:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.186 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.187 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.188 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.188 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.189 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.190 2 DEBUG nova.virt.libvirt.driver [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f25ff7423fc121dd091741980d12820159019f5145c554d8c025cbceee2f1f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.195 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.201 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407603.141417, 87c6b2c0-d27f-46b9-811a-810a5fa87a19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.202 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] VM Resumed (Lifecycle Event)
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.222 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.227 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.255 2 INFO nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Took 7.66 seconds to spawn the instance on the hypervisor.
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.256 2 DEBUG nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.257 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:03 compute-0 podman[284769]: 2025-10-02 12:20:03.309768901 +0000 UTC m=+0.895938221 container init 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.312 2 INFO nova.compute.manager [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Took 8.68 seconds to build instance.
Oct 02 12:20:03 compute-0 podman[284769]: 2025-10-02 12:20:03.317055221 +0000 UTC m=+0.903224511 container start 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:20:03 compute-0 nova_compute[256940]: 2025-10-02 12:20:03.344 2 DEBUG oslo_concurrency.lockutils [None req-182d9008-fec6-4944-b881-c5d581abeeed 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:03 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [NOTICE]   (284832) : New worker (284834) forked
Oct 02 12:20:03 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [NOTICE]   (284832) : Loading success.
Oct 02 12:20:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:04 compute-0 ceph-mon[73668]: pgmap v1229: 305 pgs: 305 active+clean; 169 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 12:20:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/311085604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1221744040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:04 compute-0 nova_compute[256940]: 2025-10-02 12:20:04.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 181 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.1 MiB/s wr, 293 op/s
Oct 02 12:20:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.214 2 DEBUG nova.compute.manager [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.215 2 DEBUG oslo_concurrency.lockutils [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.215 2 DEBUG oslo_concurrency.lockutils [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.216 2 DEBUG oslo_concurrency.lockutils [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.216 2 DEBUG nova.compute.manager [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.216 2 WARNING nova.compute.manager [req-ca1b4ff6-82bb-4430-b57b-e1d91a10c6ce req-6b419f19-e90e-4d06-a2f1-5e59b8a9ccbc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received unexpected event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a for instance with vm_state active and task_state None.
Oct 02 12:20:05 compute-0 nova_compute[256940]: 2025-10-02 12:20:05.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:20:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:20:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:06 compute-0 nova_compute[256940]: 2025-10-02 12:20:06.048 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "interface-87c6b2c0-d27f-46b9-811a-810a5fa87a19-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:06 compute-0 nova_compute[256940]: 2025-10-02 12:20:06.050 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "interface-87c6b2c0-d27f-46b9-811a-810a5fa87a19-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:06 compute-0 nova_compute[256940]: 2025-10-02 12:20:06.050 2 DEBUG nova.objects.instance [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lazy-loading 'flavor' on Instance uuid 87c6b2c0-d27f-46b9-811a-810a5fa87a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:06 compute-0 nova_compute[256940]: 2025-10-02 12:20:06.073 2 DEBUG nova.objects.instance [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lazy-loading 'pci_requests' on Instance uuid 87c6b2c0-d27f-46b9-811a-810a5fa87a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:06 compute-0 nova_compute[256940]: 2025-10-02 12:20:06.087 2 DEBUG nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:20:06 compute-0 ceph-mon[73668]: pgmap v1230: 305 pgs: 305 active+clean; 181 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.1 MiB/s wr, 293 op/s
Oct 02 12:20:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 291 op/s
Oct 02 12:20:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:07 compute-0 nova_compute[256940]: 2025-10-02 12:20:07.017 2 DEBUG nova.policy [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7384c819e86a426d84c53eace7ec8594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a376d124dcaf49f2a6c0208610070573', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:20:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:07 compute-0 nova_compute[256940]: 2025-10-02 12:20:07.927 2 DEBUG nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Successfully created port: 535db5a8-ea9b-4490-a9a0-1be280e5bc25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:20:08 compute-0 ceph-mon[73668]: pgmap v1231: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 291 op/s
Oct 02 12:20:08 compute-0 podman[284846]: 2025-10-02 12:20:08.423627327 +0000 UTC m=+0.075963531 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 02 12:20:08 compute-0 podman[284845]: 2025-10-02 12:20:08.459325047 +0000 UTC m=+0.109102564 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:08 compute-0 nova_compute[256940]: 2025-10-02 12:20:08.814 2 DEBUG nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Successfully updated port: 535db5a8-ea9b-4490-a9a0-1be280e5bc25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:20:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 243 op/s
Oct 02 12:20:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:08.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.124 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.125 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquired lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.125 2 DEBUG nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.267 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-changed-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.267 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Refreshing instance network info cache due to event network-changed-535db5a8-ea9b-4490-a9a0-1be280e5bc25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.268 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.305 2 WARNING nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] 9f5f41ee-bc9c-44cb-8d23-de1279b7943e already exists in list: networks containing: ['9f5f41ee-bc9c-44cb-8d23-de1279b7943e']. ignoring it
Oct 02 12:20:09 compute-0 nova_compute[256940]: 2025-10-02 12:20:09.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:09 compute-0 sudo[284885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:09 compute-0 sudo[284885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:09 compute-0 sudo[284885]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:09 compute-0 sudo[284910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:09 compute-0 sudo[284910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:09 compute-0 sudo[284910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:10 compute-0 ceph-mon[73668]: pgmap v1232: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 243 op/s
Oct 02 12:20:10 compute-0 nova_compute[256940]: 2025-10-02 12:20:10.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 192 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.8 MiB/s wr, 331 op/s
Oct 02 12:20:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:10.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.482 2 DEBUG nova.network.neutron [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.585 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Releasing lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.585 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.586 2 DEBUG nova.network.neutron [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Refreshing network info cache for port 535db5a8-ea9b-4490-a9a0-1be280e5bc25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.589 2 DEBUG nova.virt.libvirt.vif [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:03Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.590 2 DEBUG nova.network.os_vif_util [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.590 2 DEBUG nova.network.os_vif_util [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.591 2 DEBUG os_vif [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap535db5a8-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap535db5a8-ea, col_values=(('external_ids', {'iface-id': '535db5a8-ea9b-4490-a9a0-1be280e5bc25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:50:f9', 'vm-uuid': '87c6b2c0-d27f-46b9-811a-810a5fa87a19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 NetworkManager[44981]: <info>  [1759407611.6014] manager: (tap535db5a8-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.613 2 INFO os_vif [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea')
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.615 2 DEBUG nova.virt.libvirt.vif [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:03Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.615 2 DEBUG nova.network.os_vif_util [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.617 2 DEBUG nova.network.os_vif_util [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.623 2 DEBUG nova.virt.libvirt.guest [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:1a:50:f9"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <target dev="tap535db5a8-ea"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]: </interface>
Oct 02 12:20:11 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:20:11 compute-0 kernel: tap535db5a8-ea: entered promiscuous mode
Oct 02 12:20:11 compute-0 ovn_controller[148123]: 2025-10-02T12:20:11Z|00145|binding|INFO|Claiming lport 535db5a8-ea9b-4490-a9a0-1be280e5bc25 for this chassis.
Oct 02 12:20:11 compute-0 ovn_controller[148123]: 2025-10-02T12:20:11Z|00146|binding|INFO|535db5a8-ea9b-4490-a9a0-1be280e5bc25: Claiming fa:16:3e:1a:50:f9 10.100.0.10
Oct 02 12:20:11 compute-0 NetworkManager[44981]: <info>  [1759407611.6420] manager: (tap535db5a8-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.656 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:50:f9 10.100.0.10'], port_security=['fa:16:3e:1a:50:f9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '87c6b2c0-d27f-46b9-811a-810a5fa87a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a376d124dcaf49f2a6c0208610070573', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c9c59665-afdf-420f-9fcc-3cd28ff14c14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fd2b58e-7c06-43bb-9bce-7a21c3428ab3, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=535db5a8-ea9b-4490-a9a0-1be280e5bc25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:11 compute-0 ovn_controller[148123]: 2025-10-02T12:20:11Z|00147|binding|INFO|Setting lport 535db5a8-ea9b-4490-a9a0-1be280e5bc25 ovn-installed in OVS
Oct 02 12:20:11 compute-0 ovn_controller[148123]: 2025-10-02T12:20:11Z|00148|binding|INFO|Setting lport 535db5a8-ea9b-4490-a9a0-1be280e5bc25 up in Southbound
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.658 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 535db5a8-ea9b-4490-a9a0-1be280e5bc25 in datapath 9f5f41ee-bc9c-44cb-8d23-de1279b7943e bound to our chassis
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.663 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f5f41ee-bc9c-44cb-8d23-de1279b7943e
Oct 02 12:20:11 compute-0 systemd-udevd[284943]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.684 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6233b12-9af8-45a6-8b49-b3e597260682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 NetworkManager[44981]: <info>  [1759407611.6967] device (tap535db5a8-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:11 compute-0 NetworkManager[44981]: <info>  [1759407611.6989] device (tap535db5a8-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.721 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c850f38a-3dfe-42b7-817d-16510b48bf9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.725 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c6cde5-f094-495e-9445-9fa7cca35b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:11.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.754 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e76654-238b-458f-96c9-51cb5f79e383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.774 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b8e260-dc04-4359-9f9a-aceba33220ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f5f41ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5c:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549741, 'reachable_time': 16499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284950, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.775 2 DEBUG nova.virt.libvirt.driver [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.776 2 DEBUG nova.virt.libvirt.driver [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.776 2 DEBUG nova.virt.libvirt.driver [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No VIF found with MAC fa:16:3e:44:c7:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.776 2 DEBUG nova.virt.libvirt.driver [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] No VIF found with MAC fa:16:3e:1a:50:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.792 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[510ca014-e178-46d6-9d18-9c509f8a664a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f5f41ee-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549753, 'tstamp': 549753}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284951, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f5f41ee-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549756, 'tstamp': 549756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284951, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.794 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f5f41ee-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.798 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f5f41ee-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.798 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.798 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f5f41ee-b0, col_values=(('external_ids', {'iface-id': 'bdef37d5-b724-496d-b1f1-3c004abe3020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:11.799 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.813 2 DEBUG nova.virt.libvirt.guest [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesV270Test-server-1961707445</nova:name>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:20:11</nova:creationTime>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:user uuid="7384c819e86a426d84c53eace7ec8594">tempest-AttachInterfacesV270Test-1922422412-project-member</nova:user>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:project uuid="a376d124dcaf49f2a6c0208610070573">tempest-AttachInterfacesV270Test-1922422412</nova:project>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:port uuid="eacf5e24-e256-44f4-babc-b17f1293993a">
Oct 02 12:20:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     <nova:port uuid="535db5a8-ea9b-4490-a9a0-1be280e5bc25">
Oct 02 12:20:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:20:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:20:11 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:20:11 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:20:11 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:20:11 compute-0 nova_compute[256940]: 2025-10-02 12:20:11.850 2 DEBUG oslo_concurrency.lockutils [None req-db7d72c9-ad14-4f30-9cc0-9aec75e05c41 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "interface-87c6b2c0-d27f-46b9-811a-810a5fa87a19-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.047 2 DEBUG nova.compute.manager [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.048 2 DEBUG oslo_concurrency.lockutils [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.048 2 DEBUG oslo_concurrency.lockutils [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.048 2 DEBUG oslo_concurrency.lockutils [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.048 2 DEBUG nova.compute.manager [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:12 compute-0 nova_compute[256940]: 2025-10-02 12:20:12.048 2 WARNING nova.compute.manager [req-d2a05567-4a82-4592-b7fe-a11fa63a6c6f req-bb4acce0-056a-4944-b8d3-c06546c17930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received unexpected event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 for instance with vm_state active and task_state None.
Oct 02 12:20:12 compute-0 ceph-mon[73668]: pgmap v1233: 305 pgs: 305 active+clean; 192 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.8 MiB/s wr, 331 op/s
Oct 02 12:20:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 210 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 264 op/s
Oct 02 12:20:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:12.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:13 compute-0 sudo[284953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:13 compute-0 sudo[284953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-0 sudo[284953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-0 sudo[284978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:13 compute-0 sudo[284978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-0 sudo[284978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-0 sudo[285003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:13 compute-0 sudo[285003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-0 sudo[285003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-0 sudo[285028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:20:13 compute-0 sudo[285028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.455 2 DEBUG nova.network.neutron [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updated VIF entry in instance network info cache for port 535db5a8-ea9b-4490-a9a0-1be280e5bc25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.456 2 DEBUG nova.network.neutron [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.523 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-87c6b2c0-d27f-46b9-811a-810a5fa87a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:13 compute-0 podman[285118]: 2025-10-02 12:20:13.908583144 +0000 UTC m=+0.107165004 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.927 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.929 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.929 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.929 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.929 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.931 2 INFO nova.compute.manager [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Terminating instance
Oct 02 12:20:13 compute-0 nova_compute[256940]: 2025-10-02 12:20:13.932 2 DEBUG nova.compute.manager [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:20:13 compute-0 kernel: tapeacf5e24-e2 (unregistering): left promiscuous mode
Oct 02 12:20:13 compute-0 NetworkManager[44981]: <info>  [1759407613.9817] device (tapeacf5e24-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:13 compute-0 ovn_controller[148123]: 2025-10-02T12:20:13Z|00149|binding|INFO|Releasing lport eacf5e24-e256-44f4-babc-b17f1293993a from this chassis (sb_readonly=0)
Oct 02 12:20:13 compute-0 ovn_controller[148123]: 2025-10-02T12:20:13Z|00150|binding|INFO|Setting lport eacf5e24-e256-44f4-babc-b17f1293993a down in Southbound
Oct 02 12:20:13 compute-0 ovn_controller[148123]: 2025-10-02T12:20:13Z|00151|binding|INFO|Removing iface tapeacf5e24-e2 ovn-installed in OVS
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.002 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:c7:80 10.100.0.8'], port_security=['fa:16:3e:44:c7:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87c6b2c0-d27f-46b9-811a-810a5fa87a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a376d124dcaf49f2a6c0208610070573', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c9c59665-afdf-420f-9fcc-3cd28ff14c14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fd2b58e-7c06-43bb-9bce-7a21c3428ab3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=eacf5e24-e256-44f4-babc-b17f1293993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.004 158104 INFO neutron.agent.ovn.metadata.agent [-] Port eacf5e24-e256-44f4-babc-b17f1293993a in datapath 9f5f41ee-bc9c-44cb-8d23-de1279b7943e unbound from our chassis
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.005 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f5f41ee-bc9c-44cb-8d23-de1279b7943e
Oct 02 12:20:14 compute-0 kernel: tap535db5a8-ea (unregistering): left promiscuous mode
Oct 02 12:20:14 compute-0 NetworkManager[44981]: <info>  [1759407614.0167] device (tap535db5a8-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_controller[148123]: 2025-10-02T12:20:14Z|00152|binding|INFO|Releasing lport 535db5a8-ea9b-4490-a9a0-1be280e5bc25 from this chassis (sb_readonly=0)
Oct 02 12:20:14 compute-0 ovn_controller[148123]: 2025-10-02T12:20:14Z|00153|binding|INFO|Setting lport 535db5a8-ea9b-4490-a9a0-1be280e5bc25 down in Southbound
Oct 02 12:20:14 compute-0 ovn_controller[148123]: 2025-10-02T12:20:14Z|00154|binding|INFO|Removing iface tap535db5a8-ea ovn-installed in OVS
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.026 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c64ea05-db9b-4ac9-a8d3-76175352a8e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.036 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:50:f9 10.100.0.10'], port_security=['fa:16:3e:1a:50:f9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '87c6b2c0-d27f-46b9-811a-810a5fa87a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a376d124dcaf49f2a6c0208610070573', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c9c59665-afdf-420f-9fcc-3cd28ff14c14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fd2b58e-7c06-43bb-9bce-7a21c3428ab3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=535db5a8-ea9b-4490-a9a0-1be280e5bc25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.064 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7811b011-13a7-4bf8-8da2-f0927e974cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 podman[285118]: 2025-10-02 12:20:14.067417753 +0000 UTC m=+0.265999583 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.068 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5b59699f-3d8e-45e8-91f0-6811017f105c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 02 12:20:14 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Consumed 12.185s CPU time.
Oct 02 12:20:14 compute-0 systemd-machined[210927]: Machine qemu-20-instance-00000029 terminated.
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.101 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f29a79c9-aec9-4010-9a39-bedc91323d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.123 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ad3ea2-84b8-4821-b7f1-bc347983409a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f5f41ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5c:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549741, 'reachable_time': 16499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285163, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.142 2 DEBUG nova.compute.manager [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.143 2 DEBUG oslo_concurrency.lockutils [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.143 2 DEBUG oslo_concurrency.lockutils [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.145 2 DEBUG oslo_concurrency.lockutils [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.145 2 DEBUG nova.compute.manager [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.145 2 WARNING nova.compute.manager [req-045054a9-fe15-4f58-819f-d834c4c0ac12 req-c7e9edef-8734-449b-bff3-55521852724e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received unexpected event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 for instance with vm_state active and task_state deleting.
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.149 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d92a4b-62e3-41de-ad6b-e41aba5c3960]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f5f41ee-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549753, 'tstamp': 549753}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285164, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f5f41ee-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549756, 'tstamp': 549756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285164, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.152 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f5f41ee-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 NetworkManager[44981]: <info>  [1759407614.1604] manager: (tapeacf5e24-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.164 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f5f41ee-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.164 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.165 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f5f41ee-b0, col_values=(('external_ids', {'iface-id': 'bdef37d5-b724-496d-b1f1-3c004abe3020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.165 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.168 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 535db5a8-ea9b-4490-a9a0-1be280e5bc25 in datapath 9f5f41ee-bc9c-44cb-8d23-de1279b7943e unbound from our chassis
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.169 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f5f41ee-bc9c-44cb-8d23-de1279b7943e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[107d01fe-4ea2-4411-9684-d12675b416ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.172 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e namespace which is not needed anymore
Oct 02 12:20:14 compute-0 NetworkManager[44981]: <info>  [1759407614.1740] manager: (tap535db5a8-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.195 2 INFO nova.virt.libvirt.driver [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Instance destroyed successfully.
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.195 2 DEBUG nova.objects.instance [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lazy-loading 'resources' on Instance uuid 87c6b2c0-d27f-46b9-811a-810a5fa87a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.213 2 DEBUG nova.virt.libvirt.vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_na
me='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:03Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.214 2 DEBUG nova.network.os_vif_util [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.214 2 DEBUG nova.network.os_vif_util [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.215 2 DEBUG os_vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeacf5e24-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.227 2 INFO os_vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:c7:80,bridge_name='br-int',has_traffic_filtering=True,id=eacf5e24-e256-44f4-babc-b17f1293993a,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacf5e24-e2')
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.228 2 DEBUG nova.virt.libvirt.vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1961707445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1961707445',id=41,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a376d124dcaf49f2a6c0208610070573',ramdisk_id='',reservation_id='r-y8nbz4gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_na
me='tempest-AttachInterfacesV270Test-1922422412',owner_user_name='tempest-AttachInterfacesV270Test-1922422412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:03Z,user_data=None,user_id='7384c819e86a426d84c53eace7ec8594',uuid=87c6b2c0-d27f-46b9-811a-810a5fa87a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.228 2 DEBUG nova.network.os_vif_util [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converting VIF {"id": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "address": "fa:16:3e:1a:50:f9", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap535db5a8-ea", "ovs_interfaceid": "535db5a8-ea9b-4490-a9a0-1be280e5bc25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.228 2 DEBUG nova.network.os_vif_util [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.229 2 DEBUG os_vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.230 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap535db5a8-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.239 2 INFO os_vif [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:50:f9,bridge_name='br-int',has_traffic_filtering=True,id=535db5a8-ea9b-4490-a9a0-1be280e5bc25,network=Network(9f5f41ee-bc9c-44cb-8d23-de1279b7943e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap535db5a8-ea')
Oct 02 12:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:20:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [NOTICE]   (284832) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [NOTICE]   (284832) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [WARNING]  (284832) : Exiting Master process...
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [ALERT]    (284832) : Current worker (284834) exited with code 143 (Terminated)
Oct 02 12:20:14 compute-0 neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e[284828]: [WARNING]  (284832) : All workers exited. Exiting... (0)
Oct 02 12:20:14 compute-0 systemd[1]: libpod-124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77.scope: Deactivated successfully.
Oct 02 12:20:14 compute-0 podman[285246]: 2025-10-02 12:20:14.359958308 +0000 UTC m=+0.067026408 container died 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:20:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f25ff7423fc121dd091741980d12820159019f5145c554d8c025cbceee2f1f5-merged.mount: Deactivated successfully.
Oct 02 12:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:14 compute-0 podman[285246]: 2025-10-02 12:20:14.671557379 +0000 UTC m=+0.378625479 container cleanup 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 systemd[1]: libpod-conmon-124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77.scope: Deactivated successfully.
Oct 02 12:20:14 compute-0 podman[285328]: 2025-10-02 12:20:14.787271454 +0000 UTC m=+0.092972174 container remove 124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.793 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[afe0e341-6180-4b9c-810a-3048f0b675cf]: (4, ('Thu Oct  2 12:20:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e (124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77)\n124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77\nThu Oct  2 12:20:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e (124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77)\n124206f75cb70fa98bc2e14b2500f1c17a89258bbd1c72f56bc933885b375d77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.797 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff78fea-6858-49d3-acd3-0a2edf485379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.798 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f5f41ee-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 kernel: tap9f5f41ee-b0: left promiscuous mode
Oct 02 12:20:14 compute-0 nova_compute[256940]: 2025-10-02 12:20:14.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.834 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[58a7a567-cec1-494d-85a3-c9caf3699bd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.864 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bff59b-1758-44b6-bcea-cc987cf1aae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.866 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f76284-16f5-4d29-8f10-e678bd1247b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.890 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbef65b-4870-4b20-bfa0-95a8bca3413c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549732, 'reachable_time': 21140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285375, 'error': None, 'target': 'ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.893 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f5f41ee-bc9c-44cb-8d23-de1279b7943e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:14.893 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0bbb5c-d07f-46cf-b863-d90884f2ea34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f5f41ee\x2dbc9c\x2d44cb\x2d8d23\x2dde1279b7943e.mount: Deactivated successfully.
Oct 02 12:20:14 compute-0 ceph-mon[73668]: pgmap v1234: 305 pgs: 305 active+clean; 210 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 264 op/s
Oct 02 12:20:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2996451535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 185 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 261 op/s
Oct 02 12:20:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:15 compute-0 podman[285378]: 2025-10-02 12:20:15.009554287 +0000 UTC m=+0.101302861 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:20:15 compute-0 podman[285401]: 2025-10-02 12:20:15.095540398 +0000 UTC m=+0.061794451 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:20:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:15 compute-0 podman[285378]: 2025-10-02 12:20:15.120437757 +0000 UTC m=+0.212186321 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:20:15 compute-0 podman[285445]: 2025-10-02 12:20:15.426184746 +0000 UTC m=+0.085209942 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 02 12:20:15 compute-0 podman[285445]: 2025-10-02 12:20:15.450585462 +0000 UTC m=+0.109610678 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, version=2.2.4, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc.)
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.588 2 INFO nova.virt.libvirt.driver [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Deleting instance files /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19_del
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.589 2 INFO nova.virt.libvirt.driver [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Deletion of /var/lib/nova/instances/87c6b2c0-d27f-46b9-811a-810a5fa87a19_del complete
Oct 02 12:20:15 compute-0 sudo[285028]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.665 2 INFO nova.compute.manager [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Took 1.73 seconds to destroy the instance on the hypervisor.
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.667 2 DEBUG oslo.service.loopingcall [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.668 2 DEBUG nova.compute.manager [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:20:15 compute-0 nova_compute[256940]: 2025-10-02 12:20:15.668 2 DEBUG nova.network.neutron [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:20:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:20:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:15.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:15 compute-0 sudo[285495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:15 compute-0 sudo[285495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:15 compute-0 sudo[285495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:15 compute-0 ceph-mon[73668]: osdmap e170: 3 total, 3 up, 3 in
Oct 02 12:20:15 compute-0 ceph-mon[73668]: pgmap v1236: 305 pgs: 305 active+clean; 185 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 261 op/s
Oct 02 12:20:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/956283661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-0 sudo[285520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:15 compute-0 sudo[285520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:15 compute-0 sudo[285520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:15 compute-0 sudo[285545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:15 compute-0 sudo[285545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:15 compute-0 sudo[285545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:16 compute-0 sudo[285570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:20:16 compute-0 sudo[285570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.290 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-unplugged-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.290 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.291 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.291 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.291 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-unplugged-eacf5e24-e256-44f4-babc-b17f1293993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.291 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-unplugged-eacf5e24-e256-44f4-babc-b17f1293993a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.291 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.292 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.292 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.292 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.292 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.293 2 WARNING nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received unexpected event network-vif-plugged-eacf5e24-e256-44f4-babc-b17f1293993a for instance with vm_state active and task_state deleting.
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.293 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-unplugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.293 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.294 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.294 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.294 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-unplugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.294 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-unplugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.295 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.295 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.295 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.296 2 DEBUG oslo_concurrency.lockutils [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.296 2 DEBUG nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] No waiting events found dispatching network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.296 2 WARNING nova.compute.manager [req-25df22f5-2452-4cb6-ab54-6e6e438bdc73 req-5ef9917b-235e-4f7e-9a6c-8ef74fd8af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received unexpected event network-vif-plugged-535db5a8-ea9b-4490-a9a0-1be280e5bc25 for instance with vm_state active and task_state deleting.
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.448 2 DEBUG nova.compute.manager [req-d8b63e72-c51f-463b-93ec-cc1f03176266 req-10de7d7b-20e1-4130-84eb-31e744c67075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-deleted-535db5a8-ea9b-4490-a9a0-1be280e5bc25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.448 2 INFO nova.compute.manager [req-d8b63e72-c51f-463b-93ec-cc1f03176266 req-10de7d7b-20e1-4130-84eb-31e744c67075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Neutron deleted interface 535db5a8-ea9b-4490-a9a0-1be280e5bc25; detaching it from the instance and deleting it from the info cache
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.448 2 DEBUG nova.network.neutron [req-d8b63e72-c51f-463b-93ec-cc1f03176266 req-10de7d7b-20e1-4130-84eb-31e744c67075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [{"id": "eacf5e24-e256-44f4-babc-b17f1293993a", "address": "fa:16:3e:44:c7:80", "network": {"id": "9f5f41ee-bc9c-44cb-8d23-de1279b7943e", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1876322557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a376d124dcaf49f2a6c0208610070573", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacf5e24-e2", "ovs_interfaceid": "eacf5e24-e256-44f4-babc-b17f1293993a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.467 2 DEBUG nova.compute.manager [req-d8b63e72-c51f-463b-93ec-cc1f03176266 req-10de7d7b-20e1-4130-84eb-31e744c67075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Detach interface failed, port_id=535db5a8-ea9b-4490-a9a0-1be280e5bc25, reason: Instance 87c6b2c0-d27f-46b9-811a-810a5fa87a19 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:20:16 compute-0 sudo[285570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.680 2 DEBUG nova.network.neutron [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.696 2 INFO nova.compute.manager [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Took 1.03 seconds to deallocate network for instance.
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.736 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.736 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 845b31c6-aa8d-42fd-a425-6178ae7c1fe0 does not exist
Oct 02 12:20:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 78536613-b6ae-4a1a-ad2d-e5cac1ec36eb does not exist
Oct 02 12:20:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 67b305f9-7573-40c4-93ab-4daec902e5cb does not exist
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:16 compute-0 nova_compute[256940]: 2025-10-02 12:20:16.805 2 DEBUG oslo_concurrency.processutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 02 12:20:16 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 02 12:20:16 compute-0 sudo[285627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:16 compute-0 sudo[285627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:16 compute-0 sudo[285627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.7 MiB/s wr, 395 op/s
Oct 02 12:20:16 compute-0 sudo[285653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:16 compute-0 sudo[285653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:16 compute-0 sudo[285653]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:17 compute-0 sudo[285697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:17 compute-0 sudo[285697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:17 compute-0 sudo[285697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2270279448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:17 compute-0 ceph-mon[73668]: osdmap e171: 3 total, 3 up, 3 in
Oct 02 12:20:17 compute-0 sudo[285722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:20:17 compute-0 sudo[285722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613112588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.272 2 DEBUG oslo_concurrency.processutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.281 2 DEBUG nova.compute.provider_tree [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.298 2 DEBUG nova.scheduler.client.report [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.327 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.373 2 INFO nova.scheduler.client.report [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Deleted allocations for instance 87c6b2c0-d27f-46b9-811a-810a5fa87a19
Oct 02 12:20:17 compute-0 nova_compute[256940]: 2025-10-02 12:20:17.451 2 DEBUG oslo_concurrency.lockutils [None req-dbe22ca7-ee4c-4392-bc36-d1f1c0390aa6 7384c819e86a426d84c53eace7ec8594 a376d124dcaf49f2a6c0208610070573 - - default default] Lock "87c6b2c0-d27f-46b9-811a-810a5fa87a19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.491117411 +0000 UTC m=+0.038645478 container create 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:20:17 compute-0 systemd[1]: Started libpod-conmon-99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c.scope.
Oct 02 12:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.474542909 +0000 UTC m=+0.022070996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.576922807 +0000 UTC m=+0.124450884 container init 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.587677848 +0000 UTC m=+0.135205915 container start 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.593487349 +0000 UTC m=+0.141015426 container attach 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:20:17 compute-0 systemd[1]: libpod-99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c.scope: Deactivated successfully.
Oct 02 12:20:17 compute-0 modest_bohr[285805]: 167 167
Oct 02 12:20:17 compute-0 conmon[285805]: conmon 99065cc802be8221bc2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c.scope/container/memory.events
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.596765304 +0000 UTC m=+0.144293371 container died 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f82c574b1ec121c1b9d3d2c596f1ff593335c717eaef6fe436260616b64f91c-merged.mount: Deactivated successfully.
Oct 02 12:20:17 compute-0 podman[285789]: 2025-10-02 12:20:17.648594985 +0000 UTC m=+0.196123062 container remove 99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:17 compute-0 systemd[1]: libpod-conmon-99065cc802be8221bc2e42c2cc923b679ba2d87ca4a017d288a9f685221abc8c.scope: Deactivated successfully.
Oct 02 12:20:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:17 compute-0 podman[285832]: 2025-10-02 12:20:17.825129176 +0000 UTC m=+0.043220177 container create 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 02 12:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 02 12:20:17 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 02 12:20:17 compute-0 systemd[1]: Started libpod-conmon-7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37.scope.
Oct 02 12:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:17 compute-0 podman[285832]: 2025-10-02 12:20:17.806917611 +0000 UTC m=+0.025008632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:17 compute-0 podman[285832]: 2025-10-02 12:20:17.924866245 +0000 UTC m=+0.142957246 container init 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:20:17 compute-0 podman[285832]: 2025-10-02 12:20:17.933308365 +0000 UTC m=+0.151399366 container start 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:20:17 compute-0 podman[285832]: 2025-10-02 12:20:17.951404237 +0000 UTC m=+0.169495238 container attach 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:20:18 compute-0 ceph-mon[73668]: pgmap v1238: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.7 MiB/s wr, 395 op/s
Oct 02 12:20:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/405233859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3613112588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3353938704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-0 ceph-mon[73668]: osdmap e172: 3 total, 3 up, 3 in
Oct 02 12:20:18 compute-0 nova_compute[256940]: 2025-10-02 12:20:18.623 2 DEBUG nova.compute.manager [req-8b32b1d9-5be7-442b-9b4c-550631eef59e req-6af56e7b-9523-4c77-b155-62a9d493c179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Received event network-vif-deleted-eacf5e24-e256-44f4-babc-b17f1293993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:18 compute-0 sweet_jemison[285849]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:20:18 compute-0 sweet_jemison[285849]: --> relative data size: 1.0
Oct 02 12:20:18 compute-0 sweet_jemison[285849]: --> All data devices are unavailable
Oct 02 12:20:18 compute-0 systemd[1]: libpod-7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37.scope: Deactivated successfully.
Oct 02 12:20:18 compute-0 podman[285832]: 2025-10-02 12:20:18.817391006 +0000 UTC m=+1.035482007 container died 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d02315205a4d3832df55655754e8fc9aefef2eeafd2270296fdb19b3a2b047-merged.mount: Deactivated successfully.
Oct 02 12:20:18 compute-0 podman[285832]: 2025-10-02 12:20:18.891540459 +0000 UTC m=+1.109631460 container remove 7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:20:18 compute-0 systemd[1]: libpod-conmon-7655d3df04696481c4fb1caac72afd36a6ef7e1c7de0c419bc8a7b9fe28baf37.scope: Deactivated successfully.
Oct 02 12:20:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:20:18 compute-0 sudo[285722]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:18.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:19 compute-0 sudo[285879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:19 compute-0 sudo[285879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 sudo[285879]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:19 compute-0 sudo[285904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:19 compute-0 sudo[285904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 sudo[285904]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:19 compute-0 sudo[285929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:19 compute-0 sudo[285929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 nova_compute[256940]: 2025-10-02 12:20:19.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:19 compute-0 sudo[285929]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:19 compute-0 sudo[285954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:20:19 compute-0 sudo[285954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:19 compute-0 nova_compute[256940]: 2025-10-02 12:20:19.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.721979022 +0000 UTC m=+0.068295081 container create da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:19 compute-0 systemd[1]: Started libpod-conmon-da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a.scope.
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.67856456 +0000 UTC m=+0.024880709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.898801509 +0000 UTC m=+0.245117568 container init da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.90691184 +0000 UTC m=+0.253227929 container start da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:19 compute-0 nice_perlman[286036]: 167 167
Oct 02 12:20:19 compute-0 systemd[1]: libpod-da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a.scope: Deactivated successfully.
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.941629685 +0000 UTC m=+0.287945764 container attach da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:20:19 compute-0 podman[286020]: 2025-10-02 12:20:19.942306613 +0000 UTC m=+0.288622682 container died da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 12:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a39bef9afccf12c40c37c755642f5e1b9aab002eda0321355f3b0a55f203da10-merged.mount: Deactivated successfully.
Oct 02 12:20:20 compute-0 podman[286020]: 2025-10-02 12:20:20.0657472 +0000 UTC m=+0.412063289 container remove da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:20:20 compute-0 systemd[1]: libpod-conmon-da47ae04dda43df921c4bef2415bbe97d379e6df2fe24bcd70cfc15e5412f92a.scope: Deactivated successfully.
Oct 02 12:20:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:20 compute-0 ceph-mon[73668]: pgmap v1240: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:20:20 compute-0 podman[286061]: 2025-10-02 12:20:20.274376597 +0000 UTC m=+0.060488637 container create 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:20:20 compute-0 podman[286061]: 2025-10-02 12:20:20.244862838 +0000 UTC m=+0.030974898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:20 compute-0 systemd[1]: Started libpod-conmon-30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21.scope.
Oct 02 12:20:20 compute-0 nova_compute[256940]: 2025-10-02 12:20:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e728d6cc14ab8cbf765b0697f2424bc3cd69e1898a97788a990c1629d6a42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e728d6cc14ab8cbf765b0697f2424bc3cd69e1898a97788a990c1629d6a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e728d6cc14ab8cbf765b0697f2424bc3cd69e1898a97788a990c1629d6a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e728d6cc14ab8cbf765b0697f2424bc3cd69e1898a97788a990c1629d6a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:20 compute-0 podman[286061]: 2025-10-02 12:20:20.393275576 +0000 UTC m=+0.179387646 container init 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:20:20 compute-0 podman[286061]: 2025-10-02 12:20:20.404248712 +0000 UTC m=+0.190360752 container start 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:20:20 compute-0 podman[286061]: 2025-10-02 12:20:20.410538816 +0000 UTC m=+0.196650876 container attach 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:20:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 227 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 445 op/s
Oct 02 12:20:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:20.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:21 compute-0 pensive_poincare[286077]: {
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:     "1": [
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:         {
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "devices": [
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "/dev/loop3"
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             ],
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "lv_name": "ceph_lv0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "lv_size": "7511998464",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "name": "ceph_lv0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "tags": {
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.cluster_name": "ceph",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.crush_device_class": "",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.encrypted": "0",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.osd_id": "1",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.type": "block",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:                 "ceph.vdo": "0"
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             },
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "type": "block",
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:             "vg_name": "ceph_vg0"
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:         }
Oct 02 12:20:21 compute-0 pensive_poincare[286077]:     ]
Oct 02 12:20:21 compute-0 pensive_poincare[286077]: }
Oct 02 12:20:21 compute-0 systemd[1]: libpod-30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21.scope: Deactivated successfully.
Oct 02 12:20:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 02 12:20:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 02 12:20:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 02 12:20:21 compute-0 podman[286087]: 2025-10-02 12:20:21.245927798 +0000 UTC m=+0.026840271 container died 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8e728d6cc14ab8cbf765b0697f2424bc3cd69e1898a97788a990c1629d6a42-merged.mount: Deactivated successfully.
Oct 02 12:20:21 compute-0 podman[286087]: 2025-10-02 12:20:21.302316658 +0000 UTC m=+0.083229111 container remove 30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:20:21 compute-0 systemd[1]: libpod-conmon-30677ece51fac66f08e27c3bc77f9372537fb2fde55d8aa68f3f8c223a21bc21.scope: Deactivated successfully.
Oct 02 12:20:21 compute-0 sudo[285954]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:21 compute-0 sudo[286102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:21 compute-0 sudo[286102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:21 compute-0 sudo[286102]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:21 compute-0 sudo[286127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:21 compute-0 sudo[286127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:21 compute-0 sudo[286127]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:21 compute-0 sudo[286152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:21 compute-0 sudo[286152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:21 compute-0 sudo[286152]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:21 compute-0 sudo[286177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:20:21 compute-0 sudo[286177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.047122769 +0000 UTC m=+0.056576886 container create 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:20:22 compute-0 systemd[1]: Started libpod-conmon-5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd.scope.
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.026633875 +0000 UTC m=+0.036088022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.16649522 +0000 UTC m=+0.175949407 container init 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.1737709 +0000 UTC m=+0.183225037 container start 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:20:22 compute-0 nervous_chatterjee[286262]: 167 167
Oct 02 12:20:22 compute-0 systemd[1]: libpod-5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd.scope: Deactivated successfully.
Oct 02 12:20:22 compute-0 conmon[286262]: conmon 5f94c873036fc0e054aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd.scope/container/memory.events
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.181352417 +0000 UTC m=+0.190806564 container attach 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.182376324 +0000 UTC m=+0.191830461 container died 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a97924963c14f62cc4ae75aee7d8e2cb146fa08b7ea7fdb34c7621b059590c-merged.mount: Deactivated successfully.
Oct 02 12:20:22 compute-0 ceph-mon[73668]: pgmap v1241: 305 pgs: 305 active+clean; 227 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 445 op/s
Oct 02 12:20:22 compute-0 ceph-mon[73668]: osdmap e173: 3 total, 3 up, 3 in
Oct 02 12:20:22 compute-0 podman[286246]: 2025-10-02 12:20:22.257454411 +0000 UTC m=+0.266908558 container remove 5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:20:22 compute-0 systemd[1]: libpod-conmon-5f94c873036fc0e054aaef6d7306cf78ade015fbdf326070dd15d0a3323968dd.scope: Deactivated successfully.
Oct 02 12:20:22 compute-0 podman[286286]: 2025-10-02 12:20:22.489661792 +0000 UTC m=+0.061012541 container create 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:20:22 compute-0 systemd[1]: Started libpod-conmon-5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f.scope.
Oct 02 12:20:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7710e6d64cf6e6c10765fee15ca64d3f5b91711baa58bd0da5c8d80c48d66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7710e6d64cf6e6c10765fee15ca64d3f5b91711baa58bd0da5c8d80c48d66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7710e6d64cf6e6c10765fee15ca64d3f5b91711baa58bd0da5c8d80c48d66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7710e6d64cf6e6c10765fee15ca64d3f5b91711baa58bd0da5c8d80c48d66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:22 compute-0 podman[286286]: 2025-10-02 12:20:22.470551644 +0000 UTC m=+0.041902423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:20:22 compute-0 podman[286286]: 2025-10-02 12:20:22.574452152 +0000 UTC m=+0.145802931 container init 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:20:22 compute-0 podman[286286]: 2025-10-02 12:20:22.584235977 +0000 UTC m=+0.155586736 container start 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:20:22 compute-0 podman[286286]: 2025-10-02 12:20:22.592675307 +0000 UTC m=+0.164026066 container attach 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:20:22 compute-0 podman[286301]: 2025-10-02 12:20:22.615258725 +0000 UTC m=+0.077240203 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:20:22 compute-0 podman[286305]: 2025-10-02 12:20:22.69485776 +0000 UTC m=+0.148798879 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:20:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 227 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 6.0 MiB/s wr, 428 op/s
Oct 02 12:20:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:23 compute-0 gifted_golick[286306]: {
Oct 02 12:20:23 compute-0 gifted_golick[286306]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:20:23 compute-0 gifted_golick[286306]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:20:23 compute-0 gifted_golick[286306]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:20:23 compute-0 gifted_golick[286306]:         "osd_id": 1,
Oct 02 12:20:23 compute-0 gifted_golick[286306]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:20:23 compute-0 gifted_golick[286306]:         "type": "bluestore"
Oct 02 12:20:23 compute-0 gifted_golick[286306]:     }
Oct 02 12:20:23 compute-0 gifted_golick[286306]: }
Oct 02 12:20:23 compute-0 systemd[1]: libpod-5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f.scope: Deactivated successfully.
Oct 02 12:20:23 compute-0 podman[286286]: 2025-10-02 12:20:23.472720061 +0000 UTC m=+1.044070840 container died 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab7710e6d64cf6e6c10765fee15ca64d3f5b91711baa58bd0da5c8d80c48d66-merged.mount: Deactivated successfully.
Oct 02 12:20:23 compute-0 podman[286286]: 2025-10-02 12:20:23.53443577 +0000 UTC m=+1.105786529 container remove 5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:20:23 compute-0 systemd[1]: libpod-conmon-5783acb1b453f97a1d428b12d05857163c020b9197c318762a3fa083e0b8c03f.scope: Deactivated successfully.
Oct 02 12:20:23 compute-0 sudo[286177]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:20:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:20:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a90434b6-3113-4c81-ab09-44f83b3ac3c0 does not exist
Oct 02 12:20:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 05d32cfb-75bb-44d7-a19a-6c7f5d4362b2 does not exist
Oct 02 12:20:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 99f1db47-0595-47d3-b782-f1894747b49b does not exist
Oct 02 12:20:23 compute-0 sudo[286384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:23 compute-0 sudo[286384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:23 compute-0 sudo[286384]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:23 compute-0 sudo[286409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:20:23 compute-0 sudo[286409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:23 compute-0 sudo[286409]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:23.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:24 compute-0 nova_compute[256940]: 2025-10-02 12:20:24.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:24 compute-0 ceph-mon[73668]: pgmap v1243: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 227 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 6.0 MiB/s wr, 428 op/s
Oct 02 12:20:24 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:24 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:24 compute-0 nova_compute[256940]: 2025-10-02 12:20:24.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 212 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.6 MiB/s wr, 370 op/s
Oct 02 12:20:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:24.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.154 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.154 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.169 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:20:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.247 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.248 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.257 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.258 2 INFO nova.compute.claims [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.344 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/13144440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:25 compute-0 ceph-mon[73668]: osdmap e174: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:25.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/813570305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.821 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.829 2 DEBUG nova.compute.provider_tree [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.846 2 DEBUG nova.scheduler.client.report [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.872 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.873 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.935 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.936 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:20:25 compute-0 nova_compute[256940]: 2025-10-02 12:20:25.982 2 INFO nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.005 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.121 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.122 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.123 2 INFO nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Creating image(s)
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.163 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.200 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.234 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.239 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.272 2 DEBUG nova.policy [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.314 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.315 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.315 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.316 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.345 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.351 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:26.457 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:26.458 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:26.458 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:26 compute-0 ceph-mon[73668]: pgmap v1244: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 212 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.6 MiB/s wr, 370 op/s
Oct 02 12:20:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/813570305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.868 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Successfully created port: 7905f48e-7719-4693-9ec3-ca47c1ff67e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.889 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 413 op/s
Oct 02 12:20:26 compute-0 nova_compute[256940]: 2025-10-02 12:20:26.982 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.105 2 DEBUG nova.objects.instance [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.120 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.121 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Ensure instance console log exists: /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.121 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.122 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.122 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.627 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Successfully updated port: 7905f48e-7719-4693-9ec3-ca47c1ff67e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.640 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.640 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:27 compute-0 nova_compute[256940]: 2025-10-02 12:20:27.641 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:27.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:20:28
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'backups']
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:20:28 compute-0 ceph-mon[73668]: pgmap v1246: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 413 op/s
Oct 02 12:20:28 compute-0 nova_compute[256940]: 2025-10-02 12:20:28.843 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:28 compute-0 nova_compute[256940]: 2025-10-02 12:20:28.913 2 DEBUG nova.compute.manager [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-changed-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:28 compute-0 nova_compute[256940]: 2025-10-02 12:20:28.914 2 DEBUG nova.compute.manager [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Refreshing instance network info cache due to event network-changed-7905f48e-7719-4693-9ec3-ca47c1ff67e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:20:28 compute-0 nova_compute[256940]: 2025-10-02 12:20:28.914 2 DEBUG oslo_concurrency.lockutils [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 201 op/s
Oct 02 12:20:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.192 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407614.1906388, 87c6b2c0-d27f-46b9-811a-810a5fa87a19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.193 2 INFO nova.compute.manager [-] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] VM Stopped (Lifecycle Event)
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.227 2 DEBUG nova.compute.manager [None req-87f83978-e174-447c-a1e0-67969a619543 - - - - - -] [instance: 87c6b2c0-d27f-46b9-811a-810a5fa87a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.711 2 DEBUG nova.network.neutron [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updating instance_info_cache with network_info: [{"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.728 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.729 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance network_info: |[{"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.729 2 DEBUG oslo_concurrency.lockutils [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.729 2 DEBUG nova.network.neutron [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Refreshing network info cache for port 7905f48e-7719-4693-9ec3-ca47c1ff67e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.732 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Start _get_guest_xml network_info=[{"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.737 2 WARNING nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:29.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.771 2 DEBUG nova.virt.libvirt.host [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.772 2 DEBUG nova.virt.libvirt.host [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.778 2 DEBUG nova.virt.libvirt.host [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.778 2 DEBUG nova.virt.libvirt.host [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.780 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.780 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.780 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.781 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.781 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.781 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.781 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.781 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.782 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.782 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.782 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.782 2 DEBUG nova.virt.hardware [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:29 compute-0 nova_compute[256940]: 2025-10-02 12:20:29.785 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:30 compute-0 sudo[286645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:30 compute-0 sudo[286645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:30 compute-0 sudo[286645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:30 compute-0 sudo[286670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:30 compute-0 sudo[286670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:30 compute-0 sudo[286670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667482974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.262 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.296 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.302 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:30 compute-0 ceph-mon[73668]: pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 201 op/s
Oct 02 12:20:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 02 12:20:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 02 12:20:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430916388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.753 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.756 2 DEBUG nova.virt.libvirt.vif [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-224839176',display_name='tempest-ImagesTestJSON-server-224839176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-224839176',id=44,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-7he80jwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:26Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=51faf5e3-3295-46e2-8cf7-ac53503b72ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.757 2 DEBUG nova.network.os_vif_util [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.758 2 DEBUG nova.network.os_vif_util [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.760 2 DEBUG nova.objects.instance [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.783 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <uuid>51faf5e3-3295-46e2-8cf7-ac53503b72ca</uuid>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <name>instance-0000002c</name>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:name>tempest-ImagesTestJSON-server-224839176</nova:name>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:20:29</nova:creationTime>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <nova:port uuid="7905f48e-7719-4693-9ec3-ca47c1ff67e8">
Oct 02 12:20:30 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <system>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="serial">51faf5e3-3295-46e2-8cf7-ac53503b72ca</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="uuid">51faf5e3-3295-46e2-8cf7-ac53503b72ca</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </system>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <os>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </os>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <features>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </features>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk">
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </source>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config">
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </source>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:20:30 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:5d:7c:bc"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <target dev="tap7905f48e-77"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/console.log" append="off"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <video>
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </video>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:20:30 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:20:30 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:20:30 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:20:30 compute-0 nova_compute[256940]: </domain>
Oct 02 12:20:30 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.785 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Preparing to wait for external event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.786 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.786 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.787 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.788 2 DEBUG nova.virt.libvirt.vif [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-224839176',display_name='tempest-ImagesTestJSON-server-224839176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-224839176',id=44,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-7he80jwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:20:26Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=51faf5e3-3295-46e2-8cf7-ac53503b72ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.789 2 DEBUG nova.network.os_vif_util [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.790 2 DEBUG nova.network.os_vif_util [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.791 2 DEBUG os_vif [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7905f48e-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.800 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7905f48e-77, col_values=(('external_ids', {'iface-id': '7905f48e-7719-4693-9ec3-ca47c1ff67e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:7c:bc', 'vm-uuid': '51faf5e3-3295-46e2-8cf7-ac53503b72ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:30 compute-0 NetworkManager[44981]: <info>  [1759407630.8034] manager: (tap7905f48e-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:30 compute-0 nova_compute[256940]: 2025-10-02 12:20:30.816 2 INFO os_vif [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77')
Oct 02 12:20:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 191 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 162 op/s
Oct 02 12:20:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:31 compute-0 nova_compute[256940]: 2025-10-02 12:20:31.591 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:31 compute-0 nova_compute[256940]: 2025-10-02 12:20:31.592 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:31 compute-0 nova_compute[256940]: 2025-10-02 12:20:31.592 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:5d:7c:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:31 compute-0 nova_compute[256940]: 2025-10-02 12:20:31.593 2 INFO nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Using config drive
Oct 02 12:20:31 compute-0 nova_compute[256940]: 2025-10-02 12:20:31.717 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/667482974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:31 compute-0 ceph-mon[73668]: osdmap e175: 3 total, 3 up, 3 in
Oct 02 12:20:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2430916388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.083 2 DEBUG nova.network.neutron [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updated VIF entry in instance network info cache for port 7905f48e-7719-4693-9ec3-ca47c1ff67e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.084 2 DEBUG nova.network.neutron [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updating instance_info_cache with network_info: [{"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.105 2 DEBUG oslo_concurrency.lockutils [req-8a7b5513-1a98-411e-8405-f12c5609373a req-4b1c8700-4918-4e73-b137-ae52e7dd1bca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.139 2 INFO nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Creating config drive at /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.144 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpau804nlc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.299 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpau804nlc" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.345 2 DEBUG nova.storage.rbd_utils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.352 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.661 2 DEBUG oslo_concurrency.processutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config 51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.662 2 INFO nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Deleting local config drive /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca/disk.config because it was imported into RBD.
Oct 02 12:20:32 compute-0 kernel: tap7905f48e-77: entered promiscuous mode
Oct 02 12:20:32 compute-0 NetworkManager[44981]: <info>  [1759407632.7203] manager: (tap7905f48e-77): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 ovn_controller[148123]: 2025-10-02T12:20:32Z|00155|binding|INFO|Claiming lport 7905f48e-7719-4693-9ec3-ca47c1ff67e8 for this chassis.
Oct 02 12:20:32 compute-0 ovn_controller[148123]: 2025-10-02T12:20:32Z|00156|binding|INFO|7905f48e-7719-4693-9ec3-ca47c1ff67e8: Claiming fa:16:3e:5d:7c:bc 10.100.0.14
Oct 02 12:20:32 compute-0 systemd-udevd[286812]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:32 compute-0 systemd-machined[210927]: New machine qemu-21-instance-0000002c.
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.809 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:7c:bc 10.100.0.14'], port_security=['fa:16:3e:5d:7c:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '51faf5e3-3295-46e2-8cf7-ac53503b72ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7905f48e-7719-4693-9ec3-ca47c1ff67e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.810 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7905f48e-7719-4693-9ec3-ca47c1ff67e8 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.811 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:32 compute-0 NetworkManager[44981]: <info>  [1759407632.8219] device (tap7905f48e-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:32 compute-0 NetworkManager[44981]: <info>  [1759407632.8234] device (tap7905f48e-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:32 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000002c.
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.829 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd0f25f-c292-4d94-8a91-adaa5f77fd46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.830 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.832 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.832 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fe980d35-af37-4262-9336-ee5f22c72ffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.833 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93c3bea6-3c90-4355-88d4-6e28c16b9bac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.845 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9aa9d1-8c5e-4f40-b2b9-90a3c970413e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.863 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9085371d-adc5-4c6f-9cec-443982a25e1b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_controller[148123]: 2025-10-02T12:20:32Z|00157|binding|INFO|Setting lport 7905f48e-7719-4693-9ec3-ca47c1ff67e8 ovn-installed in OVS
Oct 02 12:20:32 compute-0 ovn_controller[148123]: 2025-10-02T12:20:32Z|00158|binding|INFO|Setting lport 7905f48e-7719-4693-9ec3-ca47c1ff67e8 up in Southbound
Oct 02 12:20:32 compute-0 nova_compute[256940]: 2025-10-02 12:20:32.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.900 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c0a25b-2500-41dc-b328-f137f3f124e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 NetworkManager[44981]: <info>  [1759407632.9082] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.907 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6381666d-bfc0-4132-951b-14c22933afe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 205 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 6.2 MiB/s wr, 147 op/s
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.943 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c23d990f-7f06-4770-a5fa-5917f4e812f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.946 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3c26a97a-b1a8-4cce-bd4b-cbbe43044171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 NetworkManager[44981]: <info>  [1759407632.9714] device (tapd68ff9e0-a0): carrier: link connected
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.978 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[323eec25-9639-4971-8e41-22f2de72dc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:32.998 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[671f591c-183c-4b4b-92b0-d31fc0668326]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552845, 'reachable_time': 30003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286846, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:33.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:33 compute-0 ceph-mon[73668]: pgmap v1249: 305 pgs: 305 active+clean; 191 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 162 op/s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.020 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5eb62b-15bd-475d-bc77-1b200be466f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552845, 'tstamp': 552845}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286847, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.050 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc366296-56db-427a-ba39-9c25c4afe14a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552845, 'reachable_time': 30003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286848, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.096 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[41573359-716e-4995-93b9-0683705699de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.184 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee64a1fe-8f4d-47d7-9734-f0581038371b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.187 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.187 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.187 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 NetworkManager[44981]: <info>  [1759407633.1906] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 02 12:20:33 compute-0 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.194 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:33 compute-0 ovn_controller[148123]: 2025-10-02T12:20:33Z|00159|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.230 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.231 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c360ef62-d8a8-4e11-8f8f-af3a95e58d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.231 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:20:33.232 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.756 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407633.7554777, 51faf5e3-3295-46e2-8cf7-ac53503b72ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.757 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] VM Started (Lifecycle Event)
Oct 02 12:20:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:33 compute-0 podman[286922]: 2025-10-02 12:20:33.676890459 +0000 UTC m=+0.042302053 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.816 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.822 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407633.7569427, 51faf5e3-3295-46e2-8cf7-ac53503b72ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.823 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] VM Paused (Lifecycle Event)
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.884 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.889 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:33 compute-0 nova_compute[256940]: 2025-10-02 12:20:33.964 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:34 compute-0 podman[286922]: 2025-10-02 12:20:34.046296216 +0000 UTC m=+0.411707810 container create ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:20:34 compute-0 systemd[1]: Started libpod-conmon-ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603.scope.
Oct 02 12:20:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936d6bed81180df1828e204567a316fe736e85a7c899066a52bb226200e27b33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:34 compute-0 ceph-mon[73668]: pgmap v1250: 305 pgs: 305 active+clean; 205 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 6.2 MiB/s wr, 147 op/s
Oct 02 12:20:34 compute-0 podman[286922]: 2025-10-02 12:20:34.488012737 +0000 UTC m=+0.853424401 container init ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:20:34 compute-0 podman[286922]: 2025-10-02 12:20:34.49924358 +0000 UTC m=+0.864655174 container start ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:20:34 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [NOTICE]   (286942) : New worker (286944) forked
Oct 02 12:20:34 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [NOTICE]   (286942) : Loading success.
Oct 02 12:20:34 compute-0 nova_compute[256940]: 2025-10-02 12:20:34.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 236 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 6.5 MiB/s wr, 198 op/s
Oct 02 12:20:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:35.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:35 compute-0 nova_compute[256940]: 2025-10-02 12:20:35.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.232 2 DEBUG nova.compute.manager [req-9db4326e-22ae-4b8c-baa7-0193a1a0ae16 req-eb454eaa-9d16-4a0c-8a64-0e07a67b4758 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.232 2 DEBUG oslo_concurrency.lockutils [req-9db4326e-22ae-4b8c-baa7-0193a1a0ae16 req-eb454eaa-9d16-4a0c-8a64-0e07a67b4758 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.233 2 DEBUG oslo_concurrency.lockutils [req-9db4326e-22ae-4b8c-baa7-0193a1a0ae16 req-eb454eaa-9d16-4a0c-8a64-0e07a67b4758 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.233 2 DEBUG oslo_concurrency.lockutils [req-9db4326e-22ae-4b8c-baa7-0193a1a0ae16 req-eb454eaa-9d16-4a0c-8a64-0e07a67b4758 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.233 2 DEBUG nova.compute.manager [req-9db4326e-22ae-4b8c-baa7-0193a1a0ae16 req-eb454eaa-9d16-4a0c-8a64-0e07a67b4758 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Processing event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.234 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.238 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407636.2385552, 51faf5e3-3295-46e2-8cf7-ac53503b72ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.239 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] VM Resumed (Lifecycle Event)
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.241 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.245 2 INFO nova.virt.libvirt.driver [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance spawned successfully.
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.245 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.273 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.280 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.287 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.287 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.288 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.288 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.288 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.289 2 DEBUG nova.virt.libvirt.driver [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.343 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.538 2 INFO nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Took 10.42 seconds to spawn the instance on the hypervisor.
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.539 2 DEBUG nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:36 compute-0 ceph-mon[73668]: pgmap v1251: 305 pgs: 305 active+clean; 236 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 6.5 MiB/s wr, 198 op/s
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.724 2 INFO nova.compute.manager [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Took 11.51 seconds to build instance.
Oct 02 12:20:36 compute-0 nova_compute[256940]: 2025-10-02 12:20:36.861 2 DEBUG oslo_concurrency.lockutils [None req-3fdb0674-8ac3-4de5-a40d-4ea83fc554be afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 7.3 MiB/s wr, 194 op/s
Oct 02 12:20:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:37.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 02 12:20:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 02 12:20:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 02 12:20:38 compute-0 ceph-mon[73668]: pgmap v1252: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 7.3 MiB/s wr, 194 op/s
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.373 2 DEBUG nova.compute.manager [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.374 2 DEBUG oslo_concurrency.lockutils [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.374 2 DEBUG oslo_concurrency.lockutils [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.374 2 DEBUG oslo_concurrency.lockutils [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.374 2 DEBUG nova.compute.manager [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] No waiting events found dispatching network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.374 2 WARNING nova.compute.manager [req-ea524f96-2138-4aca-86b7-ced962400b47 req-09e8dcd9-ce7d-4fd9-ba2c-908b5bed1347 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received unexpected event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 for instance with vm_state active and task_state None.
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.545 2 DEBUG oslo_concurrency.lockutils [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.546 2 DEBUG oslo_concurrency.lockutils [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.546 2 DEBUG nova.compute.manager [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.550 2 DEBUG nova.compute.manager [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.551 2 DEBUG nova.objects.instance [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'flavor' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:38 compute-0 nova_compute[256940]: 2025-10-02 12:20:38.583 2 DEBUG nova.virt.libvirt.driver [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:20:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 4.1 MiB/s wr, 161 op/s
Oct 02 12:20:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:20:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:39.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:20:39 compute-0 ceph-mon[73668]: osdmap e176: 3 total, 3 up, 3 in
Oct 02 12:20:39 compute-0 podman[286956]: 2025-10-02 12:20:39.416258406 +0000 UTC m=+0.081792473 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:20:39 compute-0 podman[286955]: 2025-10-02 12:20:39.442977702 +0000 UTC m=+0.108336304 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:20:39 compute-0 nova_compute[256940]: 2025-10-02 12:20:39.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005324473932160804 of space, bias 1.0, pg target 1.5973421796482412 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:20:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.435 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:20:40 compute-0 ceph-mon[73668]: pgmap v1254: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 4.1 MiB/s wr, 161 op/s
Oct 02 12:20:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3271158754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1887875243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-0 nova_compute[256940]: 2025-10-02 12:20:40.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:20:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:41.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.435 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.435 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.435 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.450 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.451 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.451 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:20:41 compute-0 nova_compute[256940]: 2025-10-02 12:20:41.452 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:41.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/672575832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3085986660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:42 compute-0 nova_compute[256940]: 2025-10-02 12:20:42.808 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updating instance_info_cache with network_info: [{"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:42 compute-0 nova_compute[256940]: 2025-10-02 12:20:42.853 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-51faf5e3-3295-46e2-8cf7-ac53503b72ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:42 compute-0 nova_compute[256940]: 2025-10-02 12:20:42.853 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:20:42 compute-0 nova_compute[256940]: 2025-10-02 12:20:42.854 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:42 compute-0 nova_compute[256940]: 2025-10-02 12:20:42.854 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:20:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 252 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Oct 02 12:20:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 02 12:20:42 compute-0 ceph-mon[73668]: pgmap v1255: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:20:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 02 12:20:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 02 12:20:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:43.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.357 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.358 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.359 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.360 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.360 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206873957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.819 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.995 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:43 compute-0 nova_compute[256940]: 2025-10-02 12:20:43.996 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:44 compute-0 ceph-mon[73668]: pgmap v1256: 305 pgs: 305 active+clean; 252 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Oct 02 12:20:44 compute-0 ceph-mon[73668]: osdmap e177: 3 total, 3 up, 3 in
Oct 02 12:20:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/206873957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 02 12:20:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 02 12:20:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.194 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.195 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4533MB free_disk=20.87641143798828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.196 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.196 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.330 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 51faf5e3-3295-46e2-8cf7-ac53503b72ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.331 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.331 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.348 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.423 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.424 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.586 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.627 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.670 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:44 compute-0 nova_compute[256940]: 2025-10-02 12:20:44.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 283 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Oct 02 12:20:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:45.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:45 compute-0 ceph-mon[73668]: osdmap e178: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549998780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.151 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.157 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.171 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.207 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.208 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:45 compute-0 nova_compute[256940]: 2025-10-02 12:20:45.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:46 compute-0 ceph-mon[73668]: pgmap v1259: 305 pgs: 305 active+clean; 283 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Oct 02 12:20:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3549998780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.155016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646155070, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2436, "num_deletes": 504, "total_data_size": 3699667, "memory_usage": 3767384, "flush_reason": "Manual Compaction"}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646197563, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3266205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26702, "largest_seqno": 29136, "table_properties": {"data_size": 3256394, "index_size": 5601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25651, "raw_average_key_size": 20, "raw_value_size": 3233993, "raw_average_value_size": 2583, "num_data_blocks": 243, "num_entries": 1252, "num_filter_entries": 1252, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407459, "oldest_key_time": 1759407459, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 42808 microseconds, and 8882 cpu microseconds.
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:20:46 compute-0 nova_compute[256940]: 2025-10-02 12:20:46.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.197817) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3266205 bytes OK
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.197887) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.214981) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.215059) EVENT_LOG_v1 {"time_micros": 1759407646215043, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.215126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3688687, prev total WAL file size 3688687, number of live WAL files 2.
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.216887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3189KB)], [62(10MB)]
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646216992, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 14072226, "oldest_snapshot_seqno": -1}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5406 keys, 8629807 bytes, temperature: kUnknown
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646303296, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8629807, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8593882, "index_size": 21287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 137953, "raw_average_key_size": 25, "raw_value_size": 8496804, "raw_average_value_size": 1571, "num_data_blocks": 857, "num_entries": 5406, "num_filter_entries": 5406, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.303575) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8629807 bytes
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.323828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.3 rd, 100.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.0) write-amplify(2.6) OK, records in: 6414, records dropped: 1008 output_compression: NoCompression
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.323871) EVENT_LOG_v1 {"time_micros": 1759407646323850, "job": 34, "event": "compaction_finished", "compaction_time_micros": 86186, "compaction_time_cpu_micros": 27024, "output_level": 6, "num_output_files": 1, "total_output_size": 8629807, "num_input_records": 6414, "num_output_records": 5406, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646324671, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646326860, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.216727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.326930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.326937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.326939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.326941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:20:46.326943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Oct 02 12:20:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:47.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:47.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:48 compute-0 nova_compute[256940]: 2025-10-02 12:20:48.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:48 compute-0 nova_compute[256940]: 2025-10-02 12:20:48.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:20:48 compute-0 ceph-mon[73668]: pgmap v1260: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Oct 02 12:20:48 compute-0 nova_compute[256940]: 2025-10-02 12:20:48.635 2 DEBUG nova.virt.libvirt.driver [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 114 op/s
Oct 02 12:20:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:49 compute-0 nova_compute[256940]: 2025-10-02 12:20:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:49 compute-0 ovn_controller[148123]: 2025-10-02T12:20:49Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:7c:bc 10.100.0.14
Oct 02 12:20:49 compute-0 ovn_controller[148123]: 2025-10-02T12:20:49Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:7c:bc 10.100.0.14
Oct 02 12:20:49 compute-0 nova_compute[256940]: 2025-10-02 12:20:49.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 02 12:20:50 compute-0 sudo[287046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:50 compute-0 sudo[287046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:50 compute-0 sudo[287046]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 02 12:20:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 02 12:20:50 compute-0 ceph-mon[73668]: pgmap v1261: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 114 op/s
Oct 02 12:20:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1483081823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4104050532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:50 compute-0 sudo[287071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:50 compute-0 sudo[287071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:50 compute-0 sudo[287071]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:50 compute-0 nova_compute[256940]: 2025-10-02 12:20:50.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 241 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 9.5 MiB/s wr, 241 op/s
Oct 02 12:20:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:51 compute-0 ceph-mon[73668]: osdmap e179: 3 total, 3 up, 3 in
Oct 02 12:20:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3095567087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:52 compute-0 ceph-mon[73668]: pgmap v1263: 305 pgs: 305 active+clean; 241 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 9.5 MiB/s wr, 241 op/s
Oct 02 12:20:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 237 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.9 MiB/s wr, 252 op/s
Oct 02 12:20:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:53 compute-0 podman[287098]: 2025-10-02 12:20:53.394467263 +0000 UTC m=+0.061979927 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:20:53 compute-0 podman[287099]: 2025-10-02 12:20:53.436216381 +0000 UTC m=+0.103006276 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 12:20:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:54 compute-0 ceph-mon[73668]: pgmap v1264: 305 pgs: 305 active+clean; 237 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.9 MiB/s wr, 252 op/s
Oct 02 12:20:54 compute-0 nova_compute[256940]: 2025-10-02 12:20:54.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 244 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 276 op/s
Oct 02 12:20:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:55.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:55 compute-0 nova_compute[256940]: 2025-10-02 12:20:55.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:20:56 compute-0 ceph-mon[73668]: pgmap v1265: 305 pgs: 305 active+clean; 244 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 276 op/s
Oct 02 12:20:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:58 compute-0 ceph-mon[73668]: pgmap v1266: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:20:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2057428937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:20:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:20:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:20:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:20:59 compute-0 nova_compute[256940]: 2025-10-02 12:20:59.706 2 DEBUG nova.virt.libvirt.driver [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:59 compute-0 nova_compute[256940]: 2025-10-02 12:20:59.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:20:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:00 compute-0 ceph-mon[73668]: pgmap v1267: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:21:00 compute-0 nova_compute[256940]: 2025-10-02 12:21:00.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 218 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Oct 02 12:21:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/287197055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:02 compute-0 ceph-mon[73668]: pgmap v1268: 305 pgs: 305 active+clean; 218 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Oct 02 12:21:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1740980608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2306558928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 232 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 187 op/s
Oct 02 12:21:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:03 compute-0 nova_compute[256940]: 2025-10-02 12:21:03.729 2 INFO nova.virt.libvirt.driver [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance shutdown successfully after 25 seconds.
Oct 02 12:21:03 compute-0 kernel: tap7905f48e-77 (unregistering): left promiscuous mode
Oct 02 12:21:03 compute-0 NetworkManager[44981]: <info>  [1759407663.7603] device (tap7905f48e-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:21:03 compute-0 ovn_controller[148123]: 2025-10-02T12:21:03Z|00160|binding|INFO|Releasing lport 7905f48e-7719-4693-9ec3-ca47c1ff67e8 from this chassis (sb_readonly=0)
Oct 02 12:21:03 compute-0 ovn_controller[148123]: 2025-10-02T12:21:03Z|00161|binding|INFO|Setting lport 7905f48e-7719-4693-9ec3-ca47c1ff67e8 down in Southbound
Oct 02 12:21:03 compute-0 ovn_controller[148123]: 2025-10-02T12:21:03Z|00162|binding|INFO|Removing iface tap7905f48e-77 ovn-installed in OVS
Oct 02 12:21:03 compute-0 nova_compute[256940]: 2025-10-02 12:21:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:03 compute-0 nova_compute[256940]: 2025-10-02 12:21:03.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:03.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:03 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 02 12:21:03 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002c.scope: Consumed 14.255s CPU time.
Oct 02 12:21:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:03.826 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:7c:bc 10.100.0.14'], port_security=['fa:16:3e:5d:7c:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '51faf5e3-3295-46e2-8cf7-ac53503b72ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7905f48e-7719-4693-9ec3-ca47c1ff67e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:21:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:03.829 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7905f48e-7719-4693-9ec3-ca47c1ff67e8 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:21:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:03.830 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:21:03 compute-0 systemd-machined[210927]: Machine qemu-21-instance-0000002c terminated.
Oct 02 12:21:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:03.833 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bd64d6-9dac-4d02-b74d-49faee6cf40b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:03.834 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore
Oct 02 12:21:03 compute-0 nova_compute[256940]: 2025-10-02 12:21:03.967 2 INFO nova.virt.libvirt.driver [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance destroyed successfully.
Oct 02 12:21:03 compute-0 nova_compute[256940]: 2025-10-02 12:21:03.968 2 DEBUG nova.objects.instance [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'numa_topology' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:03 compute-0 ceph-mon[73668]: pgmap v1269: 305 pgs: 305 active+clean; 232 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 187 op/s
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.024 2 DEBUG nova.compute.manager [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.104 2 DEBUG oslo_concurrency.lockutils [None req-0db2596c-4e83-4d13-8518-8f3a69d23bf7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:04 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [NOTICE]   (286942) : haproxy version is 2.8.14-c23fe91
Oct 02 12:21:04 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [NOTICE]   (286942) : path to executable is /usr/sbin/haproxy
Oct 02 12:21:04 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [WARNING]  (286942) : Exiting Master process...
Oct 02 12:21:04 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [ALERT]    (286942) : Current worker (286944) exited with code 143 (Terminated)
Oct 02 12:21:04 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[286937]: [WARNING]  (286942) : All workers exited. Exiting... (0)
Oct 02 12:21:04 compute-0 systemd[1]: libpod-ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603.scope: Deactivated successfully.
Oct 02 12:21:04 compute-0 podman[287170]: 2025-10-02 12:21:04.173292806 +0000 UTC m=+0.231430512 container died ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.325 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603-userdata-shm.mount: Deactivated successfully.
Oct 02 12:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-936d6bed81180df1828e204567a316fe736e85a7c899066a52bb226200e27b33-merged.mount: Deactivated successfully.
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.480 2 DEBUG nova.compute.manager [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-vif-unplugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.480 2 DEBUG oslo_concurrency.lockutils [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.481 2 DEBUG oslo_concurrency.lockutils [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.481 2 DEBUG oslo_concurrency.lockutils [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.482 2 DEBUG nova.compute.manager [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] No waiting events found dispatching network-vif-unplugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.482 2 WARNING nova.compute.manager [req-78ae2691-8e3c-4818-9638-33d421ecf022 req-486d1c9d-d021-4f6a-a6f8-5fcbc3110453 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received unexpected event network-vif-unplugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 for instance with vm_state stopped and task_state None.
Oct 02 12:21:04 compute-0 podman[287170]: 2025-10-02 12:21:04.527433346 +0000 UTC m=+0.585571052 container cleanup ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:04 compute-0 systemd[1]: libpod-conmon-ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603.scope: Deactivated successfully.
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:04 compute-0 podman[287209]: 2025-10-02 12:21:04.803720577 +0000 UTC m=+0.247689957 container remove ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.813 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[635dbec0-4758-48e8-9854-06cc2be0b981]: (4, ('Thu Oct  2 12:21:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603)\nef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603\nThu Oct  2 12:21:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (ef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603)\nef15d598a6c92458d470c4860b002d956b1d1ce33983c23c48fba1ff67876603\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.815 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba487ed-d709-47d5-99e7-b7f013fe4da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.816 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:04 compute-0 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct 02 12:21:04 compute-0 nova_compute[256940]: 2025-10-02 12:21:04.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.846 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1c21247f-b232-4f6a-8772-2ba380d3171b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.876 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[daf0984d-0465-4b5b-9608-7265fcbd6dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.877 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8dba46-796f-4d64-a532-a12be9003cd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.895 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e03d9db5-8bdf-43cc-a2c0-c0054dbc4d4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552837, 'reachable_time': 19858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287227, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.899 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.900 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[f80e107b-d011-41d5-98aa-f74f7ee8e9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:21:04 compute-0 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct 02 12:21:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:04.901 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:21:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 279 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.0 MiB/s wr, 200 op/s
Oct 02 12:21:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 02 12:21:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:21:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:21:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 02 12:21:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 02 12:21:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:05 compute-0 nova_compute[256940]: 2025-10-02 12:21:05.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:05.903 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:06 compute-0 ceph-mon[73668]: pgmap v1270: 305 pgs: 305 active+clean; 279 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.0 MiB/s wr, 200 op/s
Oct 02 12:21:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:06 compute-0 ceph-mon[73668]: osdmap e180: 3 total, 3 up, 3 in
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.815 2 DEBUG nova.compute.manager [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.815 2 DEBUG oslo_concurrency.lockutils [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.815 2 DEBUG oslo_concurrency.lockutils [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.816 2 DEBUG oslo_concurrency.lockutils [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.816 2 DEBUG nova.compute.manager [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] No waiting events found dispatching network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:21:06 compute-0 nova_compute[256940]: 2025-10-02 12:21:06.816 2 WARNING nova.compute.manager [req-05533dcf-b0c7-494e-bd2c-9e160be43a6c req-77f685be-41de-47a8-9584-95d6b448aca6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received unexpected event network-vif-plugged-7905f48e-7719-4693-9ec3-ca47c1ff67e8 for instance with vm_state stopped and task_state None.
Oct 02 12:21:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:07.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:08 compute-0 nova_compute[256940]: 2025-10-02 12:21:08.491 2 DEBUG nova.compute.manager [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:08 compute-0 nova_compute[256940]: 2025-10-02 12:21:08.539 2 INFO nova.compute.manager [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] instance snapshotting
Oct 02 12:21:08 compute-0 nova_compute[256940]: 2025-10-02 12:21:08.540 2 WARNING nova.compute.manager [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 02 12:21:08 compute-0 ceph-mon[73668]: pgmap v1272: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:08 compute-0 nova_compute[256940]: 2025-10-02 12:21:08.991 2 INFO nova.virt.libvirt.driver [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Beginning cold snapshot process
Oct 02 12:21:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:21:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:21:09 compute-0 nova_compute[256940]: 2025-10-02 12:21:09.716 2 DEBUG nova.virt.libvirt.imagebackend [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:21:09 compute-0 nova_compute[256940]: 2025-10-02 12:21:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:09.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:09 compute-0 nova_compute[256940]: 2025-10-02 12:21:09.969 2 DEBUG nova.storage.rbd_utils [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(ba877ff257714d86a27acf4c6a12106a) on rbd image(51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:21:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:10 compute-0 podman[287281]: 2025-10-02 12:21:10.423470239 +0000 UTC m=+0.091784113 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:21:10 compute-0 sudo[287311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:10 compute-0 podman[287282]: 2025-10-02 12:21:10.459636692 +0000 UTC m=+0.117793521 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:21:10 compute-0 sudo[287311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:10 compute-0 sudo[287311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:10 compute-0 sudo[287349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:10 compute-0 sudo[287349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:10 compute-0 sudo[287349]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 02 12:21:10 compute-0 nova_compute[256940]: 2025-10-02 12:21:10.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:10 compute-0 ceph-mon[73668]: pgmap v1273: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 241 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.1 MiB/s wr, 264 op/s
Oct 02 12:21:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:11.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 02 12:21:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 02 12:21:11 compute-0 nova_compute[256940]: 2025-10-02 12:21:11.180 2 DEBUG nova.storage.rbd_utils [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk@ba877ff257714d86a27acf4c6a12106a to images/bc229961-c2ce-4076-a41e-7413c2291d5e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:21:11 compute-0 nova_compute[256940]: 2025-10-02 12:21:11.388 2 DEBUG nova.storage.rbd_utils [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/bc229961-c2ce-4076-a41e-7413c2291d5e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:21:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:21:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:21:12 compute-0 ceph-mon[73668]: pgmap v1274: 305 pgs: 305 active+clean; 241 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.1 MiB/s wr, 264 op/s
Oct 02 12:21:12 compute-0 ceph-mon[73668]: osdmap e181: 3 total, 3 up, 3 in
Oct 02 12:21:12 compute-0 nova_compute[256940]: 2025-10-02 12:21:12.313 2 DEBUG nova.storage.rbd_utils [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(ba877ff257714d86a27acf4c6a12106a) on rbd image(51faf5e3-3295-46e2-8cf7-ac53503b72ca_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:21:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 246 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 208 op/s
Oct 02 12:21:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:13.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 02 12:21:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 02 12:21:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 02 12:21:13 compute-0 nova_compute[256940]: 2025-10-02 12:21:13.573 2 DEBUG nova.storage.rbd_utils [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(bc229961-c2ce-4076-a41e-7413c2291d5e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:21:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 02 12:21:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-0 ceph-mon[73668]: pgmap v1276: 305 pgs: 305 active+clean; 246 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 208 op/s
Oct 02 12:21:14 compute-0 ceph-mon[73668]: osdmap e182: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-0 nova_compute[256940]: 2025-10-02 12:21:14.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 275 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.9 MiB/s wr, 212 op/s
Oct 02 12:21:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:15.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 02 12:21:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-0 ceph-mon[73668]: osdmap e183: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-0 ceph-mon[73668]: osdmap e184: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:15.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:15 compute-0 nova_compute[256940]: 2025-10-02 12:21:15.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:16 compute-0 ceph-mon[73668]: pgmap v1279: 305 pgs: 305 active+clean; 275 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.9 MiB/s wr, 212 op/s
Oct 02 12:21:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 8.1 MiB/s wr, 210 op/s
Oct 02 12:21:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:21:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:17.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:21:17 compute-0 ceph-mon[73668]: pgmap v1281: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 8.1 MiB/s wr, 210 op/s
Oct 02 12:21:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 173 op/s
Oct 02 12:21:18 compute-0 nova_compute[256940]: 2025-10-02 12:21:18.963 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407663.96224, 51faf5e3-3295-46e2-8cf7-ac53503b72ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:18 compute-0 nova_compute[256940]: 2025-10-02 12:21:18.964 2 INFO nova.compute.manager [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] VM Stopped (Lifecycle Event)
Oct 02 12:21:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:19 compute-0 nova_compute[256940]: 2025-10-02 12:21:19.166 2 DEBUG nova.compute.manager [None req-f6ce5e93-45e1-48be-80e6-faf04cc24dde - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:19 compute-0 nova_compute[256940]: 2025-10-02 12:21:19.170 2 DEBUG nova.compute.manager [None req-f6ce5e93-45e1-48be-80e6-faf04cc24dde - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: image_uploading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:19 compute-0 nova_compute[256940]: 2025-10-02 12:21:19.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:19 compute-0 nova_compute[256940]: 2025-10-02 12:21:19.758 2 INFO nova.compute.manager [None req-f6ce5e93-45e1-48be-80e6-faf04cc24dde - - - - - -] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] During sync_power_state the instance has a pending task (image_uploading). Skip.
Oct 02 12:21:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:19.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:20 compute-0 ceph-mon[73668]: pgmap v1282: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 173 op/s
Oct 02 12:21:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 02 12:21:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 02 12:21:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 02 12:21:20 compute-0 nova_compute[256940]: 2025-10-02 12:21:20.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.0 MiB/s wr, 149 op/s
Oct 02 12:21:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:21.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:22 compute-0 ceph-mon[73668]: osdmap e185: 3 total, 3 up, 3 in
Oct 02 12:21:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 132 op/s
Oct 02 12:21:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:23.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:23 compute-0 ceph-mon[73668]: pgmap v1284: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.0 MiB/s wr, 149 op/s
Oct 02 12:21:23 compute-0 nova_compute[256940]: 2025-10-02 12:21:23.258 2 INFO nova.virt.libvirt.driver [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Snapshot image upload complete
Oct 02 12:21:23 compute-0 nova_compute[256940]: 2025-10-02 12:21:23.259 2 INFO nova.compute.manager [None req-f712e342-28db-4abb-ad38-42b4a5fdbf1b afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Took 14.72 seconds to snapshot the instance on the hypervisor.
Oct 02 12:21:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:23.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:24 compute-0 sudo[287471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:24 compute-0 sudo[287471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-0 sudo[287471]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-0 podman[287495]: 2025-10-02 12:21:24.219146286 +0000 UTC m=+0.072466719 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:21:24 compute-0 sudo[287503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:24 compute-0 sudo[287503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-0 sudo[287503]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-0 podman[287496]: 2025-10-02 12:21:24.298786201 +0000 UTC m=+0.132789471 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:24 compute-0 sudo[287560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:24 compute-0 sudo[287560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-0 sudo[287560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-0 sudo[287588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:21:24 compute-0 sudo[287588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-0 nova_compute[256940]: 2025-10-02 12:21:24.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:24 compute-0 sudo[287588]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 114 op/s
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:21:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:25.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:25 compute-0 ceph-mon[73668]: pgmap v1285: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 132 op/s
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f65e8865-86d1-465c-98c2-b5f03647555e does not exist
Oct 02 12:21:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d2114355-3043-4674-bb0e-ce5c506e2aa5 does not exist
Oct 02 12:21:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e5b0d876-f3ec-4166-a5fd-e37f3125b995 does not exist
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:21:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:21:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:25 compute-0 sudo[287645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:25 compute-0 sudo[287645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:25 compute-0 sudo[287645]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:25 compute-0 sudo[287670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:25 compute-0 sudo[287670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:25 compute-0 sudo[287670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:25 compute-0 sudo[287695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:25 compute-0 sudo[287695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:25 compute-0 sudo[287695]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:25 compute-0 sudo[287720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:21:25 compute-0 sudo[287720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:25.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:25 compute-0 nova_compute[256940]: 2025-10-02 12:21:25.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:26 compute-0 podman[287785]: 2025-10-02 12:21:26.177155576 +0000 UTC m=+0.029540611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:26.458 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:21:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 02 12:21:26 compute-0 ceph-mon[73668]: pgmap v1286: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 114 op/s
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:21:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:26 compute-0 podman[287785]: 2025-10-02 12:21:26.618755604 +0000 UTC m=+0.471140609 container create 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:26 compute-0 systemd[1]: Started libpod-conmon-2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38.scope.
Oct 02 12:21:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 397 KiB/s rd, 38 KiB/s wr, 53 op/s
Oct 02 12:21:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:27.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:27 compute-0 podman[287785]: 2025-10-02 12:21:27.332200868 +0000 UTC m=+1.184585923 container init 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:21:27 compute-0 podman[287785]: 2025-10-02 12:21:27.342271851 +0000 UTC m=+1.194656856 container start 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:27 compute-0 sleepy_hertz[287802]: 167 167
Oct 02 12:21:27 compute-0 systemd[1]: libpod-2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38.scope: Deactivated successfully.
Oct 02 12:21:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 02 12:21:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 02 12:21:27 compute-0 podman[287785]: 2025-10-02 12:21:27.736648209 +0000 UTC m=+1.589033304 container attach 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:27 compute-0 podman[287785]: 2025-10-02 12:21:27.739484933 +0000 UTC m=+1.591869978 container died 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:21:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:27.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:21:28
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'backups', 'vms', '.rgw.root', '.mgr']
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6da761aadb1fa6cfff61c9a59153c5862dd62fcf2217505d1c20bf076c9f96-merged.mount: Deactivated successfully.
Oct 02 12:21:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 02 12:21:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 32 KiB/s wr, 28 op/s
Oct 02 12:21:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:21:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:21:29 compute-0 ceph-mon[73668]: pgmap v1287: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 397 KiB/s rd, 38 KiB/s wr, 53 op/s
Oct 02 12:21:29 compute-0 ceph-mon[73668]: osdmap e186: 3 total, 3 up, 3 in
Oct 02 12:21:29 compute-0 nova_compute[256940]: 2025-10-02 12:21:29.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 02 12:21:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:29.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:29 compute-0 podman[287785]: 2025-10-02 12:21:29.947424434 +0000 UTC m=+3.799809479 container remove 2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:21:30 compute-0 systemd[1]: libpod-conmon-2a71a40d2ad939ee8bdc2e3da5a5ba3308b291f90cda7af2179d188c3ed24a38.scope: Deactivated successfully.
Oct 02 12:21:30 compute-0 ceph-mon[73668]: pgmap v1289: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 32 KiB/s wr, 28 op/s
Oct 02 12:21:30 compute-0 ceph-mon[73668]: osdmap e187: 3 total, 3 up, 3 in
Oct 02 12:21:30 compute-0 podman[287827]: 2025-10-02 12:21:30.126211784 +0000 UTC m=+0.032819327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:30 compute-0 podman[287827]: 2025-10-02 12:21:30.357756678 +0000 UTC m=+0.264364191 container create 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:21:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:30 compute-0 sudo[287842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:30 compute-0 sudo[287842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:30 compute-0 sudo[287842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:30 compute-0 systemd[1]: Started libpod-conmon-7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb.scope.
Oct 02 12:21:30 compute-0 sudo[287867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:30 compute-0 sudo[287867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:30 compute-0 sudo[287867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:30 compute-0 nova_compute[256940]: 2025-10-02 12:21:30.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 315 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 33 KiB/s wr, 20 op/s
Oct 02 12:21:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:31 compute-0 podman[287827]: 2025-10-02 12:21:31.20926365 +0000 UTC m=+1.115871233 container init 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:21:31 compute-0 podman[287827]: 2025-10-02 12:21:31.225232046 +0000 UTC m=+1.131839559 container start 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:31 compute-0 podman[287827]: 2025-10-02 12:21:31.823766514 +0000 UTC m=+1.730374067 container attach 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:21:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:31.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:32 compute-0 serene_kapitsa[287893]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:21:32 compute-0 serene_kapitsa[287893]: --> relative data size: 1.0
Oct 02 12:21:32 compute-0 serene_kapitsa[287893]: --> All data devices are unavailable
Oct 02 12:21:32 compute-0 systemd[1]: libpod-7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb.scope: Deactivated successfully.
Oct 02 12:21:32 compute-0 podman[287827]: 2025-10-02 12:21:32.198335626 +0000 UTC m=+2.104943149 container died 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:21:32 compute-0 ceph-mon[73668]: pgmap v1291: 305 pgs: 305 active+clean; 315 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 33 KiB/s wr, 20 op/s
Oct 02 12:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-45a606305c9d561a83d47b99654ea2b062769fc341b3d3b0a0a6c68c7e67e09a-merged.mount: Deactivated successfully.
Oct 02 12:21:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 312 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 90 op/s
Oct 02 12:21:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:33.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:33 compute-0 podman[287827]: 2025-10-02 12:21:33.697827646 +0000 UTC m=+3.604435199 container remove 7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:21:33 compute-0 systemd[1]: libpod-conmon-7a7282808157aedd883a186fc9e5338084ed6734a74b59134a326ad1e8979feb.scope: Deactivated successfully.
Oct 02 12:21:33 compute-0 sudo[287720]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:33 compute-0 sudo[287924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:33 compute-0 sudo[287924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:33 compute-0 sudo[287924]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:33.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:33 compute-0 sudo[287949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:33 compute-0 sudo[287949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:33 compute-0 sudo[287949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:33 compute-0 sudo[287974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:33 compute-0 sudo[287974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:33 compute-0 sudo[287974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:34 compute-0 sudo[287999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:21:34 compute-0 sudo[287999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.348 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.349 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.349 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.349 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.350 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.351 2 INFO nova.compute.manager [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Terminating instance
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.351 2 DEBUG nova.compute.manager [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.359 2 INFO nova.virt.libvirt.driver [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Instance destroyed successfully.
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.359 2 DEBUG nova.objects.instance [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid 51faf5e3-3295-46e2-8cf7-ac53503b72ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.366191025 +0000 UTC m=+0.062227123 container create 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.327750533 +0000 UTC m=+0.023786721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:34 compute-0 systemd[1]: Started libpod-conmon-2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c.scope.
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.425 2 DEBUG nova.virt.libvirt.vif [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-224839176',display_name='tempest-ImagesTestJSON-server-224839176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-224839176',id=44,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-7he80jwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:23Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=51faf5e3-3295-46e2-8cf7-ac53503b72ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.428 2 DEBUG nova.network.os_vif_util [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "address": "fa:16:3e:5d:7c:bc", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7905f48e-77", "ovs_interfaceid": "7905f48e-7719-4693-9ec3-ca47c1ff67e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.429 2 DEBUG nova.network.os_vif_util [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.430 2 DEBUG os_vif [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7905f48e-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.437 2 INFO os_vif [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7c:bc,bridge_name='br-int',has_traffic_filtering=True,id=7905f48e-7719-4693-9ec3-ca47c1ff67e8,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7905f48e-77')
Oct 02 12:21:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.495373891 +0000 UTC m=+0.191410009 container init 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.50299932 +0000 UTC m=+0.199035418 container start 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:21:34 compute-0 cool_kapitsa[288082]: 167 167
Oct 02 12:21:34 compute-0 systemd[1]: libpod-2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c.scope: Deactivated successfully.
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.523494124 +0000 UTC m=+0.219530232 container attach 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.524027688 +0000 UTC m=+0.220063786 container died 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 02 12:21:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 02 12:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-884e92cf1a5abaf1079652105d5b4932d55654971d04bc72c25d6a23e3babd2e-merged.mount: Deactivated successfully.
Oct 02 12:21:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 02 12:21:34 compute-0 ceph-mon[73668]: pgmap v1292: 305 pgs: 305 active+clean; 312 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 90 op/s
Oct 02 12:21:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1810387230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:34 compute-0 nova_compute[256940]: 2025-10-02 12:21:34.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:34 compute-0 podman[288065]: 2025-10-02 12:21:34.773211142 +0000 UTC m=+0.469247240 container remove 2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:21:34 compute-0 systemd[1]: libpod-conmon-2f9e3bc78af6a2994a7f72bd3a6862d94928cd464ba6512bde1702c1c3f61f2c.scope: Deactivated successfully.
Oct 02 12:21:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 296 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 02 12:21:35 compute-0 podman[288129]: 2025-10-02 12:21:34.949549237 +0000 UTC m=+0.028502093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:35 compute-0 podman[288129]: 2025-10-02 12:21:35.045559459 +0000 UTC m=+0.124512295 container create 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 12:21:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:35.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:35 compute-0 systemd[1]: Started libpod-conmon-044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97.scope.
Oct 02 12:21:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ba5dc235c3884c32fa6a673edf6f53ac0bb1636d05853df3cf82db49aeb57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ba5dc235c3884c32fa6a673edf6f53ac0bb1636d05853df3cf82db49aeb57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ba5dc235c3884c32fa6a673edf6f53ac0bb1636d05853df3cf82db49aeb57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ba5dc235c3884c32fa6a673edf6f53ac0bb1636d05853df3cf82db49aeb57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:35 compute-0 podman[288129]: 2025-10-02 12:21:35.338193566 +0000 UTC m=+0.417146432 container init 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:21:35 compute-0 podman[288129]: 2025-10-02 12:21:35.348843313 +0000 UTC m=+0.427796149 container start 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:21:35 compute-0 podman[288129]: 2025-10-02 12:21:35.364327027 +0000 UTC m=+0.443279893 container attach 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:21:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 02 12:21:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-0 ceph-mon[73668]: osdmap e188: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-0 ceph-mon[73668]: osdmap e189: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-0 nova_compute[256940]: 2025-10-02 12:21:35.800 2 INFO nova.virt.libvirt.driver [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Deleting instance files /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca_del
Oct 02 12:21:35 compute-0 nova_compute[256940]: 2025-10-02 12:21:35.802 2 INFO nova.virt.libvirt.driver [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Deletion of /var/lib/nova/instances/51faf5e3-3295-46e2-8cf7-ac53503b72ca_del complete
Oct 02 12:21:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:35.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]: {
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:     "1": [
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:         {
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "devices": [
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "/dev/loop3"
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             ],
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "lv_name": "ceph_lv0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "lv_size": "7511998464",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "name": "ceph_lv0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "tags": {
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.cluster_name": "ceph",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.crush_device_class": "",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.encrypted": "0",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.osd_id": "1",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.type": "block",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:                 "ceph.vdo": "0"
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             },
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "type": "block",
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:             "vg_name": "ceph_vg0"
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:         }
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]:     ]
Oct 02 12:21:36 compute-0 pensive_ritchie[288146]: }
Oct 02 12:21:36 compute-0 systemd[1]: libpod-044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97.scope: Deactivated successfully.
Oct 02 12:21:36 compute-0 podman[288129]: 2025-10-02 12:21:36.179843861 +0000 UTC m=+1.258796697 container died 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:21:36 compute-0 nova_compute[256940]: 2025-10-02 12:21:36.264 2 INFO nova.compute.manager [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Took 1.91 seconds to destroy the instance on the hypervisor.
Oct 02 12:21:36 compute-0 nova_compute[256940]: 2025-10-02 12:21:36.266 2 DEBUG oslo.service.loopingcall [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:21:36 compute-0 nova_compute[256940]: 2025-10-02 12:21:36.267 2 DEBUG nova.compute.manager [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:21:36 compute-0 nova_compute[256940]: 2025-10-02 12:21:36.267 2 DEBUG nova.network.neutron [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2ba5dc235c3884c32fa6a673edf6f53ac0bb1636d05853df3cf82db49aeb57-merged.mount: Deactivated successfully.
Oct 02 12:21:36 compute-0 podman[288129]: 2025-10-02 12:21:36.4123369 +0000 UTC m=+1.491289736 container remove 044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ritchie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:36 compute-0 systemd[1]: libpod-conmon-044a97c008a74af7bbb8ec1203f2d356f7a315dcaf9238d9bb26b68c16503b97.scope: Deactivated successfully.
Oct 02 12:21:36 compute-0 sudo[287999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:36 compute-0 sudo[288170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:36 compute-0 sudo[288170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:36 compute-0 sudo[288170]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:36 compute-0 sudo[288195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:36 compute-0 sudo[288195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:36 compute-0 sudo[288195]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:36 compute-0 sudo[288220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:36 compute-0 sudo[288220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:36 compute-0 sudo[288220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:36 compute-0 sudo[288245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:21:36 compute-0 sudo[288245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:36 compute-0 ceph-mon[73668]: pgmap v1294: 305 pgs: 305 active+clean; 296 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 02 12:21:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2673187842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1982999056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 165 op/s
Oct 02 12:21:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:37.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.0278157 +0000 UTC m=+0.022150338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.153674271 +0000 UTC m=+0.148008859 container create 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:21:37 compute-0 systemd[1]: Started libpod-conmon-7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5.scope.
Oct 02 12:21:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.287247402 +0000 UTC m=+0.281581990 container init 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.295791094 +0000 UTC m=+0.290125682 container start 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:21:37 compute-0 elastic_thompson[288326]: 167 167
Oct 02 12:21:37 compute-0 systemd[1]: libpod-7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5.scope: Deactivated successfully.
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.330747925 +0000 UTC m=+0.325082513 container attach 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.331379532 +0000 UTC m=+0.325714120 container died 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2870d9e80959fad763a9b104684c61db5fa8f52fa734a7a08a9a0c0bd3b4d2d8-merged.mount: Deactivated successfully.
Oct 02 12:21:37 compute-0 podman[288310]: 2025-10-02 12:21:37.609713706 +0000 UTC m=+0.604048294 container remove 7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:21:37 compute-0 systemd[1]: libpod-conmon-7286b84c188ed1044db15f8843a43d5587757a9fd76bad7cda1a62e402565ad5.scope: Deactivated successfully.
Oct 02 12:21:37 compute-0 podman[288349]: 2025-10-02 12:21:37.824093903 +0000 UTC m=+0.078812365 container create a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:21:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:37 compute-0 podman[288349]: 2025-10-02 12:21:37.77139569 +0000 UTC m=+0.026114162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:21:37 compute-0 systemd[1]: Started libpod-conmon-a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e.scope.
Oct 02 12:21:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfeba85d403b91b322dc00726436392ac56644e96ed27f9282410f69849cf797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfeba85d403b91b322dc00726436392ac56644e96ed27f9282410f69849cf797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfeba85d403b91b322dc00726436392ac56644e96ed27f9282410f69849cf797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfeba85d403b91b322dc00726436392ac56644e96ed27f9282410f69849cf797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:21:37 compute-0 podman[288349]: 2025-10-02 12:21:37.986453435 +0000 UTC m=+0.241171937 container init a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:21:37 compute-0 podman[288349]: 2025-10-02 12:21:37.995726586 +0000 UTC m=+0.250445038 container start a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:21:38 compute-0 podman[288349]: 2025-10-02 12:21:38.009723841 +0000 UTC m=+0.264442343 container attach a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:21:38 compute-0 ceph-mon[73668]: pgmap v1296: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 165 op/s
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.423 2 DEBUG nova.network.neutron [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.503 2 DEBUG nova.compute.manager [req-4ff9c4c6-1de2-4360-9067-022ccefee4f3 req-57e8ce92-14a1-40fe-96ea-410f095ce993 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Received event network-vif-deleted-7905f48e-7719-4693-9ec3-ca47c1ff67e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.504 2 INFO nova.compute.manager [req-4ff9c4c6-1de2-4360-9067-022ccefee4f3 req-57e8ce92-14a1-40fe-96ea-410f095ce993 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Neutron deleted interface 7905f48e-7719-4693-9ec3-ca47c1ff67e8; detaching it from the instance and deleting it from the info cache
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.504 2 DEBUG nova.network.neutron [req-4ff9c4c6-1de2-4360-9067-022ccefee4f3 req-57e8ce92-14a1-40fe-96ea-410f095ce993 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.541 2 INFO nova.compute.manager [-] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Took 2.27 seconds to deallocate network for instance.
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.558 2 DEBUG nova.compute.manager [req-4ff9c4c6-1de2-4360-9067-022ccefee4f3 req-57e8ce92-14a1-40fe-96ea-410f095ce993 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51faf5e3-3295-46e2-8cf7-ac53503b72ca] Detach interface failed, port_id=7905f48e-7719-4693-9ec3-ca47c1ff67e8, reason: Instance 51faf5e3-3295-46e2-8cf7-ac53503b72ca could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.694 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.695 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:38 compute-0 nova_compute[256940]: 2025-10-02 12:21:38.747 2 DEBUG oslo_concurrency.processutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]: {
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:         "osd_id": 1,
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:         "type": "bluestore"
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]:     }
Oct 02 12:21:38 compute-0 xenodochial_curran[288365]: }
Oct 02 12:21:38 compute-0 systemd[1]: libpod-a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e.scope: Deactivated successfully.
Oct 02 12:21:38 compute-0 podman[288407]: 2025-10-02 12:21:38.951442013 +0000 UTC m=+0.026295277 container died a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:21:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 155 op/s
Oct 02 12:21:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652067774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.214 2 DEBUG oslo_concurrency.processutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.221 2 DEBUG nova.compute.provider_tree [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.298 2 DEBUG nova.scheduler.client.report [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfeba85d403b91b322dc00726436392ac56644e96ed27f9282410f69849cf797-merged.mount: Deactivated successfully.
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/652067774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.687 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:39 compute-0 nova_compute[256940]: 2025-10-02 12:21:39.953 2 INFO nova.scheduler.client.report [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance 51faf5e3-3295-46e2-8cf7-ac53503b72ca
Oct 02 12:21:39 compute-0 podman[288407]: 2025-10-02 12:21:39.983912611 +0000 UTC m=+1.058765825 container remove a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curran, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:21:39 compute-0 systemd[1]: libpod-conmon-a1565ce21feab24cb34a97f7b1ac8d0917f4be283262f8704f1523cd2212d37e.scope: Deactivated successfully.
Oct 02 12:21:40 compute-0 sudo[288245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004093819793411502 of space, bias 1.0, pg target 1.2281459380234505 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004065829727340406 of space, bias 1.0, pg target 1.2156830884747813 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:21:40 compute-0 nova_compute[256940]: 2025-10-02 12:21:40.163 2 DEBUG oslo_concurrency.lockutils [None req-4918133b-d34a-42f1-886a-9c1fb36b1993 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "51faf5e3-3295-46e2-8cf7-ac53503b72ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:21:40 compute-0 nova_compute[256940]: 2025-10-02 12:21:40.248 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:40 compute-0 nova_compute[256940]: 2025-10-02 12:21:40.248 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 88a6aa9f-4d6f-4461-8d8d-a14b3136eb49 does not exist
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0dc3655e-1bf7-44ed-8734-00ef76477d45 does not exist
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ddc531fb-e583-429c-9055-d2cd8e733e69 does not exist
Oct 02 12:21:40 compute-0 sudo[288424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:40 compute-0 sudo[288424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:40 compute-0 sudo[288424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-0 sudo[288450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:21:40 compute-0 sudo[288450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:40 compute-0 sudo[288450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-0 podman[288475]: 2025-10-02 12:21:40.583775444 +0000 UTC m=+0.070532850 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:21:40 compute-0 podman[288474]: 2025-10-02 12:21:40.589525934 +0000 UTC m=+0.077022719 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:21:40 compute-0 ceph-mon[73668]: pgmap v1297: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 155 op/s
Oct 02 12:21:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/474486150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 296 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 205 op/s
Oct 02 12:21:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:41.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:41 compute-0 nova_compute[256940]: 2025-10-02 12:21:41.238 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3520970729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:41.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.560 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.560 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.561 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.632 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.633 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:42 compute-0 nova_compute[256940]: 2025-10-02 12:21:42.633 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:21:42 compute-0 ceph-mon[73668]: pgmap v1298: 305 pgs: 305 active+clean; 296 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 205 op/s
Oct 02 12:21:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 275 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 235 op/s
Oct 02 12:21:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:43.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:43.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/197691806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/106516087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.337 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.337 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.338 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.338 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.338 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805478565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.767 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 228 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.984 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.986 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4667MB free_disk=20.93573760986328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.986 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:44 compute-0 nova_compute[256940]: 2025-10-02 12:21:44.986 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:45 compute-0 ceph-mon[73668]: pgmap v1299: 305 pgs: 305 active+clean; 275 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 235 op/s
Oct 02 12:21:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/830024931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3805478565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.095 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.096 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.120 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/830072830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.563 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.570 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 02 12:21:45 compute-0 nova_compute[256940]: 2025-10-02 12:21:45.783 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 02 12:21:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.037 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.038 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:46 compute-0 ceph-mon[73668]: pgmap v1300: 305 pgs: 305 active+clean; 228 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 02 12:21:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/497092318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/830072830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:46 compute-0 ceph-mon[73668]: osdmap e190: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.574 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "1157acbd-682f-4901-851e-6e392eba7ed1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.574 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.606 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.698 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.699 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.706 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.706 2 INFO nova.compute.claims [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:21:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:46 compute-0 nova_compute[256940]: 2025-10-02 12:21:46.983 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.038 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.039 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:47.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637826586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.429 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.437 2 DEBUG nova.compute.provider_tree [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.493 2 DEBUG nova.scheduler.client.report [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2637826586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.786 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:47 compute-0 nova_compute[256940]: 2025-10-02 12:21:47.787 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:21:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.122 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.122 2 DEBUG nova.network.neutron [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.191 2 INFO nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.228 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.338 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.339 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.340 2 INFO nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Creating image(s)
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.375 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.408 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.437 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.442 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.512 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.514 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.514 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.514 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.543 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:48 compute-0 nova_compute[256940]: 2025-10-02 12:21:48.548 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1157acbd-682f-4901-851e-6e392eba7ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:49 compute-0 ceph-mon[73668]: pgmap v1302: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/187165604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:49.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:49 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.274 2 DEBUG nova.network.neutron [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.275 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.639 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1157acbd-682f-4901-851e-6e392eba7ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.763 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] resizing rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:21:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:49.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:49 compute-0 nova_compute[256940]: 2025-10-02 12:21:49.995 2 DEBUG nova.objects.instance [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'migration_context' on Instance uuid 1157acbd-682f-4901-851e-6e392eba7ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:50 compute-0 ceph-mon[73668]: pgmap v1303: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:50 compute-0 sudo[288754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:50 compute-0 sudo[288754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:50 compute-0 sudo[288754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:50 compute-0 sudo[288779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:50 compute-0 sudo[288779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:50 compute-0 sudo[288779]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 273 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.9 MiB/s wr, 134 op/s
Oct 02 12:21:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:51.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.535 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.536 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Ensure instance console log exists: /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.536 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.537 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.537 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.539 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.545 2 WARNING nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.550 2 DEBUG nova.virt.libvirt.host [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.551 2 DEBUG nova.virt.libvirt.host [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.555 2 DEBUG nova.virt.libvirt.host [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.556 2 DEBUG nova.virt.libvirt.host [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.558 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.559 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.560 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.560 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.560 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.561 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.561 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.561 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.562 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.562 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.562 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.563 2 DEBUG nova.virt.hardware [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:21:52 compute-0 nova_compute[256940]: 2025-10-02 12:21:52.568 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 307 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 5.0 MiB/s wr, 120 op/s
Oct 02 12:21:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883937379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:53 compute-0 ceph-mon[73668]: pgmap v1304: 305 pgs: 305 active+clean; 273 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.9 MiB/s wr, 134 op/s
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.105 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.481 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.490 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889626090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.936 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.940 2 DEBUG nova.objects.instance [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1157acbd-682f-4901-851e-6e392eba7ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:53 compute-0 nova_compute[256940]: 2025-10-02 12:21:53.960 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <uuid>1157acbd-682f-4901-851e-6e392eba7ed1</uuid>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <name>instance-0000002f</name>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-305571077</nova:name>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:21:52</nova:creationTime>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:user uuid="6ce6b90597304cd29e06b1f1e62246eb">tempest-ServersAdminNegativeTestJSON-1444821380-project-member</nova:user>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <nova:project uuid="1ff6686454554253817cdb343c2f7e5e">tempest-ServersAdminNegativeTestJSON-1444821380</nova:project>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <system>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="serial">1157acbd-682f-4901-851e-6e392eba7ed1</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="uuid">1157acbd-682f-4901-851e-6e392eba7ed1</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </system>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <os>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </os>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <features>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </features>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1157acbd-682f-4901-851e-6e392eba7ed1_disk">
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </source>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1157acbd-682f-4901-851e-6e392eba7ed1_disk.config">
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </source>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:21:53 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/console.log" append="off"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <video>
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </video>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:21:53 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:21:53 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:21:53 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:21:53 compute-0 nova_compute[256940]: </domain>
Oct 02 12:21:53 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.370 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.370 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.371 2 INFO nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Using config drive
Oct 02 12:21:54 compute-0 podman[288867]: 2025-10-02 12:21:54.407056229 +0000 UTC m=+0.067753917 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.413 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:54 compute-0 podman[288868]: 2025-10-02 12:21:54.433301123 +0000 UTC m=+0.100000997 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:21:54 compute-0 ceph-mon[73668]: pgmap v1305: 305 pgs: 305 active+clean; 307 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 5.0 MiB/s wr, 120 op/s
Oct 02 12:21:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3883937379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3889626090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 333 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.8 MiB/s wr, 166 op/s
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.974 2 INFO nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Creating config drive at /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config
Oct 02 12:21:54 compute-0 nova_compute[256940]: 2025-10-02 12:21:54.978 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gnttkk4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:55 compute-0 nova_compute[256940]: 2025-10-02 12:21:55.120 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gnttkk4" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:55 compute-0 nova_compute[256940]: 2025-10-02 12:21:55.659 2 DEBUG nova.storage.rbd_utils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 1157acbd-682f-4901-851e-6e392eba7ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:55 compute-0 nova_compute[256940]: 2025-10-02 12:21:55.664 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config 1157acbd-682f-4901-851e-6e392eba7ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:55.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2041755104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3924843442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.0 MiB/s wr, 176 op/s
Oct 02 12:21:57 compute-0 nova_compute[256940]: 2025-10-02 12:21:57.100 2 DEBUG oslo_concurrency.processutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config 1157acbd-682f-4901-851e-6e392eba7ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:57 compute-0 nova_compute[256940]: 2025-10-02 12:21:57.102 2 INFO nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Deleting local config drive /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1/disk.config because it was imported into RBD.
Oct 02 12:21:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:57 compute-0 systemd-machined[210927]: New machine qemu-22-instance-0000002f.
Oct 02 12:21:57 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000002f.
Oct 02 12:21:57 compute-0 ceph-mon[73668]: pgmap v1306: 305 pgs: 305 active+clean; 333 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.8 MiB/s wr, 166 op/s
Oct 02 12:21:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:57 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:21:57 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:21:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:21:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.691 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407719.691236, 1157acbd-682f-4901-851e-6e392eba7ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.692 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] VM Resumed (Lifecycle Event)
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.695 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.695 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.700 2 INFO nova.virt.libvirt.driver [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance spawned successfully.
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.700 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.726 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.732 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.732 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.733 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.733 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.733 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.734 2 DEBUG nova.virt.libvirt.driver [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.738 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.784 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.785 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407719.6919606, 1157acbd-682f-4901-851e-6e392eba7ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.786 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] VM Started (Lifecycle Event)
Oct 02 12:21:59 compute-0 ceph-mon[73668]: pgmap v1307: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.0 MiB/s wr, 176 op/s
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.827 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.833 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.851 2 INFO nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Took 11.51 seconds to spawn the instance on the hypervisor.
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.851 2 DEBUG nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:21:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:21:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:59.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.916 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:59 compute-0 nova_compute[256940]: 2025-10-02 12:21:59.994 2 INFO nova.compute.manager [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Took 13.33 seconds to build instance.
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.038 2 DEBUG oslo_concurrency.lockutils [None req-c48ec95d-e6e2-417e-b27d-29a8a31d4151 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.854 2 DEBUG nova.objects.instance [None req-5bcd19ed-ea77-44e9-8fba-0dac397f72e7 2f723850db484803b25ad103759fe27f 7e9551381eb943a7ad55e26c6de4a1f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1157acbd-682f-4901-851e-6e392eba7ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.898 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407720.8969765, 1157acbd-682f-4901-851e-6e392eba7ed1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.899 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] VM Paused (Lifecycle Event)
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.932 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.937 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:00 compute-0 nova_compute[256940]: 2025-10-02 12:22:00.960 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 12:22:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 383 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 174 op/s
Oct 02 12:22:00 compute-0 ceph-mon[73668]: pgmap v1308: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:22:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4012375347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 02 12:22:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002f.scope: Consumed 2.630s CPU time.
Oct 02 12:22:01 compute-0 systemd-machined[210927]: Machine qemu-22-instance-0000002f terminated.
Oct 02 12:22:01 compute-0 nova_compute[256940]: 2025-10-02 12:22:01.470 2 DEBUG nova.compute.manager [None req-5bcd19ed-ea77-44e9-8fba-0dac397f72e7 2f723850db484803b25ad103759fe27f 7e9551381eb943a7ad55e26c6de4a1f8 - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:01.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:02 compute-0 ceph-mon[73668]: pgmap v1309: 305 pgs: 305 active+clean; 383 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 174 op/s
Oct 02 12:22:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3786116264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 417 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.2 MiB/s wr, 229 op/s
Oct 02 12:22:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:03.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 02 12:22:04 compute-0 ceph-mon[73668]: pgmap v1310: 305 pgs: 305 active+clean; 417 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.2 MiB/s wr, 229 op/s
Oct 02 12:22:04 compute-0 nova_compute[256940]: 2025-10-02 12:22:04.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 02 12:22:04 compute-0 nova_compute[256940]: 2025-10-02 12:22:04.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 02 12:22:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 422 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 243 op/s
Oct 02 12:22:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:05.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:06 compute-0 ceph-mon[73668]: osdmap e191: 3 total, 3 up, 3 in
Oct 02 12:22:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:22:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.621 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "1157acbd-682f-4901-851e-6e392eba7ed1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.621 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.621 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "1157acbd-682f-4901-851e-6e392eba7ed1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.622 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.622 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.623 2 INFO nova.compute.manager [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Terminating instance
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.623 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "refresh_cache-1157acbd-682f-4901-851e-6e392eba7ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.624 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquired lock "refresh_cache-1157acbd-682f-4901-851e-6e392eba7ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.624 2 DEBUG nova.network.neutron [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Oct 02 12:22:06 compute-0 nova_compute[256940]: 2025-10-02 12:22:06.974 2 DEBUG nova.network.neutron [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:07.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:07 compute-0 nova_compute[256940]: 2025-10-02 12:22:07.193 2 DEBUG nova.network.neutron [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:07 compute-0 nova_compute[256940]: 2025-10-02 12:22:07.261 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Releasing lock "refresh_cache-1157acbd-682f-4901-851e-6e392eba7ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:07 compute-0 nova_compute[256940]: 2025-10-02 12:22:07.262 2 DEBUG nova.compute.manager [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:07 compute-0 nova_compute[256940]: 2025-10-02 12:22:07.276 2 INFO nova.virt.libvirt.driver [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance destroyed successfully.
Oct 02 12:22:07 compute-0 nova_compute[256940]: 2025-10-02 12:22:07.277 2 DEBUG nova.objects.instance [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'resources' on Instance uuid 1157acbd-682f-4901-851e-6e392eba7ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 02 12:22:07 compute-0 ceph-mon[73668]: pgmap v1312: 305 pgs: 305 active+clean; 422 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 243 op/s
Oct 02 12:22:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 02 12:22:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 02 12:22:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.4 MiB/s wr, 331 op/s
Oct 02 12:22:09 compute-0 ceph-mon[73668]: pgmap v1313: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Oct 02 12:22:09 compute-0 ceph-mon[73668]: osdmap e192: 3 total, 3 up, 3 in
Oct 02 12:22:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3645296222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:09.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:09 compute-0 nova_compute[256940]: 2025-10-02 12:22:09.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503698474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:09 compute-0 nova_compute[256940]: 2025-10-02 12:22:09.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:10 compute-0 ceph-mon[73668]: pgmap v1315: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.4 MiB/s wr, 331 op/s
Oct 02 12:22:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/503698474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 356 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 827 KiB/s wr, 260 op/s
Oct 02 12:22:10 compute-0 sudo[289061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:10 compute-0 sudo[289061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:10 compute-0 sudo[289061]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:11 compute-0 sudo[289098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:11 compute-0 sudo[289098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:11 compute-0 sudo[289098]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:11 compute-0 podman[289086]: 2025-10-02 12:22:11.069377496 +0000 UTC m=+0.077449339 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:22:11 compute-0 podman[289085]: 2025-10-02 12:22:11.089247444 +0000 UTC m=+0.096701941 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:22:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:11.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 02 12:22:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 02 12:22:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3078669458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:11.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.064 2 INFO nova.virt.libvirt.driver [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Deleting instance files /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1_del
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.065 2 INFO nova.virt.libvirt.driver [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Deletion of /var/lib/nova/instances/1157acbd-682f-4901-851e-6e392eba7ed1_del complete
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.430 2 INFO nova.compute.manager [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Took 5.17 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.431 2 DEBUG oslo.service.loopingcall [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.431 2 DEBUG nova.compute.manager [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:12 compute-0 nova_compute[256940]: 2025-10-02 12:22:12.431 2 DEBUG nova.network.neutron [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 02 12:22:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 360 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 211 op/s
Oct 02 12:22:13 compute-0 ceph-mon[73668]: pgmap v1316: 305 pgs: 305 active+clean; 356 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 827 KiB/s wr, 260 op/s
Oct 02 12:22:13 compute-0 ceph-mon[73668]: osdmap e193: 3 total, 3 up, 3 in
Oct 02 12:22:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2804927787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 02 12:22:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 02 12:22:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:13.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.026 2 DEBUG nova.network.neutron [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.136 2 DEBUG nova.network.neutron [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.222 2 INFO nova.compute.manager [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Took 1.79 seconds to deallocate network for instance.
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.341 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.342 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:14 compute-0 ceph-mon[73668]: pgmap v1318: 305 pgs: 305 active+clean; 360 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 211 op/s
Oct 02 12:22:14 compute-0 ceph-mon[73668]: osdmap e194: 3 total, 3 up, 3 in
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.392 2 DEBUG oslo_concurrency.processutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388008822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.889 2 DEBUG oslo_concurrency.processutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.896 2 DEBUG nova.compute.provider_tree [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.918 2 DEBUG nova.scheduler.client.report [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:14 compute-0 nova_compute[256940]: 2025-10-02 12:22:14.955 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 317 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 175 op/s
Oct 02 12:22:15 compute-0 nova_compute[256940]: 2025-10-02 12:22:15.023 2 INFO nova.scheduler.client.report [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Deleted allocations for instance 1157acbd-682f-4901-851e-6e392eba7ed1
Oct 02 12:22:15 compute-0 nova_compute[256940]: 2025-10-02 12:22:15.120 2 DEBUG oslo_concurrency.lockutils [None req-32f6547b-0278-4a1d-8917-641eb97a537e 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "1157acbd-682f-4901-851e-6e392eba7ed1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1369805820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/388008822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 02 12:22:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 02 12:22:15 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 02 12:22:16 compute-0 nova_compute[256940]: 2025-10-02 12:22:16.473 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407721.471551, 1157acbd-682f-4901-851e-6e392eba7ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:16 compute-0 nova_compute[256940]: 2025-10-02 12:22:16.474 2 INFO nova.compute.manager [-] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] VM Stopped (Lifecycle Event)
Oct 02 12:22:16 compute-0 nova_compute[256940]: 2025-10-02 12:22:16.513 2 DEBUG nova.compute.manager [None req-4f97282d-a75f-44cf-a92d-5056e30849fd - - - - - -] [instance: 1157acbd-682f-4901-851e-6e392eba7ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Oct 02 12:22:17 compute-0 ceph-mon[73668]: pgmap v1320: 305 pgs: 305 active+clean; 317 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 175 op/s
Oct 02 12:22:17 compute-0 ceph-mon[73668]: osdmap e195: 3 total, 3 up, 3 in
Oct 02 12:22:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:17.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 02 12:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:18.616 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:18.617 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:22:18 compute-0 nova_compute[256940]: 2025-10-02 12:22:18.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:18 compute-0 ceph-mon[73668]: pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Oct 02 12:22:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 02 12:22:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 02 12:22:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 118 op/s
Oct 02 12:22:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:19.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:19 compute-0 nova_compute[256940]: 2025-10-02 12:22:19.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:19 compute-0 nova_compute[256940]: 2025-10-02 12:22:19.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:19.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:19 compute-0 ceph-mon[73668]: osdmap e196: 3 total, 3 up, 3 in
Oct 02 12:22:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1670408849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 02 12:22:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 02 12:22:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 231 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 120 op/s
Oct 02 12:22:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 02 12:22:21 compute-0 ceph-mon[73668]: pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 118 op/s
Oct 02 12:22:21 compute-0 ceph-mon[73668]: osdmap e197: 3 total, 3 up, 3 in
Oct 02 12:22:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:21.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:22:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:21.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:22:22 compute-0 ceph-mon[73668]: pgmap v1325: 305 pgs: 305 active+clean; 231 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 120 op/s
Oct 02 12:22:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.9 KiB/s wr, 120 op/s
Oct 02 12:22:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:23.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:24 compute-0 nova_compute[256940]: 2025-10-02 12:22:24.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 ceph-mon[73668]: pgmap v1327: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.9 KiB/s wr, 120 op/s
Oct 02 12:22:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:24.619 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:24 compute-0 nova_compute[256940]: 2025-10-02 12:22:24.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 145 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 3.5 KiB/s wr, 86 op/s
Oct 02 12:22:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:25 compute-0 podman[289181]: 2025-10-02 12:22:25.424195893 +0000 UTC m=+0.090422738 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 12:22:25 compute-0 podman[289182]: 2025-10-02 12:22:25.436329759 +0000 UTC m=+0.098698273 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:22:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 02 12:22:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:25.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 02 12:22:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.423 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "e91892fc-a992-4187-9137-db6f648aa851" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.424 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.455 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.574 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.575 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.583 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.583 2 INFO nova.compute.claims [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:22:26 compute-0 ceph-mon[73668]: pgmap v1328: 305 pgs: 305 active+clean; 145 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 3.5 KiB/s wr, 86 op/s
Oct 02 12:22:26 compute-0 ceph-mon[73668]: osdmap e198: 3 total, 3 up, 3 in
Oct 02 12:22:26 compute-0 nova_compute[256940]: 2025-10-02 12:22:26.720 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.7 KiB/s wr, 109 op/s
Oct 02 12:22:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108879586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.173 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.180 2 DEBUG nova.compute.provider_tree [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.348 2 DEBUG nova.scheduler.client.report [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.408 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.409 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.485 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.486 2 DEBUG nova.network.neutron [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.514 2 INFO nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.544 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.628 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.628 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.690 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.692 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.692 2 INFO nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Creating image(s)
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.723 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.792 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.820 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.824 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.848 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.886 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.887 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.887 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.888 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:27.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1984249346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2108879586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.922 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.937 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e91892fc-a992-4187-9137-db6f648aa851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.994 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:27 compute-0 nova_compute[256940]: 2025-10-02 12:22:27.994 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.003 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.004 2 INFO nova.compute.claims [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.134 2 DEBUG nova.network.neutron [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.135 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.185 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.518 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e91892fc-a992-4187-9137-db6f648aa851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.596 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] resizing rbd image e91892fc-a992-4187-9137-db6f648aa851_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:22:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/740931267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.675 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.681 2 DEBUG nova.compute.provider_tree [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:22:28
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'images', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control']
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.741 2 DEBUG nova.scheduler.client.report [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.752 2 DEBUG nova.objects.instance [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lazy-loading 'migration_context' on Instance uuid e91892fc-a992-4187-9137-db6f648aa851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.778 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.779 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.791 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.791 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Ensure instance console log exists: /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.792 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.793 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.793 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.795 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.800 2 WARNING nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.812 2 DEBUG nova.virt.libvirt.host [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.813 2 DEBUG nova.virt.libvirt.host [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.820 2 DEBUG nova.virt.libvirt.host [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.821 2 DEBUG nova.virt.libvirt.host [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.822 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.822 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.823 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.823 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.823 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.823 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.824 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.824 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.824 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.824 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.824 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.825 2 DEBUG nova.virt.hardware [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.828 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.865 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.866 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.907 2 INFO nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:28 compute-0 nova_compute[256940]: 2025-10-02 12:22:28.960 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 3.6 KiB/s wr, 83 op/s
Oct 02 12:22:29 compute-0 ceph-mon[73668]: pgmap v1330: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.7 KiB/s wr, 109 op/s
Oct 02 12:22:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/740931267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.211 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.213 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.213 2 INFO nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Creating image(s)
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.244 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.275 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408600532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.310 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.314 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.345 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.373 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.380 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.412 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.413 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.414 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.415 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.446 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.450 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.475 2 DEBUG nova.policy [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.843 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.845 2 DEBUG nova.objects.instance [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lazy-loading 'pci_devices' on Instance uuid e91892fc-a992-4187-9137-db6f648aa851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:29 compute-0 nova_compute[256940]: 2025-10-02 12:22:29.909 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <uuid>e91892fc-a992-4187-9137-db6f648aa851</uuid>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <name>instance-00000031</name>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:name>tempest-TenantUsagesTestJSON-server-147235703</nova:name>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:22:28</nova:creationTime>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:user uuid="ebdbd300d8f847ccbbe92d82262322b7">tempest-TenantUsagesTestJSON-487692520-project-member</nova:user>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <nova:project uuid="b11410f7db8648e5a46fd3647014b180">tempest-TenantUsagesTestJSON-487692520</nova:project>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <system>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="serial">e91892fc-a992-4187-9137-db6f648aa851</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="uuid">e91892fc-a992-4187-9137-db6f648aa851</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </system>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <os>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </os>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <features>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </features>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e91892fc-a992-4187-9137-db6f648aa851_disk">
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </source>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e91892fc-a992-4187-9137-db6f648aa851_disk.config">
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </source>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:22:29 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/console.log" append="off"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <video>
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </video>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:22:29 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:22:29 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:22:29 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:22:29 compute-0 nova_compute[256940]: </domain>
Oct 02 12:22:29 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.018 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.105 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.106 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.106 2 INFO nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Using config drive
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.136 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.146 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:22:30 compute-0 ceph-mon[73668]: pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 3.6 KiB/s wr, 83 op/s
Oct 02 12:22:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/408600532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2442339680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.275 2 DEBUG nova.objects.instance [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid ef0d5be3-6f62-49b2-82fa-b646bea14a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.294 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.294 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Ensure instance console log exists: /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.295 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.295 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.295 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.366 2 INFO nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Creating config drive at /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.372 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppd5udpsx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.508 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppd5udpsx" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.538 2 DEBUG nova.storage.rbd_utils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] rbd image e91892fc-a992-4187-9137-db6f648aa851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.543 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config e91892fc-a992-4187-9137-db6f648aa851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:30 compute-0 nova_compute[256940]: 2025-10-02 12:22:30.812 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Successfully created port: 58fa2695-ab12-4921-9efa-8934cf29f79b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 111 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.0 MiB/s wr, 86 op/s
Oct 02 12:22:31 compute-0 sudo[289726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:31 compute-0 sudo[289726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:31 compute-0 sudo[289726]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:31 compute-0 nova_compute[256940]: 2025-10-02 12:22:31.162 2 DEBUG oslo_concurrency.processutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config e91892fc-a992-4187-9137-db6f648aa851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:31 compute-0 nova_compute[256940]: 2025-10-02 12:22:31.163 2 INFO nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Deleting local config drive /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851/disk.config because it was imported into RBD.
Oct 02 12:22:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:31.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:31 compute-0 sudo[289751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:31 compute-0 sudo[289751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:31 compute-0 sudo[289751]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:31 compute-0 systemd-machined[210927]: New machine qemu-23-instance-00000031.
Oct 02 12:22:31 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000031.
Oct 02 12:22:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:32 compute-0 ceph-mon[73668]: pgmap v1332: 305 pgs: 305 active+clean; 111 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.0 MiB/s wr, 86 op/s
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.698 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407752.697734, e91892fc-a992-4187-9137-db6f648aa851 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.698 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] VM Resumed (Lifecycle Event)
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.702 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.703 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.707 2 INFO nova.virt.libvirt.driver [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance spawned successfully.
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.708 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.730 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.740 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.744 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.745 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.745 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.746 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.746 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.747 2 DEBUG nova.virt.libvirt.driver [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.811 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.811 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407752.7015336, e91892fc-a992-4187-9137-db6f648aa851 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.812 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] VM Started (Lifecycle Event)
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.849 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.855 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.862 2 INFO nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Took 5.17 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.862 2 DEBUG nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.911 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.945 2 INFO nova.compute.manager [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Took 6.41 seconds to build instance.
Oct 02 12:22:32 compute-0 nova_compute[256940]: 2025-10-02 12:22:32.995 2 DEBUG oslo_concurrency.lockutils [None req-c71e818d-f003-46fc-9ae8-d7e86ca7ea39 ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 161 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 77 op/s
Oct 02 12:22:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:33.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:34 compute-0 ceph-mon[73668]: pgmap v1333: 305 pgs: 305 active+clean; 161 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 77 op/s
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.720 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Successfully updated port: 58fa2695-ab12-4921-9efa-8934cf29f79b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.808 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.808 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:34 compute-0 nova_compute[256940]: 2025-10-02 12:22:34.809 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 12:22:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:35 compute-0 nova_compute[256940]: 2025-10-02 12:22:35.376 2 DEBUG nova.compute.manager [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-changed-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:35 compute-0 nova_compute[256940]: 2025-10-02 12:22:35.378 2 DEBUG nova.compute.manager [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Refreshing instance network info cache due to event network-changed-58fa2695-ab12-4921-9efa-8934cf29f79b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:35 compute-0 nova_compute[256940]: 2025-10-02 12:22:35.379 2 DEBUG oslo_concurrency.lockutils [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:35 compute-0 nova_compute[256940]: 2025-10-02 12:22:35.834 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:36 compute-0 ceph-mon[73668]: pgmap v1334: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 12:22:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.006 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "e91892fc-a992-4187-9137-db6f648aa851" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.007 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.007 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "e91892fc-a992-4187-9137-db6f648aa851-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.007 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.008 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.009 2 INFO nova.compute.manager [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Terminating instance
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.010 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "refresh_cache-e91892fc-a992-4187-9137-db6f648aa851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.011 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquired lock "refresh_cache-e91892fc-a992-4187-9137-db6f648aa851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.011 2 DEBUG nova.network.neutron [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:37 compute-0 nova_compute[256940]: 2025-10-02 12:22:37.479 2 DEBUG nova.network.neutron [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:37.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:38 compute-0 nova_compute[256940]: 2025-10-02 12:22:38.614 2 DEBUG nova.network.neutron [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:38 compute-0 nova_compute[256940]: 2025-10-02 12:22:38.640 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Releasing lock "refresh_cache-e91892fc-a992-4187-9137-db6f648aa851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:38 compute-0 nova_compute[256940]: 2025-10-02 12:22:38.641 2 DEBUG nova.compute.manager [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:38 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 02 12:22:38 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000031.scope: Consumed 7.374s CPU time.
Oct 02 12:22:38 compute-0 systemd-machined[210927]: Machine qemu-23-instance-00000031 terminated.
Oct 02 12:22:38 compute-0 ceph-mon[73668]: pgmap v1335: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 02 12:22:38 compute-0 nova_compute[256940]: 2025-10-02 12:22:38.863 2 INFO nova.virt.libvirt.driver [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance destroyed successfully.
Oct 02 12:22:38 compute-0 nova_compute[256940]: 2025-10-02 12:22:38.865 2 DEBUG nova.objects.instance [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lazy-loading 'resources' on Instance uuid e91892fc-a992-4187-9137-db6f648aa851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Oct 02 12:22:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:39.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:39 compute-0 nova_compute[256940]: 2025-10-02 12:22:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:39 compute-0 nova_compute[256940]: 2025-10-02 12:22:39.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:39.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019845683859110366 of space, bias 1.0, pg target 0.595370515773311 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:22:40 compute-0 nova_compute[256940]: 2025-10-02 12:22:40.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:40 compute-0 nova_compute[256940]: 2025-10-02 12:22:40.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:40 compute-0 ceph-mon[73668]: pgmap v1336: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Oct 02 12:22:40 compute-0 sudo[289859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:40 compute-0 sudo[289859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:40 compute-0 sudo[289859]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:40 compute-0 sudo[289884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:22:40 compute-0 sudo[289884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:40 compute-0 sudo[289884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-0 sudo[289909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:41 compute-0 sudo[289909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 02 12:22:41 compute-0 sudo[289909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-0 sudo[289934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:22:41 compute-0 sudo[289934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.394 2 DEBUG nova.network.neutron [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Updating instance_info_cache with network_info: [{"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:41 compute-0 podman[289976]: 2025-10-02 12:22:41.407483315 +0000 UTC m=+0.069425741 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:22:41 compute-0 podman[289975]: 2025-10-02 12:22:41.434327464 +0000 UTC m=+0.096644049 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 12:22:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3528556499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.466 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.467 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Instance network_info: |[{"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.467 2 DEBUG oslo_concurrency.lockutils [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.467 2 DEBUG nova.network.neutron [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Refreshing network info cache for port 58fa2695-ab12-4921-9efa-8934cf29f79b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.470 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Start _get_guest_xml network_info=[{"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.475 2 WARNING nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.489 2 DEBUG nova.virt.libvirt.host [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.491 2 DEBUG nova.virt.libvirt.host [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.498 2 DEBUG nova.virt.libvirt.host [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.499 2 DEBUG nova.virt.libvirt.host [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.500 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.501 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.501 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.501 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.501 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.501 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.502 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.502 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.502 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.502 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.502 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.503 2 DEBUG nova.virt.hardware [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:41 compute-0 nova_compute[256940]: 2025-10-02 12:22:41.505 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:41 compute-0 sudo[289934]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5af132d4-0b8b-4680-a1be-2558f8f11798 does not exist
Oct 02 12:22:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f1efcf01-b89b-4084-b187-cb77bb0b29df does not exist
Oct 02 12:22:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5c3a906b-8e6f-4806-b9ba-48d1b7e410bf does not exist
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:41 compute-0 sudo[290052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:41 compute-0 sudo[290052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-0 sudo[290052]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:41.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:41 compute-0 sudo[290077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:22:41 compute-0 sudo[290077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-0 sudo[290077]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264439520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.011 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:42 compute-0 sudo[290102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:42 compute-0 sudo[290102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:42 compute-0 sudo[290102]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.040 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.045 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:42 compute-0 sudo[290144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:22:42 compute-0 sudo[290144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.139 2 INFO nova.virt.libvirt.driver [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Deleting instance files /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851_del
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.141 2 INFO nova.virt.libvirt.driver [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Deletion of /var/lib/nova/instances/e91892fc-a992-4187-9137-db6f648aa851_del complete
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.208 2 INFO nova.compute.manager [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] [instance: e91892fc-a992-4187-9137-db6f648aa851] Took 3.57 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.210 2 DEBUG oslo.service.loopingcall [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.212 2 DEBUG nova.compute.manager [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.213 2 DEBUG nova.network.neutron [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.244 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.246 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.246 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.247 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.247 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.479205126 +0000 UTC m=+0.102407090 container create b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:22:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213202268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.39876785 +0000 UTC m=+0.021969834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:42 compute-0 ceph-mon[73668]: pgmap v1337: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4094779463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3447753627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3264439520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3141264522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.514 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.516 2 DEBUG nova.virt.libvirt.vif [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-216045878',display_name='tempest-ImagesTestJSON-server-216045878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-216045878',id=50,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-ybit459w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:29Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=ef0d5be3-6f62-49b2-82fa-b646bea14a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.516 2 DEBUG nova.network.os_vif_util [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.517 2 DEBUG nova.network.os_vif_util [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.518 2 DEBUG nova.objects.instance [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef0d5be3-6f62-49b2-82fa-b646bea14a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:42 compute-0 systemd[1]: Started libpod-conmon-b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651.scope.
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.593 2 DEBUG nova.network.neutron [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.638 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <uuid>ef0d5be3-6f62-49b2-82fa-b646bea14a10</uuid>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <name>instance-00000032</name>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:name>tempest-ImagesTestJSON-server-216045878</nova:name>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:22:41</nova:creationTime>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <nova:port uuid="58fa2695-ab12-4921-9efa-8934cf29f79b">
Oct 02 12:22:42 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <system>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="serial">ef0d5be3-6f62-49b2-82fa-b646bea14a10</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="uuid">ef0d5be3-6f62-49b2-82fa-b646bea14a10</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </system>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <os>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </os>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <features>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </features>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk">
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </source>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config">
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </source>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:22:42 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:6b:36:88"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <target dev="tap58fa2695-ab"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/console.log" append="off"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <video>
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </video>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:22:42 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:22:42 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:22:42 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:22:42 compute-0 nova_compute[256940]: </domain>
Oct 02 12:22:42 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.640 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Preparing to wait for external event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.641 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.641 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.641 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.642 2 DEBUG nova.virt.libvirt.vif [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-216045878',display_name='tempest-ImagesTestJSON-server-216045878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-216045878',id=50,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-ybit459w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:29Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=ef0d5be3-6f62-49b2-82fa-b646bea14a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.642 2 DEBUG nova.network.os_vif_util [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.643 2 DEBUG nova.network.os_vif_util [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.643 2 DEBUG os_vif [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.647 2 DEBUG nova.network.neutron [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fa2695-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58fa2695-ab, col_values=(('external_ids', {'iface-id': '58fa2695-ab12-4921-9efa-8934cf29f79b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:36:88', 'vm-uuid': 'ef0d5be3-6f62-49b2-82fa-b646bea14a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 NetworkManager[44981]: <info>  [1759407762.6562] manager: (tap58fa2695-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.65852402 +0000 UTC m=+0.281726014 container init b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.665 2 INFO os_vif [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab')
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.667523124 +0000 UTC m=+0.290725088 container start b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.670 2 INFO nova.compute.manager [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] Took 0.46 seconds to deallocate network for instance.
Oct 02 12:22:42 compute-0 systemd[1]: libpod-b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651.scope: Deactivated successfully.
Oct 02 12:22:42 compute-0 zen_dhawan[290252]: 167 167
Oct 02 12:22:42 compute-0 conmon[290252]: conmon b6347d2c01ce4d51ad51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651.scope/container/memory.events
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.74216989 +0000 UTC m=+0.365371854 container attach b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:22:42 compute-0 podman[290232]: 2025-10-02 12:22:42.743423182 +0000 UTC m=+0.366625146 container died b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.756 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.758 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.798 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.799 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.799 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:6b:36:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.799 2 INFO nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Using config drive
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.823 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:42 compute-0 nova_compute[256940]: 2025-10-02 12:22:42.941 2 DEBUG oslo_concurrency.processutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf32f47b1fbcedfe926ca07f30a9379e07135177a06507c247f252e9ec8d7c77-merged.mount: Deactivated successfully.
Oct 02 12:22:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 157 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Oct 02 12:22:43 compute-0 podman[290232]: 2025-10-02 12:22:43.166621701 +0000 UTC m=+0.789823665 container remove b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:22:43 compute-0 systemd[1]: libpod-conmon-b6347d2c01ce4d51ad516c619fc2afbb09ca8f36265b8ff800652041d054f651.scope: Deactivated successfully.
Oct 02 12:22:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:22:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:22:43 compute-0 podman[290319]: 2025-10-02 12:22:43.388520604 +0000 UTC m=+0.089819732 container create b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:22:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356043525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:43 compute-0 podman[290319]: 2025-10-02 12:22:43.324304 +0000 UTC m=+0.025603138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.419 2 DEBUG oslo_concurrency.processutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.428 2 DEBUG nova.compute.provider_tree [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.467 2 DEBUG nova.scheduler.client.report [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:43 compute-0 systemd[1]: Started libpod-conmon-b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438.scope.
Oct 02 12:22:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.514 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.549 2 INFO nova.scheduler.client.report [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Deleted allocations for instance e91892fc-a992-4187-9137-db6f648aa851
Oct 02 12:22:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3213202268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1356043525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:43 compute-0 podman[290319]: 2025-10-02 12:22:43.614421871 +0000 UTC m=+0.315721069 container init b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:22:43 compute-0 podman[290319]: 2025-10-02 12:22:43.624487034 +0000 UTC m=+0.325786152 container start b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:22:43 compute-0 podman[290319]: 2025-10-02 12:22:43.678226354 +0000 UTC m=+0.379525492 container attach b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:22:43 compute-0 nova_compute[256940]: 2025-10-02 12:22:43.705 2 DEBUG oslo_concurrency.lockutils [None req-ab333869-5d8c-4723-b841-0d2b9b7dabaa ebdbd300d8f847ccbbe92d82262322b7 b11410f7db8648e5a46fd3647014b180 - - default default] Lock "e91892fc-a992-4187-9137-db6f648aa851" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.075 2 INFO nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Creating config drive at /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.080 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxnmy7zz3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.219 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxnmy7zz3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.252 2 DEBUG nova.storage.rbd_utils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.258 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.294 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.295 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.295 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.296 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.297 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:44 compute-0 condescending_bhaskara[290337]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:22:44 compute-0 condescending_bhaskara[290337]: --> relative data size: 1.0
Oct 02 12:22:44 compute-0 condescending_bhaskara[290337]: --> All data devices are unavailable
Oct 02 12:22:44 compute-0 systemd[1]: libpod-b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438.scope: Deactivated successfully.
Oct 02 12:22:44 compute-0 podman[290319]: 2025-10-02 12:22:44.540671401 +0000 UTC m=+1.241970509 container died b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.647 2 DEBUG nova.network.neutron [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Updated VIF entry in instance network info cache for port 58fa2695-ab12-4921-9efa-8934cf29f79b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.650 2 DEBUG nova.network.neutron [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Updating instance_info_cache with network_info: [{"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.672 2 DEBUG oslo_concurrency.lockutils [req-e5d3dd99-7265-4299-85c8-22d244cc707a req-5956b522-48e7-4feb-b560-e0668c7e8ce3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ef0d5be3-6f62-49b2-82fa-b646bea14a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.677 2 DEBUG oslo_concurrency.processutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.677 2 INFO nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Deleting local config drive /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10/disk.config because it was imported into RBD.
Oct 02 12:22:44 compute-0 kernel: tap58fa2695-ab: entered promiscuous mode
Oct 02 12:22:44 compute-0 NetworkManager[44981]: <info>  [1759407764.7426] manager: (tap58fa2695-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Oct 02 12:22:44 compute-0 ovn_controller[148123]: 2025-10-02T12:22:44Z|00163|binding|INFO|Claiming lport 58fa2695-ab12-4921-9efa-8934cf29f79b for this chassis.
Oct 02 12:22:44 compute-0 ovn_controller[148123]: 2025-10-02T12:22:44Z|00164|binding|INFO|58fa2695-ab12-4921-9efa-8934cf29f79b: Claiming fa:16:3e:6b:36:88 10.100.0.6
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.754 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:36:88 10.100.0.6'], port_security=['fa:16:3e:6b:36:88 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ef0d5be3-6f62-49b2-82fa-b646bea14a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=58fa2695-ab12-4921-9efa-8934cf29f79b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.755 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 58fa2695-ab12-4921-9efa-8934cf29f79b in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.757 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:22:44 compute-0 ovn_controller[148123]: 2025-10-02T12:22:44Z|00165|binding|INFO|Setting lport 58fa2695-ab12-4921-9efa-8934cf29f79b up in Southbound
Oct 02 12:22:44 compute-0 ovn_controller[148123]: 2025-10-02T12:22:44Z|00166|binding|INFO|Setting lport 58fa2695-ab12-4921-9efa-8934cf29f79b ovn-installed in OVS
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.773 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa631df-f10b-4ca6-bab8-1cdf2df66797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.773 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.776 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.776 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe9daab-ae51-451d-a5cf-d4e9e6192e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.777 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90f95429-e039-4922-bba6-d308e6438ecd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 systemd-udevd[290437]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:44 compute-0 NetworkManager[44981]: <info>  [1759407764.8002] device (tap58fa2695-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.798 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[f63d99b4-9468-477f-9c3e-4ee8bd4e555e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 NetworkManager[44981]: <info>  [1759407764.8016] device (tap58fa2695-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:44 compute-0 systemd-machined[210927]: New machine qemu-24-instance-00000032.
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.816 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0c747066-7e21-4ed2-877c-de9f99901cf5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000032.
Oct 02 12:22:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097927493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12be1207b9c37e522957b0fd5948a4a2301def4fab00fe1ff3ac458598c2ea8-merged.mount: Deactivated successfully.
Oct 02 12:22:44 compute-0 ceph-mon[73668]: pgmap v1338: 305 pgs: 305 active+clean; 157 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.850 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.853 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9c2dd4-2fb0-4cb9-8e2b-ee5bf980084e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 NetworkManager[44981]: <info>  [1759407764.8608] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.859 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46023051-10d2-499a-89d3-cc1d22182b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.907 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a48768d8-e157-453a-9c1a-10625da822f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.910 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e33dd594-ba7e-4335-b474-fe496988d3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 NetworkManager[44981]: <info>  [1759407764.9395] device (tapd68ff9e0-a0): carrier: link connected
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.947 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4b7f47-610f-403e-99a2-30e85b260f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.959 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:22:44 compute-0 nova_compute[256940]: 2025-10-02 12:22:44.959 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.974 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf8032d-35f0-46c5-87d5-a0588a5d69e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566042, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290473, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:44.993 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[505925d8-174e-4562-ab87-8adbb35b1e2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566042, 'tstamp': 566042}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290474, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 134 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 114 op/s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f32d0fb-f269-4e5e-875b-1f306f6ce23c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566042, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290475, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.061 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcf9e1a-3846-4aed-a439-437c39fed32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:45 compute-0 podman[290319]: 2025-10-02 12:22:45.13150357 +0000 UTC m=+1.832802688 container remove b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:22:45 compute-0 systemd[1]: libpod-conmon-b3e6153d7d9baf4bc5631129cec61284086a775df4549187b89dc1a795e41438.scope: Deactivated successfully.
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.148 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9377b85b-0e55-45bb-bd1f-78ff1ad25929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.150 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.150 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.151 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-0 NetworkManager[44981]: <info>  [1759407765.1543] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-0 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.159 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-0 ovn_controller[148123]: 2025-10-02T12:22:45Z|00167|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.175 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.176 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=20.957168579101562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.176 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.177 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.186 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.187 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d48bac-d521-4d8d-917e-72eb372635e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.188 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:22:45.188 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:45 compute-0 sudo[290144]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:45 compute-0 sudo[290483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.271 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance ef0d5be3-6f62-49b2-82fa-b646bea14a10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.274 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:22:45 compute-0 sudo[290483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.274 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:22:45 compute-0 sudo[290483]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:45 compute-0 sudo[290510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:22:45 compute-0 sudo[290510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:45 compute-0 sudo[290510]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.348 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:45 compute-0 sudo[290554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:45 compute-0 sudo[290554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:45 compute-0 sudo[290554]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:45 compute-0 sudo[290598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:22:45 compute-0 sudo[290598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:45 compute-0 podman[290667]: 2025-10-02 12:22:45.591631522 +0000 UTC m=+0.025542887 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.706 2 DEBUG nova.compute.manager [req-dc76c1db-018d-4e05-ba4c-23613acd6a7f req-62b05d71-2a82-4ffe-b4cb-82b35b041ce5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.707 2 DEBUG oslo_concurrency.lockutils [req-dc76c1db-018d-4e05-ba4c-23613acd6a7f req-62b05d71-2a82-4ffe-b4cb-82b35b041ce5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.707 2 DEBUG oslo_concurrency.lockutils [req-dc76c1db-018d-4e05-ba4c-23613acd6a7f req-62b05d71-2a82-4ffe-b4cb-82b35b041ce5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.708 2 DEBUG oslo_concurrency.lockutils [req-dc76c1db-018d-4e05-ba4c-23613acd6a7f req-62b05d71-2a82-4ffe-b4cb-82b35b041ce5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.708 2 DEBUG nova.compute.manager [req-dc76c1db-018d-4e05-ba4c-23613acd6a7f req-62b05d71-2a82-4ffe-b4cb-82b35b041ce5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Processing event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:45 compute-0 podman[290667]: 2025-10-02 12:22:45.746271362 +0000 UTC m=+0.180182707 container create df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:22:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16677038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.823 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.830 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.864 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:45 compute-0 systemd[1]: Started libpod-conmon-df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf.scope.
Oct 02 12:22:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.906 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.906 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c986eef8f5fae99d19b291ae7a151aa9b5ae7857fee67154194254fb1e41fb5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:22:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:22:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4097927493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2272921810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/16677038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-0 podman[290667]: 2025-10-02 12:22:45.951056599 +0000 UTC m=+0.384967944 container init df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:22:45 compute-0 podman[290667]: 2025-10-02 12:22:45.958269707 +0000 UTC m=+0.392181052 container start df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.977 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.979 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407765.977029, ef0d5be3-6f62-49b2-82fa-b646bea14a10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.980 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] VM Started (Lifecycle Event)
Oct 02 12:22:45 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [NOTICE]   (290713) : New worker (290715) forked
Oct 02 12:22:45 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [NOTICE]   (290713) : Loading success.
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.989 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.993 2 INFO nova.virt.libvirt.driver [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Instance spawned successfully.
Oct 02 12:22:45 compute-0 nova_compute[256940]: 2025-10-02 12:22:45.994 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.028 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.034 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.037 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.038 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.038 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.038 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.039 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.039 2 DEBUG nova.virt.libvirt.driver [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.071 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.071 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407765.9773242, ef0d5be3-6f62-49b2-82fa-b646bea14a10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.072 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] VM Paused (Lifecycle Event)
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.096 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.108 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407765.982531, ef0d5be3-6f62-49b2-82fa-b646bea14a10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.109 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] VM Resumed (Lifecycle Event)
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.122 2 INFO nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Took 16.91 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.123 2 DEBUG nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.137 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.146 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.207 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.224 2 INFO nova.compute.manager [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Took 18.25 seconds to build instance.
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.267 2 DEBUG oslo_concurrency.lockutils [None req-2cd32833-e55d-4388-ae8a-ab1a5e3f4db9 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.203405086 +0000 UTC m=+0.024821338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.299765817 +0000 UTC m=+0.121182039 container create 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:46 compute-0 systemd[1]: Started libpod-conmon-59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0.scope.
Oct 02 12:22:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.4422567 +0000 UTC m=+0.263672952 container init 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.452957109 +0000 UTC m=+0.274373341 container start 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:22:46 compute-0 priceless_cartwright[290753]: 167 167
Oct 02 12:22:46 compute-0 systemd[1]: libpod-59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0.scope: Deactivated successfully.
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.541894576 +0000 UTC m=+0.363310908 container attach 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.548018536 +0000 UTC m=+0.369434798 container died 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7944337569edea2a6d58904cbb08e1d2a4cfd82228a5abe96b3b91531f77bf4-merged.mount: Deactivated successfully.
Oct 02 12:22:46 compute-0 podman[290737]: 2025-10-02 12:22:46.833082285 +0000 UTC m=+0.654498517 container remove 59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.906 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.910 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-0 nova_compute[256940]: 2025-10-02 12:22:46.911 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-0 systemd[1]: libpod-conmon-59baee937ffe85f9809ab51d8b9a5b48d4ada53c5e648c2502eef7cbf822f5a0.scope: Deactivated successfully.
Oct 02 12:22:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 12:22:47 compute-0 podman[290777]: 2025-10-02 12:22:47.004607815 +0000 UTC m=+0.027307882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:47 compute-0 podman[290777]: 2025-10-02 12:22:47.116494981 +0000 UTC m=+0.139195018 container create 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:47 compute-0 systemd[1]: Started libpod-conmon-8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444.scope.
Oct 02 12:22:47 compute-0 ceph-mon[73668]: pgmap v1339: 305 pgs: 305 active+clean; 134 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 114 op/s
Oct 02 12:22:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be11155a2325b108352fb42b5f21f65530fc9a03ce9c6ccd73a1556114eee975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be11155a2325b108352fb42b5f21f65530fc9a03ce9c6ccd73a1556114eee975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be11155a2325b108352fb42b5f21f65530fc9a03ce9c6ccd73a1556114eee975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be11155a2325b108352fb42b5f21f65530fc9a03ce9c6ccd73a1556114eee975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:47 compute-0 podman[290777]: 2025-10-02 12:22:47.407180747 +0000 UTC m=+0.429880804 container init 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:22:47 compute-0 podman[290777]: 2025-10-02 12:22:47.415464163 +0000 UTC m=+0.438164200 container start 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:22:47 compute-0 podman[290777]: 2025-10-02 12:22:47.480072477 +0000 UTC m=+0.502772544 container attach 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.921 2 DEBUG nova.compute.manager [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.922 2 DEBUG oslo_concurrency.lockutils [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.922 2 DEBUG oslo_concurrency.lockutils [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.923 2 DEBUG oslo_concurrency.lockutils [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.923 2 DEBUG nova.compute.manager [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] No waiting events found dispatching network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:47 compute-0 nova_compute[256940]: 2025-10-02 12:22:47.923 2 WARNING nova.compute.manager [req-a6d2d095-fbd8-4abe-af39-dca5d98d3e1f req-d3058970-4943-493e-b6f7-a6bef3c67187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received unexpected event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b for instance with vm_state active and task_state None.
Oct 02 12:22:48 compute-0 vibrant_payne[290794]: {
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:     "1": [
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:         {
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "devices": [
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "/dev/loop3"
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             ],
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "lv_name": "ceph_lv0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "lv_size": "7511998464",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "name": "ceph_lv0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "tags": {
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.cluster_name": "ceph",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.crush_device_class": "",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.encrypted": "0",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.osd_id": "1",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.type": "block",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:                 "ceph.vdo": "0"
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             },
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "type": "block",
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:             "vg_name": "ceph_vg0"
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:         }
Oct 02 12:22:48 compute-0 vibrant_payne[290794]:     ]
Oct 02 12:22:48 compute-0 vibrant_payne[290794]: }
Oct 02 12:22:48 compute-0 systemd[1]: libpod-8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444.scope: Deactivated successfully.
Oct 02 12:22:48 compute-0 podman[290777]: 2025-10-02 12:22:48.289947774 +0000 UTC m=+1.312647821 container died 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-be11155a2325b108352fb42b5f21f65530fc9a03ce9c6ccd73a1556114eee975-merged.mount: Deactivated successfully.
Oct 02 12:22:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 33 op/s
Oct 02 12:22:49 compute-0 ceph-mon[73668]: pgmap v1340: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 12:22:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:49 compute-0 podman[290777]: 2025-10-02 12:22:49.708995337 +0000 UTC m=+2.731695394 container remove 8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:49 compute-0 sudo[290598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:49 compute-0 systemd[1]: libpod-conmon-8dcbbf49c542bf36e171d0dc8283a1a555cd03e0b91dc2f26c88d999e7255444.scope: Deactivated successfully.
Oct 02 12:22:49 compute-0 nova_compute[256940]: 2025-10-02 12:22:49.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:49 compute-0 sudo[290816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:49 compute-0 sudo[290816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:49 compute-0 sudo[290816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:49 compute-0 sudo[290841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:22:49 compute-0 sudo[290841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:49 compute-0 sudo[290841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:49.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:49 compute-0 sudo[290866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:49 compute-0 sudo[290866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:49 compute-0 sudo[290866]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:50 compute-0 sudo[290891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:22:50 compute-0 sudo[290891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:50 compute-0 podman[290953]: 2025-10-02 12:22:50.342892547 +0000 UTC m=+0.026574864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:50 compute-0 podman[290953]: 2025-10-02 12:22:50.643303776 +0000 UTC m=+0.326986113 container create e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:22:50 compute-0 ceph-mon[73668]: pgmap v1341: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 33 op/s
Oct 02 12:22:50 compute-0 systemd[1]: Started libpod-conmon-e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd.scope.
Oct 02 12:22:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 29 KiB/s wr, 69 op/s
Oct 02 12:22:51 compute-0 podman[290953]: 2025-10-02 12:22:51.048693451 +0000 UTC m=+0.732375768 container init e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:22:51 compute-0 podman[290953]: 2025-10-02 12:22:51.057119201 +0000 UTC m=+0.740801498 container start e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:22:51 compute-0 dazzling_shaw[290970]: 167 167
Oct 02 12:22:51 compute-0 systemd[1]: libpod-e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd.scope: Deactivated successfully.
Oct 02 12:22:51 compute-0 nova_compute[256940]: 2025-10-02 12:22:51.151 2 DEBUG nova.compute.manager [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:51 compute-0 nova_compute[256940]: 2025-10-02 12:22:51.209 2 INFO nova.compute.manager [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] instance snapshotting
Oct 02 12:22:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:51 compute-0 podman[290953]: 2025-10-02 12:22:51.294088007 +0000 UTC m=+0.977770334 container attach e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:22:51 compute-0 podman[290953]: 2025-10-02 12:22:51.294678152 +0000 UTC m=+0.978360469 container died e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:22:51 compute-0 sudo[290986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:51 compute-0 sudo[290986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:51 compute-0 sudo[290986]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:51 compute-0 sudo[291011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:51 compute-0 sudo[291011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:51 compute-0 sudo[291011]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c646e4fa760824b2eaa88bfd31b45b6359dd41b5cb12390f3ca1ab846ba3b44-merged.mount: Deactivated successfully.
Oct 02 12:22:51 compute-0 nova_compute[256940]: 2025-10-02 12:22:51.670 2 INFO nova.virt.libvirt.driver [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Beginning live snapshot process
Oct 02 12:22:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:51.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:51 compute-0 nova_compute[256940]: 2025-10-02 12:22:51.988 2 DEBUG nova.virt.libvirt.imagebackend [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:22:52 compute-0 podman[290953]: 2025-10-02 12:22:52.510739435 +0000 UTC m=+2.194421732 container remove e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:22:52 compute-0 ceph-mon[73668]: pgmap v1342: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 29 KiB/s wr, 69 op/s
Oct 02 12:22:52 compute-0 systemd[1]: libpod-conmon-e392a325809b20f536b9602db524dc3c6cfd6c7a17e74669e4ea3894397b9fbd.scope: Deactivated successfully.
Oct 02 12:22:52 compute-0 nova_compute[256940]: 2025-10-02 12:22:52.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:52 compute-0 podman[291078]: 2025-10-02 12:22:52.707911824 +0000 UTC m=+0.041581405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:22:52 compute-0 nova_compute[256940]: 2025-10-02 12:22:52.821 2 DEBUG nova.storage.rbd_utils [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(e20ceefb2aa6470abfc07604cc8216e2) on rbd image(ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:22:52 compute-0 podman[291078]: 2025-10-02 12:22:52.947040966 +0000 UTC m=+0.280710507 container create e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:22:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Oct 02 12:22:53 compute-0 systemd[1]: Started libpod-conmon-e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5.scope.
Oct 02 12:22:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1983ad1b4b60edff3cfd2b0c7b5ebd613f05bcdfc78deb008e8cd3d96311dc49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1983ad1b4b60edff3cfd2b0c7b5ebd613f05bcdfc78deb008e8cd3d96311dc49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1983ad1b4b60edff3cfd2b0c7b5ebd613f05bcdfc78deb008e8cd3d96311dc49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1983ad1b4b60edff3cfd2b0c7b5ebd613f05bcdfc78deb008e8cd3d96311dc49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:53 compute-0 podman[291078]: 2025-10-02 12:22:53.327498401 +0000 UTC m=+0.661167932 container init e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:22:53 compute-0 podman[291078]: 2025-10-02 12:22:53.340628904 +0000 UTC m=+0.674298435 container start e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:53 compute-0 podman[291078]: 2025-10-02 12:22:53.632704906 +0000 UTC m=+0.966374607 container attach e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 02 12:22:53 compute-0 nova_compute[256940]: 2025-10-02 12:22:53.861 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407758.8598077, e91892fc-a992-4187-9137-db6f648aa851 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:53 compute-0 nova_compute[256940]: 2025-10-02 12:22:53.863 2 INFO nova.compute.manager [-] [instance: e91892fc-a992-4187-9137-db6f648aa851] VM Stopped (Lifecycle Event)
Oct 02 12:22:53 compute-0 nova_compute[256940]: 2025-10-02 12:22:53.888 2 DEBUG nova.compute.manager [None req-d167f17c-668a-44d7-b164-c23f6e9bc0cf - - - - - -] [instance: e91892fc-a992-4187-9137-db6f648aa851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]: {
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:         "osd_id": 1,
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:         "type": "bluestore"
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]:     }
Oct 02 12:22:54 compute-0 dreamy_fermat[291112]: }
Oct 02 12:22:54 compute-0 systemd[1]: libpod-e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5.scope: Deactivated successfully.
Oct 02 12:22:54 compute-0 podman[291078]: 2025-10-02 12:22:54.22955238 +0000 UTC m=+1.563221911 container died e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:22:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 02 12:22:54 compute-0 nova_compute[256940]: 2025-10-02 12:22:54.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:54 compute-0 ceph-mon[73668]: pgmap v1343: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Oct 02 12:22:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 119 op/s
Oct 02 12:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1983ad1b4b60edff3cfd2b0c7b5ebd613f05bcdfc78deb008e8cd3d96311dc49-merged.mount: Deactivated successfully.
Oct 02 12:22:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 02 12:22:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:56 compute-0 podman[291078]: 2025-10-02 12:22:56.331300625 +0000 UTC m=+3.664970166 container remove e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:22:56 compute-0 sudo[290891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:56 compute-0 ceph-mon[73668]: osdmap e199: 3 total, 3 up, 3 in
Oct 02 12:22:56 compute-0 podman[291146]: 2025-10-02 12:22:56.402028978 +0000 UTC m=+0.067158791 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:22:56 compute-0 systemd[1]: libpod-conmon-e17945b597a9854203ddebfcd72b2340133c898cdaf305bcc5584bcd178988b5.scope: Deactivated successfully.
Oct 02 12:22:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:22:56 compute-0 podman[291147]: 2025-10-02 12:22:56.502048735 +0000 UTC m=+0.165038023 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:22:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:22:56 compute-0 nova_compute[256940]: 2025-10-02 12:22:56.762 2 DEBUG nova.storage.rbd_utils [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk@e20ceefb2aa6470abfc07604cc8216e2 to images/38906c7c-521b-42f0-a991-950eaae1ee1c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:22:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 023739c8-b90e-492a-a727-43f28c81ac76 does not exist
Oct 02 12:22:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a2835b66-521f-4336-bde7-a145aa275459 does not exist
Oct 02 12:22:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d6a6783a-4b32-4fdd-98d2-c4358862c941 does not exist
Oct 02 12:22:56 compute-0 sudo[291227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:56 compute-0 sudo[291227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:56 compute-0 sudo[291227]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:22:57 compute-0 sudo[291252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:22:57 compute-0 sudo[291252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:57 compute-0 sudo[291252]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:22:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:22:57 compute-0 nova_compute[256940]: 2025-10-02 12:22:57.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:57 compute-0 ceph-mon[73668]: pgmap v1345: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 119 op/s
Oct 02 12:22:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:57.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:57 compute-0 nova_compute[256940]: 2025-10-02 12:22:57.996 2 DEBUG nova.storage.rbd_utils [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/38906c7c-521b-42f0-a991-950eaae1ee1c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:22:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:22:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:22:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:59.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:22:59 compute-0 ceph-mon[73668]: pgmap v1346: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:22:59 compute-0 nova_compute[256940]: 2025-10-02 12:22:59.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:22:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:00 compute-0 nova_compute[256940]: 2025-10-02 12:23:00.718 2 DEBUG nova.storage.rbd_utils [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(e20ceefb2aa6470abfc07604cc8216e2) on rbd image(ef0d5be3-6f62-49b2-82fa-b646bea14a10_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:23:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:01 compute-0 ceph-mon[73668]: pgmap v1347: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:23:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 141 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 44 KiB/s wr, 152 op/s
Oct 02 12:23:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:01.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:01.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 02 12:23:02 compute-0 nova_compute[256940]: 2025-10-02 12:23:02.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 02 12:23:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 192 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct 02 12:23:03 compute-0 ceph-mon[73668]: pgmap v1348: 305 pgs: 305 active+clean; 141 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 44 KiB/s wr, 152 op/s
Oct 02 12:23:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 02 12:23:03 compute-0 nova_compute[256940]: 2025-10-02 12:23:03.497 2 DEBUG nova.storage.rbd_utils [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(38906c7c-521b-42f0-a991-950eaae1ee1c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:23:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:03.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:04 compute-0 ceph-mon[73668]: pgmap v1349: 305 pgs: 305 active+clean; 192 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct 02 12:23:04 compute-0 ceph-mon[73668]: osdmap e200: 3 total, 3 up, 3 in
Oct 02 12:23:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 02 12:23:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 02 12:23:04 compute-0 nova_compute[256940]: 2025-10-02 12:23:04.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 02 12:23:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 202 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.0 MiB/s wr, 150 op/s
Oct 02 12:23:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:05.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:05 compute-0 ceph-mon[73668]: osdmap e201: 3 total, 3 up, 3 in
Oct 02 12:23:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 193 op/s
Oct 02 12:23:07 compute-0 nova_compute[256940]: 2025-10-02 12:23:07.037 2 INFO nova.virt.libvirt.driver [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Snapshot image upload complete
Oct 02 12:23:07 compute-0 nova_compute[256940]: 2025-10-02 12:23:07.038 2 INFO nova.compute.manager [None req-946b7eeb-3027-422a-ba3d-bfc05925c925 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Took 15.83 seconds to snapshot the instance on the hypervisor.
Oct 02 12:23:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:07 compute-0 ceph-mon[73668]: pgmap v1352: 305 pgs: 305 active+clean; 202 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.0 MiB/s wr, 150 op/s
Oct 02 12:23:07 compute-0 nova_compute[256940]: 2025-10-02 12:23:07.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:07 compute-0 ovn_controller[148123]: 2025-10-02T12:23:07Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:36:88 10.100.0.6
Oct 02 12:23:07 compute-0 ovn_controller[148123]: 2025-10-02T12:23:07Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:36:88 10.100.0.6
Oct 02 12:23:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:07.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.7 MiB/s wr, 140 op/s
Oct 02 12:23:09 compute-0 ceph-mon[73668]: pgmap v1353: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 193 op/s
Oct 02 12:23:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:09 compute-0 nova_compute[256940]: 2025-10-02 12:23:09.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 02 12:23:10 compute-0 ceph-mon[73668]: pgmap v1354: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.7 MiB/s wr, 140 op/s
Oct 02 12:23:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 206 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 476 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Oct 02 12:23:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:11 compute-0 sudo[291341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:11 compute-0 sudo[291341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:11 compute-0 sudo[291341]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:11 compute-0 sudo[291369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:11 compute-0 sudo[291369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:11 compute-0 sudo[291369]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:11 compute-0 podman[291365]: 2025-10-02 12:23:11.572309644 +0000 UTC m=+0.072861676 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 02 12:23:11 compute-0 podman[291366]: 2025-10-02 12:23:11.591125714 +0000 UTC m=+0.097720543 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:23:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 02 12:23:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:11.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:12 compute-0 nova_compute[256940]: 2025-10-02 12:23:12.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:12 compute-0 ceph-mon[73668]: pgmap v1355: 305 pgs: 305 active+clean; 206 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 476 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Oct 02 12:23:12 compute-0 ceph-mon[73668]: osdmap e202: 3 total, 3 up, 3 in
Oct 02 12:23:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3526900912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 218 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Oct 02 12:23:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:13.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:13.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:14 compute-0 nova_compute[256940]: 2025-10-02 12:23:14.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 222 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Oct 02 12:23:15 compute-0 ceph-mon[73668]: pgmap v1357: 305 pgs: 305 active+clean; 218 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Oct 02 12:23:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:15.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:15.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:16 compute-0 ceph-mon[73668]: pgmap v1358: 305 pgs: 305 active+clean; 222 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Oct 02 12:23:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:17 compute-0 nova_compute[256940]: 2025-10-02 12:23:17.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:17.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:18 compute-0 ceph-mon[73668]: pgmap v1359: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:19.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:19 compute-0 nova_compute[256940]: 2025-10-02 12:23:19.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:19.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 239 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 12:23:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:21.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:21 compute-0 ceph-mon[73668]: pgmap v1360: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/291030701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3021617743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:21.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:22.156 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:22 compute-0 nova_compute[256940]: 2025-10-02 12:23:22.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:22.158 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:23:22 compute-0 nova_compute[256940]: 2025-10-02 12:23:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 243 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 881 KiB/s wr, 90 op/s
Oct 02 12:23:23 compute-0 ceph-mon[73668]: pgmap v1361: 305 pgs: 305 active+clean; 239 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 12:23:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:23.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:23.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 02 12:23:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 02 12:23:24 compute-0 nova_compute[256940]: 2025-10-02 12:23:24.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:24 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 02 12:23:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 251 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Oct 02 12:23:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:25 compute-0 ceph-mon[73668]: pgmap v1362: 305 pgs: 305 active+clean; 243 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 881 KiB/s wr, 90 op/s
Oct 02 12:23:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:26.161 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:26.459 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:26.460 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:27 compute-0 ceph-mon[73668]: osdmap e203: 3 total, 3 up, 3 in
Oct 02 12:23:27 compute-0 ceph-mon[73668]: pgmap v1364: 305 pgs: 305 active+clean; 251 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Oct 02 12:23:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:27 compute-0 podman[291438]: 2025-10-02 12:23:27.408916922 +0000 UTC m=+0.078322668 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 12:23:27 compute-0 podman[291439]: 2025-10-02 12:23:27.483520414 +0000 UTC m=+0.146621296 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct 02 12:23:27 compute-0 nova_compute[256940]: 2025-10-02 12:23:27.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:28 compute-0 ceph-mon[73668]: pgmap v1365: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:23:28
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'images']
Oct 02 12:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:23:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:29 compute-0 nova_compute[256940]: 2025-10-02 12:23:29.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:29.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:30 compute-0 ceph-mon[73668]: pgmap v1366: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Oct 02 12:23:31 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 12:23:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:31 compute-0 sudo[291486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:31 compute-0 sudo[291486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:31 compute-0 sudo[291486]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:31 compute-0 sudo[291511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:31 compute-0 sudo[291511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:31 compute-0 sudo[291511]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:32 compute-0 ceph-mon[73668]: pgmap v1367: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Oct 02 12:23:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1782790133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:32 compute-0 nova_compute[256940]: 2025-10-02 12:23:32.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 02 12:23:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 02 12:23:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:23:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:23:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 02 12:23:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1057627497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:34 compute-0 nova_compute[256940]: 2025-10-02 12:23:34.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 02 12:23:35 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 02 12:23:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:35.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:35 compute-0 ceph-mon[73668]: pgmap v1368: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 02 12:23:35 compute-0 ceph-mon[73668]: osdmap e204: 3 total, 3 up, 3 in
Oct 02 12:23:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:37.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:37 compute-0 nova_compute[256940]: 2025-10-02 12:23:37.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:37 compute-0 ceph-mon[73668]: pgmap v1370: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 02 12:23:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:23:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7102 writes, 30K keys, 7102 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 7101 writes, 7101 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1763 writes, 7589 keys, 1763 commit groups, 1.0 writes per commit group, ingest: 11.51 MB, 0.02 MB/s
                                           Interval WAL: 1762 writes, 1762 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     61.4      0.61              0.12        17    0.036       0      0       0.0       0.0
                                             L6      1/0    8.23 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     90.1     74.4      1.88              0.45        16    0.117     79K   8941       0.0       0.0
                                            Sum      1/0    8.23 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     67.9     71.2      2.49              0.57        33    0.075     79K   8941       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     39.0     40.0      1.24              0.16         8    0.155     23K   2584       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     90.1     74.4      1.88              0.45        16    0.117     79K   8941       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     61.7      0.61              0.12        16    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.037, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.17 GB read, 0.07 MB/s read, 2.5 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 17.43 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000234 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1029,16.80 MB,5.52595%) FilterBlock(34,228.30 KB,0.0733376%) IndexBlock(34,417.36 KB,0.134072%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:23:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:39.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:39 compute-0 ceph-mon[73668]: pgmap v1371: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:39 compute-0 nova_compute[256940]: 2025-10-02 12:23:39.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00217504623813512 of space, bias 1.0, pg target 0.652513871440536 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166867322724735 of space, bias 1.0, pg target 0.6500601968174204 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0038433632281779265 of space, bias 1.0, pg target 1.153008968453378 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.861 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.862 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.863 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.863 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.863 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.865 2 INFO nova.compute.manager [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Terminating instance
Oct 02 12:23:40 compute-0 nova_compute[256940]: 2025-10-02 12:23:40.867 2 DEBUG nova.compute.manager [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:23:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 276 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 568 KiB/s wr, 65 op/s
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:41.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 02 12:23:41 compute-0 ceph-mon[73668]: pgmap v1372: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:41 compute-0 kernel: tap58fa2695-ab (unregistering): left promiscuous mode
Oct 02 12:23:41 compute-0 NetworkManager[44981]: <info>  [1759407821.7252] device (tap58fa2695-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:41 compute-0 ovn_controller[148123]: 2025-10-02T12:23:41Z|00168|binding|INFO|Releasing lport 58fa2695-ab12-4921-9efa-8934cf29f79b from this chassis (sb_readonly=0)
Oct 02 12:23:41 compute-0 ovn_controller[148123]: 2025-10-02T12:23:41Z|00169|binding|INFO|Setting lport 58fa2695-ab12-4921-9efa-8934cf29f79b down in Southbound
Oct 02 12:23:41 compute-0 ovn_controller[148123]: 2025-10-02T12:23:41Z|00170|binding|INFO|Removing iface tap58fa2695-ab ovn-installed in OVS
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:41.799 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:36:88 10.100.0.6'], port_security=['fa:16:3e:6b:36:88 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ef0d5be3-6f62-49b2-82fa-b646bea14a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=58fa2695-ab12-4921-9efa-8934cf29f79b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:41.801 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 58fa2695-ab12-4921-9efa-8934cf29f79b in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:23:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:41.804 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:41.806 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6d7c7f-c958-45f6-8af1-e39ee8f759ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:41.807 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore
Oct 02 12:23:41 compute-0 podman[291543]: 2025-10-02 12:23:41.852407401 +0000 UTC m=+0.098607327 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 12:23:41 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 02 12:23:41 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000032.scope: Consumed 15.786s CPU time.
Oct 02 12:23:41 compute-0 systemd-machined[210927]: Machine qemu-24-instance-00000032 terminated.
Oct 02 12:23:41 compute-0 podman[291558]: 2025-10-02 12:23:41.86505046 +0000 UTC m=+0.069878390 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.906 2 INFO nova.virt.libvirt.driver [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Instance destroyed successfully.
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.907 2 DEBUG nova.objects.instance [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid ef0d5be3-6f62-49b2-82fa-b646bea14a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.951 2 DEBUG nova.virt.libvirt.vif [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-216045878',display_name='tempest-ImagesTestJSON-server-216045878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-216045878',id=50,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-ybit459w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:07Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=ef0d5be3-6f62-49b2-82fa-b646bea14a10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.952 2 DEBUG nova.network.os_vif_util [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "58fa2695-ab12-4921-9efa-8934cf29f79b", "address": "fa:16:3e:6b:36:88", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58fa2695-ab", "ovs_interfaceid": "58fa2695-ab12-4921-9efa-8934cf29f79b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.953 2 DEBUG nova.network.os_vif_util [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.954 2 DEBUG os_vif [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fa2695-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-0 nova_compute[256940]: 2025-10-02 12:23:41.961 2 INFO os_vif [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:36:88,bridge_name='br-int',has_traffic_filtering=True,id=58fa2695-ab12-4921-9efa-8934cf29f79b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58fa2695-ab')
Oct 02 12:23:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.243 2 DEBUG nova.compute.manager [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-unplugged-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.245 2 DEBUG oslo_concurrency.lockutils [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.245 2 DEBUG oslo_concurrency.lockutils [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.246 2 DEBUG oslo_concurrency.lockutils [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.247 2 DEBUG nova.compute.manager [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] No waiting events found dispatching network-vif-unplugged-58fa2695-ab12-4921-9efa-8934cf29f79b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:42 compute-0 nova_compute[256940]: 2025-10-02 12:23:42.247 2 DEBUG nova.compute.manager [req-4c01fb13-a250-4da3-80d5-00671fe13378 req-188427ca-c458-46fd-9d78-09c8962a4fc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-unplugged-58fa2695-ab12-4921-9efa-8934cf29f79b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [NOTICE]   (290713) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [NOTICE]   (290713) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [WARNING]  (290713) : Exiting Master process...
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [WARNING]  (290713) : Exiting Master process...
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [ALERT]    (290713) : Current worker (290715) exited with code 143 (Terminated)
Oct 02 12:23:42 compute-0 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[290709]: [WARNING]  (290713) : All workers exited. Exiting... (0)
Oct 02 12:23:42 compute-0 systemd[1]: libpod-df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf.scope: Deactivated successfully.
Oct 02 12:23:42 compute-0 podman[291618]: 2025-10-02 12:23:42.539770876 +0000 UTC m=+0.620243300 container died df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:23:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 02 12:23:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 263 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 90 op/s
Oct 02 12:23:43 compute-0 ceph-mon[73668]: pgmap v1373: 305 pgs: 305 active+clean; 276 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 568 KiB/s wr, 65 op/s
Oct 02 12:23:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/85177071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/550051481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1889210619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3492679320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-0 ceph-mon[73668]: osdmap e205: 3 total, 3 up, 3 in
Oct 02 12:23:43 compute-0 nova_compute[256940]: 2025-10-02 12:23:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:43 compute-0 nova_compute[256940]: 2025-10-02 12:23:43.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:23:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:43.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c986eef8f5fae99d19b291ae7a151aa9b5ae7857fee67154194254fb1e41fb5b-merged.mount: Deactivated successfully.
Oct 02 12:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:43.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.219 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.220 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.220 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.245 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.245 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.404 2 DEBUG nova.compute.manager [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.408 2 DEBUG oslo_concurrency.lockutils [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.409 2 DEBUG oslo_concurrency.lockutils [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.409 2 DEBUG oslo_concurrency.lockutils [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.410 2 DEBUG nova.compute.manager [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] No waiting events found dispatching network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.411 2 WARNING nova.compute.manager [req-9d0e6498-c79d-4570-9d52-68901b44a923 req-88bac4ed-5062-4e84-a52d-b3b916afefc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received unexpected event network-vif-plugged-58fa2695-ab12-4921-9efa-8934cf29f79b for instance with vm_state active and task_state deleting.
Oct 02 12:23:44 compute-0 nova_compute[256940]: 2025-10-02 12:23:44.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:44 compute-0 podman[291618]: 2025-10-02 12:23:44.986937298 +0000 UTC m=+3.067409722 container cleanup df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:23:44 compute-0 ceph-mon[73668]: pgmap v1375: 305 pgs: 305 active+clean; 263 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 90 op/s
Oct 02 12:23:45 compute-0 systemd[1]: libpod-conmon-df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf.scope: Deactivated successfully.
Oct 02 12:23:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 234 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.225 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.269 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.347 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.347 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.348 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.348 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.348 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:45.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127619204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.852 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:45 compute-0 podman[291671]: 2025-10-02 12:23:45.955021628 +0000 UTC m=+0.923362607 container remove df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.960 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.961 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:23:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:45.963 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10dc0ba3-5aeb-4595-869b-d3175c25e42a]: (4, ('Thu Oct  2 12:23:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf)\ndf89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf\nThu Oct  2 12:23:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (df89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf)\ndf89b7208314c620e6523e1d84d0f082f0e6dd3169f532f540eb7c7792e52cbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:45.968 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[99c1b8d6-398f-4147-9e86-9dd813d9ff71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:45.970 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:45 compute-0 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct 02 12:23:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:23:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:23:45 compute-0 nova_compute[256940]: 2025-10-02 12:23:45.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:45.991 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a87bbe1f-e51e-4bc0-bc61-4b47f778f692]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:46 compute-0 virtqemud[257589]: An error occurred, but the cause is unknown
Oct 02 12:23:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:46.022 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffe9025-de41-420c-85c0-ae1072ea2721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:46.025 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fae19bf6-fed9-4f1a-b3b2-b90e76b2d8a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:46.049 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[59ad0e6d-f0c3-4a5c-bb87-8dbd50ac9448]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566033, 'reachable_time': 16139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291708, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:46.052 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:23:46.052 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[715d27fb-048d-441d-8021-df13d52b076c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:46 compute-0 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.148 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.149 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4655MB free_disk=20.922557830810547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.150 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.150 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.253 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance ef0d5be3-6f62-49b2-82fa-b646bea14a10 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.254 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.254 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.286 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:46 compute-0 ceph-mon[73668]: pgmap v1376: 305 pgs: 305 active+clean; 234 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:23:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/941580738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1127619204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3521193034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942404161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.933 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.940 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:46 compute-0 nova_compute[256940]: 2025-10-02 12:23:46.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:47 compute-0 nova_compute[256940]: 2025-10-02 12:23:47.018 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:47 compute-0 nova_compute[256940]: 2025-10-02 12:23:47.178 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:23:47 compute-0 nova_compute[256940]: 2025-10-02 12:23:47.179 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:48 compute-0 nova_compute[256940]: 2025-10-02 12:23:48.122 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 nova_compute[256940]: 2025-10-02 12:23:48.123 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 nova_compute[256940]: 2025-10-02 12:23:48.123 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 nova_compute[256940]: 2025-10-02 12:23:48.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3942404161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2051029789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3580906529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:49 compute-0 nova_compute[256940]: 2025-10-02 12:23:49.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:49 compute-0 ceph-mon[73668]: pgmap v1377: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/334952972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 250 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:51 compute-0 ceph-mon[73668]: pgmap v1378: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:51 compute-0 sudo[291738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:51 compute-0 sudo[291738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:51 compute-0 sudo[291738]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:51 compute-0 sudo[291763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:51 compute-0 sudo[291763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:51 compute-0 sudo[291763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:51 compute-0 nova_compute[256940]: 2025-10-02 12:23:51.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:51.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:53 compute-0 ceph-mon[73668]: pgmap v1379: 305 pgs: 305 active+clean; 250 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1853136177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3570288810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:53.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:53.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2916462947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3028286666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:54 compute-0 ceph-mon[73668]: pgmap v1380: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:54 compute-0 nova_compute[256940]: 2025-10-02 12:23:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 12:23:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:56 compute-0 ceph-mon[73668]: pgmap v1381: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.640 2 INFO nova.virt.libvirt.driver [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Deleting instance files /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10_del
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.642 2 INFO nova.virt.libvirt.driver [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Deletion of /var/lib/nova/instances/ef0d5be3-6f62-49b2-82fa-b646bea14a10_del complete
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.712 2 INFO nova.compute.manager [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Took 15.84 seconds to destroy the instance on the hypervisor.
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.713 2 DEBUG oslo.service.loopingcall [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.714 2 DEBUG nova.compute.manager [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.714 2 DEBUG nova.network.neutron [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:23:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.905 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407821.904588, ef0d5be3-6f62-49b2-82fa-b646bea14a10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.906 2 INFO nova.compute.manager [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] VM Stopped (Lifecycle Event)
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.963 2 DEBUG nova.compute.manager [None req-aae00729-9672-4030-81c0-787d79a49518 - - - - - -] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:56 compute-0 nova_compute[256940]: 2025-10-02 12:23:56.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 160 op/s
Oct 02 12:23:57 compute-0 sudo[291791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:57 compute-0 sudo[291791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-0 sudo[291791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:57 compute-0 sudo[291816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:57 compute-0 sudo[291816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-0 sudo[291816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-0 sudo[291844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:57 compute-0 podman[291840]: 2025-10-02 12:23:57.54488781 +0000 UTC m=+0.077061717 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:23:57 compute-0 sudo[291844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-0 sudo[291844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-0 sudo[291891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:23:57 compute-0 sudo[291891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-0 podman[291881]: 2025-10-02 12:23:57.696356031 +0000 UTC m=+0.136269947 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:23:57 compute-0 nova_compute[256940]: 2025-10-02 12:23:57.881 2 DEBUG nova.network.neutron [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:57 compute-0 nova_compute[256940]: 2025-10-02 12:23:57.930 2 INFO nova.compute.manager [-] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Took 1.22 seconds to deallocate network for instance.
Oct 02 12:23:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:23:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:58.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:23:58 compute-0 nova_compute[256940]: 2025-10-02 12:23:58.056 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:58 compute-0 nova_compute[256940]: 2025-10-02 12:23:58.057 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:58 compute-0 nova_compute[256940]: 2025-10-02 12:23:58.062 2 DEBUG nova.compute.manager [req-1be85db6-c1c9-4dd8-bb21-e079076ef851 req-1eaab2f4-4802-4bb9-a838-1e59e201bea0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ef0d5be3-6f62-49b2-82fa-b646bea14a10] Received event network-vif-deleted-58fa2695-ab12-4921-9efa-8934cf29f79b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:58 compute-0 nova_compute[256940]: 2025-10-02 12:23:58.147 2 DEBUG oslo_concurrency.processutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:58 compute-0 sudo[291891]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:23:58 compute-0 ceph-mon[73668]: pgmap v1382: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 160 op/s
Oct 02 12:23:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 143 op/s
Oct 02 12:23:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523132386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.226 2 DEBUG oslo_concurrency.processutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.236 2 DEBUG nova.compute.provider_tree [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.363 2 DEBUG nova.scheduler.client.report [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:23:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.430 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:23:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.473 2 INFO nova.scheduler.client.report [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance ef0d5be3-6f62-49b2-82fa-b646bea14a10
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.594 2 DEBUG oslo_concurrency.lockutils [None req-53ce11ce-c8d5-4ffd-9fa5-1bec146f7e2e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "ef0d5be3-6f62-49b2-82fa-b646bea14a10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:59 compute-0 nova_compute[256940]: 2025-10-02 12:23:59.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:00.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3523132386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:24:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:24:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:24:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:24:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ccd142d1-7365-42a9-8dc6-6ab1dc0639a2 does not exist
Oct 02 12:24:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0327b99f-8e42-4f68-84a2-2e5e1abd9205 does not exist
Oct 02 12:24:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 20e0e4b6-1d5b-4790-940c-50eda7569afa does not exist
Oct 02 12:24:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:24:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:24:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:24:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 157 op/s
Oct 02 12:24:01 compute-0 sudo[291992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:01 compute-0 sudo[291992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 sudo[291992]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:01 compute-0 sudo[292017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:01 compute-0 sudo[292017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 sudo[292017]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:01 compute-0 ceph-mon[73668]: pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 143 op/s
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:01 compute-0 sudo[292042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:01 compute-0 sudo[292042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 sudo[292042]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:01 compute-0 sudo[292067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:24:01 compute-0 sudo[292067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:01 compute-0 nova_compute[256940]: 2025-10-02 12:24:01.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:01 compute-0 podman[292134]: 2025-10-02 12:24:01.897860671 +0000 UTC m=+0.047668482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:02.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:02 compute-0 podman[292134]: 2025-10-02 12:24:02.255704752 +0000 UTC m=+0.405512553 container create d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:24:02 compute-0 ceph-mon[73668]: pgmap v1384: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 157 op/s
Oct 02 12:24:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:02 compute-0 systemd[1]: Started libpod-conmon-d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292.scope.
Oct 02 12:24:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 207 op/s
Oct 02 12:24:03 compute-0 podman[292134]: 2025-10-02 12:24:03.109898787 +0000 UTC m=+1.259706598 container init d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:03 compute-0 podman[292134]: 2025-10-02 12:24:03.123900831 +0000 UTC m=+1.273708592 container start d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:03 compute-0 vigilant_greider[292151]: 167 167
Oct 02 12:24:03 compute-0 systemd[1]: libpod-d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292.scope: Deactivated successfully.
Oct 02 12:24:03 compute-0 conmon[292151]: conmon d80c8e19f93ffb4f7de2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292.scope/container/memory.events
Oct 02 12:24:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:03 compute-0 podman[292134]: 2025-10-02 12:24:03.649061556 +0000 UTC m=+1.798869417 container attach d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:03 compute-0 podman[292134]: 2025-10-02 12:24:03.650608756 +0000 UTC m=+1.800416557 container died d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 02 12:24:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 02 12:24:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 02 12:24:04 compute-0 nova_compute[256940]: 2025-10-02 12:24:04.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 263 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 852 KiB/s wr, 284 op/s
Oct 02 12:24:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b3c60e9e73c520a9ff2f40abc7bb371e0ae2421faccc7418d84a36ca9255b61-merged.mount: Deactivated successfully.
Oct 02 12:24:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:06 compute-0 ceph-mon[73668]: pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 207 op/s
Oct 02 12:24:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:06 compute-0 nova_compute[256940]: 2025-10-02 12:24:06.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:07 compute-0 podman[292134]: 2025-10-02 12:24:07.57476017 +0000 UTC m=+5.724567961 container remove d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:24:07 compute-0 systemd[1]: libpod-conmon-d80c8e19f93ffb4f7de2ef57d335ca83997e3ec250280a96270c91506f5f9292.scope: Deactivated successfully.
Oct 02 12:24:07 compute-0 podman[292177]: 2025-10-02 12:24:07.742016652 +0000 UTC m=+0.026577533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:08.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:08 compute-0 ceph-mon[73668]: osdmap e206: 3 total, 3 up, 3 in
Oct 02 12:24:08 compute-0 ceph-mon[73668]: pgmap v1387: 305 pgs: 305 active+clean; 263 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 852 KiB/s wr, 284 op/s
Oct 02 12:24:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:08 compute-0 podman[292177]: 2025-10-02 12:24:08.566636628 +0000 UTC m=+0.851197519 container create 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:24:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:09 compute-0 systemd[1]: Started libpod-conmon-7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff.scope.
Oct 02 12:24:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:09 compute-0 podman[292177]: 2025-10-02 12:24:09.474788356 +0000 UTC m=+1.759349207 container init 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:24:09 compute-0 podman[292177]: 2025-10-02 12:24:09.484967961 +0000 UTC m=+1.769528822 container start 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:24:09 compute-0 podman[292177]: 2025-10-02 12:24:09.833031187 +0000 UTC m=+2.117592048 container attach 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:24:09 compute-0 nova_compute[256940]: 2025-10-02 12:24:09.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:09 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 02 12:24:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:09.941408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:24:09 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 02 12:24:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849941462, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2575, "num_deletes": 518, "total_data_size": 3978553, "memory_usage": 4032432, "flush_reason": "Manual Compaction"}
Oct 02 12:24:09 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 02 12:24:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:10.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850380349, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3917090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29138, "largest_seqno": 31711, "table_properties": {"data_size": 3905675, "index_size": 7013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26911, "raw_average_key_size": 20, "raw_value_size": 3880750, "raw_average_value_size": 2917, "num_data_blocks": 302, "num_entries": 1330, "num_filter_entries": 1330, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407646, "oldest_key_time": 1759407646, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 439039 microseconds, and 8138 cpu microseconds.
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:10 compute-0 hopeful_rubin[292195]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:24:10 compute-0 hopeful_rubin[292195]: --> relative data size: 1.0
Oct 02 12:24:10 compute-0 hopeful_rubin[292195]: --> All data devices are unavailable
Oct 02 12:24:10 compute-0 systemd[1]: libpod-7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff.scope: Deactivated successfully.
Oct 02 12:24:10 compute-0 podman[292177]: 2025-10-02 12:24:10.432092225 +0000 UTC m=+2.716653076 container died 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.380441) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3917090 bytes OK
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.380493) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.631684) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.631748) EVENT_LOG_v1 {"time_micros": 1759407850631735, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.631782) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3966838, prev total WAL file size 3970467, number of live WAL files 2.
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.633426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3825KB)], [65(8427KB)]
Oct 02 12:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850633596, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12546897, "oldest_snapshot_seqno": -1}
Oct 02 12:24:10 compute-0 ceph-mon[73668]: pgmap v1388: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 242 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 293 op/s
Oct 02 12:24:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:24:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5684 keys, 10470649 bytes, temperature: kUnknown
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407851492834, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10470649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10430242, "index_size": 25065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146188, "raw_average_key_size": 25, "raw_value_size": 10325687, "raw_average_value_size": 1816, "num_data_blocks": 1008, "num_entries": 5684, "num_filter_entries": 5684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759407850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.493264) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10470649 bytes
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.599982) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 14.6 rd, 12.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6736, records dropped: 1052 output_compression: NoCompression
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.600141) EVENT_LOG_v1 {"time_micros": 1759407851600020, "job": 36, "event": "compaction_finished", "compaction_time_micros": 859316, "compaction_time_cpu_micros": 33311, "output_level": 6, "num_output_files": 1, "total_output_size": 10470649, "num_input_records": 6736, "num_output_records": 5684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407851601704, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407851603981, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:10.633246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.604043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.604051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.604053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.604054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:24:11.604056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2eec11ab008cf8939cbab122160fe7d00b405cb186db6d76105b9a5165f5991-merged.mount: Deactivated successfully.
Oct 02 12:24:11 compute-0 nova_compute[256940]: 2025-10-02 12:24:11.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:12 compute-0 sudo[292226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:12 compute-0 sudo[292226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 sudo[292226]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 sudo[292264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:12 compute-0 sudo[292264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:12 compute-0 sudo[292264]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 02 12:24:12 compute-0 ceph-mon[73668]: pgmap v1389: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 276 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.6 MiB/s wr, 231 op/s
Oct 02 12:24:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 02 12:24:13 compute-0 podman[292177]: 2025-10-02 12:24:13.310314253 +0000 UTC m=+5.594875144 container remove 7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rubin, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:13 compute-0 systemd[1]: libpod-conmon-7312fbe366d2cf9faa2d1fe960c47cae17afb469b99b4383bb404f03140f71ff.scope: Deactivated successfully.
Oct 02 12:24:13 compute-0 sudo[292067]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:13 compute-0 podman[292250]: 2025-10-02 12:24:13.379558535 +0000 UTC m=+1.311464524 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:13 compute-0 podman[292251]: 2025-10-02 12:24:13.383133548 +0000 UTC m=+1.316181757 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:24:13 compute-0 sudo[292312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:13 compute-0 sudo[292312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:13 compute-0 sudo[292312]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:13 compute-0 sudo[292342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:13 compute-0 sudo[292342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:13 compute-0 sudo[292342]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 02 12:24:13 compute-0 sudo[292367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:13 compute-0 sudo[292367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:13 compute-0 sudo[292367]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:13 compute-0 sudo[292392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:24:13 compute-0 sudo[292392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:14.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:14 compute-0 ceph-mon[73668]: pgmap v1390: 305 pgs: 305 active+clean; 242 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 293 op/s
Oct 02 12:24:14 compute-0 ceph-mon[73668]: osdmap e207: 3 total, 3 up, 3 in
Oct 02 12:24:14 compute-0 podman[292459]: 2025-10-02 12:24:14.009003013 +0000 UTC m=+0.030279269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:14 compute-0 podman[292459]: 2025-10-02 12:24:14.14990621 +0000 UTC m=+0.171182466 container create 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:24:14 compute-0 systemd[1]: Started libpod-conmon-7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1.scope.
Oct 02 12:24:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:14 compute-0 nova_compute[256940]: 2025-10-02 12:24:14.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:15 compute-0 podman[292459]: 2025-10-02 12:24:15.003402337 +0000 UTC m=+1.024678633 container init 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:24:15 compute-0 podman[292459]: 2025-10-02 12:24:15.012330759 +0000 UTC m=+1.033607035 container start 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:24:15 compute-0 tender_chatterjee[292476]: 167 167
Oct 02 12:24:15 compute-0 systemd[1]: libpod-7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1.scope: Deactivated successfully.
Oct 02 12:24:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 298 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.2 MiB/s wr, 180 op/s
Oct 02 12:24:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 02 12:24:15 compute-0 podman[292459]: 2025-10-02 12:24:15.121224593 +0000 UTC m=+1.142500879 container attach 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:24:15 compute-0 podman[292459]: 2025-10-02 12:24:15.126584602 +0000 UTC m=+1.147860868 container died 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:24:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 02 12:24:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:15 compute-0 ceph-mon[73668]: pgmap v1391: 305 pgs: 305 active+clean; 276 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.6 MiB/s wr, 231 op/s
Oct 02 12:24:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:16.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1603ede97f4eb99f9e551c30a7305c72f6faaa002511bda0c46a5b23daf4c11a-merged.mount: Deactivated successfully.
Oct 02 12:24:16 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 02 12:24:16 compute-0 podman[292459]: 2025-10-02 12:24:16.741280276 +0000 UTC m=+2.762556522 container remove 7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:24:16 compute-0 systemd[1]: libpod-conmon-7a30ace293cbce61d4ea5a49c5b29fb9e0ff266a776b3add3ecc3c16e2104dc1.scope: Deactivated successfully.
Oct 02 12:24:16 compute-0 nova_compute[256940]: 2025-10-02 12:24:16.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:17 compute-0 podman[292501]: 2025-10-02 12:24:16.909671888 +0000 UTC m=+0.025810643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:17 compute-0 ceph-mon[73668]: pgmap v1393: 305 pgs: 305 active+clean; 298 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.2 MiB/s wr, 180 op/s
Oct 02 12:24:17 compute-0 ceph-mon[73668]: osdmap e208: 3 total, 3 up, 3 in
Oct 02 12:24:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.8 MiB/s wr, 210 op/s
Oct 02 12:24:17 compute-0 podman[292501]: 2025-10-02 12:24:17.228998607 +0000 UTC m=+0.345137362 container create 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:24:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:17 compute-0 systemd[1]: Started libpod-conmon-38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457.scope.
Oct 02 12:24:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89257e1e1ea760f56918cb3db2091605fbcac1cedf3485ef1af56f5860a1b012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89257e1e1ea760f56918cb3db2091605fbcac1cedf3485ef1af56f5860a1b012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89257e1e1ea760f56918cb3db2091605fbcac1cedf3485ef1af56f5860a1b012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89257e1e1ea760f56918cb3db2091605fbcac1cedf3485ef1af56f5860a1b012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:18.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 02 12:24:18 compute-0 podman[292501]: 2025-10-02 12:24:18.212483076 +0000 UTC m=+1.328621851 container init 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:24:18 compute-0 podman[292501]: 2025-10-02 12:24:18.221306806 +0000 UTC m=+1.337445571 container start 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:24:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 02 12:24:18 compute-0 podman[292501]: 2025-10-02 12:24:18.447241614 +0000 UTC m=+1.563380389 container attach 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:24:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 02 12:24:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 162 KiB/s rd, 8.0 MiB/s wr, 132 op/s
Oct 02 12:24:19 compute-0 ceph-mon[73668]: pgmap v1395: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.8 MiB/s wr, 210 op/s
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]: {
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:     "1": [
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:         {
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "devices": [
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "/dev/loop3"
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             ],
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "lv_name": "ceph_lv0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "lv_size": "7511998464",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "name": "ceph_lv0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "tags": {
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.cluster_name": "ceph",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.crush_device_class": "",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.encrypted": "0",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.osd_id": "1",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.type": "block",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:                 "ceph.vdo": "0"
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             },
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "type": "block",
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:             "vg_name": "ceph_vg0"
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:         }
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]:     ]
Oct 02 12:24:19 compute-0 elastic_hofstadter[292516]: }
Oct 02 12:24:19 compute-0 systemd[1]: libpod-38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457.scope: Deactivated successfully.
Oct 02 12:24:19 compute-0 podman[292501]: 2025-10-02 12:24:19.250086984 +0000 UTC m=+2.366225729 container died 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:24:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-89257e1e1ea760f56918cb3db2091605fbcac1cedf3485ef1af56f5860a1b012-merged.mount: Deactivated successfully.
Oct 02 12:24:19 compute-0 nova_compute[256940]: 2025-10-02 12:24:19.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:20 compute-0 podman[292501]: 2025-10-02 12:24:20.328594326 +0000 UTC m=+3.444733061 container remove 38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:24:20 compute-0 systemd[1]: libpod-conmon-38913f7cea113b4b8b12b9e6495aef767d56bf78c6a160948655f60f5f604457.scope: Deactivated successfully.
Oct 02 12:24:20 compute-0 sudo[292392]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:20 compute-0 sudo[292540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:20 compute-0 sudo[292540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:20 compute-0 sudo[292540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:20 compute-0 sudo[292566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:24:20 compute-0 sudo[292566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:20 compute-0 sudo[292566]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:20 compute-0 ceph-mon[73668]: osdmap e209: 3 total, 3 up, 3 in
Oct 02 12:24:20 compute-0 ceph-mon[73668]: pgmap v1397: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 162 KiB/s rd, 8.0 MiB/s wr, 132 op/s
Oct 02 12:24:20 compute-0 sudo[292591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:20 compute-0 sudo[292591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:20 compute-0 sudo[292591]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:20 compute-0 sudo[292616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:24:20 compute-0 sudo[292616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 323 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 569 KiB/s rd, 6.6 MiB/s wr, 169 op/s
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:20.987394947 +0000 UTC m=+0.022668841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:21.183485659 +0000 UTC m=+0.218759523 container create 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:24:21 compute-0 systemd[1]: Started libpod-conmon-819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab.scope.
Oct 02 12:24:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:21.397015145 +0000 UTC m=+0.432289029 container init 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:21.406294447 +0000 UTC m=+0.441568311 container start 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:24:21 compute-0 happy_bartik[292697]: 167 167
Oct 02 12:24:21 compute-0 systemd[1]: libpod-819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab.scope: Deactivated successfully.
Oct 02 12:24:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:21.448267169 +0000 UTC m=+0.483541053 container attach 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:24:21 compute-0 podman[292681]: 2025-10-02 12:24:21.448995988 +0000 UTC m=+0.484269852 container died 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:24:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-947521b6c08d1af51fd0a3efe93bab4b681aa970b640e38163d6f28cbc796055-merged.mount: Deactivated successfully.
Oct 02 12:24:21 compute-0 nova_compute[256940]: 2025-10-02 12:24:21.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-0 podman[292681]: 2025-10-02 12:24:22.00287116 +0000 UTC m=+1.038145024 container remove 819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:24:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:22 compute-0 systemd[1]: libpod-conmon-819207c523383a9744444a73ffabbdecf14bd7f955b12eab5357586ec112a6ab.scope: Deactivated successfully.
Oct 02 12:24:22 compute-0 podman[292721]: 2025-10-02 12:24:22.231258902 +0000 UTC m=+0.103366930 container create 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:24:22 compute-0 podman[292721]: 2025-10-02 12:24:22.158634132 +0000 UTC m=+0.030742170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:24:22 compute-0 ceph-mon[73668]: pgmap v1398: 305 pgs: 305 active+clean; 323 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 569 KiB/s rd, 6.6 MiB/s wr, 169 op/s
Oct 02 12:24:22 compute-0 systemd[1]: Started libpod-conmon-05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b.scope.
Oct 02 12:24:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1ff9cb8d076056a4f0cb49a482aa9d4c352bd785b6ea413841e3c2a96971cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1ff9cb8d076056a4f0cb49a482aa9d4c352bd785b6ea413841e3c2a96971cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1ff9cb8d076056a4f0cb49a482aa9d4c352bd785b6ea413841e3c2a96971cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1ff9cb8d076056a4f0cb49a482aa9d4c352bd785b6ea413841e3c2a96971cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:22 compute-0 podman[292721]: 2025-10-02 12:24:22.51056928 +0000 UTC m=+0.382677328 container init 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:24:22 compute-0 podman[292721]: 2025-10-02 12:24:22.523310201 +0000 UTC m=+0.395418229 container start 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:24:22 compute-0 podman[292721]: 2025-10-02 12:24:22.585370506 +0000 UTC m=+0.457478524 container attach 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:24:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 02 12:24:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 02 12:24:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 02 12:24:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 302 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 826 KiB/s rd, 3.6 MiB/s wr, 158 op/s
Oct 02 12:24:23 compute-0 youthful_williamson[292738]: {
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:         "osd_id": 1,
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:         "type": "bluestore"
Oct 02 12:24:23 compute-0 youthful_williamson[292738]:     }
Oct 02 12:24:23 compute-0 youthful_williamson[292738]: }
Oct 02 12:24:23 compute-0 systemd[1]: libpod-05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b.scope: Deactivated successfully.
Oct 02 12:24:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:23 compute-0 podman[292760]: 2025-10-02 12:24:23.489704555 +0000 UTC m=+0.028790900 container died 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:24:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:24.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:24 compute-0 ceph-mon[73668]: osdmap e210: 3 total, 3 up, 3 in
Oct 02 12:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a1ff9cb8d076056a4f0cb49a482aa9d4c352bd785b6ea413841e3c2a96971cb-merged.mount: Deactivated successfully.
Oct 02 12:24:24 compute-0 nova_compute[256940]: 2025-10-02 12:24:24.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 270 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 926 KiB/s rd, 508 KiB/s wr, 165 op/s
Oct 02 12:24:25 compute-0 podman[292760]: 2025-10-02 12:24:25.076426581 +0000 UTC m=+1.615512906 container remove 05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:24:25 compute-0 systemd[1]: libpod-conmon-05e512ed482619c9576155fce4c44cecc1e763f8bbd7bcf16e79298f90e7032b.scope: Deactivated successfully.
Oct 02 12:24:25 compute-0 sudo[292616]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:24:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:24:25 compute-0 ceph-mon[73668]: pgmap v1400: 305 pgs: 305 active+clean; 302 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 826 KiB/s rd, 3.6 MiB/s wr, 158 op/s
Oct 02 12:24:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:26.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cef42c12-06ab-44f4-935b-55057d57e6a1 does not exist
Oct 02 12:24:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 65a61a79-88da-4e8f-9168-7d239722a2e6 does not exist
Oct 02 12:24:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4851b52e-ceb2-436b-a440-024f39cc7736 does not exist
Oct 02 12:24:26 compute-0 sudo[292776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:26 compute-0 sudo[292776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:26 compute-0 sudo[292776]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:26 compute-0 sudo[292801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:24:26 compute-0 sudo[292801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:26 compute-0 sudo[292801]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:26.460 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:26 compute-0 ceph-mon[73668]: pgmap v1401: 305 pgs: 305 active+clean; 270 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 926 KiB/s rd, 508 KiB/s wr, 165 op/s
Oct 02 12:24:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:26 compute-0 nova_compute[256940]: 2025-10-02 12:24:26.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 928 KiB/s rd, 523 KiB/s wr, 184 op/s
Oct 02 12:24:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:27.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 02 12:24:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 02 12:24:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 02 12:24:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:28.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:28.148 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:28.148 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:24:28 compute-0 nova_compute[256940]: 2025-10-02 12:24:28.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:28 compute-0 podman[292827]: 2025-10-02 12:24:28.440021689 +0000 UTC m=+0.091830851 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:24:28 compute-0 podman[292828]: 2025-10-02 12:24:28.469822574 +0000 UTC m=+0.119932691 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:24:28
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', '.mgr', 'vms', '.rgw.root', 'backups']
Oct 02 12:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:24:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 324 KiB/s wr, 131 op/s
Oct 02 12:24:29 compute-0 ceph-mon[73668]: pgmap v1402: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 928 KiB/s rd, 523 KiB/s wr, 184 op/s
Oct 02 12:24:29 compute-0 ceph-mon[73668]: osdmap e211: 3 total, 3 up, 3 in
Oct 02 12:24:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2032694493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:29 compute-0 nova_compute[256940]: 2025-10-02 12:24:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:30.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 198 KiB/s wr, 79 op/s
Oct 02 12:24:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:24:31.150 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:31 compute-0 ceph-mon[73668]: pgmap v1404: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 324 KiB/s wr, 131 op/s
Oct 02 12:24:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:31 compute-0 nova_compute[256940]: 2025-10-02 12:24:31.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:32.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:32 compute-0 sudo[292874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:32 compute-0 sudo[292874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:32 compute-0 sudo[292874]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:32 compute-0 sudo[292899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:32 compute-0 sudo[292899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:32 compute-0 sudo[292899]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:33 compute-0 ceph-mon[73668]: pgmap v1405: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 198 KiB/s wr, 79 op/s
Oct 02 12:24:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3742383102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 179 KiB/s wr, 68 op/s
Oct 02 12:24:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:34.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:34 compute-0 ceph-mon[73668]: pgmap v1406: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 179 KiB/s wr, 68 op/s
Oct 02 12:24:34 compute-0 nova_compute[256940]: 2025-10-02 12:24:34.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 48 KiB/s wr, 28 op/s
Oct 02 12:24:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:35.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:36.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:36 compute-0 nova_compute[256940]: 2025-10-02 12:24:36.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 10 op/s
Oct 02 12:24:37 compute-0 ceph-mon[73668]: pgmap v1407: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 48 KiB/s wr, 28 op/s
Oct 02 12:24:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:38.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:38 compute-0 nova_compute[256940]: 2025-10-02 12:24:38.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:38 compute-0 ceph-mon[73668]: pgmap v1408: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 10 op/s
Oct 02 12:24:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 9 op/s
Oct 02 12:24:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2860898462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:39 compute-0 nova_compute[256940]: 2025-10-02 12:24:39.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:40.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004343185836590359 of space, bias 1.0, pg target 1.3029557509771077 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:24:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 186 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 10 op/s
Oct 02 12:24:41 compute-0 ceph-mon[73668]: pgmap v1409: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 9 op/s
Oct 02 12:24:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:41.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:41 compute-0 nova_compute[256940]: 2025-10-02 12:24:41.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:42.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:42 compute-0 nova_compute[256940]: 2025-10-02 12:24:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:42 compute-0 ceph-mon[73668]: pgmap v1410: 305 pgs: 305 active+clean; 186 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 10 op/s
Oct 02 12:24:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1773491671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3235092226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 164 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Oct 02 12:24:43 compute-0 nova_compute[256940]: 2025-10-02 12:24:43.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:43 compute-0 nova_compute[256940]: 2025-10-02 12:24:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:43 compute-0 nova_compute[256940]: 2025-10-02 12:24:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:24:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:44.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:44 compute-0 ceph-mon[73668]: pgmap v1411: 305 pgs: 305 active+clean; 164 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Oct 02 12:24:44 compute-0 podman[292931]: 2025-10-02 12:24:44.393051464 +0000 UTC m=+0.062029675 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:24:44 compute-0 podman[292930]: 2025-10-02 12:24:44.394250195 +0000 UTC m=+0.067209029 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:24:44 compute-0 nova_compute[256940]: 2025-10-02 12:24:44.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 183 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.7 MiB/s wr, 55 op/s
Oct 02 12:24:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:46 compute-0 nova_compute[256940]: 2025-10-02 12:24:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:46 compute-0 nova_compute[256940]: 2025-10-02 12:24:46.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:24:46 compute-0 nova_compute[256940]: 2025-10-02 12:24:46.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:24:46 compute-0 nova_compute[256940]: 2025-10-02 12:24:46.276 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:24:46 compute-0 ceph-mon[73668]: pgmap v1412: 305 pgs: 305 active+clean; 183 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.7 MiB/s wr, 55 op/s
Oct 02 12:24:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/318574402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:46 compute-0 nova_compute[256940]: 2025-10-02 12:24:46.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.252 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.252 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/405708132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4126952542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64841178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.732 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.883 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.884 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4663MB free_disk=20.92194366455078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.884 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:47 compute-0 nova_compute[256940]: 2025-10-02 12:24:47.884 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:48.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:48 compute-0 nova_compute[256940]: 2025-10-02 12:24:48.617 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:24:48 compute-0 nova_compute[256940]: 2025-10-02 12:24:48.617 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:24:48 compute-0 nova_compute[256940]: 2025-10-02 12:24:48.636 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:48 compute-0 ceph-mon[73668]: pgmap v1413: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 02 12:24:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/64841178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1523422355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3165033264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:24:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782473490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.163 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.170 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.396 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.436 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.436 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 02 12:24:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/782473490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-0 nova_compute[256940]: 2025-10-02 12:24:49.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 02 12:24:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 02 12:24:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:24:51 compute-0 ceph-mon[73668]: pgmap v1414: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:24:51 compute-0 ceph-mon[73668]: osdmap e212: 3 total, 3 up, 3 in
Oct 02 12:24:51 compute-0 nova_compute[256940]: 2025-10-02 12:24:51.437 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:51 compute-0 nova_compute[256940]: 2025-10-02 12:24:51.438 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:24:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:24:51 compute-0 nova_compute[256940]: 2025-10-02 12:24:51.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:24:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:24:52 compute-0 sudo[293016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:52 compute-0 sudo[293016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:52 compute-0 sudo[293016]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:52 compute-0 sudo[293041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:52 compute-0 sudo[293041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:52 compute-0 sudo[293041]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:52 compute-0 ceph-mon[73668]: pgmap v1416: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:24:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 171 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 868 KiB/s wr, 60 op/s
Oct 02 12:24:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:53.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:54.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:54 compute-0 nova_compute[256940]: 2025-10-02 12:24:54.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:54 compute-0 ceph-mon[73668]: pgmap v1417: 305 pgs: 305 active+clean; 171 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 868 KiB/s wr, 60 op/s
Oct 02 12:24:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 167 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 70 KiB/s wr, 65 op/s
Oct 02 12:24:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:56.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:56 compute-0 ceph-mon[73668]: pgmap v1418: 305 pgs: 305 active+clean; 167 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 70 KiB/s wr, 65 op/s
Oct 02 12:24:57 compute-0 nova_compute[256940]: 2025-10-02 12:24:57.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 128 op/s
Oct 02 12:24:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:57.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 02 12:24:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 02 12:24:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 02 12:24:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:58.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:24:58 compute-0 ceph-mon[73668]: pgmap v1419: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 128 op/s
Oct 02 12:24:58 compute-0 ceph-mon[73668]: osdmap e213: 3 total, 3 up, 3 in
Oct 02 12:24:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 24 KiB/s wr, 135 op/s
Oct 02 12:24:59 compute-0 podman[293070]: 2025-10-02 12:24:59.385093697 +0000 UTC m=+0.056748718 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:24:59 compute-0 podman[293071]: 2025-10-02 12:24:59.482963843 +0000 UTC m=+0.150831485 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:24:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:24:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:24:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:24:59 compute-0 nova_compute[256940]: 2025-10-02 12:24:59.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:00 compute-0 ceph-mon[73668]: pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 24 KiB/s wr, 135 op/s
Oct 02 12:25:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Oct 02 12:25:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:02.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:02 compute-0 nova_compute[256940]: 2025-10-02 12:25:02.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:02 compute-0 ceph-mon[73668]: pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Oct 02 12:25:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 20 KiB/s wr, 121 op/s
Oct 02 12:25:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:25:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:25:04 compute-0 nova_compute[256940]: 2025-10-02 12:25:04.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 KiB/s wr, 79 op/s
Oct 02 12:25:05 compute-0 ceph-mon[73668]: pgmap v1423: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 20 KiB/s wr, 121 op/s
Oct 02 12:25:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:06 compute-0 ceph-mon[73668]: pgmap v1424: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 KiB/s wr, 79 op/s
Oct 02 12:25:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:25:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:25:07 compute-0 nova_compute[256940]: 2025-10-02 12:25:07.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 34 op/s
Oct 02 12:25:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:08 compute-0 ceph-mon[73668]: pgmap v1425: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 34 op/s
Oct 02 12:25:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct 02 12:25:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:09.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:09 compute-0 nova_compute[256940]: 2025-10-02 12:25:09.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:10 compute-0 ceph-mon[73668]: pgmap v1426: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct 02 12:25:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 190 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:25:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:11.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:12 compute-0 nova_compute[256940]: 2025-10-02 12:25:12.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:12 compute-0 sudo[293123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:12 compute-0 sudo[293123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:12 compute-0 ceph-mon[73668]: pgmap v1427: 305 pgs: 305 active+clean; 190 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:25:12 compute-0 sudo[293123]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:12 compute-0 sudo[293148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:12 compute-0 sudo[293148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:12 compute-0 sudo[293148]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 200 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 12:25:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:14.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:14 compute-0 nova_compute[256940]: 2025-10-02 12:25:14.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 201 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 12:25:15 compute-0 ceph-mon[73668]: pgmap v1428: 305 pgs: 305 active+clean; 200 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 12:25:15 compute-0 ovn_controller[148123]: 2025-10-02T12:25:15Z|00171|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 02 12:25:15 compute-0 podman[293174]: 2025-10-02 12:25:15.415136427 +0000 UTC m=+0.077118617 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:25:15 compute-0 podman[293175]: 2025-10-02 12:25:15.447255203 +0000 UTC m=+0.101612925 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:25:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:25:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3819116064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:16 compute-0 ceph-mon[73668]: pgmap v1429: 305 pgs: 305 active+clean; 201 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 12:25:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3819116064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:17 compute-0 nova_compute[256940]: 2025-10-02 12:25:17.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:25:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:18 compute-0 ceph-mon[73668]: pgmap v1430: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:25:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Oct 02 12:25:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:19.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:19 compute-0 nova_compute[256940]: 2025-10-02 12:25:19.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Oct 02 12:25:21 compute-0 ceph-mon[73668]: pgmap v1431: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Oct 02 12:25:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:22.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:22 compute-0 nova_compute[256940]: 2025-10-02 12:25:22.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:22 compute-0 ceph-mon[73668]: pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Oct 02 12:25:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 147 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 12:25:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2486575946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:24.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:24 compute-0 nova_compute[256940]: 2025-10-02 12:25:24.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:25:25 compute-0 ceph-mon[73668]: pgmap v1433: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 147 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 12:25:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:26.461 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:26 compute-0 sudo[293220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:26 compute-0 sudo[293220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-0 sudo[293220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-0 sudo[293245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:26 compute-0 sudo[293245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-0 sudo[293245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-0 sudo[293270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:26 compute-0 sudo[293270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-0 sudo[293270]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-0 ceph-mon[73668]: pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:25:26 compute-0 sudo[293295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:25:26 compute-0 sudo[293295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Oct 02 12:25:27 compute-0 nova_compute[256940]: 2025-10-02 12:25:27.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:27 compute-0 sudo[293295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:27.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3e4361a9-96f0-4f42-83b7-bc09a0f5a3d4 does not exist
Oct 02 12:25:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d7a351a7-9c46-48e4-8597-f2e2dad60d5e does not exist
Oct 02 12:25:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 817b8e1e-dcb1-481c-afb3-24d3341f662f does not exist
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:25:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:27 compute-0 sudo[293351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:27 compute-0 sudo[293351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-0 sudo[293351]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:27 compute-0 sudo[293376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:27 compute-0 sudo[293376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-0 sudo[293376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:27 compute-0 sudo[293401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:27 compute-0 sudo[293401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-0 sudo[293401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:25:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:27 compute-0 sudo[293426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:25:27 compute-0 sudo[293426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.262163904 +0000 UTC m=+0.027602080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.412275069 +0000 UTC m=+0.177713225 container create 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:28 compute-0 systemd[1]: Started libpod-conmon-8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af.scope.
Oct 02 12:25:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.574430879 +0000 UTC m=+0.339869045 container init 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.580859236 +0000 UTC m=+0.346297392 container start 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:25:28 compute-0 laughing_archimedes[293511]: 167 167
Oct 02 12:25:28 compute-0 systemd[1]: libpod-8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af.scope: Deactivated successfully.
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.659569404 +0000 UTC m=+0.425007570 container attach 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:25:28 compute-0 podman[293493]: 2025-10-02 12:25:28.660736755 +0000 UTC m=+0.426174931 container died 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:25:28
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root']
Oct 02 12:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f57cdd1474317a2a09146ab527bce4735ae790d8172286533314ab6ea2823e8-merged.mount: Deactivated successfully.
Oct 02 12:25:28 compute-0 ceph-mon[73668]: pgmap v1435: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Oct 02 12:25:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:29 compute-0 podman[293493]: 2025-10-02 12:25:29.203951409 +0000 UTC m=+0.969389605 container remove 8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:25:29 compute-0 systemd[1]: libpod-conmon-8c69dd7896507330e32c7ed7eb92f941ec2335111c2428437a767e6a080969af.scope: Deactivated successfully.
Oct 02 12:25:29 compute-0 podman[293535]: 2025-10-02 12:25:29.431343816 +0000 UTC m=+0.088413341 container create 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:25:29 compute-0 podman[293535]: 2025-10-02 12:25:29.373321206 +0000 UTC m=+0.030390741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:29 compute-0 systemd[1]: Started libpod-conmon-0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18.scope.
Oct 02 12:25:29 compute-0 podman[293549]: 2025-10-02 12:25:29.595321903 +0000 UTC m=+0.103122014 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:29 compute-0 podman[293535]: 2025-10-02 12:25:29.81507029 +0000 UTC m=+0.472139885 container init 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:25:29 compute-0 podman[293535]: 2025-10-02 12:25:29.824838964 +0000 UTC m=+0.481908449 container start 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:29 compute-0 podman[293535]: 2025-10-02 12:25:29.855491552 +0000 UTC m=+0.512561057 container attach 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:29 compute-0 podman[293570]: 2025-10-02 12:25:29.947160647 +0000 UTC m=+0.341931118 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:25:29 compute-0 nova_compute[256940]: 2025-10-02 12:25:29.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:30.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:30 compute-0 ceph-mon[73668]: pgmap v1436: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:30 compute-0 musing_nash[293569]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:25:30 compute-0 musing_nash[293569]: --> relative data size: 1.0
Oct 02 12:25:30 compute-0 musing_nash[293569]: --> All data devices are unavailable
Oct 02 12:25:30 compute-0 systemd[1]: libpod-0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18.scope: Deactivated successfully.
Oct 02 12:25:30 compute-0 podman[293535]: 2025-10-02 12:25:30.711361561 +0000 UTC m=+1.368431086 container died 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca3b15c55516991fcf45f754153b895521994b723f699e0222910ce33056893-merged.mount: Deactivated successfully.
Oct 02 12:25:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:31 compute-0 podman[293535]: 2025-10-02 12:25:31.436417276 +0000 UTC m=+2.093486761 container remove 0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:25:31 compute-0 systemd[1]: libpod-conmon-0d01d45abed9353459b3fb25a4c4881a123614dc33d8d071665911aaff6c8b18.scope: Deactivated successfully.
Oct 02 12:25:31 compute-0 sudo[293426]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[293624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:31 compute-0 sudo[293624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:31.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:31 compute-0 sudo[293624]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[293649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:31 compute-0 sudo[293649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 sudo[293649]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[293674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:31 compute-0 sudo[293674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:31 compute-0 sudo[293674]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:31 compute-0 sudo[293699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:25:31 compute-0 sudo[293699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:32.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.075813003 +0000 UTC m=+0.021718886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:32 compute-0 nova_compute[256940]: 2025-10-02 12:25:32.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.214121922 +0000 UTC m=+0.160027805 container create 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:25:32 compute-0 systemd[1]: Started libpod-conmon-17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33.scope.
Oct 02 12:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.599418797 +0000 UTC m=+0.545324720 container init 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:25:32 compute-0 ceph-mon[73668]: pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.615637209 +0000 UTC m=+0.561543052 container start 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:25:32 compute-0 naughty_dirac[293782]: 167 167
Oct 02 12:25:32 compute-0 systemd[1]: libpod-17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33.scope: Deactivated successfully.
Oct 02 12:25:32 compute-0 sudo[293789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:32 compute-0 sudo[293789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:32 compute-0 sudo[293789]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:32 compute-0 sudo[293823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:32 compute-0 sudo[293823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:32 compute-0 sudo[293823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.81438164 +0000 UTC m=+0.760287523 container attach 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 12:25:32 compute-0 podman[293765]: 2025-10-02 12:25:32.814920324 +0000 UTC m=+0.760826177 container died 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:25:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 12:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6921b00b7dbe19972452d3561ddfbebd572d99a11143965bcc714a2b57e7e491-merged.mount: Deactivated successfully.
Oct 02 12:25:33 compute-0 podman[293765]: 2025-10-02 12:25:33.48694581 +0000 UTC m=+1.432851693 container remove 17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:25:33 compute-0 systemd[1]: libpod-conmon-17a9138cf5efbeef0a3d9221aa812fc6d5bd9bfd11b1aac41839c1956f770b33.scope: Deactivated successfully.
Oct 02 12:25:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:33 compute-0 podman[293856]: 2025-10-02 12:25:33.715488086 +0000 UTC m=+0.034021406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:33 compute-0 podman[293856]: 2025-10-02 12:25:33.893632272 +0000 UTC m=+0.212165572 container create 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:25:34 compute-0 systemd[1]: Started libpod-conmon-4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e.scope.
Oct 02 12:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d59c07716fa86fd704ccb992ffc65328b2d825a030e2ff32996605907567d53f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d59c07716fa86fd704ccb992ffc65328b2d825a030e2ff32996605907567d53f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d59c07716fa86fd704ccb992ffc65328b2d825a030e2ff32996605907567d53f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d59c07716fa86fd704ccb992ffc65328b2d825a030e2ff32996605907567d53f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:25:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:25:34 compute-0 podman[293856]: 2025-10-02 12:25:34.359527544 +0000 UTC m=+0.678060904 container init 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:25:34 compute-0 podman[293856]: 2025-10-02 12:25:34.373700873 +0000 UTC m=+0.692234183 container start 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:25:34 compute-0 podman[293856]: 2025-10-02 12:25:34.470333747 +0000 UTC m=+0.788867047 container attach 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:25:34 compute-0 ceph-mon[73668]: pgmap v1438: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 12:25:34 compute-0 nova_compute[256940]: 2025-10-02 12:25:34.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Oct 02 12:25:35 compute-0 tender_tu[293872]: {
Oct 02 12:25:35 compute-0 tender_tu[293872]:     "1": [
Oct 02 12:25:35 compute-0 tender_tu[293872]:         {
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "devices": [
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "/dev/loop3"
Oct 02 12:25:35 compute-0 tender_tu[293872]:             ],
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "lv_name": "ceph_lv0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "lv_size": "7511998464",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "name": "ceph_lv0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "tags": {
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.cluster_name": "ceph",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.crush_device_class": "",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.encrypted": "0",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.osd_id": "1",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.type": "block",
Oct 02 12:25:35 compute-0 tender_tu[293872]:                 "ceph.vdo": "0"
Oct 02 12:25:35 compute-0 tender_tu[293872]:             },
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "type": "block",
Oct 02 12:25:35 compute-0 tender_tu[293872]:             "vg_name": "ceph_vg0"
Oct 02 12:25:35 compute-0 tender_tu[293872]:         }
Oct 02 12:25:35 compute-0 tender_tu[293872]:     ]
Oct 02 12:25:35 compute-0 tender_tu[293872]: }
Oct 02 12:25:35 compute-0 systemd[1]: libpod-4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e.scope: Deactivated successfully.
Oct 02 12:25:35 compute-0 podman[293882]: 2025-10-02 12:25:35.206001918 +0000 UTC m=+0.027782024 container died 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:25:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59c07716fa86fd704ccb992ffc65328b2d825a030e2ff32996605907567d53f-merged.mount: Deactivated successfully.
Oct 02 12:25:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:36.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:36 compute-0 podman[293882]: 2025-10-02 12:25:36.220017133 +0000 UTC m=+1.041797259 container remove 4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:25:36 compute-0 systemd[1]: libpod-conmon-4587172f62ddb49118e663cebe1c137b024d043eb8e212832c457c4a5739810e.scope: Deactivated successfully.
Oct 02 12:25:36 compute-0 ceph-mon[73668]: pgmap v1439: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Oct 02 12:25:36 compute-0 sudo[293699]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:36 compute-0 sudo[293898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:36 compute-0 sudo[293898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:36 compute-0 sudo[293898]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:36 compute-0 sudo[293923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:36 compute-0 sudo[293923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:36 compute-0 sudo[293923]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:36 compute-0 sudo[293949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:36 compute-0 sudo[293949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:36 compute-0 sudo[293949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:36 compute-0 sudo[293974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:25:36 compute-0 sudo[293974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:36 compute-0 podman[294039]: 2025-10-02 12:25:36.957765748 +0000 UTC m=+0.084615592 container create 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:25:36 compute-0 podman[294039]: 2025-10-02 12:25:36.897375837 +0000 UTC m=+0.024225701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:37 compute-0 systemd[1]: Started libpod-conmon-43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7.scope.
Oct 02 12:25:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:37 compute-0 podman[294039]: 2025-10-02 12:25:37.090194224 +0000 UTC m=+0.217044118 container init 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:25:37 compute-0 podman[294039]: 2025-10-02 12:25:37.097010501 +0000 UTC m=+0.223860355 container start 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 12:25:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 12:25:37 compute-0 beautiful_yalow[294055]: 167 167
Oct 02 12:25:37 compute-0 systemd[1]: libpod-43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7.scope: Deactivated successfully.
Oct 02 12:25:37 compute-0 podman[294039]: 2025-10-02 12:25:37.124459366 +0000 UTC m=+0.251309230 container attach 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:25:37 compute-0 podman[294039]: 2025-10-02 12:25:37.125149584 +0000 UTC m=+0.251999428 container died 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:25:37 compute-0 nova_compute[256940]: 2025-10-02 12:25:37.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed7a6582f6a46a7f9b731828a1549717b1ce9b1805cd6650246decfa8adc913-merged.mount: Deactivated successfully.
Oct 02 12:25:37 compute-0 podman[294039]: 2025-10-02 12:25:37.37557629 +0000 UTC m=+0.502426134 container remove 43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:25:37 compute-0 systemd[1]: libpod-conmon-43c48059c00656599e3be30a6a2dc32baeb82dee8098f83fc77cd9f68fed86c7.scope: Deactivated successfully.
Oct 02 12:25:37 compute-0 podman[294080]: 2025-10-02 12:25:37.556540098 +0000 UTC m=+0.052894707 container create 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:37.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:37 compute-0 systemd[1]: Started libpod-conmon-4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39.scope.
Oct 02 12:25:37 compute-0 podman[294080]: 2025-10-02 12:25:37.526780074 +0000 UTC m=+0.023134703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:25:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fce04da8c2ad7e0ddfa2f596b7de86bd239e9912e500593fd827d0a3f11a90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fce04da8c2ad7e0ddfa2f596b7de86bd239e9912e500593fd827d0a3f11a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fce04da8c2ad7e0ddfa2f596b7de86bd239e9912e500593fd827d0a3f11a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fce04da8c2ad7e0ddfa2f596b7de86bd239e9912e500593fd827d0a3f11a90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:25:37 compute-0 podman[294080]: 2025-10-02 12:25:37.712015844 +0000 UTC m=+0.208370473 container init 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:25:37 compute-0 podman[294080]: 2025-10-02 12:25:37.720011642 +0000 UTC m=+0.216366251 container start 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:25:37 compute-0 podman[294080]: 2025-10-02 12:25:37.755496515 +0000 UTC m=+0.251851144 container attach 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:25:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:38.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:38 compute-0 ceph-mon[73668]: pgmap v1440: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]: {
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:         "osd_id": 1,
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:         "type": "bluestore"
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]:     }
Oct 02 12:25:38 compute-0 reverent_mclaren[294097]: }
Oct 02 12:25:38 compute-0 systemd[1]: libpod-4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39.scope: Deactivated successfully.
Oct 02 12:25:38 compute-0 podman[294080]: 2025-10-02 12:25:38.65774126 +0000 UTC m=+1.154095869 container died 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fce04da8c2ad7e0ddfa2f596b7de86bd239e9912e500593fd827d0a3f11a90-merged.mount: Deactivated successfully.
Oct 02 12:25:38 compute-0 podman[294080]: 2025-10-02 12:25:38.741277084 +0000 UTC m=+1.237631723 container remove 4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:25:38 compute-0 systemd[1]: libpod-conmon-4003118ad4cb50e3e94e29305a5e5a959e1d3c4265da5e033cafe788b1924c39.scope: Deactivated successfully.
Oct 02 12:25:38 compute-0 sudo[293974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:25:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:25:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 178511bb-a5fd-46b3-82dc-ebab17946574 does not exist
Oct 02 12:25:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b610f90e-71f2-43d1-91f7-1ef3f9b6d591 does not exist
Oct 02 12:25:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1e56dffd-c2af-4624-be0f-23be5fdfa0db does not exist
Oct 02 12:25:38 compute-0 sudo[294133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:38 compute-0 sudo[294133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:38 compute-0 sudo[294133]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:38 compute-0 sudo[294158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:25:38 compute-0 sudo[294158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:38 compute-0 sudo[294158]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:25:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:25:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 81K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 5444 syncs, 3.54 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6192 writes, 28K keys, 6192 commit groups, 1.0 writes per commit group, ingest: 26.97 MB, 0.04 MB/s
                                           Interval WAL: 6192 writes, 2039 syncs, 3.04 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:25:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:39.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:39 compute-0 nova_compute[256940]: 2025-10-02 12:25:39.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004344276358645077 of space, bias 1.0, pg target 1.3032829075935233 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29601366933044854 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:25:40 compute-0 ceph-mon[73668]: pgmap v1441: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:25:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Oct 02 12:25:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:41.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3840614406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:42.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:42 compute-0 nova_compute[256940]: 2025-10-02 12:25:42.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:42 compute-0 nova_compute[256940]: 2025-10-02 12:25:42.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:42.401 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:42.401 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:25:42 compute-0 nova_compute[256940]: 2025-10-02 12:25:42.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:42 compute-0 ceph-mon[73668]: pgmap v1442: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Oct 02 12:25:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1383723782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:43 compute-0 nova_compute[256940]: 2025-10-02 12:25:43.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:43 compute-0 nova_compute[256940]: 2025-10-02 12:25:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:43 compute-0 nova_compute[256940]: 2025-10-02 12:25:43.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:25:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:25:43.405 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:43.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1350195972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1633199061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:44 compute-0 ceph-mon[73668]: pgmap v1443: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:44 compute-0 nova_compute[256940]: 2025-10-02 12:25:44.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:45 compute-0 nova_compute[256940]: 2025-10-02 12:25:45.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-0 nova_compute[256940]: 2025-10-02 12:25:45.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:25:45 compute-0 nova_compute[256940]: 2025-10-02 12:25:45.261 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:25:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:46.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:46 compute-0 podman[294186]: 2025-10-02 12:25:46.397325407 +0000 UTC m=+0.059058178 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:25:46 compute-0 podman[294187]: 2025-10-02 12:25:46.405026317 +0000 UTC m=+0.067120717 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:25:47 compute-0 ceph-mon[73668]: pgmap v1444: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.260 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.261 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.261 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.395 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.396 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.548 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.549 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.549 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.549 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.549 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:25:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509332985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:47 compute-0 nova_compute[256940]: 2025-10-02 12:25:47.992 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:48 compute-0 ceph-mon[73668]: pgmap v1445: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/509332985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:48.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.163 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.164 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4656MB free_disk=20.897098541259766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.165 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.165 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.384 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.385 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.460 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.544 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c4f0a30c-38bf-4720-afc0-3a9bf537b820 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.544 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.545 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.596 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.687 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.728 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.729 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.770 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.815 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:25:48 compute-0 nova_compute[256940]: 2025-10-02 12:25:48.941 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254338683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.433 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.440 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.479 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.481 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.481 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.481 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.490 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.491 2 INFO nova.compute.claims [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:25:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.834 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:49 compute-0 nova_compute[256940]: 2025-10-02 12:25:49.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471354611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.339 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:50 compute-0 ceph-mon[73668]: pgmap v1446: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2837588613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3254338683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.344 2 DEBUG nova.compute.provider_tree [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.387 2 DEBUG nova.scheduler.client.report [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.444 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.446 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.597 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.598 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.643 2 INFO nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:25:50 compute-0 nova_compute[256940]: 2025-10-02 12:25:50.719 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:25:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.229 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.231 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.231 2 INFO nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Creating image(s)
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.269 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.303 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.337 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.343 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.377 2 DEBUG nova.policy [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45e71bb62c1e482bbcf07657b3049198', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2715815cf51b4499b2f5906aa745dedc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.380 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1471354611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.412 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.413 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.413 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.413 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.421 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.422 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.423 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.424 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.455 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:25:51 compute-0 nova_compute[256940]: 2025-10-02 12:25:51.460 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:52.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:52 compute-0 nova_compute[256940]: 2025-10-02 12:25:52.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:52 compute-0 ceph-mon[73668]: pgmap v1447: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 02 12:25:52 compute-0 sudo[294388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:52 compute-0 sudo[294388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:52 compute-0 sudo[294388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:52 compute-0 sudo[294413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:52 compute-0 sudo[294413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:52 compute-0 sudo[294413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 34 op/s
Oct 02 12:25:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:54.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:54 compute-0 ceph-mon[73668]: pgmap v1448: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 34 op/s
Oct 02 12:25:54 compute-0 nova_compute[256940]: 2025-10-02 12:25:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 135 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 811 KiB/s wr, 40 op/s
Oct 02 12:25:55 compute-0 nova_compute[256940]: 2025-10-02 12:25:55.323 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Successfully created port: d472d0c9-f952-487f-8225-97cbf1dea77c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:25:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:55.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:55 compute-0 nova_compute[256940]: 2025-10-02 12:25:55.735 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:55 compute-0 nova_compute[256940]: 2025-10-02 12:25:55.841 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] resizing rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:25:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:56.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:56 compute-0 ceph-mon[73668]: pgmap v1449: 305 pgs: 305 active+clean; 135 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 811 KiB/s wr, 40 op/s
Oct 02 12:25:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.154 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Successfully updated port: d472d0c9-f952-487f-8225-97cbf1dea77c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.184 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.184 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.184 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.301 2 DEBUG nova.compute.manager [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.301 2 DEBUG nova.compute.manager [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing instance network info cache due to event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.302 2 DEBUG oslo_concurrency.lockutils [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:57 compute-0 nova_compute[256940]: 2025-10-02 12:25:57.456 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:25:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:57.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:25:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:58.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:25:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:25:59 compute-0 ceph-mon[73668]: pgmap v1450: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:25:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:25:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:59.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.691 2 DEBUG nova.objects.instance [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lazy-loading 'migration_context' on Instance uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.712 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.713 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Ensure instance console log exists: /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.713 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.714 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.714 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:59 compute-0 nova_compute[256940]: 2025-10-02 12:25:59.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.020 2 DEBUG nova.network.neutron [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.057 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.057 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Instance network_info: |[{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.057 2 DEBUG oslo_concurrency.lockutils [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.058 2 DEBUG nova.network.neutron [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.061 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Start _get_guest_xml network_info=[{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.066 2 WARNING nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.074 2 DEBUG nova.virt.libvirt.host [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.075 2 DEBUG nova.virt.libvirt.host [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.085 2 DEBUG nova.virt.libvirt.host [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.086 2 DEBUG nova.virt.libvirt.host [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.087 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.087 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.087 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.088 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.088 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.088 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.089 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.089 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.089 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.089 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.090 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.090 2 DEBUG nova.virt.hardware [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.093 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:00.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:00 compute-0 podman[294533]: 2025-10-02 12:26:00.413785782 +0000 UTC m=+0.063150354 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:26:00 compute-0 podman[294534]: 2025-10-02 12:26:00.472582152 +0000 UTC m=+0.121892033 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524818878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.523 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.561 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:00.568 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:00 compute-0 ceph-mon[73668]: pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:26:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934729901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.011 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.014 2 DEBUG nova.virt.libvirt.vif [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-869769966',display_name='tempest-AttachInterfacesUnderV243Test-server-869769966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-869769966',id=56,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtkGzugK3IZRVfceP7hFkbcV33sZPF/J/HciuNE7jAvjhstl51pIdkbrlclVJSflcTaJ1WR2y5pr5AxOmX2g+eh6U5h4dR/YdZfsojMIox7PSkNmhz+GOLEKcF/K9D2pQ==',key_name='tempest-keypair-1572188106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2715815cf51b4499b2f5906aa745dedc',ramdisk_id='',reservation_id='r-5id1v3z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1924785210',owner_user_name='tempest-AttachInterfacesUnderV243Test-1924785210-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45e71bb62c1e482bbcf07657b3049198',uuid=c4f0a30c-38bf-4720-afc0-3a9bf537b820,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.014 2 DEBUG nova.network.os_vif_util [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converting VIF {"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.016 2 DEBUG nova.network.os_vif_util [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.017 2 DEBUG nova.objects.instance [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lazy-loading 'pci_devices' on Instance uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.038 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <uuid>c4f0a30c-38bf-4720-afc0-3a9bf537b820</uuid>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <name>instance-00000038</name>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-869769966</nova:name>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:26:00</nova:creationTime>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:user uuid="45e71bb62c1e482bbcf07657b3049198">tempest-AttachInterfacesUnderV243Test-1924785210-project-member</nova:user>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:project uuid="2715815cf51b4499b2f5906aa745dedc">tempest-AttachInterfacesUnderV243Test-1924785210</nova:project>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <nova:port uuid="d472d0c9-f952-487f-8225-97cbf1dea77c">
Oct 02 12:26:01 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <system>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="serial">c4f0a30c-38bf-4720-afc0-3a9bf537b820</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="uuid">c4f0a30c-38bf-4720-afc0-3a9bf537b820</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </system>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <os>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </os>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <features>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </features>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk">
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </source>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config">
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </source>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:26:01 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d5:7e:75"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <target dev="tapd472d0c9-f9"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/console.log" append="off"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <video>
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </video>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:26:01 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:26:01 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:26:01 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:26:01 compute-0 nova_compute[256940]: </domain>
Oct 02 12:26:01 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.040 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Preparing to wait for external event network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.040 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.040 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.041 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.041 2 DEBUG nova.virt.libvirt.vif [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-869769966',display_name='tempest-AttachInterfacesUnderV243Test-server-869769966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-869769966',id=56,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtkGzugK3IZRVfceP7hFkbcV33sZPF/J/HciuNE7jAvjhstl51pIdkbrlclVJSflcTaJ1WR2y5pr5AxOmX2g+eh6U5h4dR/YdZfsojMIox7PSkNmhz+GOLEKcF/K9D2pQ==',key_name='tempest-keypair-1572188106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2715815cf51b4499b2f5906aa745dedc',ramdisk_id='',reservation_id='r-5id1v3z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1924785210',owner_user_name='tempest-AttachInterfacesUnderV243Test-1924785210-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:25:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45e71bb62c1e482bbcf07657b3049198',uuid=c4f0a30c-38bf-4720-afc0-3a9bf537b820,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.042 2 DEBUG nova.network.os_vif_util [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converting VIF {"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.042 2 DEBUG nova.network.os_vif_util [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.042 2 DEBUG os_vif [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd472d0c9-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd472d0c9-f9, col_values=(('external_ids', {'iface-id': 'd472d0c9-f952-487f-8225-97cbf1dea77c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:7e:75', 'vm-uuid': 'c4f0a30c-38bf-4720-afc0-3a9bf537b820'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:26:01 compute-0 NetworkManager[44981]: <info>  [1759407961.0533] manager: (tapd472d0c9-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.060 2 INFO os_vif [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9')
Oct 02 12:26:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.123 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.124 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.124 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] No VIF found with MAC fa:16:3e:d5:7e:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.125 2 INFO nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Using config drive
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.155 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:26:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:01.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.662 2 INFO nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Creating config drive at /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.673 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1p_mbgek execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2524818878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1061664957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1934729901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.813 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1p_mbgek" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.893 2 DEBUG nova.storage.rbd_utils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] rbd image c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:01 compute-0 nova_compute[256940]: 2025-10-02 12:26:01.897 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.274 2 DEBUG nova.network.neutron [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updated VIF entry in instance network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.276 2 DEBUG nova.network.neutron [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.328 2 DEBUG oslo_concurrency.lockutils [req-329befcb-ecfa-466d-8270-1580be867256 req-7cfafe30-598e-4438-a770-f0113b3a081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.385 2 DEBUG oslo_concurrency.processutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config c4f0a30c-38bf-4720-afc0-3a9bf537b820_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.387 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.389 2 INFO nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Deleting local config drive /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820/disk.config because it was imported into RBD.
Oct 02 12:26:02 compute-0 kernel: tapd472d0c9-f9: entered promiscuous mode
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.4600] manager: (tapd472d0c9-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_controller[148123]: 2025-10-02T12:26:02Z|00172|binding|INFO|Claiming lport d472d0c9-f952-487f-8225-97cbf1dea77c for this chassis.
Oct 02 12:26:02 compute-0 ovn_controller[148123]: 2025-10-02T12:26:02Z|00173|binding|INFO|d472d0c9-f952-487f-8225-97cbf1dea77c: Claiming fa:16:3e:d5:7e:75 10.100.0.12
Oct 02 12:26:02 compute-0 systemd-udevd[294694]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.493 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:7e:75 10.100.0.12'], port_security=['fa:16:3e:d5:7e:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c4f0a30c-38bf-4720-afc0-3a9bf537b820', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2715815cf51b4499b2f5906aa745dedc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f4deff7-2af5-4049-ac12-0fdec51f5f03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffc024bf-6341-4587-a9a1-7bb3f2b637b6, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d472d0c9-f952-487f-8225-97cbf1dea77c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.494 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d472d0c9-f952-487f-8225-97cbf1dea77c in datapath 33c7df65-d612-4b8e-b5f7-5bf3c51288de bound to our chassis
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.496 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c7df65-d612-4b8e-b5f7-5bf3c51288de
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.507 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71228c54-5406-4874-ad7e-00805635bc88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.508 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33c7df65-d1 in ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.5108] device (tapd472d0c9-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.5122] device (tapd472d0c9-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:26:02 compute-0 systemd-machined[210927]: New machine qemu-25-instance-00000038.
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.512 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33c7df65-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.512 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6f8326-d753-4483-96d3-caf549002bce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.514 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[420ee8e2-eaf0-4436-94f8-7f306385d46c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.527 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c96a243c-dac3-4cc7-8ba3-efd7a4bfd303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000038.
Oct 02 12:26:02 compute-0 ovn_controller[148123]: 2025-10-02T12:26:02Z|00174|binding|INFO|Setting lport d472d0c9-f952-487f-8225-97cbf1dea77c ovn-installed in OVS
Oct 02 12:26:02 compute-0 ovn_controller[148123]: 2025-10-02T12:26:02Z|00175|binding|INFO|Setting lport d472d0c9-f952-487f-8225-97cbf1dea77c up in Southbound
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.555 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb27e01-c10d-4955-9f95-f359e1c08cb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.586 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[99ecbca4-2e80-4430-b892-83103c27bcc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.591 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46848d86-8451-4899-a6fb-e7c8e5a7632c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.5933] manager: (tap33c7df65-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
Oct 02 12:26:02 compute-0 systemd-udevd[294698]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.604 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.605 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.634 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3d82d2ae-ac4f-40af-976b-9a3f74b64a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.639 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b6d31a-85a6-43ff-a9a8-0ec9be31b75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.6642] device (tap33c7df65-d0): carrier: link connected
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.668 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[20abc9fd-afd0-4182-b3f7-77fad1699387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.689 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fecc1a48-d09d-4433-82fc-5b49b6a1cff0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c7df65-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:f2:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585814, 'reachable_time': 18982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294728, 'error': None, 'target': 'ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.706 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e2776144-9988-4552-a496-e2e3796e1c3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:f2a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585814, 'tstamp': 585814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294729, 'error': None, 'target': 'ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.723 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[875d29a1-2c74-47e9-909a-595ca5ce2ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c7df65-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:f2:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585814, 'reachable_time': 18982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294730, 'error': None, 'target': 'ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.762 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0675e9-3571-45a0-a714-f02c7c2bca0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.822 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[258f8b62-e8a3-493e-96df-2deb03d57b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.824 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c7df65-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.824 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.825 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c7df65-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 kernel: tap33c7df65-d0: entered promiscuous mode
Oct 02 12:26:02 compute-0 NetworkManager[44981]: <info>  [1759407962.8280] manager: (tap33c7df65-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.832 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c7df65-d0, col_values=(('external_ids', {'iface-id': 'e9f52189-485f-492c-b517-742fabc0b040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_controller[148123]: 2025-10-02T12:26:02Z|00176|binding|INFO|Releasing lport e9f52189-485f-492c-b517-742fabc0b040 from this chassis (sb_readonly=0)
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.848 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33c7df65-d612-4b8e-b5f7-5bf3c51288de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33c7df65-d612-4b8e-b5f7-5bf3c51288de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:26:02 compute-0 nova_compute[256940]: 2025-10-02 12:26:02.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.849 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d18fd1c6-e26e-4eb2-a76b-c54d33c1a780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.849 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-33c7df65-d612-4b8e-b5f7-5bf3c51288de
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/33c7df65-d612-4b8e-b5f7-5bf3c51288de.pid.haproxy
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 33c7df65-d612-4b8e-b5f7-5bf3c51288de
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:26:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:02.850 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'env', 'PROCESS_TAG=haproxy-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33c7df65-d612-4b8e-b5f7-5bf3c51288de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:26:02 compute-0 ceph-mon[73668]: pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 12:26:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:03 compute-0 podman[294780]: 2025-10-02 12:26:03.294271911 +0000 UTC m=+0.061705407 container create 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:03 compute-0 systemd[1]: Started libpod-conmon-98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea.scope.
Oct 02 12:26:03 compute-0 podman[294780]: 2025-10-02 12:26:03.256737244 +0000 UTC m=+0.024170780 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:26:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3339d2588eda0363acc33006081fff085354a1e5cccde9db294f4f0098c8b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:03 compute-0 podman[294780]: 2025-10-02 12:26:03.427310131 +0000 UTC m=+0.194743657 container init 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:26:03 compute-0 podman[294780]: 2025-10-02 12:26:03.434298513 +0000 UTC m=+0.201732009 container start 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:26:03 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [NOTICE]   (294824) : New worker (294826) forked
Oct 02 12:26:03 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [NOTICE]   (294824) : Loading success.
Oct 02 12:26:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.774 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407963.7744565, c4f0a30c-38bf-4720-afc0-3a9bf537b820 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.775 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] VM Started (Lifecycle Event)
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.880 2 DEBUG nova.compute.manager [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.881 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.881 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.881 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.881 2 DEBUG nova.compute.manager [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Processing event network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.881 2 DEBUG nova.compute.manager [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.882 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.882 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.882 2 DEBUG oslo_concurrency.lockutils [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.882 2 DEBUG nova.compute.manager [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] No waiting events found dispatching network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.882 2 WARNING nova.compute.manager [req-b14aa28e-79c7-47ec-b9f2-41ab376f279b req-de40cbba-f00e-416b-a176-6f435f73544b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received unexpected event network-vif-plugged-d472d0c9-f952-487f-8225-97cbf1dea77c for instance with vm_state building and task_state spawning.
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.883 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.887 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.891 2 INFO nova.virt.libvirt.driver [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Instance spawned successfully.
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.891 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.958 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.961 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.968 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.969 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.969 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.971 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.971 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.972 2 DEBUG nova.virt.libvirt.driver [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.986 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.986 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407963.7769763, c4f0a30c-38bf-4720-afc0-3a9bf537b820 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:03 compute-0 nova_compute[256940]: 2025-10-02 12:26:03.986 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] VM Paused (Lifecycle Event)
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.025 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.032 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759407963.886275, c4f0a30c-38bf-4720-afc0-3a9bf537b820 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.032 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] VM Resumed (Lifecycle Event)
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.079 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.083 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.107 2 INFO nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Took 12.88 seconds to spawn the instance on the hypervisor.
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.108 2 DEBUG nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.121 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.188 2 INFO nova.compute.manager [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Took 15.55 seconds to build instance.
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.233 2 DEBUG oslo_concurrency.lockutils [None req-8241197f-c045-4d70-aff9-d233cc33b367 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.234 2 INFO nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:04 compute-0 ceph-mon[73668]: pgmap v1453: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:26:04 compute-0 nova_compute[256940]: 2025-10-02 12:26:04.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 02 12:26:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:05.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:06 compute-0 nova_compute[256940]: 2025-10-02 12:26:06.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:06.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:06 compute-0 ceph-mon[73668]: pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 02 12:26:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2398778346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:07 compute-0 NetworkManager[44981]: <info>  [1759407967.0175] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 02 12:26:07 compute-0 nova_compute[256940]: 2025-10-02 12:26:07.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:07 compute-0 NetworkManager[44981]: <info>  [1759407967.0192] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 02 12:26:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1021 KiB/s wr, 99 op/s
Oct 02 12:26:07 compute-0 nova_compute[256940]: 2025-10-02 12:26:07.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:07 compute-0 ovn_controller[148123]: 2025-10-02T12:26:07Z|00177|binding|INFO|Releasing lport e9f52189-485f-492c-b517-742fabc0b040 from this chassis (sb_readonly=0)
Oct 02 12:26:07 compute-0 nova_compute[256940]: 2025-10-02 12:26:07.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:07.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:08 compute-0 nova_compute[256940]: 2025-10-02 12:26:08.670 2 DEBUG nova.compute.manager [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:08 compute-0 nova_compute[256940]: 2025-10-02 12:26:08.671 2 DEBUG nova.compute.manager [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing instance network info cache due to event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:26:08 compute-0 nova_compute[256940]: 2025-10-02 12:26:08.671 2 DEBUG oslo_concurrency.lockutils [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:08 compute-0 nova_compute[256940]: 2025-10-02 12:26:08.671 2 DEBUG oslo_concurrency.lockutils [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:08 compute-0 nova_compute[256940]: 2025-10-02 12:26:08.671 2 DEBUG nova.network.neutron [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:26:08 compute-0 ceph-mon[73668]: pgmap v1455: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1021 KiB/s wr, 99 op/s
Oct 02 12:26:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 63 op/s
Oct 02 12:26:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:09 compute-0 nova_compute[256940]: 2025-10-02 12:26:09.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1364927460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:11 compute-0 nova_compute[256940]: 2025-10-02 12:26:11.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct 02 12:26:11 compute-0 ceph-mon[73668]: pgmap v1456: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 63 op/s
Oct 02 12:26:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:11 compute-0 nova_compute[256940]: 2025-10-02 12:26:11.167 2 DEBUG nova.network.neutron [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updated VIF entry in instance network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:26:11 compute-0 nova_compute[256940]: 2025-10-02 12:26:11.168 2 DEBUG nova.network.neutron [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:11 compute-0 nova_compute[256940]: 2025-10-02 12:26:11.214 2 DEBUG oslo_concurrency.lockutils [req-c00345e2-6d6e-4355-b8b0-5a98b5698621 req-0ed182ba-14db-4e9d-9d3f-978f45fbfc38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061387115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:12 compute-0 ceph-mon[73668]: pgmap v1457: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct 02 12:26:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1061387115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:13 compute-0 sudo[294842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:13 compute-0 sudo[294842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:13 compute-0 sudo[294842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:13 compute-0 sudo[294867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:13 compute-0 sudo[294867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:13 compute-0 sudo[294867]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Oct 02 12:26:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/695025001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:14 compute-0 ceph-mon[73668]: pgmap v1458: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Oct 02 12:26:14 compute-0 nova_compute[256940]: 2025-10-02 12:26:14.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Oct 02 12:26:15 compute-0 ovn_controller[148123]: 2025-10-02T12:26:15Z|00178|binding|INFO|Releasing lport e9f52189-485f-492c-b517-742fabc0b040 from this chassis (sb_readonly=0)
Oct 02 12:26:15 compute-0 nova_compute[256940]: 2025-10-02 12:26:15.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:16 compute-0 nova_compute[256940]: 2025-10-02 12:26:16.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:16.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:16 compute-0 ovn_controller[148123]: 2025-10-02T12:26:16Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:7e:75 10.100.0.12
Oct 02 12:26:16 compute-0 ovn_controller[148123]: 2025-10-02T12:26:16Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:7e:75 10.100.0.12
Oct 02 12:26:16 compute-0 ceph-mon[73668]: pgmap v1459: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Oct 02 12:26:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:26:17 compute-0 podman[294895]: 2025-10-02 12:26:17.412160349 +0000 UTC m=+0.068015911 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:17 compute-0 podman[294894]: 2025-10-02 12:26:17.412821006 +0000 UTC m=+0.074287144 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:26:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:18.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:18 compute-0 ceph-mon[73668]: pgmap v1460: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:26:18 compute-0 ovn_controller[148123]: 2025-10-02T12:26:18Z|00179|binding|INFO|Releasing lport e9f52189-485f-492c-b517-742fabc0b040 from this chassis (sb_readonly=0)
Oct 02 12:26:18 compute-0 nova_compute[256940]: 2025-10-02 12:26:18.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 963 KiB/s rd, 1.4 MiB/s wr, 75 op/s
Oct 02 12:26:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:19 compute-0 nova_compute[256940]: 2025-10-02 12:26:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:20.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:20 compute-0 ceph-mon[73668]: pgmap v1461: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 963 KiB/s rd, 1.4 MiB/s wr, 75 op/s
Oct 02 12:26:21 compute-0 nova_compute[256940]: 2025-10-02 12:26:21.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 162 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:26:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:22 compute-0 ceph-mon[73668]: pgmap v1462: 305 pgs: 305 active+clean; 162 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:26:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:26:23 compute-0 ovn_controller[148123]: 2025-10-02T12:26:23Z|00180|binding|INFO|Releasing lport e9f52189-485f-492c-b517-742fabc0b040 from this chassis (sb_readonly=0)
Oct 02 12:26:23 compute-0 nova_compute[256940]: 2025-10-02 12:26:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:23.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:24 compute-0 ceph-mon[73668]: pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:26:24 compute-0 nova_compute[256940]: 2025-10-02 12:26:24.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 12:26:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:25.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:26 compute-0 nova_compute[256940]: 2025-10-02 12:26:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:26.462 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:26.462 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:26.463 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:26 compute-0 ceph-mon[73668]: pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 12:26:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 12:26:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:27.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:28 compute-0 ceph-mon[73668]: pgmap v1465: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:26:28
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups']
Oct 02 12:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:26:28 compute-0 nova_compute[256940]: 2025-10-02 12:26:28.840 2 DEBUG nova.objects.instance [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lazy-loading 'flavor' on Instance uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:28 compute-0 nova_compute[256940]: 2025-10-02 12:26:28.939 2 DEBUG oslo_concurrency.lockutils [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:28 compute-0 nova_compute[256940]: 2025-10-02 12:26:28.940 2 DEBUG oslo_concurrency.lockutils [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:29.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:30 compute-0 nova_compute[256940]: 2025-10-02 12:26:30.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:30.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:30 compute-0 ceph-mon[73668]: pgmap v1466: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:31 compute-0 nova_compute[256940]: 2025-10-02 12:26:31.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:31 compute-0 podman[294939]: 2025-10-02 12:26:31.395191437 +0000 UTC m=+0.069929101 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:26:31 compute-0 podman[294940]: 2025-10-02 12:26:31.427319413 +0000 UTC m=+0.097044896 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:31.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:32 compute-0 ceph-mon[73668]: pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 88 KiB/s wr, 11 op/s
Oct 02 12:26:33 compute-0 sudo[294988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:33 compute-0 sudo[294988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:33 compute-0 sudo[294988]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:33 compute-0 sudo[295013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:33 compute-0 sudo[295013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:33 compute-0 sudo[295013]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:33.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:34.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:34 compute-0 ceph-mon[73668]: pgmap v1468: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 88 KiB/s wr, 11 op/s
Oct 02 12:26:35 compute-0 nova_compute[256940]: 2025-10-02 12:26:35.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Oct 02 12:26:35 compute-0 nova_compute[256940]: 2025-10-02 12:26:35.436 2 DEBUG nova.network.neutron [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:26:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:35.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:35 compute-0 nova_compute[256940]: 2025-10-02 12:26:35.773 2 DEBUG nova.compute.manager [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:35 compute-0 nova_compute[256940]: 2025-10-02 12:26:35.773 2 DEBUG nova.compute.manager [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing instance network info cache due to event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:26:35 compute-0 nova_compute[256940]: 2025-10-02 12:26:35.773 2 DEBUG oslo_concurrency.lockutils [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:36 compute-0 nova_compute[256940]: 2025-10-02 12:26:36.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:26:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:26:36 compute-0 ceph-mon[73668]: pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Oct 02 12:26:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Oct 02 12:26:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:37.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:38 compute-0 ceph-mon[73668]: pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Oct 02 12:26:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:39 compute-0 sudo[295041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-0 sudo[295041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[295041]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[295066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:39 compute-0 sudo[295066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[295066]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[295091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-0 sudo[295091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 sudo[295091]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 sudo[295116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:26:39 compute-0 sudo[295116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:39.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:39 compute-0 sudo[295116]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.824 2 DEBUG nova.network.neutron [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:26:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.952 2 DEBUG oslo_concurrency.lockutils [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.953 2 DEBUG nova.compute.manager [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.953 2 DEBUG nova.compute.manager [None req-ef4d5308-eb5b-49ad-ab20-0832bb6b0e36 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] network_info to inject: |[{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.957 2 DEBUG oslo_concurrency.lockutils [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:39 compute-0 nova_compute[256940]: 2025-10-02 12:26:39.958 2 DEBUG nova.network.neutron [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:26:40 compute-0 nova_compute[256940]: 2025-10-02 12:26:40.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:26:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002171774671970966 of space, bias 1.0, pg target 0.6515324015912898 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:26:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:26:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:26:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:26:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:40 compute-0 nova_compute[256940]: 2025-10-02 12:26:40.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:40 compute-0 sudo[295163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:40 compute-0 sudo[295163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-0 sudo[295163]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-0 sudo[295188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:40 compute-0 sudo[295188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-0 sudo[295188]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-0 sudo[295213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:40 compute-0 sudo[295213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-0 sudo[295213]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-0 sudo[295238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:26:40 compute-0 sudo[295238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-0 sudo[295238]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: pgmap v1471: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-0 nova_compute[256940]: 2025-10-02 12:26:41.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b3df3134-0384-4e64-80eb-34861aab186f does not exist
Oct 02 12:26:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7c74881d-db0d-448c-a00f-aad47778c37c does not exist
Oct 02 12:26:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cd207d44-6c0e-4734-96f1-2d42eee01efd does not exist
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:41 compute-0 sudo[295296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:41 compute-0 sudo[295296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[295296]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[295321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:41 compute-0 sudo[295321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[295321]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[295346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:41 compute-0 sudo[295346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 sudo[295346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-0 sudo[295371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:26:41 compute-0 sudo[295371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:41.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:41 compute-0 podman[295435]: 2025-10-02 12:26:41.957973841 +0000 UTC m=+0.071461440 container create 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:41.915769393 +0000 UTC m=+0.029257012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:42 compute-0 systemd[1]: Started libpod-conmon-8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0.scope.
Oct 02 12:26:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:26:42 compute-0 ceph-mon[73668]: pgmap v1472: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4063761561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:42.07323718 +0000 UTC m=+0.186724799 container init 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:42.08399233 +0000 UTC m=+0.197479929 container start 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:26:42 compute-0 strange_bassi[295452]: 167 167
Oct 02 12:26:42 compute-0 systemd[1]: libpod-8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0.scope: Deactivated successfully.
Oct 02 12:26:42 compute-0 conmon[295452]: conmon 8e3e4163741c6884cd6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0.scope/container/memory.events
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:42.098182849 +0000 UTC m=+0.211670448 container attach 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:42.100944551 +0000 UTC m=+0.214432150 container died 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7639b6a575cb6e60b14f069f6f0c1f022b7896509f72d8664b21b2b7676b3193-merged.mount: Deactivated successfully.
Oct 02 12:26:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:42 compute-0 podman[295435]: 2025-10-02 12:26:42.179841044 +0000 UTC m=+0.293328643 container remove 8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bassi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:26:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:42 compute-0 systemd[1]: libpod-conmon-8e3e4163741c6884cd6f65a418ba20a7febd6a5b481e282b3f5ab32c22ff0af0.scope: Deactivated successfully.
Oct 02 12:26:42 compute-0 podman[295475]: 2025-10-02 12:26:42.363517963 +0000 UTC m=+0.045926856 container create 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:26:42 compute-0 systemd[1]: Started libpod-conmon-3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04.scope.
Oct 02 12:26:42 compute-0 nova_compute[256940]: 2025-10-02 12:26:42.413 2 DEBUG nova.objects.instance [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lazy-loading 'flavor' on Instance uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:42 compute-0 podman[295475]: 2025-10-02 12:26:42.343850712 +0000 UTC m=+0.026259615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:42 compute-0 nova_compute[256940]: 2025-10-02 12:26:42.442 2 DEBUG oslo_concurrency.lockutils [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:42 compute-0 podman[295475]: 2025-10-02 12:26:42.457027996 +0000 UTC m=+0.139436909 container init 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:26:42 compute-0 podman[295475]: 2025-10-02 12:26:42.464921222 +0000 UTC m=+0.147330115 container start 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:26:42 compute-0 podman[295475]: 2025-10-02 12:26:42.516203546 +0000 UTC m=+0.198612449 container attach 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:26:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1937974989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2116819383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1347811142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-0 sleepy_nightingale[295492]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:26:43 compute-0 sleepy_nightingale[295492]: --> relative data size: 1.0
Oct 02 12:26:43 compute-0 sleepy_nightingale[295492]: --> All data devices are unavailable
Oct 02 12:26:43 compute-0 systemd[1]: libpod-3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04.scope: Deactivated successfully.
Oct 02 12:26:43 compute-0 podman[295475]: 2025-10-02 12:26:43.314310151 +0000 UTC m=+0.996719044 container died 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d70788ea9b81b5db07b0837573f6287c2c25e9fbfb69edc66222b2db095be663-merged.mount: Deactivated successfully.
Oct 02 12:26:43 compute-0 podman[295475]: 2025-10-02 12:26:43.398709447 +0000 UTC m=+1.081118340 container remove 3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:26:43 compute-0 systemd[1]: libpod-conmon-3a16e3c781d0ed1bc3de730b947b99405de2b1bbe77a5b0de054fabd89970f04.scope: Deactivated successfully.
Oct 02 12:26:43 compute-0 sudo[295371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 sudo[295520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:43 compute-0 sudo[295520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[295520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 sudo[295545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:43 compute-0 sudo[295545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[295545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 nova_compute[256940]: 2025-10-02 12:26:43.576 2 DEBUG nova.network.neutron [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updated VIF entry in instance network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:26:43 compute-0 nova_compute[256940]: 2025-10-02 12:26:43.577 2 DEBUG nova.network.neutron [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:43 compute-0 nova_compute[256940]: 2025-10-02 12:26:43.601 2 DEBUG oslo_concurrency.lockutils [req-a897b8f7-8f3b-4ad6-9273-27021036a91a req-be429265-f66f-4a04-ad3b-166c3f0ef126 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:43 compute-0 nova_compute[256940]: 2025-10-02 12:26:43.601 2 DEBUG oslo_concurrency.lockutils [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:43 compute-0 sudo[295570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:43 compute-0 sudo[295570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:43 compute-0 sudo[295570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:43.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:43 compute-0 sudo[295595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:26:43 compute-0 sudo[295595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.006527212 +0000 UTC m=+0.019319863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.136393502 +0000 UTC m=+0.149186153 container create e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:26:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:44.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:44 compute-0 systemd[1]: Started libpod-conmon-e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95.scope.
Oct 02 12:26:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:44 compute-0 nova_compute[256940]: 2025-10-02 12:26:44.242 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:44 compute-0 ceph-mon[73668]: pgmap v1473: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.332044952 +0000 UTC m=+0.344837583 container init e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.341755835 +0000 UTC m=+0.354548466 container start e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:44 compute-0 zealous_shaw[295675]: 167 167
Oct 02 12:26:44 compute-0 systemd[1]: libpod-e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95.scope: Deactivated successfully.
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.35348963 +0000 UTC m=+0.366282291 container attach e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.353985243 +0000 UTC m=+0.366777884 container died e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-be99d8125ea71ddede964e579cecf37259640d2ec9860613e274926725b1d888-merged.mount: Deactivated successfully.
Oct 02 12:26:44 compute-0 podman[295659]: 2025-10-02 12:26:44.728829726 +0000 UTC m=+0.741622357 container remove e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:26:44 compute-0 systemd[1]: libpod-conmon-e9c42ee38226c6cf26d4eb07fef9c53e7ac32b60140f68e766acdf0198c53b95.scope: Deactivated successfully.
Oct 02 12:26:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:44.903 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:44 compute-0 nova_compute[256940]: 2025-10-02 12:26:44.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:44.905 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:26:44 compute-0 podman[295703]: 2025-10-02 12:26:44.88771164 +0000 UTC m=+0.023029570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:45 compute-0 podman[295703]: 2025-10-02 12:26:45.056996905 +0000 UTC m=+0.192314805 container create 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:26:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:26:45 compute-0 systemd[1]: Started libpod-conmon-1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1.scope.
Oct 02 12:26:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713b8afc988277562d36298321af5ad03fb88af17f428b6851b4f570d410e60d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713b8afc988277562d36298321af5ad03fb88af17f428b6851b4f570d410e60d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713b8afc988277562d36298321af5ad03fb88af17f428b6851b4f570d410e60d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713b8afc988277562d36298321af5ad03fb88af17f428b6851b4f570d410e60d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:45 compute-0 podman[295703]: 2025-10-02 12:26:45.486053579 +0000 UTC m=+0.621371509 container init 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:26:45 compute-0 podman[295703]: 2025-10-02 12:26:45.495610377 +0000 UTC m=+0.630928277 container start 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:45 compute-0 podman[295703]: 2025-10-02 12:26:45.614969653 +0000 UTC m=+0.750287553 container attach 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:26:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:45.907 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:45 compute-0 nova_compute[256940]: 2025-10-02 12:26:45.991 2 DEBUG nova.network.neutron [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:26:46 compute-0 nova_compute[256940]: 2025-10-02 12:26:46.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:26:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:26:46 compute-0 nova_compute[256940]: 2025-10-02 12:26:46.194 2 DEBUG nova.compute.manager [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:46 compute-0 nova_compute[256940]: 2025-10-02 12:26:46.194 2 DEBUG nova.compute.manager [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing instance network info cache due to event network-changed-d472d0c9-f952-487f-8225-97cbf1dea77c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:26:46 compute-0 nova_compute[256940]: 2025-10-02 12:26:46.195 2 DEBUG oslo_concurrency.lockutils [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]: {
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:     "1": [
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:         {
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "devices": [
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "/dev/loop3"
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             ],
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "lv_name": "ceph_lv0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "lv_size": "7511998464",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "name": "ceph_lv0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "tags": {
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.cluster_name": "ceph",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.crush_device_class": "",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.encrypted": "0",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.osd_id": "1",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.type": "block",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:                 "ceph.vdo": "0"
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             },
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "type": "block",
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:             "vg_name": "ceph_vg0"
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:         }
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]:     ]
Oct 02 12:26:46 compute-0 interesting_lehmann[295719]: }
Oct 02 12:26:46 compute-0 systemd[1]: libpod-1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1.scope: Deactivated successfully.
Oct 02 12:26:46 compute-0 conmon[295719]: conmon 1ba3fffa94e2cd7a0e2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1.scope/container/memory.events
Oct 02 12:26:46 compute-0 podman[295703]: 2025-10-02 12:26:46.389568226 +0000 UTC m=+1.524886126 container died 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:46 compute-0 ceph-mon[73668]: pgmap v1474: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-713b8afc988277562d36298321af5ad03fb88af17f428b6851b4f570d410e60d-merged.mount: Deactivated successfully.
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.273 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.273 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.274 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.274 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.275 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:47 compute-0 podman[295703]: 2025-10-02 12:26:47.608478432 +0000 UTC m=+2.743796332 container remove 1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:26:47 compute-0 sudo[295595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:47.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:47 compute-0 systemd[1]: libpod-conmon-1ba3fffa94e2cd7a0e2c36d584b3d5b156281dacd75e95cb56637ab34d88aad1.scope: Deactivated successfully.
Oct 02 12:26:47 compute-0 sudo[295761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:47 compute-0 sudo[295761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:47 compute-0 sudo[295761]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:47 compute-0 podman[295769]: 2025-10-02 12:26:47.772529371 +0000 UTC m=+0.070147396 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:26:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653337162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:47 compute-0 podman[295781]: 2025-10-02 12:26:47.779547154 +0000 UTC m=+0.076706537 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:26:47 compute-0 sudo[295814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:47 compute-0 sudo[295814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.797 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:47 compute-0 sudo[295814]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:47 compute-0 sudo[295851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:47 compute-0 sudo[295851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:47 compute-0 sudo[295851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.898 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:26:47 compute-0 nova_compute[256940]: 2025-10-02 12:26:47.899 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:26:47 compute-0 sudo[295877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:26:47 compute-0 sudo[295877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3653337162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.097 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.098 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4442MB free_disk=20.942699432373047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.099 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.099 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:48.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.252 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c4f0a30c-38bf-4720-afc0-3a9bf537b820 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.253 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.253 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.299504203 +0000 UTC m=+0.050229128 container create 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:26:48 compute-0 systemd[1]: Started libpod-conmon-111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8.scope.
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.270838437 +0000 UTC m=+0.021563382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.429 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.440690267 +0000 UTC m=+0.191415202 container init 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.452954506 +0000 UTC m=+0.203679431 container start 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:26:48 compute-0 cool_bell[295959]: 167 167
Oct 02 12:26:48 compute-0 systemd[1]: libpod-111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8.scope: Deactivated successfully.
Oct 02 12:26:48 compute-0 conmon[295959]: conmon 111f96931e53583b7fe7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8.scope/container/memory.events
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.464005964 +0000 UTC m=+0.214730919 container attach 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.465717748 +0000 UTC m=+0.216442673 container died 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8d8b872c0e47109286a082463a9e2f6f7b309fa14818ebde3f50140f6e2a15-merged.mount: Deactivated successfully.
Oct 02 12:26:48 compute-0 podman[295942]: 2025-10-02 12:26:48.564080818 +0000 UTC m=+0.314805763 container remove 111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:26:48 compute-0 systemd[1]: libpod-conmon-111f96931e53583b7fe751e5973c96069a251d5ce5a8e18028bf8985350b00b8.scope: Deactivated successfully.
Oct 02 12:26:48 compute-0 podman[296006]: 2025-10-02 12:26:48.787291966 +0000 UTC m=+0.059344945 container create 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:26:48 compute-0 systemd[1]: Started libpod-conmon-7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602.scope.
Oct 02 12:26:48 compute-0 podman[296006]: 2025-10-02 12:26:48.757596933 +0000 UTC m=+0.029649932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:26:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3462639232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f1ffae1eaa6b7042b93de897eb50f278116d6563c23307135fd843b04010be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f1ffae1eaa6b7042b93de897eb50f278116d6563c23307135fd843b04010be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f1ffae1eaa6b7042b93de897eb50f278116d6563c23307135fd843b04010be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f1ffae1eaa6b7042b93de897eb50f278116d6563c23307135fd843b04010be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.891 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:48 compute-0 podman[296006]: 2025-10-02 12:26:48.897974006 +0000 UTC m=+0.170026985 container init 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:26:48 compute-0 nova_compute[256940]: 2025-10-02 12:26:48.900 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:48 compute-0 podman[296006]: 2025-10-02 12:26:48.908381737 +0000 UTC m=+0.180434696 container start 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:26:48 compute-0 podman[296006]: 2025-10-02 12:26:48.925264926 +0000 UTC m=+0.197317885 container attach 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.001 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.070 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.072 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:49 compute-0 ceph-mon[73668]: pgmap v1475: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2080025941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3462639232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.500 2 DEBUG nova.network.neutron [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.524 2 DEBUG oslo_concurrency.lockutils [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.524 2 DEBUG nova.compute.manager [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.525 2 DEBUG nova.compute.manager [None req-419c71df-d05f-489f-8e73-7dee64322063 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] network_info to inject: |[{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.527 2 DEBUG oslo_concurrency.lockutils [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:26:49 compute-0 nova_compute[256940]: 2025-10-02 12:26:49.528 2 DEBUG nova.network.neutron [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Refreshing network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:26:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:49.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:49 compute-0 hardcore_moore[296023]: {
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:         "osd_id": 1,
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:         "type": "bluestore"
Oct 02 12:26:49 compute-0 hardcore_moore[296023]:     }
Oct 02 12:26:49 compute-0 hardcore_moore[296023]: }
Oct 02 12:26:49 compute-0 systemd[1]: libpod-7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602.scope: Deactivated successfully.
Oct 02 12:26:49 compute-0 podman[296006]: 2025-10-02 12:26:49.814561886 +0000 UTC m=+1.086614845 container died 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-96f1ffae1eaa6b7042b93de897eb50f278116d6563c23307135fd843b04010be-merged.mount: Deactivated successfully.
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 podman[296006]: 2025-10-02 12:26:50.048982725 +0000 UTC m=+1.321035684 container remove 7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:26:50 compute-0 systemd[1]: libpod-conmon-7ee302e51c9ec3bebec458d91ac874abe692b7d2df4e900924fb7f5ba9df6602.scope: Deactivated successfully.
Oct 02 12:26:50 compute-0 sudo[295877]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:26:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:26:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.221 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.223 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.223 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.223 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.223 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.224 2 INFO nova.compute.manager [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Terminating instance
Oct 02 12:26:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.225 2 DEBUG nova.compute.manager [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:26:50 compute-0 kernel: tapd472d0c9-f9 (unregistering): left promiscuous mode
Oct 02 12:26:50 compute-0 NetworkManager[44981]: <info>  [1759408010.2909] device (tapd472d0c9-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 ovn_controller[148123]: 2025-10-02T12:26:50Z|00181|binding|INFO|Releasing lport d472d0c9-f952-487f-8225-97cbf1dea77c from this chassis (sb_readonly=0)
Oct 02 12:26:50 compute-0 ovn_controller[148123]: 2025-10-02T12:26:50Z|00182|binding|INFO|Setting lport d472d0c9-f952-487f-8225-97cbf1dea77c down in Southbound
Oct 02 12:26:50 compute-0 ovn_controller[148123]: 2025-10-02T12:26:50Z|00183|binding|INFO|Removing iface tapd472d0c9-f9 ovn-installed in OVS
Oct 02 12:26:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:50.310 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:7e:75 10.100.0.12'], port_security=['fa:16:3e:d5:7e:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c4f0a30c-38bf-4720-afc0-3a9bf537b820', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2715815cf51b4499b2f5906aa745dedc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4f4deff7-2af5-4049-ac12-0fdec51f5f03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffc024bf-6341-4587-a9a1-7bb3f2b637b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d472d0c9-f952-487f-8225-97cbf1dea77c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:50.311 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d472d0c9-f952-487f-8225-97cbf1dea77c in datapath 33c7df65-d612-4b8e-b5f7-5bf3c51288de unbound from our chassis
Oct 02 12:26:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:50.313 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33c7df65-d612-4b8e-b5f7-5bf3c51288de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:26:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:50.314 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c576bfd6-bb10-4163-bbbb-62da47efa263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:50.314 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de namespace which is not needed anymore
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:26:50 compute-0 ceph-mon[73668]: pgmap v1476: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:50 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 02 12:26:50 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000038.scope: Consumed 14.755s CPU time.
Oct 02 12:26:50 compute-0 systemd-machined[210927]: Machine qemu-25-instance-00000038 terminated.
Oct 02 12:26:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 462b35c2-3ac9-42d3-b30b-4e4a2fca6718 does not exist
Oct 02 12:26:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 794f7bfd-fa36-4ae5-8b1e-6af0e6e13752 does not exist
Oct 02 12:26:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5ac3211c-22d3-4af6-9c36-1c21ae0f00fd does not exist
Oct 02 12:26:50 compute-0 sudo[296081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.465 2 INFO nova.virt.libvirt.driver [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Instance destroyed successfully.
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.466 2 DEBUG nova.objects.instance [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lazy-loading 'resources' on Instance uuid c4f0a30c-38bf-4720-afc0-3a9bf537b820 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:26:50 compute-0 sudo[296081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:50 compute-0 sudo[296081]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.489 2 DEBUG nova.virt.libvirt.vif [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-869769966',display_name='tempest-AttachInterfacesUnderV243Test-server-869769966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-869769966',id=56,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtkGzugK3IZRVfceP7hFkbcV33sZPF/J/HciuNE7jAvjhstl51pIdkbrlclVJSflcTaJ1WR2y5pr5AxOmX2g+eh6U5h4dR/YdZfsojMIox7PSkNmhz+GOLEKcF/K9D2pQ==',key_name='tempest-keypair-1572188106',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2715815cf51b4499b2f5906aa745dedc',ramdisk_id='',reservation_id='r-5id1v3z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1924785210',owner_user_name='tempest-AttachInterfacesUnderV243Test-1924785210-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45e71bb62c1e482bbcf07657b3049198',uuid=c4f0a30c-38bf-4720-afc0-3a9bf537b820,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.490 2 DEBUG nova.network.os_vif_util [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converting VIF {"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.491 2 DEBUG nova.network.os_vif_util [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.492 2 DEBUG os_vif [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd472d0c9-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [NOTICE]   (294824) : haproxy version is 2.8.14-c23fe91
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [NOTICE]   (294824) : path to executable is /usr/sbin/haproxy
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [WARNING]  (294824) : Exiting Master process...
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [WARNING]  (294824) : Exiting Master process...
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [ALERT]    (294824) : Current worker (294826) exited with code 143 (Terminated)
Oct 02 12:26:50 compute-0 neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de[294819]: [WARNING]  (294824) : All workers exited. Exiting... (0)
Oct 02 12:26:50 compute-0 systemd[1]: libpod-98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea.scope: Deactivated successfully.
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:26:50 compute-0 conmon[294819]: conmon 98fd480ec1f579ee8ce1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea.scope/container/memory.events
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-0 nova_compute[256940]: 2025-10-02 12:26:50.504 2 INFO os_vif [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:7e:75,bridge_name='br-int',has_traffic_filtering=True,id=d472d0c9-f952-487f-8225-97cbf1dea77c,network=Network(33c7df65-d612-4b8e-b5f7-5bf3c51288de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd472d0c9-f9')
Oct 02 12:26:50 compute-0 podman[296087]: 2025-10-02 12:26:50.508730178 +0000 UTC m=+0.084126610 container died 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:26:50 compute-0 sudo[296130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:26:50 compute-0 sudo[296130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:50 compute-0 sudo[296130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea-userdata-shm.mount: Deactivated successfully.
Oct 02 12:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f3339d2588eda0363acc33006081fff085354a1e5cccde9db294f4f0098c8b3-merged.mount: Deactivated successfully.
Oct 02 12:26:50 compute-0 podman[296087]: 2025-10-02 12:26:50.739176684 +0000 UTC m=+0.314573136 container cleanup 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:26:50 compute-0 systemd[1]: libpod-conmon-98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea.scope: Deactivated successfully.
Oct 02 12:26:51 compute-0 podman[296192]: 2025-10-02 12:26:51.054128719 +0000 UTC m=+0.287066631 container remove 98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.064 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6f3550-592e-4b3b-9a60-be01ced80a08]: (4, ('Thu Oct  2 12:26:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de (98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea)\n98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea\nThu Oct  2 12:26:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de (98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea)\n98fd480ec1f579ee8ce1d26f788ae3307636c398bbdd74e24bc1c05ea94377ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.066 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[36797c76-7024-4413-9194-80d43d8c0e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.068 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c7df65-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.074 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.074 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.075 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.100 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.101 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.101 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.102 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:51 compute-0 kernel: tap33c7df65-d0: left promiscuous mode
Oct 02 12:26:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:26:51 compute-0 nova_compute[256940]: 2025-10-02 12:26:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.160 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5de8da7b-2599-40ea-bf8b-5bc500e0097b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.195 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f01b10-1dcb-4df1-96a8-909062d4d2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.197 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2861246-40fc-41de-99e7-629e2d2bd55c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.218 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7276f3ca-9646-4ec7-9a87-117842e65e3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585806, 'reachable_time': 22126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296208, 'error': None, 'target': 'ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d33c7df65\x2dd612\x2d4b8e\x2db5f7\x2d5bf3c51288de.mount: Deactivated successfully.
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.224 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33c7df65-d612-4b8e-b5f7-5bf3c51288de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:26:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:26:51.225 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7d1277-2634-4f8f-9de1-6f719110f4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:26:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:51.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.207 2 INFO nova.virt.libvirt.driver [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Deleting instance files /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820_del
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.208 2 INFO nova.virt.libvirt.driver [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Deletion of /var/lib/nova/instances/c4f0a30c-38bf-4720-afc0-3a9bf537b820_del complete
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.277 2 INFO nova.compute.manager [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Took 2.05 seconds to destroy the instance on the hypervisor.
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.278 2 DEBUG oslo.service.loopingcall [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.278 2 DEBUG nova.compute.manager [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:26:52 compute-0 nova_compute[256940]: 2025-10-02 12:26:52.278 2 DEBUG nova.network.neutron [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:26:52 compute-0 ceph-mon[73668]: pgmap v1477: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:26:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:53 compute-0 nova_compute[256940]: 2025-10-02 12:26:53.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 118 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 25 op/s
Oct 02 12:26:53 compute-0 nova_compute[256940]: 2025-10-02 12:26:53.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:53 compute-0 nova_compute[256940]: 2025-10-02 12:26:53.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:53 compute-0 sudo[296210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:53 compute-0 sudo[296210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:53 compute-0 sudo[296210]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:53 compute-0 sudo[296235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:53 compute-0 sudo[296235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:53 compute-0 sudo[296235]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:53 compute-0 nova_compute[256940]: 2025-10-02 12:26:53.936 2 DEBUG nova.network.neutron [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updated VIF entry in instance network info cache for port d472d0c9-f952-487f-8225-97cbf1dea77c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:26:53 compute-0 nova_compute[256940]: 2025-10-02 12:26:53.936 2 DEBUG nova.network.neutron [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [{"id": "d472d0c9-f952-487f-8225-97cbf1dea77c", "address": "fa:16:3e:d5:7e:75", "network": {"id": "33c7df65-d612-4b8e-b5f7-5bf3c51288de", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-691354237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2715815cf51b4499b2f5906aa745dedc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd472d0c9-f9", "ovs_interfaceid": "d472d0c9-f952-487f-8225-97cbf1dea77c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:54 compute-0 nova_compute[256940]: 2025-10-02 12:26:54.048 2 DEBUG oslo_concurrency.lockutils [req-39c086dd-fbc3-4931-8fd3-e2e4c7a3a6ac req-f68d3253-b1c3-481c-85a6-2126674cc7af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c4f0a30c-38bf-4720-afc0-3a9bf537b820" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:26:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:54.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:54 compute-0 nova_compute[256940]: 2025-10-02 12:26:54.593 2 DEBUG nova.network.neutron [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:26:54 compute-0 ceph-mon[73668]: pgmap v1478: 305 pgs: 305 active+clean; 118 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 25 op/s
Oct 02 12:26:54 compute-0 nova_compute[256940]: 2025-10-02 12:26:54.977 2 INFO nova.compute.manager [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Took 2.70 seconds to deallocate network for instance.
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 88 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 40 op/s
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.382 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.383 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.434 2 DEBUG oslo_concurrency.processutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.598 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Received event network-vif-deleted-d472d0c9-f952-487f-8225-97cbf1dea77c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:26:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:55.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455478377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.908 2 DEBUG oslo_concurrency.processutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.916 2 DEBUG nova.compute.provider_tree [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.932 2 DEBUG nova.scheduler.client.report [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.954 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:55 compute-0 nova_compute[256940]: 2025-10-02 12:26:55.988 2 INFO nova.scheduler.client.report [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Deleted allocations for instance c4f0a30c-38bf-4720-afc0-3a9bf537b820
Oct 02 12:26:56 compute-0 nova_compute[256940]: 2025-10-02 12:26:56.118 2 DEBUG oslo_concurrency.lockutils [None req-9463c39b-61da-4826-9749-9a45d53701e5 45e71bb62c1e482bbcf07657b3049198 2715815cf51b4499b2f5906aa745dedc - - default default] Lock "c4f0a30c-38bf-4720-afc0-3a9bf537b820" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:56.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:56 compute-0 ceph-mon[73668]: pgmap v1479: 305 pgs: 305 active+clean; 88 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 40 op/s
Oct 02 12:26:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/455478377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:26:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:57.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.003991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018004042, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1816, "num_deletes": 255, "total_data_size": 3178476, "memory_usage": 3230184, "flush_reason": "Manual Compaction"}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018024722, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1899287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31712, "largest_seqno": 33527, "table_properties": {"data_size": 1892988, "index_size": 3245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16669, "raw_average_key_size": 21, "raw_value_size": 1879034, "raw_average_value_size": 2402, "num_data_blocks": 145, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407850, "oldest_key_time": 1759407850, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 20831 microseconds, and 5554 cpu microseconds.
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.024816) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1899287 bytes OK
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.024863) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.027224) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.027248) EVENT_LOG_v1 {"time_micros": 1759408018027241, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.027276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3170861, prev total WAL file size 3170861, number of live WAL files 2.
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.028409) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1854KB)], [68(10225KB)]
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018028453, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 12369936, "oldest_snapshot_seqno": -1}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6017 keys, 9706432 bytes, temperature: kUnknown
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018116031, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9706432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666058, "index_size": 24200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153424, "raw_average_key_size": 25, "raw_value_size": 9557969, "raw_average_value_size": 1588, "num_data_blocks": 978, "num_entries": 6017, "num_filter_entries": 6017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.116413) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9706432 bytes
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.121855) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.0 rd, 110.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 6466, records dropped: 449 output_compression: NoCompression
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.121914) EVENT_LOG_v1 {"time_micros": 1759408018121893, "job": 38, "event": "compaction_finished", "compaction_time_micros": 87749, "compaction_time_cpu_micros": 26957, "output_level": 6, "num_output_files": 1, "total_output_size": 9706432, "num_input_records": 6466, "num_output_records": 6017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018122565, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018125198, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.028277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.125248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.125256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.125258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.125260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:26:58.125262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:26:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:58.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.721 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.721 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.758 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.841 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.842 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.849 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:26:58 compute-0 nova_compute[256940]: 2025-10-02 12:26:58.850 2 INFO nova.compute.claims [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:26:59 compute-0 ceph-mon[73668]: pgmap v1480: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.032 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:26:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458153804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.463 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.469 2 DEBUG nova.compute.provider_tree [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.490 2 DEBUG nova.scheduler.client.report [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.514 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.514 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.559 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.559 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.581 2 INFO nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.597 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:26:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:26:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:26:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.705 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.706 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.706 2 INFO nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Creating image(s)
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.739 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.775 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.811 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.822 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.898 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.899 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.900 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:59 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.900 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:26:59.999 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.003 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c723f52b-fc30-4046-8444-28f280a39a16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.039 2 DEBUG nova.policy [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b6687fbfb1f484ba99ef93bbb4ffa7e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c20ce9073494490588607984318667f5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:27:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/458153804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:00.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.493 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c723f52b-fc30-4046-8444-28f280a39a16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.579 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] resizing rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.700 2 DEBUG nova.objects.instance [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'migration_context' on Instance uuid c723f52b-fc30-4046-8444-28f280a39a16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.772 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.772 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Ensure instance console log exists: /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.773 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.773 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:00 compute-0 nova_compute[256940]: 2025-10-02 12:27:00.773 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 19 KiB/s wr, 45 op/s
Oct 02 12:27:01 compute-0 ceph-mon[73668]: pgmap v1481: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:27:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:02 compute-0 nova_compute[256940]: 2025-10-02 12:27:02.070 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Successfully created port: bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:27:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:02.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:02 compute-0 podman[296475]: 2025-10-02 12:27:02.400448443 +0000 UTC m=+0.066411529 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 12:27:02 compute-0 podman[296476]: 2025-10-02 12:27:02.438094613 +0000 UTC m=+0.100827285 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:27:02 compute-0 ceph-mon[73668]: pgmap v1482: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 19 KiB/s wr, 45 op/s
Oct 02 12:27:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 117 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 121 op/s
Oct 02 12:27:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:04 compute-0 nova_compute[256940]: 2025-10-02 12:27:04.473 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Successfully updated port: bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:27:04 compute-0 nova_compute[256940]: 2025-10-02 12:27:04.492 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:04 compute-0 nova_compute[256940]: 2025-10-02 12:27:04.493 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:04 compute-0 nova_compute[256940]: 2025-10-02 12:27:04.493 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:27:04 compute-0 ceph-mon[73668]: pgmap v1483: 305 pgs: 305 active+clean; 117 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 121 op/s
Oct 02 12:27:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.309 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.462 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408010.4602585, c4f0a30c-38bf-4720-afc0-3a9bf537b820 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.463 2 INFO nova.compute.manager [-] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] VM Stopped (Lifecycle Event)
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.488 2 DEBUG nova.compute.manager [None req-ea17ef66-107b-49a5-85e0-9314c8f7a8b8 - - - - - -] [instance: c4f0a30c-38bf-4720-afc0-3a9bf537b820] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.677 2 DEBUG nova.compute.manager [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.677 2 DEBUG nova.compute.manager [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing instance network info cache due to event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:05 compute-0 nova_compute[256940]: 2025-10-02 12:27:05.677 2 DEBUG oslo_concurrency.lockutils [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:05.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.486 2 DEBUG nova.network.neutron [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.514 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.515 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Instance network_info: |[{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.515 2 DEBUG oslo_concurrency.lockutils [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.515 2 DEBUG nova.network.neutron [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.518 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Start _get_guest_xml network_info=[{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.523 2 WARNING nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.529 2 DEBUG nova.virt.libvirt.host [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.530 2 DEBUG nova.virt.libvirt.host [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.534 2 DEBUG nova.virt.libvirt.host [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.535 2 DEBUG nova.virt.libvirt.host [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.536 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.536 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.537 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.537 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.537 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.537 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.538 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.538 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.538 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.538 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.539 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.539 2 DEBUG nova.virt.hardware [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.542 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:06 compute-0 ceph-mon[73668]: pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 02 12:27:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134444350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.961 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.991 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:06 compute-0 nova_compute[256940]: 2025-10-02 12:27:06.996 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 12:27:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571085260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.443 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.445 2 DEBUG nova.virt.libvirt.vif [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1421852808',display_name='tempest-FloatingIPsAssociationTestJSON-server-1421852808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1421852808',id=58,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xprfi8ek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:59Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=c723f52b-fc30-4046-8444-28f280a39a16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.446 2 DEBUG nova.network.os_vif_util [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.446 2 DEBUG nova.network.os_vif_util [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.448 2 DEBUG nova.objects.instance [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid c723f52b-fc30-4046-8444-28f280a39a16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.587 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <uuid>c723f52b-fc30-4046-8444-28f280a39a16</uuid>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <name>instance-0000003a</name>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1421852808</nova:name>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:27:06</nova:creationTime>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:user uuid="2b6687fbfb1f484ba99ef93bbb4ffa7e">tempest-FloatingIPsAssociationTestJSON-875651151-project-member</nova:user>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:project uuid="c20ce9073494490588607984318667f5">tempest-FloatingIPsAssociationTestJSON-875651151</nova:project>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <nova:port uuid="bfc6bf86-3893-44ff-abb2-2c3a2f152cf2">
Oct 02 12:27:07 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <system>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="serial">c723f52b-fc30-4046-8444-28f280a39a16</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="uuid">c723f52b-fc30-4046-8444-28f280a39a16</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </system>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <os>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </os>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <features>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </features>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c723f52b-fc30-4046-8444-28f280a39a16_disk">
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c723f52b-fc30-4046-8444-28f280a39a16_disk.config">
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:27:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:fd:48:81"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <target dev="tapbfc6bf86-38"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/console.log" append="off"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <video>
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </video>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:27:07 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:27:07 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:27:07 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:27:07 compute-0 nova_compute[256940]: </domain>
Oct 02 12:27:07 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.589 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Preparing to wait for external event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.589 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.590 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.590 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.591 2 DEBUG nova.virt.libvirt.vif [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1421852808',display_name='tempest-FloatingIPsAssociationTestJSON-server-1421852808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1421852808',id=58,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xprfi8ek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:59Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=c723f52b-fc30-4046-8444-28f280a39a16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.591 2 DEBUG nova.network.os_vif_util [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.592 2 DEBUG nova.network.os_vif_util [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.592 2 DEBUG os_vif [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfc6bf86-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfc6bf86-38, col_values=(('external_ids', {'iface-id': 'bfc6bf86-3893-44ff-abb2-2c3a2f152cf2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:48:81', 'vm-uuid': 'c723f52b-fc30-4046-8444-28f280a39a16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 NetworkManager[44981]: <info>  [1759408027.6026] manager: (tapbfc6bf86-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.613 2 INFO os_vif [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38')
Oct 02 12:27:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:07.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2134444350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3571085260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.754 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.755 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.755 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No VIF found with MAC fa:16:3e:fd:48:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.756 2 INFO nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Using config drive
Oct 02 12:27:07 compute-0 nova_compute[256940]: 2025-10-02 12:27:07.787 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.414 2 INFO nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Creating config drive at /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.419 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7cbpascx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.557 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7cbpascx" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.586 2 DEBUG nova.storage.rbd_utils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image c723f52b-fc30-4046-8444-28f280a39a16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.590 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config c723f52b-fc30-4046-8444-28f280a39a16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.722 2 DEBUG nova.network.neutron [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated VIF entry in instance network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.723 2 DEBUG nova.network.neutron [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.785 2 DEBUG oslo_concurrency.processutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config c723f52b-fc30-4046-8444-28f280a39a16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.786 2 INFO nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Deleting local config drive /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16/disk.config because it was imported into RBD.
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.800 2 DEBUG oslo_concurrency.lockutils [req-10ef6464-8cac-466e-b3a6-31490170f4a6 req-15764bbd-1ef1-4f1d-840e-462973aa42da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:08 compute-0 kernel: tapbfc6bf86-38: entered promiscuous mode
Oct 02 12:27:08 compute-0 NetworkManager[44981]: <info>  [1759408028.8353] manager: (tapbfc6bf86-38): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 02 12:27:08 compute-0 ovn_controller[148123]: 2025-10-02T12:27:08Z|00184|binding|INFO|Claiming lport bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 for this chassis.
Oct 02 12:27:08 compute-0 ovn_controller[148123]: 2025-10-02T12:27:08Z|00185|binding|INFO|bfc6bf86-3893-44ff-abb2-2c3a2f152cf2: Claiming fa:16:3e:fd:48:81 10.100.0.13
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:08 compute-0 ceph-mon[73668]: pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 12:27:08 compute-0 systemd-udevd[296657]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:27:08 compute-0 systemd-machined[210927]: New machine qemu-26-instance-0000003a.
Oct 02 12:27:08 compute-0 NetworkManager[44981]: <info>  [1759408028.8840] device (tapbfc6bf86-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:27:08 compute-0 NetworkManager[44981]: <info>  [1759408028.8853] device (tapbfc6bf86-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:27:08 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000003a.
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.894 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:48:81 10.100.0.13'], port_security=['fa:16:3e:fd:48:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c723f52b-fc30-4046-8444-28f280a39a16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.897 158104 INFO neutron.agent.ovn.metadata.agent [-] Port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef bound to our chassis
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.898 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:08 compute-0 ovn_controller[148123]: 2025-10-02T12:27:08Z|00186|binding|INFO|Setting lport bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 ovn-installed in OVS
Oct 02 12:27:08 compute-0 ovn_controller[148123]: 2025-10-02T12:27:08Z|00187|binding|INFO|Setting lport bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 up in Southbound
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.914 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[01db1ea3-ea5d-42a2-9124-1f38e984ece6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.915 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa920a635-81 in ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:27:08 compute-0 nova_compute[256940]: 2025-10-02 12:27:08.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.917 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa920a635-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.917 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6063a6-6f3e-4ac8-858c-37c693d5b973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.918 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4afbeeb7-fc04-4beb-8f5f-4ffe9665a741]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.939 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e48a0d15-3bd4-42b7-a31a-b967df25e975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.956 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[da3d7a72-10b5-42e0-9bf3-e4d0810f65df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.989 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[75504a0e-f9bd-41e5-98ce-6d1ce97d1c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:08 compute-0 NetworkManager[44981]: <info>  [1759408028.9975] manager: (tapa920a635-80): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Oct 02 12:27:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:08.997 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12b9ad46-6a24-4fd3-960c-35a385849f24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.041 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e13380e2-750c-44c5-bba8-58718acaaace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.044 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f8860b-900b-4ba8-8cce-91904f306a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 NetworkManager[44981]: <info>  [1759408029.0747] device (tapa920a635-80): carrier: link connected
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.078 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[96940c4c-aa8f-4b12-bd7f-fa4fad89a08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.096 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2ffbf6-aa00-4be6-9fc5-1edabe0c165b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592455, 'reachable_time': 42766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296690, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.112 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fb85f59b-1367-4a52-bbed-a49391edd1ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:4d8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592455, 'tstamp': 592455}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296691, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.133 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[078be489-a1ce-464f-9383-ee851e1a83d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592455, 'reachable_time': 42766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296692, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.173 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4fdb8678-b8be-4711-b48d-548b0096d7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.242 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5ee9e0-315c-4017-ac2a-db90c2f78a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.243 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.244 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.244 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa920a635-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:09 compute-0 NetworkManager[44981]: <info>  [1759408029.2471] manager: (tapa920a635-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 02 12:27:09 compute-0 kernel: tapa920a635-80: entered promiscuous mode
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.252 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa920a635-80, col_values=(('external_ids', {'iface-id': '7f4ed3f3-7aae-4781-9951-b5f99eb58474'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:09 compute-0 ovn_controller[148123]: 2025-10-02T12:27:09Z|00188|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.270 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.271 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eec883c6-a266-4706-8954-60d438db43fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.271 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:27:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:09.273 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'env', 'PROCESS_TAG=haproxy-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a920a635-8cc8-4478-b2dc-4d6329a5abef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.489 2 DEBUG nova.compute.manager [req-4f7a57f9-fbe9-4d70-8a17-c7b544835bd9 req-dc4218c0-cb30-419b-85b6-69390c8b6f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.490 2 DEBUG oslo_concurrency.lockutils [req-4f7a57f9-fbe9-4d70-8a17-c7b544835bd9 req-dc4218c0-cb30-419b-85b6-69390c8b6f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.490 2 DEBUG oslo_concurrency.lockutils [req-4f7a57f9-fbe9-4d70-8a17-c7b544835bd9 req-dc4218c0-cb30-419b-85b6-69390c8b6f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.490 2 DEBUG oslo_concurrency.lockutils [req-4f7a57f9-fbe9-4d70-8a17-c7b544835bd9 req-dc4218c0-cb30-419b-85b6-69390c8b6f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:09 compute-0 nova_compute[256940]: 2025-10-02 12:27:09.490 2 DEBUG nova.compute.manager [req-4f7a57f9-fbe9-4d70-8a17-c7b544835bd9 req-dc4218c0-cb30-419b-85b6-69390c8b6f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Processing event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:27:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:09.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:09 compute-0 podman[296765]: 2025-10-02 12:27:09.731277995 +0000 UTC m=+0.118465594 container create 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:27:09 compute-0 podman[296765]: 2025-10-02 12:27:09.643254194 +0000 UTC m=+0.030441793 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:27:09 compute-0 systemd[1]: Started libpod-conmon-2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a.scope.
Oct 02 12:27:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907ad2e597612d3aacaacee5f8e547ccf1a5485ad77921ec2498ceb076d2dac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:09 compute-0 podman[296765]: 2025-10-02 12:27:09.927723156 +0000 UTC m=+0.314910785 container init 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:27:09 compute-0 podman[296765]: 2025-10-02 12:27:09.935409346 +0000 UTC m=+0.322596945 container start 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:27:09 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [NOTICE]   (296785) : New worker (296787) forked
Oct 02 12:27:09 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [NOTICE]   (296785) : Loading success.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.094 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408030.0940025, c723f52b-fc30-4046-8444-28f280a39a16 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.095 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] VM Started (Lifecycle Event)
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.097 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.101 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.105 2 INFO nova.virt.libvirt.driver [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Instance spawned successfully.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.106 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.226 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.232 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.264 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.265 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.266 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.266 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.266 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.267 2 DEBUG nova.virt.libvirt.driver [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.271 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.271 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408030.0969508, c723f52b-fc30-4046-8444-28f280a39a16 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.271 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] VM Paused (Lifecycle Event)
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.414 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.419 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408030.0997133, c723f52b-fc30-4046-8444-28f280a39a16 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.419 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] VM Resumed (Lifecycle Event)
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.494 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.502 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.549 2 INFO nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Took 10.84 seconds to spawn the instance on the hypervisor.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.550 2 DEBUG nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.630 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.767 2 INFO nova.compute.manager [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Took 11.94 seconds to build instance.
Oct 02 12:27:10 compute-0 nova_compute[256940]: 2025-10-02 12:27:10.820 2 DEBUG oslo_concurrency.lockutils [None req-a58e25bb-6f5f-4e9b-9fd1-9e05ac4d39c1 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:11 compute-0 ceph-mon[73668]: pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:27:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 02 12:27:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.292 2 DEBUG nova.compute.manager [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.293 2 DEBUG oslo_concurrency.lockutils [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.294 2 DEBUG oslo_concurrency.lockutils [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.294 2 DEBUG oslo_concurrency.lockutils [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.294 2 DEBUG nova.compute.manager [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] No waiting events found dispatching network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.295 2 WARNING nova.compute.manager [req-2d186644-46bd-4359-871f-8d9b08dc11f8 req-aec75ff8-3a3c-4167-992a-21b0d042b5be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received unexpected event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 for instance with vm_state active and task_state None.
Oct 02 12:27:12 compute-0 ceph-mon[73668]: pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 02 12:27:12 compute-0 nova_compute[256940]: 2025-10-02 12:27:12.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 142 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct 02 12:27:13 compute-0 sudo[296799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:13 compute-0 sudo[296799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:13 compute-0 sudo[296799]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:13 compute-0 sudo[296824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:13 compute-0 sudo[296824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:13 compute-0 sudo[296824]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:13.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 12:27:14 compute-0 ceph-mon[73668]: pgmap v1488: 305 pgs: 305 active+clean; 142 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct 02 12:27:14 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 147 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Oct 02 12:27:15 compute-0 nova_compute[256940]: 2025-10-02 12:27:15.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:17 compute-0 ceph-mon[73668]: pgmap v1489: 305 pgs: 305 active+clean; 147 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Oct 02 12:27:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:17 compute-0 nova_compute[256940]: 2025-10-02 12:27:17.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:17.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:18 compute-0 ceph-mon[73668]: pgmap v1490: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:18 compute-0 podman[296852]: 2025-10-02 12:27:18.391539968 +0000 UTC m=+0.060412193 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:27:18 compute-0 podman[296853]: 2025-10-02 12:27:18.399052113 +0000 UTC m=+0.067522747 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 12:27:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3727851687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:20 compute-0 nova_compute[256940]: 2025-10-02 12:27:20.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:20 compute-0 ceph-mon[73668]: pgmap v1491: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 167 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 214 op/s
Oct 02 12:27:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:22 compute-0 nova_compute[256940]: 2025-10-02 12:27:22.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-0 ceph-mon[73668]: pgmap v1492: 305 pgs: 305 active+clean; 167 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 214 op/s
Oct 02 12:27:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 184 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 292 op/s
Oct 02 12:27:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:24 compute-0 ovn_controller[148123]: 2025-10-02T12:27:24Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:48:81 10.100.0.13
Oct 02 12:27:24 compute-0 ovn_controller[148123]: 2025-10-02T12:27:24Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:48:81 10.100.0.13
Oct 02 12:27:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 225 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 283 op/s
Oct 02 12:27:25 compute-0 nova_compute[256940]: 2025-10-02 12:27:25.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:25 compute-0 ceph-mon[73668]: pgmap v1493: 305 pgs: 305 active+clean; 184 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 292 op/s
Oct 02 12:27:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:25.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:27:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:27:26 compute-0 ceph-mon[73668]: pgmap v1494: 305 pgs: 305 active+clean; 225 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 283 op/s
Oct 02 12:27:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1124206307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:26.463 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:26.465 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:26.465 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Oct 02 12:27:27 compute-0 nova_compute[256940]: 2025-10-02 12:27:27.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2610359259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:27:28
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:27:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:27:28 compute-0 ceph-mon[73668]: pgmap v1495: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Oct 02 12:27:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3572455864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Oct 02 12:27:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:30 compute-0 nova_compute[256940]: 2025-10-02 12:27:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:31 compute-0 ceph-mon[73668]: pgmap v1496: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Oct 02 12:27:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 293 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 02 12:27:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:32 compute-0 ceph-mon[73668]: pgmap v1497: 305 pgs: 305 active+clean; 293 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 02 12:27:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1910363574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3686929686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:32 compute-0 nova_compute[256940]: 2025-10-02 12:27:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Oct 02 12:27:33 compute-0 podman[296901]: 2025-10-02 12:27:33.450160893 +0000 UTC m=+0.101917648 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:27:33 compute-0 podman[296902]: 2025-10-02 12:27:33.47725652 +0000 UTC m=+0.126569643 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:27:33 compute-0 sudo[296945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:33 compute-0 sudo[296945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:33 compute-0 sudo[296945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:33 compute-0 sudo[296970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:33 compute-0 sudo[296970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:33 compute-0 sudo[296970]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3350442704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 168 op/s
Oct 02 12:27:35 compute-0 nova_compute[256940]: 2025-10-02 12:27:35.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-0 ceph-mon[73668]: pgmap v1498: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Oct 02 12:27:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:35 compute-0 nova_compute[256940]: 2025-10-02 12:27:35.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-0 NetworkManager[44981]: <info>  [1759408055.9341] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct 02 12:27:35 compute-0 NetworkManager[44981]: <info>  [1759408055.9357] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 02 12:27:36 compute-0 ovn_controller[148123]: 2025-10-02T12:27:36Z|00189|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct 02 12:27:36 compute-0 ovn_controller[148123]: 2025-10-02T12:27:36Z|00190|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.881 2 DEBUG nova.compute.manager [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.881 2 DEBUG nova.compute.manager [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing instance network info cache due to event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.882 2 DEBUG oslo_concurrency.lockutils [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.882 2 DEBUG oslo_concurrency.lockutils [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:36 compute-0 nova_compute[256940]: 2025-10-02 12:27:36.882 2 DEBUG nova.network.neutron [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:37 compute-0 ceph-mon[73668]: pgmap v1499: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 168 op/s
Oct 02 12:27:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Oct 02 12:27:37 compute-0 nova_compute[256940]: 2025-10-02 12:27:37.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:38 compute-0 ceph-mon[73668]: pgmap v1500: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Oct 02 12:27:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.610072) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059610142, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 601, "num_deletes": 251, "total_data_size": 678469, "memory_usage": 689664, "flush_reason": "Manual Compaction"}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059619779, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 670427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33528, "largest_seqno": 34128, "table_properties": {"data_size": 667281, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7438, "raw_average_key_size": 19, "raw_value_size": 661007, "raw_average_value_size": 1699, "num_data_blocks": 47, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408019, "oldest_key_time": 1759408019, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9754 microseconds, and 3108 cpu microseconds.
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.619823) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 670427 bytes OK
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.619847) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.622859) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.622874) EVENT_LOG_v1 {"time_micros": 1759408059622868, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.622893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 675259, prev total WAL file size 690766, number of live WAL files 2.
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.623281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(654KB)], [71(9478KB)]
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059623315, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10376859, "oldest_snapshot_seqno": -1}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5895 keys, 8521735 bytes, temperature: kUnknown
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059677779, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8521735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8483240, "index_size": 22641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 151584, "raw_average_key_size": 25, "raw_value_size": 8378238, "raw_average_value_size": 1421, "num_data_blocks": 906, "num_entries": 5895, "num_filter_entries": 5895, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.678197) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8521735 bytes
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.680745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.1 rd, 156.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(28.2) write-amplify(12.7) OK, records in: 6406, records dropped: 511 output_compression: NoCompression
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.680776) EVENT_LOG_v1 {"time_micros": 1759408059680763, "job": 40, "event": "compaction_finished", "compaction_time_micros": 54596, "compaction_time_cpu_micros": 21656, "output_level": 6, "num_output_files": 1, "total_output_size": 8521735, "num_input_records": 6406, "num_output_records": 5895, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059681045, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059682982, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.623244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004340459531453564 of space, bias 1.0, pg target 1.302137859436069 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002169048366834171 of space, bias 1.0, pg target 0.6485454616834171 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:27:40 compute-0 nova_compute[256940]: 2025-10-02 12:27:40.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:40 compute-0 ceph-mon[73668]: pgmap v1501: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Oct 02 12:27:41 compute-0 nova_compute[256940]: 2025-10-02 12:27:41.056 2 DEBUG nova.network.neutron [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated VIF entry in instance network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:41 compute-0 nova_compute[256940]: 2025-10-02 12:27:41.057 2 DEBUG nova.network.neutron [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:41 compute-0 nova_compute[256940]: 2025-10-02 12:27:41.082 2 DEBUG oslo_concurrency.lockutils [req-444b90f6-65be-447f-a690-cd0f546c00da req-310317a2-f336-4d86-9fcf-8a43c4cbebfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 336 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Oct 02 12:27:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:42 compute-0 nova_compute[256940]: 2025-10-02 12:27:42.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 339 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 02 12:27:43 compute-0 ceph-mon[73668]: pgmap v1502: 305 pgs: 305 active+clean; 336 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Oct 02 12:27:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3804469328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:43 compute-0 nova_compute[256940]: 2025-10-02 12:27:43.659 2 DEBUG nova.compute.manager [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:43 compute-0 nova_compute[256940]: 2025-10-02 12:27:43.659 2 DEBUG nova.compute.manager [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing instance network info cache due to event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:43 compute-0 nova_compute[256940]: 2025-10-02 12:27:43.660 2 DEBUG oslo_concurrency.lockutils [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:43 compute-0 nova_compute[256940]: 2025-10-02 12:27:43.660 2 DEBUG oslo_concurrency.lockutils [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:43 compute-0 nova_compute[256940]: 2025-10-02 12:27:43.660 2 DEBUG nova.network.neutron [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:43.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:44 compute-0 nova_compute[256940]: 2025-10-02 12:27:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:44 compute-0 ceph-mon[73668]: pgmap v1503: 305 pgs: 305 active+clean; 339 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 02 12:27:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 347 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 152 op/s
Oct 02 12:27:45 compute-0 nova_compute[256940]: 2025-10-02 12:27:45.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:45 compute-0 nova_compute[256940]: 2025-10-02 12:27:45.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:45 compute-0 nova_compute[256940]: 2025-10-02 12:27:45.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:27:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:45.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2182550942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2161945545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:46 compute-0 nova_compute[256940]: 2025-10-02 12:27:46.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:46 compute-0 nova_compute[256940]: 2025-10-02 12:27:46.975 2 DEBUG nova.network.neutron [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated VIF entry in instance network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:46 compute-0 nova_compute[256940]: 2025-10-02 12:27:46.976 2 DEBUG nova.network.neutron [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:47 compute-0 ceph-mon[73668]: pgmap v1504: 305 pgs: 305 active+clean; 347 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 152 op/s
Oct 02 12:27:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/36665426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1623385835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 139 op/s
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.469 2 DEBUG oslo_concurrency.lockutils [req-0a95463f-9732-4fe0-b2d4-f33d7abcc59e req-98955ccc-9ad5-45ed-a5cf-3b960147edc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.529 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.530 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.530 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.531 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.531 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:47.630 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:47.631 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:27:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:27:47.632 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:47 compute-0 nova_compute[256940]: 2025-10-02 12:27:47.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:47.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982489555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:48 compute-0 nova_compute[256940]: 2025-10-02 12:27:48.001 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:48 compute-0 ceph-mon[73668]: pgmap v1505: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 139 op/s
Oct 02 12:27:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3220107927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/982489555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:27:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.170 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.171 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.357 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.359 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4476MB free_disk=20.869464874267578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.359 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.359 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:49 compute-0 podman[297027]: 2025-10-02 12:27:49.395314016 +0000 UTC m=+0.065073942 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 12:27:49 compute-0 podman[297028]: 2025-10-02 12:27:49.4222261 +0000 UTC m=+0.089341009 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:27:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.839 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c723f52b-fc30-4046-8444-28f280a39a16 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.840 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.840 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:27:49 compute-0 nova_compute[256940]: 2025-10-02 12:27:49.887 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561522923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.573 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.581 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.652 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:27:50 compute-0 ceph-mon[73668]: pgmap v1506: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Oct 02 12:27:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/561522923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:50 compute-0 sudo[297090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:50 compute-0 sudo[297090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:50 compute-0 sudo[297090]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.913 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:27:50 compute-0 nova_compute[256940]: 2025-10-02 12:27:50.913 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:50 compute-0 sudo[297115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:50 compute-0 sudo[297115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:50 compute-0 sudo[297115]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-0 sudo[297140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:51 compute-0 sudo[297140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:51 compute-0 sudo[297140]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-0 sudo[297165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:27:51 compute-0 sudo[297165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 363 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 158 op/s
Oct 02 12:27:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:27:51 compute-0 sudo[297165]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:27:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:27:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:51.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:27:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:27:52 compute-0 nova_compute[256940]: 2025-10-02 12:27:52.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f0f67d9e-7456-4cd0-be3b-8aa5721eb73e does not exist
Oct 02 12:27:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 80d163be-cf0c-42ee-9640-25cc6358e35a does not exist
Oct 02 12:27:52 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e0460ff9-4b9b-48c1-96c4-0426a2340d77 does not exist
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:27:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:52 compute-0 sudo[297222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:52 compute-0 sudo[297222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:52 compute-0 sudo[297222]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:52 compute-0 sudo[297247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:52 compute-0 sudo[297247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:52 compute-0 sudo[297247]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:52 compute-0 nova_compute[256940]: 2025-10-02 12:27:52.914 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:52 compute-0 nova_compute[256940]: 2025-10-02 12:27:52.915 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:27:52 compute-0 nova_compute[256940]: 2025-10-02 12:27:52.915 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:27:52 compute-0 sudo[297272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:52 compute-0 sudo[297272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:52 compute-0 sudo[297272]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:53 compute-0 sudo[297297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:27:53 compute-0 sudo[297297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 387 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Oct 02 12:27:53 compute-0 ceph-mon[73668]: pgmap v1507: 305 pgs: 305 active+clean; 363 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 158 op/s
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:27:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:53 compute-0 nova_compute[256940]: 2025-10-02 12:27:53.346 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:53 compute-0 nova_compute[256940]: 2025-10-02 12:27:53.346 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:53 compute-0 nova_compute[256940]: 2025-10-02 12:27:53.346 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:27:53 compute-0 nova_compute[256940]: 2025-10-02 12:27:53.346 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c723f52b-fc30-4046-8444-28f280a39a16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:53 compute-0 podman[297362]: 2025-10-02 12:27:53.343031733 +0000 UTC m=+0.024784080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:27:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:27:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:53.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:27:53 compute-0 podman[297362]: 2025-10-02 12:27:53.775089361 +0000 UTC m=+0.456841658 container create 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:27:53 compute-0 sudo[297376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:53 compute-0 sudo[297376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:53 compute-0 sudo[297376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:53 compute-0 sudo[297401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:53 compute-0 sudo[297401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:53 compute-0 sudo[297401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:54 compute-0 systemd[1]: Started libpod-conmon-3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425.scope.
Oct 02 12:27:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:54 compute-0 podman[297362]: 2025-10-02 12:27:54.180569433 +0000 UTC m=+0.862321750 container init 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:27:54 compute-0 podman[297362]: 2025-10-02 12:27:54.188276119 +0000 UTC m=+0.870028426 container start 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:27:54 compute-0 competent_chatterjee[297428]: 167 167
Oct 02 12:27:54 compute-0 systemd[1]: libpod-3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425.scope: Deactivated successfully.
Oct 02 12:27:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:54 compute-0 podman[297362]: 2025-10-02 12:27:54.295726206 +0000 UTC m=+0.977478503 container attach 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 02 12:27:54 compute-0 podman[297362]: 2025-10-02 12:27:54.296483106 +0000 UTC m=+0.978235403 container died 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4d7be751a54840aaa8dbd7e7e4cf3fc88a5ea20e6bee8b659604d8d1faa5c8-merged.mount: Deactivated successfully.
Oct 02 12:27:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 396 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 149 op/s
Oct 02 12:27:55 compute-0 nova_compute[256940]: 2025-10-02 12:27:55.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-0 ceph-mon[73668]: pgmap v1508: 305 pgs: 305 active+clean; 387 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Oct 02 12:27:55 compute-0 podman[297362]: 2025-10-02 12:27:55.307436346 +0000 UTC m=+1.989188663 container remove 3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:27:55 compute-0 systemd[1]: libpod-conmon-3776021335f696f771dd9bc5ed0f0c2932fba669ba10c927e00fb7bfeae1d425.scope: Deactivated successfully.
Oct 02 12:27:55 compute-0 podman[297454]: 2025-10-02 12:27:55.467201542 +0000 UTC m=+0.022039620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:27:55 compute-0 podman[297454]: 2025-10-02 12:27:55.706733552 +0000 UTC m=+0.261571630 container create 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:27:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:27:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:55.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:27:55 compute-0 systemd[1]: Started libpod-conmon-7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6.scope.
Oct 02 12:27:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:56 compute-0 podman[297454]: 2025-10-02 12:27:56.070743392 +0000 UTC m=+0.625581480 container init 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:27:56 compute-0 podman[297454]: 2025-10-02 12:27:56.079186177 +0000 UTC m=+0.634024235 container start 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:27:56 compute-0 podman[297454]: 2025-10-02 12:27:56.251285565 +0000 UTC m=+0.806123673 container attach 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:27:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:56 compute-0 ceph-mon[73668]: pgmap v1509: 305 pgs: 305 active+clean; 396 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 149 op/s
Oct 02 12:27:56 compute-0 nova_compute[256940]: 2025-10-02 12:27:56.921 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:57 compute-0 xenodochial_spence[297471]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:27:57 compute-0 xenodochial_spence[297471]: --> relative data size: 1.0
Oct 02 12:27:57 compute-0 xenodochial_spence[297471]: --> All data devices are unavailable
Oct 02 12:27:57 compute-0 systemd[1]: libpod-7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6.scope: Deactivated successfully.
Oct 02 12:27:57 compute-0 podman[297454]: 2025-10-02 12:27:57.043683959 +0000 UTC m=+1.598522027 container died 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:27:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.4 MiB/s wr, 145 op/s
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.274 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.274 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.274 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.275 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.275 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.275 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-37f42436477c78dead8a4d9f9f0d88f3fcd3179b80a6e8511c336fb0f8561d7d-merged.mount: Deactivated successfully.
Oct 02 12:27:57 compute-0 nova_compute[256940]: 2025-10-02 12:27:57.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:58 compute-0 podman[297454]: 2025-10-02 12:27:58.177144409 +0000 UTC m=+2.731982477 container remove 7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:27:58 compute-0 sudo[297297]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:58 compute-0 systemd[1]: libpod-conmon-7e0df1be77df4900e3bb4ac51bbd97402e99782ccd44cd3f5a1da44bd4f6f7d6.scope: Deactivated successfully.
Oct 02 12:27:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:27:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:27:58 compute-0 sudo[297499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:58 compute-0 sudo[297499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:58 compute-0 sudo[297499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:58 compute-0 sudo[297524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:58 compute-0 sudo[297524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:58 compute-0 sudo[297524]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:27:58 compute-0 sudo[297549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:58 compute-0 sudo[297549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:58 compute-0 sudo[297549]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:58 compute-0 sudo[297575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:27:58 compute-0 sudo[297575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:58 compute-0 nova_compute[256940]: 2025-10-02 12:27:58.568 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:58 compute-0 podman[297641]: 2025-10-02 12:27:58.891870662 +0000 UTC m=+0.021922708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:27:59 compute-0 podman[297641]: 2025-10-02 12:27:59.157683869 +0000 UTC m=+0.287735885 container create 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:27:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 617 KiB/s rd, 2.6 MiB/s wr, 124 op/s
Oct 02 12:27:59 compute-0 ceph-mon[73668]: pgmap v1510: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.4 MiB/s wr, 145 op/s
Oct 02 12:27:59 compute-0 systemd[1]: Started libpod-conmon-148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549.scope.
Oct 02 12:27:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:27:59 compute-0 podman[297641]: 2025-10-02 12:27:59.482300949 +0000 UTC m=+0.612353015 container init 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 02 12:27:59 compute-0 podman[297641]: 2025-10-02 12:27:59.491696308 +0000 UTC m=+0.621748344 container start 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:27:59 compute-0 cranky_saha[297657]: 167 167
Oct 02 12:27:59 compute-0 systemd[1]: libpod-148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549.scope: Deactivated successfully.
Oct 02 12:27:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:27:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:59.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:59 compute-0 podman[297641]: 2025-10-02 12:27:59.754074328 +0000 UTC m=+0.884126354 container attach 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:27:59 compute-0 podman[297641]: 2025-10-02 12:27:59.754653273 +0000 UTC m=+0.884705289 container died 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.213 2 DEBUG nova.compute.manager [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.214 2 DEBUG nova.compute.manager [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing instance network info cache due to event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.214 2 DEBUG oslo_concurrency.lockutils [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.215 2 DEBUG oslo_concurrency.lockutils [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.215 2 DEBUG nova.network.neutron [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5155e94ce7b88401b26f62b235c5556dab68b1129076593deab381e2c5a6ec36-merged.mount: Deactivated successfully.
Oct 02 12:28:00 compute-0 nova_compute[256940]: 2025-10-02 12:28:00.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:00 compute-0 podman[297641]: 2025-10-02 12:28:00.553002898 +0000 UTC m=+1.683054914 container remove 148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:28:00 compute-0 systemd[1]: libpod-conmon-148541017b9c7db62a71372f92d8776c6fc5f44c93e81e4764c4760da6fa7549.scope: Deactivated successfully.
Oct 02 12:28:00 compute-0 podman[297683]: 2025-10-02 12:28:00.715683007 +0000 UTC m=+0.025960490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:00 compute-0 ceph-mon[73668]: pgmap v1511: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 617 KiB/s rd, 2.6 MiB/s wr, 124 op/s
Oct 02 12:28:01 compute-0 podman[297683]: 2025-10-02 12:28:01.007870854 +0000 UTC m=+0.318148357 container create ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:28:01 compute-0 systemd[1]: Started libpod-conmon-ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c.scope.
Oct 02 12:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd9dd47a2d22abeceedf55547f493132589540e28cdb51b4308dd01c4e2f0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd9dd47a2d22abeceedf55547f493132589540e28cdb51b4308dd01c4e2f0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd9dd47a2d22abeceedf55547f493132589540e28cdb51b4308dd01c4e2f0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd9dd47a2d22abeceedf55547f493132589540e28cdb51b4308dd01c4e2f0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:01 compute-0 podman[297683]: 2025-10-02 12:28:01.248378119 +0000 UTC m=+0.558655592 container init ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:28:01 compute-0 podman[297683]: 2025-10-02 12:28:01.256089825 +0000 UTC m=+0.566367288 container start ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:28:01 compute-0 podman[297683]: 2025-10-02 12:28:01.321152066 +0000 UTC m=+0.631429529 container attach ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:28:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:02 compute-0 friendly_gould[297699]: {
Oct 02 12:28:02 compute-0 friendly_gould[297699]:     "1": [
Oct 02 12:28:02 compute-0 friendly_gould[297699]:         {
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "devices": [
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "/dev/loop3"
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             ],
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "lv_name": "ceph_lv0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "lv_size": "7511998464",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "name": "ceph_lv0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "tags": {
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.cluster_name": "ceph",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.crush_device_class": "",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.encrypted": "0",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.osd_id": "1",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.type": "block",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:                 "ceph.vdo": "0"
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             },
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "type": "block",
Oct 02 12:28:02 compute-0 friendly_gould[297699]:             "vg_name": "ceph_vg0"
Oct 02 12:28:02 compute-0 friendly_gould[297699]:         }
Oct 02 12:28:02 compute-0 friendly_gould[297699]:     ]
Oct 02 12:28:02 compute-0 friendly_gould[297699]: }
Oct 02 12:28:02 compute-0 systemd[1]: libpod-ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c.scope: Deactivated successfully.
Oct 02 12:28:02 compute-0 podman[297683]: 2025-10-02 12:28:02.117207422 +0000 UTC m=+1.427484895 container died ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:28:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdd9dd47a2d22abeceedf55547f493132589540e28cdb51b4308dd01c4e2f0b-merged.mount: Deactivated successfully.
Oct 02 12:28:02 compute-0 podman[297683]: 2025-10-02 12:28:02.491114993 +0000 UTC m=+1.801392456 container remove ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:28:02 compute-0 systemd[1]: libpod-conmon-ba9fdcf0d3c3ce47a66d2107f2ac2156d3134787ea01e545edec2f5c0ee3c04c.scope: Deactivated successfully.
Oct 02 12:28:02 compute-0 sudo[297575]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 sudo[297723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:02 compute-0 sudo[297723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[297723]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 nova_compute[256940]: 2025-10-02 12:28:02.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:02 compute-0 sudo[297748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:28:02 compute-0 sudo[297748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[297748]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 sudo[297773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:02 compute-0 sudo[297773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 sudo[297773]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:02 compute-0 sudo[297798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:28:02 compute-0 sudo[297798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:02 compute-0 ceph-mon[73668]: pgmap v1512: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Oct 02 12:28:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.101858786 +0000 UTC m=+0.038238431 container create a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:28:03 compute-0 systemd[1]: Started libpod-conmon-a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e.scope.
Oct 02 12:28:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.084543007 +0000 UTC m=+0.020922682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.222576001 +0000 UTC m=+0.158955686 container init a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.23200845 +0000 UTC m=+0.168388095 container start a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:28:03 compute-0 crazy_newton[297878]: 167 167
Oct 02 12:28:03 compute-0 systemd[1]: libpod-a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e.scope: Deactivated successfully.
Oct 02 12:28:03 compute-0 conmon[297878]: conmon a8ef85d80991a612788a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e.scope/container/memory.events
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.392791752 +0000 UTC m=+0.329171397 container attach a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:28:03 compute-0 podman[297862]: 2025-10-02 12:28:03.393165791 +0000 UTC m=+0.329545436 container died a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:28:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:28:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6bf1aa7fe1034ac104f434d2ad972f312d55f461416b95b0bcf9f59fc2a93f0-merged.mount: Deactivated successfully.
Oct 02 12:28:04 compute-0 podman[297862]: 2025-10-02 12:28:04.082741935 +0000 UTC m=+1.019121580 container remove a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:28:04 compute-0 systemd[1]: libpod-conmon-a8ef85d80991a612788a5d0a01a00e8ec2125dc3e7c59de62db002437823343e.scope: Deactivated successfully.
Oct 02 12:28:04 compute-0 ceph-mon[73668]: pgmap v1513: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct 02 12:28:04 compute-0 podman[297894]: 2025-10-02 12:28:04.191519536 +0000 UTC m=+0.506031146 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:04 compute-0 podman[297895]: 2025-10-02 12:28:04.224440132 +0000 UTC m=+0.538422268 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:28:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:04 compute-0 podman[297949]: 2025-10-02 12:28:04.304376361 +0000 UTC m=+0.049943619 container create 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:28:04 compute-0 systemd[1]: Started libpod-conmon-5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d.scope.
Oct 02 12:28:04 compute-0 podman[297949]: 2025-10-02 12:28:04.279138711 +0000 UTC m=+0.024705989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:28:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61f3b05f36fde958292ed4416854a2f70b7a9ab6f63e2e8f79dfb87e56e5e19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61f3b05f36fde958292ed4416854a2f70b7a9ab6f63e2e8f79dfb87e56e5e19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61f3b05f36fde958292ed4416854a2f70b7a9ab6f63e2e8f79dfb87e56e5e19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61f3b05f36fde958292ed4416854a2f70b7a9ab6f63e2e8f79dfb87e56e5e19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:04 compute-0 podman[297949]: 2025-10-02 12:28:04.494775454 +0000 UTC m=+0.240342742 container init 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:28:04 compute-0 podman[297949]: 2025-10-02 12:28:04.5052263 +0000 UTC m=+0.250793568 container start 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:28:04 compute-0 podman[297949]: 2025-10-02 12:28:04.529460765 +0000 UTC m=+0.275028023 container attach 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:28:04 compute-0 nova_compute[256940]: 2025-10-02 12:28:04.692 2 DEBUG nova.network.neutron [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated VIF entry in instance network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:28:04 compute-0 nova_compute[256940]: 2025-10-02 12:28:04.694 2 DEBUG nova.network.neutron [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:04 compute-0 nova_compute[256940]: 2025-10-02 12:28:04.804 2 DEBUG oslo_concurrency.lockutils [req-be8f7d50-29f3-4dcd-bcf2-e7658d6f8b79 req-1c191032-fae2-4334-841b-96a3335bb456 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 711 KiB/s wr, 167 op/s
Oct 02 12:28:05 compute-0 nova_compute[256940]: 2025-10-02 12:28:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/908879594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:05 compute-0 elated_beaver[297965]: {
Oct 02 12:28:05 compute-0 elated_beaver[297965]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:28:05 compute-0 elated_beaver[297965]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:28:05 compute-0 elated_beaver[297965]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:28:05 compute-0 elated_beaver[297965]:         "osd_id": 1,
Oct 02 12:28:05 compute-0 elated_beaver[297965]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:28:05 compute-0 elated_beaver[297965]:         "type": "bluestore"
Oct 02 12:28:05 compute-0 elated_beaver[297965]:     }
Oct 02 12:28:05 compute-0 elated_beaver[297965]: }
Oct 02 12:28:05 compute-0 systemd[1]: libpod-5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d.scope: Deactivated successfully.
Oct 02 12:28:05 compute-0 podman[297987]: 2025-10-02 12:28:05.416218343 +0000 UTC m=+0.028247418 container died 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:28:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b61f3b05f36fde958292ed4416854a2f70b7a9ab6f63e2e8f79dfb87e56e5e19-merged.mount: Deactivated successfully.
Oct 02 12:28:05 compute-0 podman[297987]: 2025-10-02 12:28:05.977833719 +0000 UTC m=+0.589862774 container remove 5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_beaver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:28:05 compute-0 systemd[1]: libpod-conmon-5f0f80035e4fd8e31bf601a52b09568f4d5776fb9096e42710a71a8b160d984d.scope: Deactivated successfully.
Oct 02 12:28:06 compute-0 sudo[297798]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:28:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:28:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev da07a731-98e4-42f1-8ffc-972f89e1cbe1 does not exist
Oct 02 12:28:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 671e7e86-f9bd-4974-83f5-000eb939d56f does not exist
Oct 02 12:28:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a018a3f5-de63-45de-b989-7630f72647d3 does not exist
Oct 02 12:28:06 compute-0 sudo[298001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:06 compute-0 sudo[298001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:06 compute-0 sudo[298001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:06 compute-0 sudo[298026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:28:06 compute-0 sudo[298026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:06 compute-0 sudo[298026]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:06 compute-0 ceph-mon[73668]: pgmap v1514: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 711 KiB/s wr, 167 op/s
Oct 02 12:28:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4026011726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1381430579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 52 KiB/s wr, 133 op/s
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.732 2 DEBUG nova.compute.manager [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.732 2 DEBUG nova.compute.manager [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing instance network info cache due to event network-changed-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.732 2 DEBUG oslo_concurrency.lockutils [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.733 2 DEBUG oslo_concurrency.lockutils [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:07 compute-0 nova_compute[256940]: 2025-10-02 12:28:07.733 2 DEBUG nova.network.neutron [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Refreshing network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:08 compute-0 ceph-mon[73668]: pgmap v1515: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 52 KiB/s wr, 133 op/s
Oct 02 12:28:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 KiB/s wr, 115 op/s
Oct 02 12:28:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:10 compute-0 nova_compute[256940]: 2025-10-02 12:28:10.133 2 DEBUG nova.network.neutron [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updated VIF entry in instance network info cache for port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:28:10 compute-0 nova_compute[256940]: 2025-10-02 12:28:10.134 2 DEBUG nova.network.neutron [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [{"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:10 compute-0 nova_compute[256940]: 2025-10-02 12:28:10.153 2 DEBUG oslo_concurrency.lockutils [req-b1689c55-b702-4511-8fdd-f96e1a17e848 req-e7b44bc7-c350-4e8d-9cba-f27382029fc9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c723f52b-fc30-4046-8444-28f280a39a16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:10 compute-0 nova_compute[256940]: 2025-10-02 12:28:10.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:10 compute-0 ceph-mon[73668]: pgmap v1516: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 KiB/s wr, 115 op/s
Oct 02 12:28:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 255 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 826 KiB/s wr, 129 op/s
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.682 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.683 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.683 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.683 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.684 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.685 2 INFO nova.compute.manager [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Terminating instance
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.686 2 DEBUG nova.compute.manager [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:28:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:11 compute-0 kernel: tapbfc6bf86-38 (unregistering): left promiscuous mode
Oct 02 12:28:11 compute-0 NetworkManager[44981]: <info>  [1759408091.9335] device (tapbfc6bf86-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:11 compute-0 ovn_controller[148123]: 2025-10-02T12:28:11Z|00191|binding|INFO|Releasing lport bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 from this chassis (sb_readonly=0)
Oct 02 12:28:11 compute-0 ovn_controller[148123]: 2025-10-02T12:28:11Z|00192|binding|INFO|Setting lport bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 down in Southbound
Oct 02 12:28:11 compute-0 ovn_controller[148123]: 2025-10-02T12:28:11Z|00193|binding|INFO|Removing iface tapbfc6bf86-38 ovn-installed in OVS
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:11 compute-0 nova_compute[256940]: 2025-10-02 12:28:11.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:12 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 02 12:28:12 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Consumed 16.140s CPU time.
Oct 02 12:28:12 compute-0 systemd-machined[210927]: Machine qemu-26-instance-0000003a terminated.
Oct 02 12:28:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:12.026 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:48:81 10.100.0.13'], port_security=['fa:16:3e:fd:48:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c723f52b-fc30-4046-8444-28f280a39a16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:12.029 158104 INFO neutron.agent.ovn.metadata.agent [-] Port bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef unbound from our chassis
Oct 02 12:28:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:12.031 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a920a635-8cc8-4478-b2dc-4d6329a5abef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:28:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:12.032 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbaa944-9b3f-472d-a3fa-743537eb7d9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:12.034 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace which is not needed anymore
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.121 2 INFO nova.virt.libvirt.driver [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Instance destroyed successfully.
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.122 2 DEBUG nova.objects.instance [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'resources' on Instance uuid c723f52b-fc30-4046-8444-28f280a39a16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:12 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [NOTICE]   (296785) : haproxy version is 2.8.14-c23fe91
Oct 02 12:28:12 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [NOTICE]   (296785) : path to executable is /usr/sbin/haproxy
Oct 02 12:28:12 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [WARNING]  (296785) : Exiting Master process...
Oct 02 12:28:12 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [ALERT]    (296785) : Current worker (296787) exited with code 143 (Terminated)
Oct 02 12:28:12 compute-0 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[296781]: [WARNING]  (296785) : All workers exited. Exiting... (0)
Oct 02 12:28:12 compute-0 systemd[1]: libpod-2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a.scope: Deactivated successfully.
Oct 02 12:28:12 compute-0 podman[298089]: 2025-10-02 12:28:12.207634252 +0000 UTC m=+0.067129515 container died 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:12.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9907ad2e597612d3aacaacee5f8e547ccf1a5485ad77921ec2498ceb076d2dac-merged.mount: Deactivated successfully.
Oct 02 12:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:28:12 compute-0 podman[298089]: 2025-10-02 12:28:12.574588535 +0000 UTC m=+0.434083798 container cleanup 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:28:12 compute-0 systemd[1]: libpod-conmon-2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a.scope: Deactivated successfully.
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.665 2 DEBUG nova.virt.libvirt.vif [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1421852808',display_name='tempest-FloatingIPsAssociationTestJSON-server-1421852808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1421852808',id=58,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xprfi8ek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:10Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=c723f52b-fc30-4046-8444-28f280a39a16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.666 2 DEBUG nova.network.os_vif_util [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "address": "fa:16:3e:fd:48:81", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc6bf86-38", "ovs_interfaceid": "bfc6bf86-3893-44ff-abb2-2c3a2f152cf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.667 2 DEBUG nova.network.os_vif_util [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.667 2 DEBUG os_vif [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfc6bf86-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:12 compute-0 nova_compute[256940]: 2025-10-02 12:28:12.741 2 INFO os_vif [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:48:81,bridge_name='br-int',has_traffic_filtering=True,id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc6bf86-38')
Oct 02 12:28:12 compute-0 ceph-mon[73668]: pgmap v1517: 305 pgs: 305 active+clean; 255 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 826 KiB/s wr, 129 op/s
Oct 02 12:28:13 compute-0 podman[298121]: 2025-10-02 12:28:13.054756984 +0000 UTC m=+0.458153661 container remove 2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.063 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[16da69fe-862b-4ca2-a365-a9ee56dfcc13]: (4, ('Thu Oct  2 12:28:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a)\n2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a\nThu Oct  2 12:28:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a)\n2ad2b15b6cb2b723b63a42eca66beb016270c1c5890c22532c7e0a4298690c4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.065 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0ac8a8-6959-4bc8-aff9-e6d2076c09aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.066 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:13 compute-0 kernel: tapa920a635-80: left promiscuous mode
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.087 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[54e4b7f9-92ca-405b-b6e6-b5e7e499aa29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.117 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[097f15dd-72f1-4a8a-befb-b56a374d37dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.119 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[329044de-ed9c-4d63-b48b-1f400b1ee149]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.142 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b989f63d-b96b-4e0d-88c8-ab600150a01b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592446, 'reachable_time': 40218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298154, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 systemd[1]: run-netns-ovnmeta\x2da920a635\x2d8cc8\x2d4478\x2db2dc\x2d4d6329a5abef.mount: Deactivated successfully.
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.147 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:28:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:13.147 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9434aa76-7553-4e64-a32a-b960f03fd2f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 272 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.499 2 DEBUG nova.compute.manager [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-unplugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.500 2 DEBUG oslo_concurrency.lockutils [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.500 2 DEBUG oslo_concurrency.lockutils [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.500 2 DEBUG oslo_concurrency.lockutils [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.501 2 DEBUG nova.compute.manager [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] No waiting events found dispatching network-vif-unplugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.501 2 DEBUG nova.compute.manager [req-b28c18a6-6311-46af-973c-e4c4887fa345 req-961bd5d7-347c-4b78-9a59-d0b7a6186b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-unplugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.709 2 INFO nova.virt.libvirt.driver [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Deleting instance files /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16_del
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.710 2 INFO nova.virt.libvirt.driver [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Deletion of /var/lib/nova/instances/c723f52b-fc30-4046-8444-28f280a39a16_del complete
Oct 02 12:28:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.892 2 INFO nova.compute.manager [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Took 2.21 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.893 2 DEBUG oslo.service.loopingcall [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.893 2 DEBUG nova.compute.manager [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:13 compute-0 nova_compute[256940]: 2025-10-02 12:28:13.893 2 DEBUG nova.network.neutron [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:13 compute-0 sudo[298156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:13 compute-0 sudo[298156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:13 compute-0 sudo[298156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:14 compute-0 sudo[298181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:14 compute-0 sudo[298181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:14 compute-0 sudo[298181]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:28:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:28:15 compute-0 ceph-mon[73668]: pgmap v1518: 305 pgs: 305 active+clean; 272 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 12:28:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/253313661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 242 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.486 2 DEBUG nova.network.neutron [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.523 2 DEBUG nova.compute.manager [req-6e6904de-6fce-44a0-ac86-dbbe5516a318 req-d0152763-5e1f-42b6-babc-743adf6c9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-deleted-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.523 2 INFO nova.compute.manager [req-6e6904de-6fce-44a0-ac86-dbbe5516a318 req-d0152763-5e1f-42b6-babc-743adf6c9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Neutron deleted interface bfc6bf86-3893-44ff-abb2-2c3a2f152cf2; detaching it from the instance and deleting it from the info cache
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.523 2 DEBUG nova.network.neutron [req-6e6904de-6fce-44a0-ac86-dbbe5516a318 req-d0152763-5e1f-42b6-babc-743adf6c9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.614 2 INFO nova.compute.manager [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Took 1.72 seconds to deallocate network for instance.
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.621 2 DEBUG nova.compute.manager [req-6e6904de-6fce-44a0-ac86-dbbe5516a318 req-d0152763-5e1f-42b6-babc-743adf6c9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Detach interface failed, port_id=bfc6bf86-3893-44ff-abb2-2c3a2f152cf2, reason: Instance c723f52b-fc30-4046-8444-28f280a39a16 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.666 2 DEBUG nova.compute.manager [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.667 2 DEBUG oslo_concurrency.lockutils [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c723f52b-fc30-4046-8444-28f280a39a16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.667 2 DEBUG oslo_concurrency.lockutils [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.667 2 DEBUG oslo_concurrency.lockutils [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.667 2 DEBUG nova.compute.manager [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] No waiting events found dispatching network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.668 2 WARNING nova.compute.manager [req-6e3cd326-3465-4bb7-bb30-8731a7f9a610 req-b6da7e0e-d210-4ebd-880a-1fda5f957ef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Received unexpected event network-vif-plugged-bfc6bf86-3893-44ff-abb2-2c3a2f152cf2 for instance with vm_state active and task_state deleting.
Oct 02 12:28:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000076s ======
Oct 02 12:28:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:15.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.872 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.873 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:15 compute-0 nova_compute[256940]: 2025-10-02 12:28:15.928 2 DEBUG oslo_concurrency.processutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:16 compute-0 ceph-mon[73668]: pgmap v1519: 305 pgs: 305 active+clean; 242 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Oct 02 12:28:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:16.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927939317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.414 2 DEBUG oslo_concurrency.processutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.421 2 DEBUG nova.compute.provider_tree [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.441 2 DEBUG nova.scheduler.client.report [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.464 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.512 2 INFO nova.scheduler.client.report [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Deleted allocations for instance c723f52b-fc30-4046-8444-28f280a39a16
Oct 02 12:28:16 compute-0 nova_compute[256940]: 2025-10-02 12:28:16.582 2 DEBUG oslo_concurrency.lockutils [None req-d94517b9-0528-4ca8-bcee-9a7a306b646b 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "c723f52b-fc30-4046-8444-28f280a39a16" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2927939317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:17 compute-0 nova_compute[256940]: 2025-10-02 12:28:17.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:17.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:18.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:18 compute-0 ceph-mon[73668]: pgmap v1520: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3400719442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:20 compute-0 nova_compute[256940]: 2025-10-02 12:28:20.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:20 compute-0 podman[298232]: 2025-10-02 12:28:20.399078498 +0000 UTC m=+0.064375855 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:20 compute-0 podman[298231]: 2025-10-02 12:28:20.399074758 +0000 UTC m=+0.064440657 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:28:21 compute-0 ceph-mon[73668]: pgmap v1521: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/569439342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:21 compute-0 nova_compute[256940]: 2025-10-02 12:28:21.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct 02 12:28:21 compute-0 nova_compute[256940]: 2025-10-02 12:28:21.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:28:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:28:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:28:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:28:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:22.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:22 compute-0 ceph-mon[73668]: pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct 02 12:28:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:22 compute-0 nova_compute[256940]: 2025-10-02 12:28:22.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 167 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 3.1 MiB/s wr, 149 op/s
Oct 02 12:28:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3000373398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:23.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:24.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:24 compute-0 ceph-mon[73668]: pgmap v1523: 305 pgs: 305 active+clean; 167 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 3.1 MiB/s wr, 149 op/s
Oct 02 12:28:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 160 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.9 MiB/s wr, 170 op/s
Oct 02 12:28:25 compute-0 nova_compute[256940]: 2025-10-02 12:28:25.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:26.463 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:26.464 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:26.464 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:26 compute-0 ceph-mon[73668]: pgmap v1524: 305 pgs: 305 active+clean; 160 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.9 MiB/s wr, 170 op/s
Oct 02 12:28:27 compute-0 nova_compute[256940]: 2025-10-02 12:28:27.121 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408092.1195838, c723f52b-fc30-4046-8444-28f280a39a16 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:27 compute-0 nova_compute[256940]: 2025-10-02 12:28:27.122 2 INFO nova.compute.manager [-] [instance: c723f52b-fc30-4046-8444-28f280a39a16] VM Stopped (Lifecycle Event)
Oct 02 12:28:27 compute-0 nova_compute[256940]: 2025-10-02 12:28:27.169 2 DEBUG nova.compute.manager [None req-9f07003f-5452-4b9d-9f54-0d74994638e5 - - - - - -] [instance: c723f52b-fc30-4046-8444-28f280a39a16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Oct 02 12:28:27 compute-0 nova_compute[256940]: 2025-10-02 12:28:27.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:28.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:28 compute-0 ceph-mon[73668]: pgmap v1525: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:28:28
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'backups', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta']
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:28:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:28:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 626 KiB/s wr, 154 op/s
Oct 02 12:28:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 02 12:28:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 02 12:28:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 02 12:28:30 compute-0 nova_compute[256940]: 2025-10-02 12:28:30.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:30.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:31 compute-0 ceph-mon[73668]: pgmap v1526: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 626 KiB/s wr, 154 op/s
Oct 02 12:28:31 compute-0 ceph-mon[73668]: osdmap e214: 3 total, 3 up, 3 in
Oct 02 12:28:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 161 op/s
Oct 02 12:28:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:31.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:32.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:32 compute-0 ceph-mon[73668]: pgmap v1528: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 161 op/s
Oct 02 12:28:32 compute-0 nova_compute[256940]: 2025-10-02 12:28:32.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 88 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct 02 12:28:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:34 compute-0 sudo[298280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:34 compute-0 sudo[298280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:34 compute-0 sudo[298280]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:34 compute-0 sudo[298305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:34 compute-0 sudo[298305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:34 compute-0 sudo[298305]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:34.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:34 compute-0 podman[298329]: 2025-10-02 12:28:34.308693833 +0000 UTC m=+0.065343650 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:28:34 compute-0 podman[298330]: 2025-10-02 12:28:34.359463821 +0000 UTC m=+0.108088794 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:34 compute-0 ceph-mon[73668]: pgmap v1529: 305 pgs: 305 active+clean; 88 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct 02 12:28:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 72 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 17 KiB/s wr, 99 op/s
Oct 02 12:28:35 compute-0 nova_compute[256940]: 2025-10-02 12:28:35.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 02 12:28:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 02 12:28:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 02 12:28:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:36.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 5.5 KiB/s wr, 84 op/s
Oct 02 12:28:37 compute-0 ceph-mon[73668]: pgmap v1530: 305 pgs: 305 active+clean; 72 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 17 KiB/s wr, 99 op/s
Oct 02 12:28:37 compute-0 ceph-mon[73668]: osdmap e215: 3 total, 3 up, 3 in
Oct 02 12:28:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:28:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:28:37 compute-0 nova_compute[256940]: 2025-10-02 12:28:37.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:38.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:38 compute-0 ceph-mon[73668]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 5.5 KiB/s wr, 84 op/s
Oct 02 12:28:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1184234749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 4.8 KiB/s wr, 66 op/s
Oct 02 12:28:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:39.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019042332612134749 of space, bias 1.0, pg target 0.5712699783640425 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:28:40 compute-0 nova_compute[256940]: 2025-10-02 12:28:40.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:40.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 02 12:28:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.6 KiB/s wr, 61 op/s
Oct 02 12:28:41 compute-0 ceph-mon[73668]: pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 4.8 KiB/s wr, 66 op/s
Oct 02 12:28:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 02 12:28:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 02 12:28:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:42.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.771 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.771 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.789 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.890 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.890 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.903 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:42 compute-0 nova_compute[256940]: 2025-10-02 12:28:42.903 2 INFO nova.compute.claims [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.008 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:43 compute-0 ceph-mon[73668]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.6 KiB/s wr, 61 op/s
Oct 02 12:28:43 compute-0 ceph-mon[73668]: osdmap e216: 3 total, 3 up, 3 in
Oct 02 12:28:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2569227816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.7 KiB/s wr, 47 op/s
Oct 02 12:28:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965843975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.540 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.547 2 DEBUG nova.compute.provider_tree [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.563 2 DEBUG nova.scheduler.client.report [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.582 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.583 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.635 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.635 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.665 2 INFO nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.685 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:43.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.823 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.824 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.824 2 INFO nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Creating image(s)
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.855 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.890 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.921 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.926 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.954 2 DEBUG nova.policy [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eff0431e92464c78b780c8365e6e920c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.992 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.993 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.994 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:43 compute-0 nova_compute[256940]: 2025-10-02 12:28:43.994 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:44 compute-0 nova_compute[256940]: 2025-10-02 12:28:44.021 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:44 compute-0 nova_compute[256940]: 2025-10-02 12:28:44.027 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:44 compute-0 nova_compute[256940]: 2025-10-02 12:28:44.774 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Successfully created port: 474d330f-2b1e-463d-ac82-51937f7262ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:28:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2909641701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-0 ceph-mon[73668]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.7 KiB/s wr, 47 op/s
Oct 02 12:28:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1184975754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3965843975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1417505443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 60 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.538 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.616 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] resizing rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.722 2 DEBUG nova.objects.instance [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 08be0388-74af-4ba2-aeb8-32109a14c4dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.745 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.746 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Ensure instance console log exists: /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.746 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.746 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:45 compute-0 nova_compute[256940]: 2025-10-02 12:28:45.747 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.442 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Successfully updated port: 474d330f-2b1e-463d-ac82-51937f7262ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:28:46 compute-0 ceph-mon[73668]: pgmap v1537: 305 pgs: 305 active+clean; 60 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.697 2 DEBUG nova.compute.manager [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-changed-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.698 2 DEBUG nova.compute.manager [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Refreshing instance network info cache due to event network-changed-474d330f-2b1e-463d-ac82-51937f7262ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.699 2 DEBUG oslo_concurrency.lockutils [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.699 2 DEBUG oslo_concurrency.lockutils [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.700 2 DEBUG nova.network.neutron [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Refreshing network info cache for port 474d330f-2b1e-463d-ac82-51937f7262ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:46 compute-0 nova_compute[256940]: 2025-10-02 12:28:46.704 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.067 2 DEBUG nova.network.neutron [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:28:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.245 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.247 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.484 2 DEBUG nova.network.neutron [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.505 2 DEBUG oslo_concurrency.lockutils [req-7a6c5058-adf5-48af-9212-21b7bc6add12 req-b6c1befa-3aee-4c94-a7fc-3a255dddf56a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.506 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquired lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.506 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:28:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950793471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.685 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.699 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:28:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:47.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2950793471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.914 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.916 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4675MB free_disk=20.97604751586914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.916 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:47 compute-0 nova_compute[256940]: 2025-10-02 12:28:47.916 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 08be0388-74af-4ba2-aeb8-32109a14c4dd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.075 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:48.382 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:48.384 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:28:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595482658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.568 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.573 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.578 2 DEBUG nova.network.neutron [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Updating instance_info_cache with network_info: [{"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.600 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.604 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Releasing lock "refresh_cache-08be0388-74af-4ba2-aeb8-32109a14c4dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.605 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance network_info: |[{"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.607 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Start _get_guest_xml network_info=[{"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.611 2 WARNING nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.615 2 DEBUG nova.virt.libvirt.host [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.615 2 DEBUG nova.virt.libvirt.host [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.618 2 DEBUG nova.virt.libvirt.host [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.618 2 DEBUG nova.virt.libvirt.host [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.619 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.619 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.620 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.620 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.620 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.621 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.621 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.621 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.621 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.621 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.622 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.622 2 DEBUG nova.virt.hardware [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.625 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.651 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:28:48 compute-0 nova_compute[256940]: 2025-10-02 12:28:48.652 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:48 compute-0 ceph-mon[73668]: pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1154819649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3140232862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1595482658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174407595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.140 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.170 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.176 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:49.386 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556924285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.620 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.623 2 DEBUG nova.virt.libvirt.vif [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1681410933',id=63,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-fmc5oxgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_na
me='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:43Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=08be0388-74af-4ba2-aeb8-32109a14c4dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.623 2 DEBUG nova.network.os_vif_util [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.625 2 DEBUG nova.network.os_vif_util [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.626 2 DEBUG nova.objects.instance [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08be0388-74af-4ba2-aeb8-32109a14c4dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.646 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <uuid>08be0388-74af-4ba2-aeb8-32109a14c4dd</uuid>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <name>instance-0000003f</name>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1681410933</nova:name>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:28:48</nova:creationTime>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:user uuid="eff0431e92464c78b780c8365e6e920c">tempest-ImagesOneServerNegativeTestJSON-883313902-project-member</nova:user>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:project uuid="bfd7bec5bd4b4366a96cc55cfe95fcc9">tempest-ImagesOneServerNegativeTestJSON-883313902</nova:project>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <nova:port uuid="474d330f-2b1e-463d-ac82-51937f7262ff">
Oct 02 12:28:49 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <system>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="serial">08be0388-74af-4ba2-aeb8-32109a14c4dd</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="uuid">08be0388-74af-4ba2-aeb8-32109a14c4dd</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </system>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <os>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </os>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <features>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </features>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/08be0388-74af-4ba2-aeb8-32109a14c4dd_disk">
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </source>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config">
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </source>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:28:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:e2:4c:25"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <target dev="tap474d330f-2b"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/console.log" append="off"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <video>
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </video>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:28:49 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:28:49 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:28:49 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:28:49 compute-0 nova_compute[256940]: </domain>
Oct 02 12:28:49 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.647 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Preparing to wait for external event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.648 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.648 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.648 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.649 2 DEBUG nova.virt.libvirt.vif [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1681410933',id=63,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-fmc5oxgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:43Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=08be0388-74af-4ba2-aeb8-32109a14c4dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.650 2 DEBUG nova.network.os_vif_util [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.651 2 DEBUG nova.network.os_vif_util [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.651 2 DEBUG os_vif [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap474d330f-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap474d330f-2b, col_values=(('external_ids', {'iface-id': '474d330f-2b1e-463d-ac82-51937f7262ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:4c:25', 'vm-uuid': '08be0388-74af-4ba2-aeb8-32109a14c4dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:49 compute-0 NetworkManager[44981]: <info>  [1759408129.6600] manager: (tap474d330f-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.669 2 INFO os_vif [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b')
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.797 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.797 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.798 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No VIF found with MAC fa:16:3e:e2:4c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.799 2 INFO nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Using config drive
Oct 02 12:28:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:49 compute-0 nova_compute[256940]: 2025-10-02 12:28:49.840 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/139225508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/174407595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/556924285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.278 2 INFO nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Creating config drive at /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.287 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog4oju_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.433 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog4oju_q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.486 2 DEBUG nova.storage.rbd_utils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.493 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.810 2 DEBUG oslo_concurrency.processutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config 08be0388-74af-4ba2-aeb8-32109a14c4dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.814 2 INFO nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Deleting local config drive /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd/disk.config because it was imported into RBD.
Oct 02 12:28:50 compute-0 kernel: tap474d330f-2b: entered promiscuous mode
Oct 02 12:28:50 compute-0 NetworkManager[44981]: <info>  [1759408130.8992] manager: (tap474d330f-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct 02 12:28:50 compute-0 ovn_controller[148123]: 2025-10-02T12:28:50Z|00194|binding|INFO|Claiming lport 474d330f-2b1e-463d-ac82-51937f7262ff for this chassis.
Oct 02 12:28:50 compute-0 ovn_controller[148123]: 2025-10-02T12:28:50Z|00195|binding|INFO|474d330f-2b1e-463d-ac82-51937f7262ff: Claiming fa:16:3e:e2:4c:25 10.100.0.12
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.914 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:4c:25 10.100.0.12'], port_security=['fa:16:3e:e2:4c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '08be0388-74af-4ba2-aeb8-32109a14c4dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=474d330f-2b1e-463d-ac82-51937f7262ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.915 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 474d330f-2b1e-463d-ac82-51937f7262ff in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 bound to our chassis
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.917 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.932 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9860f638-e96a-4761-8acf-2c4892a61002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.933 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeefd67eb-b1 in ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.936 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeefd67eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.936 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[38cd644a-de96-471c-96b6-0f686334be06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.937 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1ded88-7e78-4159-af97-6d099d2e7b57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:50 compute-0 systemd-machined[210927]: New machine qemu-27-instance-0000003f.
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.962 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2fad5d-c625-4f91-b264-5208e22387cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:50 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000003f.
Oct 02 12:28:50 compute-0 systemd-udevd[298782]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:28:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:50.991 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[50307e69-4b7f-458d-8e42-8d131642078a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:50 compute-0 nova_compute[256940]: 2025-10-02 12:28:50.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-0 ovn_controller[148123]: 2025-10-02T12:28:51Z|00196|binding|INFO|Setting lport 474d330f-2b1e-463d-ac82-51937f7262ff ovn-installed in OVS
Oct 02 12:28:51 compute-0 ovn_controller[148123]: 2025-10-02T12:28:51Z|00197|binding|INFO|Setting lport 474d330f-2b1e-463d-ac82-51937f7262ff up in Southbound
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-0 NetworkManager[44981]: <info>  [1759408131.0090] device (tap474d330f-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:28:51 compute-0 NetworkManager[44981]: <info>  [1759408131.0101] device (tap474d330f-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:28:51 compute-0 podman[298749]: 2025-10-02 12:28:51.026877308 +0000 UTC m=+0.087033110 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 12:28:51 compute-0 podman[298748]: 2025-10-02 12:28:51.029878865 +0000 UTC m=+0.095746922 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.033 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ed156aa9-4917-4968-bd59-815307aaa28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 systemd-udevd[298795]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:28:51 compute-0 NetworkManager[44981]: <info>  [1759408131.0405] manager: (tapeefd67eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.039 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[57fa1074-fd63-4087-8887-d65d41d41165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.078 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[189b418e-c503-4312-8296-f9dc73c71857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.081 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[52c1d3fa-b6fd-438c-a8a1-0a09d68eaab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 NetworkManager[44981]: <info>  [1759408131.1079] device (tapeefd67eb-b0): carrier: link connected
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.116 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cc987e20-9124-4215-94b6-1adb314b9fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.136 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[640fbac8-6ccd-4b9d-888d-7f018b1b6f1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602659, 'reachable_time': 27075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298824, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.156 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83f2f212-c046-46f0-a1ad-fb651bc53ee8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:db93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602659, 'tstamp': 602659}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298825, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.177 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11d5f244-94f0-401a-b690-d3d8905e5b58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602659, 'reachable_time': 27075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298826, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 148 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 4.9 MiB/s wr, 71 op/s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.215 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8a76bf34-d78c-475c-9f58-7170a69bfdaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.290 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6f4a54-c29e-4479-b031-1b2a7a5dfdbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.292 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.292 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.293 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeefd67eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-0 NetworkManager[44981]: <info>  [1759408131.2965] manager: (tapeefd67eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 02 12:28:51 compute-0 kernel: tapeefd67eb-b0: entered promiscuous mode
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.299 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeefd67eb-b0, col_values=(('external_ids', {'iface-id': '4a1c64ee-2e43-4924-ad64-0ba8b656d152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:51 compute-0 ovn_controller[148123]: 2025-10-02T12:28:51Z|00198|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.321 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.322 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cae86c-8cf2-4f72-af02-f5784b74f064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.323 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:28:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:51.324 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'env', 'PROCESS_TAG=haproxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:28:51 compute-0 ceph-mon[73668]: pgmap v1539: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.569 2 DEBUG nova.compute.manager [req-77fb3e97-abe1-4b71-840a-b68b398dc8d4 req-667faff8-3c34-4ec4-97ad-2f906526f4f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.570 2 DEBUG oslo_concurrency.lockutils [req-77fb3e97-abe1-4b71-840a-b68b398dc8d4 req-667faff8-3c34-4ec4-97ad-2f906526f4f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.570 2 DEBUG oslo_concurrency.lockutils [req-77fb3e97-abe1-4b71-840a-b68b398dc8d4 req-667faff8-3c34-4ec4-97ad-2f906526f4f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.570 2 DEBUG oslo_concurrency.lockutils [req-77fb3e97-abe1-4b71-840a-b68b398dc8d4 req-667faff8-3c34-4ec4-97ad-2f906526f4f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.570 2 DEBUG nova.compute.manager [req-77fb3e97-abe1-4b71-840a-b68b398dc8d4 req-667faff8-3c34-4ec4-97ad-2f906526f4f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Processing event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.653 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.653 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.653 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.679 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.680 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.680 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.680 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:51 compute-0 podman[298900]: 2025-10-02 12:28:51.703247676 +0000 UTC m=+0.024199515 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:28:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.910 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408131.909419, 08be0388-74af-4ba2-aeb8-32109a14c4dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.910 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] VM Started (Lifecycle Event)
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.913 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.917 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.921 2 INFO nova.virt.libvirt.driver [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance spawned successfully.
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.921 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.941 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.953 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.954 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.954 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.955 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.955 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.956 2 DEBUG nova.virt.libvirt.driver [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.959 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.996 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.996 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408131.90966, 08be0388-74af-4ba2-aeb8-32109a14c4dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:51 compute-0 nova_compute[256940]: 2025-10-02 12:28:51.996 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] VM Paused (Lifecycle Event)
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.028 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.034 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408131.9163516, 08be0388-74af-4ba2-aeb8-32109a14c4dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.035 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] VM Resumed (Lifecycle Event)
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.045 2 INFO nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Took 8.22 seconds to spawn the instance on the hypervisor.
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.045 2 DEBUG nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.055 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.058 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.095 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.130 2 INFO nova.compute.manager [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Took 9.26 seconds to build instance.
Oct 02 12:28:52 compute-0 nova_compute[256940]: 2025-10-02 12:28:52.152 2 DEBUG oslo_concurrency.lockutils [None req-76b2ff41-fdd4-4fb0-b365-0290155d8db0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:52 compute-0 podman[298900]: 2025-10-02 12:28:52.495593669 +0000 UTC m=+0.816545528 container create 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:52 compute-0 ceph-mon[73668]: pgmap v1540: 305 pgs: 305 active+clean; 148 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 4.9 MiB/s wr, 71 op/s
Oct 02 12:28:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/988252884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/639640980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2019023000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1091874348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-0 systemd[1]: Started libpod-conmon-371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a.scope.
Oct 02 12:28:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4421cd5734628a7ecb6a10d96d9ce3c1543e9585b2681947a6e426347b425ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:53 compute-0 podman[298900]: 2025-10-02 12:28:53.076499345 +0000 UTC m=+1.397451184 container init 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:28:53 compute-0 podman[298900]: 2025-10-02 12:28:53.084503448 +0000 UTC m=+1.405455267 container start 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:53 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [NOTICE]   (298921) : New worker (298923) forked
Oct 02 12:28:53 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [NOTICE]   (298921) : Loading success.
Oct 02 12:28:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 209 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 6.7 MiB/s wr, 112 op/s
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.654 2 DEBUG nova.compute.manager [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.655 2 DEBUG oslo_concurrency.lockutils [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.655 2 DEBUG oslo_concurrency.lockutils [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.655 2 DEBUG oslo_concurrency.lockutils [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.655 2 DEBUG nova.compute.manager [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] No waiting events found dispatching network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.656 2 WARNING nova.compute.manager [req-ad0192fa-eade-47b9-a88c-3c867e51ba1d req-fba8c8d0-9993-4aad-9a78-7bca02aaf6c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received unexpected event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff for instance with vm_state active and task_state None.
Oct 02 12:28:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2513697758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3622040677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:28:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.950 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.951 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.952 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.952 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.953 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.954 2 INFO nova.compute.manager [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Terminating instance
Oct 02 12:28:53 compute-0 nova_compute[256940]: 2025-10-02 12:28:53.956 2 DEBUG nova.compute.manager [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:28:54 compute-0 kernel: tap474d330f-2b (unregistering): left promiscuous mode
Oct 02 12:28:54 compute-0 NetworkManager[44981]: <info>  [1759408134.0144] device (tap474d330f-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:28:54 compute-0 ovn_controller[148123]: 2025-10-02T12:28:54Z|00199|binding|INFO|Releasing lport 474d330f-2b1e-463d-ac82-51937f7262ff from this chassis (sb_readonly=0)
Oct 02 12:28:54 compute-0 ovn_controller[148123]: 2025-10-02T12:28:54Z|00200|binding|INFO|Setting lport 474d330f-2b1e-463d-ac82-51937f7262ff down in Southbound
Oct 02 12:28:54 compute-0 ovn_controller[148123]: 2025-10-02T12:28:54Z|00201|binding|INFO|Removing iface tap474d330f-2b ovn-installed in OVS
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:54.078 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:4c:25 10.100.0.12'], port_security=['fa:16:3e:e2:4c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '08be0388-74af-4ba2-aeb8-32109a14c4dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=474d330f-2b1e-463d-ac82-51937f7262ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:54.079 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 474d330f-2b1e-463d-ac82-51937f7262ff in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 unbound from our chassis
Oct 02 12:28:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:54.081 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:28:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:54.082 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf52d43-c525-4094-b955-a575d615c5f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:54.082 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace which is not needed anymore
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:54 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 02 12:28:54 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003f.scope: Consumed 3.011s CPU time.
Oct 02 12:28:54 compute-0 systemd-machined[210927]: Machine qemu-27-instance-0000003f terminated.
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.197 2 INFO nova.virt.libvirt.driver [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Instance destroyed successfully.
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.198 2 DEBUG nova.objects.instance [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'resources' on Instance uuid 08be0388-74af-4ba2-aeb8-32109a14c4dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.218 2 DEBUG nova.virt.libvirt.vif [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1681410933',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1681410933',id=63,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-fmc5oxgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:52Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=08be0388-74af-4ba2-aeb8-32109a14c4dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.219 2 DEBUG nova.network.os_vif_util [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "474d330f-2b1e-463d-ac82-51937f7262ff", "address": "fa:16:3e:e2:4c:25", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap474d330f-2b", "ovs_interfaceid": "474d330f-2b1e-463d-ac82-51937f7262ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.220 2 DEBUG nova.network.os_vif_util [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.220 2 DEBUG os_vif [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap474d330f-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:54 compute-0 nova_compute[256940]: 2025-10-02 12:28:54.229 2 INFO os_vif [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:4c:25,bridge_name='br-int',has_traffic_filtering=True,id=474d330f-2b1e-463d-ac82-51937f7262ff,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap474d330f-2b')
Oct 02 12:28:54 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [NOTICE]   (298921) : haproxy version is 2.8.14-c23fe91
Oct 02 12:28:54 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [NOTICE]   (298921) : path to executable is /usr/sbin/haproxy
Oct 02 12:28:54 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [WARNING]  (298921) : Exiting Master process...
Oct 02 12:28:54 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [ALERT]    (298921) : Current worker (298923) exited with code 143 (Terminated)
Oct 02 12:28:54 compute-0 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[298915]: [WARNING]  (298921) : All workers exited. Exiting... (0)
Oct 02 12:28:54 compute-0 systemd[1]: libpod-371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a.scope: Deactivated successfully.
Oct 02 12:28:54 compute-0 podman[298958]: 2025-10-02 12:28:54.311926434 +0000 UTC m=+0.124156522 container died 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:28:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:54 compute-0 sudo[298995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:54 compute-0 sudo[298995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:54 compute-0 sudo[298995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:54 compute-0 sudo[299035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:54 compute-0 sudo[299035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:54 compute-0 sudo[299035]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4421cd5734628a7ecb6a10d96d9ce3c1543e9585b2681947a6e426347b425ec-merged.mount: Deactivated successfully.
Oct 02 12:28:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 7.1 MiB/s wr, 178 op/s
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:55 compute-0 ceph-mon[73668]: pgmap v1541: 305 pgs: 305 active+clean; 209 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 6.7 MiB/s wr, 112 op/s
Oct 02 12:28:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3896983913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:55 compute-0 podman[298958]: 2025-10-02 12:28:55.398950956 +0000 UTC m=+1.211181084 container cleanup 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:28:55 compute-0 systemd[1]: libpod-conmon-371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a.scope: Deactivated successfully.
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.756 2 DEBUG nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-unplugged-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.757 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.757 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.757 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.758 2 DEBUG nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] No waiting events found dispatching network-vif-unplugged-474d330f-2b1e-463d-ac82-51937f7262ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.758 2 DEBUG nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-unplugged-474d330f-2b1e-463d-ac82-51937f7262ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.759 2 DEBUG nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.759 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.760 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.760 2 DEBUG oslo_concurrency.lockutils [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.760 2 DEBUG nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] No waiting events found dispatching network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:55 compute-0 nova_compute[256940]: 2025-10-02 12:28:55.761 2 WARNING nova.compute.manager [req-cda178a0-1b31-4d34-9d5a-7979e6e4f539 req-21bf58b1-3ec4-4157-8738-63d25406f5d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received unexpected event network-vif-plugged-474d330f-2b1e-463d-ac82-51937f7262ff for instance with vm_state active and task_state deleting.
Oct 02 12:28:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:55.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:56 compute-0 podman[299066]: 2025-10-02 12:28:56.334564255 +0000 UTC m=+0.896506497 container remove 371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.345 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1f4d5d-64d0-4a2c-a474-c32499d88e3b]: (4, ('Thu Oct  2 12:28:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a)\n371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a\nThu Oct  2 12:28:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a)\n371bf8f790c906e1eb3e3cca7913bb05d76d172d73f93302d994ce138fbe151a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.347 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd44768-684a-4fa7-ad28-a7b3dd3681b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.348 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:56 compute-0 kernel: tapeefd67eb-b0: left promiscuous mode
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.369 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[afb740b2-1236-47f1-84e2-de72d279dcf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.403 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc39895-a852-4cf2-a3a9-f048d15d7be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.405 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[33117c1a-f6f0-4aea-b151-3228d5c8ee4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.422 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee1afb7-c9e5-4f5a-b0fc-c24429f85bdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602650, 'reachable_time': 37876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299082, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.425 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:28:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:28:56.425 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c5113724-b422-4547-a0c9-5452b11b281c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:56 compute-0 systemd[1]: run-netns-ovnmeta\x2deefd67eb\x2db4b6\x2d4162\x2dbbdd\x2d0cce7dbdb491.mount: Deactivated successfully.
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.565 2 INFO nova.virt.libvirt.driver [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Deleting instance files /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd_del
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.566 2 INFO nova.virt.libvirt.driver [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Deletion of /var/lib/nova/instances/08be0388-74af-4ba2-aeb8-32109a14c4dd_del complete
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.634 2 INFO nova.compute.manager [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Took 2.68 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.635 2 DEBUG oslo.service.loopingcall [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.636 2 DEBUG nova.compute.manager [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:56 compute-0 nova_compute[256940]: 2025-10-02 12:28:56.636 2 DEBUG nova.network.neutron [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:56 compute-0 ceph-mon[73668]: pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 7.1 MiB/s wr, 178 op/s
Oct 02 12:28:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.0 MiB/s wr, 294 op/s
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.261 2 DEBUG nova.network.neutron [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.302 2 INFO nova.compute.manager [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Took 0.67 seconds to deallocate network for instance.
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.370 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.371 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.427 2 DEBUG oslo_concurrency.processutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:28:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:57.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:28:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 02 12:28:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248735020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.984 2 DEBUG oslo_concurrency.processutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:57 compute-0 nova_compute[256940]: 2025-10-02 12:28:57.991 2 DEBUG nova.compute.provider_tree [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:58 compute-0 nova_compute[256940]: 2025-10-02 12:28:58.009 2 DEBUG nova.scheduler.client.report [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:58 compute-0 nova_compute[256940]: 2025-10-02 12:28:58.030 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:58 compute-0 nova_compute[256940]: 2025-10-02 12:28:58.046 2 DEBUG nova.compute.manager [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Received event network-vif-deleted-474d330f-2b1e-463d-ac82-51937f7262ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:58 compute-0 nova_compute[256940]: 2025-10-02 12:28:58.065 2 INFO nova.scheduler.client.report [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Deleted allocations for instance 08be0388-74af-4ba2-aeb8-32109a14c4dd
Oct 02 12:28:58 compute-0 nova_compute[256940]: 2025-10-02 12:28:58.144 2 DEBUG oslo_concurrency.lockutils [None req-a629b115-f5b4-4da0-91d8-a2dd6d0726d8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "08be0388-74af-4ba2-aeb8-32109a14c4dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 02 12:28:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1248735020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:28:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:28:59 compute-0 nova_compute[256940]: 2025-10-02 12:28:59.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:59 compute-0 ceph-mon[73668]: pgmap v1543: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.0 MiB/s wr, 294 op/s
Oct 02 12:28:59 compute-0 ceph-mon[73668]: osdmap e217: 3 total, 3 up, 3 in
Oct 02 12:28:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:28:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:00 compute-0 nova_compute[256940]: 2025-10-02 12:29:00.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:00 compute-0 ceph-mon[73668]: pgmap v1545: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:29:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3081959650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/81590209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 227 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 469 op/s
Oct 02 12:29:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:01.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 02 12:29:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1362851634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/93481825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 02 12:29:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 02 12:29:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:03 compute-0 ceph-mon[73668]: pgmap v1546: 305 pgs: 305 active+clean; 227 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 469 op/s
Oct 02 12:29:03 compute-0 ceph-mon[73668]: osdmap e218: 3 total, 3 up, 3 in
Oct 02 12:29:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 262 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.6 MiB/s wr, 469 op/s
Oct 02 12:29:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 02 12:29:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 02 12:29:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 02 12:29:04 compute-0 nova_compute[256940]: 2025-10-02 12:29:04.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:04 compute-0 ceph-mon[73668]: pgmap v1548: 305 pgs: 305 active+clean; 262 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.6 MiB/s wr, 469 op/s
Oct 02 12:29:04 compute-0 ceph-mon[73668]: osdmap e219: 3 total, 3 up, 3 in
Oct 02 12:29:04 compute-0 nova_compute[256940]: 2025-10-02 12:29:04.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 4.6 MiB/s wr, 466 op/s
Oct 02 12:29:05 compute-0 nova_compute[256940]: 2025-10-02 12:29:05.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:05 compute-0 podman[299112]: 2025-10-02 12:29:05.4189857 +0000 UTC m=+0.081492059 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:29:05 compute-0 podman[299113]: 2025-10-02 12:29:05.464523136 +0000 UTC m=+0.117793861 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:29:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:29:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:05.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:29:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:06 compute-0 sudo[299159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-0 sudo[299159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[299159]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 sudo[299184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:06 compute-0 sudo[299184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[299184]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 sudo[299209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-0 sudo[299209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-0 sudo[299209]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-0 sudo[299234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:29:06 compute-0 sudo[299234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:07 compute-0 ceph-mon[73668]: pgmap v1550: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 4.6 MiB/s wr, 466 op/s
Oct 02 12:29:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.5 MiB/s wr, 544 op/s
Oct 02 12:29:07 compute-0 sudo[299234]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b33f6d9d-b752-429c-a173-4c15f3e60374 does not exist
Oct 02 12:29:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1344caf2-41f9-4a6c-aab0-33ece05c560d does not exist
Oct 02 12:29:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 89c861f9-d267-4d9b-a09a-eeb42f9deb1d does not exist
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:29:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:07.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:07 compute-0 sudo[299291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:07 compute-0 sudo[299291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:07 compute-0 sudo[299291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:07 compute-0 sudo[299316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:07 compute-0 sudo[299316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:07 compute-0 sudo[299316]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:08 compute-0 sudo[299341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:08 compute-0 sudo[299341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:08 compute-0 sudo[299341]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:08 compute-0 sudo[299366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:29:08 compute-0 sudo[299366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:08 compute-0 podman[299431]: 2025-10-02 12:29:08.454141983 +0000 UTC m=+0.021428185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:09 compute-0 ceph-mon[73668]: pgmap v1551: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.5 MiB/s wr, 544 op/s
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:29:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:09 compute-0 podman[299431]: 2025-10-02 12:29:09.070229542 +0000 UTC m=+0.637515724 container create 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:29:09 compute-0 nova_compute[256940]: 2025-10-02 12:29:09.195 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408134.1938226, 08be0388-74af-4ba2-aeb8-32109a14c4dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:09 compute-0 nova_compute[256940]: 2025-10-02 12:29:09.196 2 INFO nova.compute.manager [-] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] VM Stopped (Lifecycle Event)
Oct 02 12:29:09 compute-0 nova_compute[256940]: 2025-10-02 12:29:09.215 2 DEBUG nova.compute.manager [None req-0790afdd-c49e-4b2b-9615-74f77fc13326 - - - - - -] [instance: 08be0388-74af-4ba2-aeb8-32109a14c4dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.2 MiB/s wr, 334 op/s
Oct 02 12:29:09 compute-0 nova_compute[256940]: 2025-10-02 12:29:09.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:09 compute-0 systemd[1]: Started libpod-conmon-0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535.scope.
Oct 02 12:29:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:09 compute-0 podman[299431]: 2025-10-02 12:29:09.480489596 +0000 UTC m=+1.047775798 container init 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:29:09 compute-0 podman[299431]: 2025-10-02 12:29:09.489818933 +0000 UTC m=+1.057105115 container start 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:09 compute-0 zen_mirzakhani[299448]: 167 167
Oct 02 12:29:09 compute-0 systemd[1]: libpod-0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535.scope: Deactivated successfully.
Oct 02 12:29:09 compute-0 podman[299431]: 2025-10-02 12:29:09.665772448 +0000 UTC m=+1.233058730 container attach 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:29:09 compute-0 podman[299431]: 2025-10-02 12:29:09.667950863 +0000 UTC m=+1.235237055 container died 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:09.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:10 compute-0 nova_compute[256940]: 2025-10-02 12:29:10.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-689d233f8b5062ab0ae090230a42f3e0652cb17b8cf6c9e694a550d9bc754aff-merged.mount: Deactivated successfully.
Oct 02 12:29:11 compute-0 ceph-mon[73668]: pgmap v1552: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.2 MiB/s wr, 334 op/s
Oct 02 12:29:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 250 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Oct 02 12:29:11 compute-0 podman[299431]: 2025-10-02 12:29:11.590920955 +0000 UTC m=+3.158207137 container remove 0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:29:11 compute-0 systemd[1]: libpod-conmon-0d068468ab170620dac5da7180f08fb1f0065fcfa58d682f3b62f3ca2b680535.scope: Deactivated successfully.
Oct 02 12:29:11 compute-0 podman[299473]: 2025-10-02 12:29:11.752957778 +0000 UTC m=+0.021785604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:11.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:12 compute-0 podman[299473]: 2025-10-02 12:29:12.197564204 +0000 UTC m=+0.466392010 container create 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:12 compute-0 systemd[1]: Started libpod-conmon-0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520.scope.
Oct 02 12:29:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:12 compute-0 ceph-mon[73668]: pgmap v1553: 305 pgs: 305 active+clean; 250 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Oct 02 12:29:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1899312348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:12.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:12 compute-0 podman[299473]: 2025-10-02 12:29:12.43573742 +0000 UTC m=+0.704565236 container init 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:29:12 compute-0 podman[299473]: 2025-10-02 12:29:12.447305553 +0000 UTC m=+0.716133359 container start 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:29:12 compute-0 podman[299473]: 2025-10-02 12:29:12.510557729 +0000 UTC m=+0.779385535 container attach 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 273 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 350 op/s
Oct 02 12:29:13 compute-0 bold_cannon[299489]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:29:13 compute-0 bold_cannon[299489]: --> relative data size: 1.0
Oct 02 12:29:13 compute-0 bold_cannon[299489]: --> All data devices are unavailable
Oct 02 12:29:13 compute-0 systemd[1]: libpod-0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520.scope: Deactivated successfully.
Oct 02 12:29:13 compute-0 podman[299473]: 2025-10-02 12:29:13.270598071 +0000 UTC m=+1.539425877 container died 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:29:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 02 12:29:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 02 12:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-664d67742168bfe0a2ed10ebd4cb9aabd24f7a1a73446b29fafaef074d3e46b6-merged.mount: Deactivated successfully.
Oct 02 12:29:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 02 12:29:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:13.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:13 compute-0 podman[299473]: 2025-10-02 12:29:13.91979276 +0000 UTC m=+2.188620566 container remove 0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:29:13 compute-0 systemd[1]: libpod-conmon-0f933b63219a05cb3c528bfeb8c86ceb1fea316334b65bf63748ae8f4c532520.scope: Deactivated successfully.
Oct 02 12:29:13 compute-0 sudo[299366]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 sudo[299520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 sudo[299520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[299520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 sudo[299545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:14 compute-0 sudo[299545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[299545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 sudo[299570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 sudo[299570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[299570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 sudo[299595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:29:14 compute-0 sudo[299595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 nova_compute[256940]: 2025-10-02 12:29:14.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:14 compute-0 sudo[299657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 sudo[299657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[299657]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.528785588 +0000 UTC m=+0.055515610 container create 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:29:14 compute-0 sudo[299697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:14 compute-0 sudo[299697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:14 compute-0 sudo[299697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.495474813 +0000 UTC m=+0.022204855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:14 compute-0 systemd[1]: Started libpod-conmon-332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e.scope.
Oct 02 12:29:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:14 compute-0 ceph-mon[73668]: pgmap v1554: 305 pgs: 305 active+clean; 273 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 350 op/s
Oct 02 12:29:14 compute-0 ceph-mon[73668]: osdmap e220: 3 total, 3 up, 3 in
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.884440996 +0000 UTC m=+0.411171038 container init 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.89324396 +0000 UTC m=+0.419973982 container start 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 12:29:14 compute-0 goofy_dubinsky[299724]: 167 167
Oct 02 12:29:14 compute-0 systemd[1]: libpod-332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e.scope: Deactivated successfully.
Oct 02 12:29:14 compute-0 conmon[299724]: conmon 332fa17f5768333c3b3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e.scope/container/memory.events
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.962837906 +0000 UTC m=+0.489567948 container attach 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:29:14 compute-0 podman[299659]: 2025-10-02 12:29:14.96417287 +0000 UTC m=+0.490902892 container died 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-833e2c4972bb84dc2c2f3bba9f05ce6ef094da22bb7b58c65a22d58a971886dc-merged.mount: Deactivated successfully.
Oct 02 12:29:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 287 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 281 op/s
Oct 02 12:29:15 compute-0 podman[299659]: 2025-10-02 12:29:15.29154032 +0000 UTC m=+0.818270342 container remove 332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dubinsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:15 compute-0 systemd[1]: libpod-conmon-332fa17f5768333c3b3a3682d407be4ceee0f3dff6e7d60ecf7e01cb7aa1065e.scope: Deactivated successfully.
Oct 02 12:29:15 compute-0 nova_compute[256940]: 2025-10-02 12:29:15.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:15 compute-0 podman[299749]: 2025-10-02 12:29:15.511519433 +0000 UTC m=+0.081354035 container create d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:15 compute-0 podman[299749]: 2025-10-02 12:29:15.460352565 +0000 UTC m=+0.030187177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:15 compute-0 systemd[1]: Started libpod-conmon-d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f.scope.
Oct 02 12:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8bb3e1a2af21d31a72d0824c164f53e7834982c13d265b01ff49508ffbff3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8bb3e1a2af21d31a72d0824c164f53e7834982c13d265b01ff49508ffbff3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8bb3e1a2af21d31a72d0824c164f53e7834982c13d265b01ff49508ffbff3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8bb3e1a2af21d31a72d0824c164f53e7834982c13d265b01ff49508ffbff3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:15 compute-0 podman[299749]: 2025-10-02 12:29:15.70913144 +0000 UTC m=+0.278966062 container init d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:29:15 compute-0 podman[299749]: 2025-10-02 12:29:15.719283858 +0000 UTC m=+0.289118460 container start d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:29:15 compute-0 podman[299749]: 2025-10-02 12:29:15.732181265 +0000 UTC m=+0.302015867 container attach d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 02 12:29:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 02 12:29:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:15 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 02 12:29:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]: {
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:     "1": [
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:         {
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "devices": [
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "/dev/loop3"
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             ],
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "lv_name": "ceph_lv0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "lv_size": "7511998464",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "name": "ceph_lv0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "tags": {
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.cluster_name": "ceph",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.crush_device_class": "",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.encrypted": "0",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.osd_id": "1",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.type": "block",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:                 "ceph.vdo": "0"
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             },
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "type": "block",
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:             "vg_name": "ceph_vg0"
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:         }
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]:     ]
Oct 02 12:29:16 compute-0 fervent_torvalds[299765]: }
Oct 02 12:29:16 compute-0 systemd[1]: libpod-d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f.scope: Deactivated successfully.
Oct 02 12:29:16 compute-0 podman[299749]: 2025-10-02 12:29:16.548306161 +0000 UTC m=+1.118140763 container died d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:29:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d8bb3e1a2af21d31a72d0824c164f53e7834982c13d265b01ff49508ffbff3b-merged.mount: Deactivated successfully.
Oct 02 12:29:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 12:29:17 compute-0 ceph-mon[73668]: pgmap v1556: 305 pgs: 305 active+clean; 287 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 281 op/s
Oct 02 12:29:17 compute-0 ceph-mon[73668]: osdmap e221: 3 total, 3 up, 3 in
Oct 02 12:29:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:17.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:18 compute-0 podman[299749]: 2025-10-02 12:29:18.101981029 +0000 UTC m=+2.671815631 container remove d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:29:18 compute-0 systemd[1]: libpod-conmon-d399ef412b1042307b1f34ed3118803d12b9f5daf18b46ccef780e92cf8d246f.scope: Deactivated successfully.
Oct 02 12:29:18 compute-0 sudo[299595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:18 compute-0 sudo[299789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:18 compute-0 sudo[299789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:18 compute-0 sudo[299789]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:18 compute-0 sudo[299814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:18 compute-0 sudo[299814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:18 compute-0 sudo[299814]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:18 compute-0 sudo[299839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:18 compute-0 sudo[299839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:18 compute-0 sudo[299839]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:18 compute-0 sudo[299864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:29:18 compute-0 sudo[299864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:18 compute-0 podman[299929]: 2025-10-02 12:29:18.710129466 +0000 UTC m=+0.021873896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:18 compute-0 ceph-mon[73668]: pgmap v1558: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 12:29:18 compute-0 podman[299929]: 2025-10-02 12:29:18.907358382 +0000 UTC m=+0.219102792 container create c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:29:19 compute-0 systemd[1]: Started libpod-conmon-c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e.scope.
Oct 02 12:29:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 02 12:29:19 compute-0 podman[299929]: 2025-10-02 12:29:19.300798409 +0000 UTC m=+0.612542919 container init c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:29:19 compute-0 podman[299929]: 2025-10-02 12:29:19.308630078 +0000 UTC m=+0.620374488 container start c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:29:19 compute-0 beautiful_mclaren[299945]: 167 167
Oct 02 12:29:19 compute-0 systemd[1]: libpod-c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e.scope: Deactivated successfully.
Oct 02 12:29:19 compute-0 nova_compute[256940]: 2025-10-02 12:29:19.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:19 compute-0 podman[299929]: 2025-10-02 12:29:19.478968252 +0000 UTC m=+0.790712672 container attach c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:29:19 compute-0 podman[299929]: 2025-10-02 12:29:19.480260584 +0000 UTC m=+0.792004994 container died c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c64203d81beee1344cd602145b7f9cf30dd07030a821d2ab6ecd2b10746e141f-merged.mount: Deactivated successfully.
Oct 02 12:29:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 02 12:29:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 02 12:29:19 compute-0 podman[299929]: 2025-10-02 12:29:19.947526556 +0000 UTC m=+1.259270966 container remove c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:29:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 02 12:29:19 compute-0 systemd[1]: libpod-conmon-c0a22ed5ba31c53b89c4dd75ede6be36463275cc568d13f902d0187ff619b50e.scope: Deactivated successfully.
Oct 02 12:29:20 compute-0 podman[299971]: 2025-10-02 12:29:20.140392302 +0000 UTC m=+0.057598524 container create 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:29:20 compute-0 podman[299971]: 2025-10-02 12:29:20.106412349 +0000 UTC m=+0.023618601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:29:20 compute-0 systemd[1]: Started libpod-conmon-88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8.scope.
Oct 02 12:29:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfc9019882ec111c1e714e499613b992ca84ee7495a57cf893deab4e2e0a7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfc9019882ec111c1e714e499613b992ca84ee7495a57cf893deab4e2e0a7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfc9019882ec111c1e714e499613b992ca84ee7495a57cf893deab4e2e0a7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfc9019882ec111c1e714e499613b992ca84ee7495a57cf893deab4e2e0a7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:20 compute-0 podman[299971]: 2025-10-02 12:29:20.370651515 +0000 UTC m=+0.287857757 container init 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:29:20 compute-0 podman[299971]: 2025-10-02 12:29:20.378008392 +0000 UTC m=+0.295214614 container start 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:29:20 compute-0 nova_compute[256940]: 2025-10-02 12:29:20.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:20 compute-0 podman[299971]: 2025-10-02 12:29:20.437135803 +0000 UTC m=+0.354342055 container attach 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 12:29:20 compute-0 ceph-mon[73668]: pgmap v1559: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 02 12:29:20 compute-0 ceph-mon[73668]: osdmap e222: 3 total, 3 up, 3 in
Oct 02 12:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 02 12:29:21 compute-0 pensive_haibt[299987]: {
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:         "osd_id": 1,
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:         "type": "bluestore"
Oct 02 12:29:21 compute-0 pensive_haibt[299987]:     }
Oct 02 12:29:21 compute-0 pensive_haibt[299987]: }
Oct 02 12:29:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 347 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.1 MiB/s wr, 239 op/s
Oct 02 12:29:21 compute-0 systemd[1]: libpod-88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8.scope: Deactivated successfully.
Oct 02 12:29:21 compute-0 podman[299971]: 2025-10-02 12:29:21.240333431 +0000 UTC m=+1.157539663 container died 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 02 12:29:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 02 12:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcbfc9019882ec111c1e714e499613b992ca84ee7495a57cf893deab4e2e0a7b-merged.mount: Deactivated successfully.
Oct 02 12:29:21 compute-0 podman[299971]: 2025-10-02 12:29:21.62558604 +0000 UTC m=+1.542792262 container remove 88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:29:21 compute-0 systemd[1]: libpod-conmon-88c441949f183ca17e2b38d504a0918813d95aedaf12db546b2dd65e0db618f8.scope: Deactivated successfully.
Oct 02 12:29:21 compute-0 sudo[299864]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:21 compute-0 podman[300016]: 2025-10-02 12:29:21.667933985 +0000 UTC m=+0.395423018 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:29:21 compute-0 podman[300009]: 2025-10-02 12:29:21.667969116 +0000 UTC m=+0.397325206 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:29:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:29:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:21.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 357441ad-0645-4e45-88b9-2de6e953bd8d does not exist
Oct 02 12:29:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d21f8a6c-5dba-46ff-9536-ed35c87b9717 does not exist
Oct 02 12:29:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 01843fcb-f589-4f8f-811e-1a05e59105aa does not exist
Oct 02 12:29:21 compute-0 sudo[300063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:21 compute-0 sudo[300063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:21 compute-0 sudo[300063]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:22 compute-0 sudo[300088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:29:22 compute-0 sudo[300088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:22 compute-0 sudo[300088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:22.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:22 compute-0 ceph-mon[73668]: pgmap v1561: 305 pgs: 305 active+clean; 347 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.1 MiB/s wr, 239 op/s
Oct 02 12:29:22 compute-0 ceph-mon[73668]: osdmap e223: 3 total, 3 up, 3 in
Oct 02 12:29:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 405 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 263 op/s
Oct 02 12:29:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:24 compute-0 nova_compute[256940]: 2025-10-02 12:29:24.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:24 compute-0 ceph-mon[73668]: pgmap v1563: 305 pgs: 305 active+clean; 405 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 263 op/s
Oct 02 12:29:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 373 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 8.1 MiB/s wr, 174 op/s
Oct 02 12:29:25 compute-0 nova_compute[256940]: 2025-10-02 12:29:25.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/336891820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:26.464 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:26.464 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:26.465 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:26 compute-0 ceph-mon[73668]: pgmap v1564: 305 pgs: 305 active+clean; 373 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 8.1 MiB/s wr, 174 op/s
Oct 02 12:29:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1261912510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 243 op/s
Oct 02 12:29:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 02 12:29:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 02 12:29:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 02 12:29:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:29:28
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms']
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:29:28 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:29:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 02 12:29:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 02 12:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:29:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 596 KiB/s rd, 7.2 MiB/s wr, 155 op/s
Oct 02 12:29:29 compute-0 ceph-mon[73668]: pgmap v1565: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 243 op/s
Oct 02 12:29:29 compute-0 ceph-mon[73668]: osdmap e224: 3 total, 3 up, 3 in
Oct 02 12:29:29 compute-0 ceph-mgr[73961]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.429 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.429 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.442 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.515 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.515 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.521 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.522 2 INFO nova.compute.claims [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:29:29 compute-0 nova_compute[256940]: 2025-10-02 12:29:29.687 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:29.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314630718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.159 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.165 2 DEBUG nova.compute.provider_tree [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:30 compute-0 ceph-mon[73668]: osdmap e225: 3 total, 3 up, 3 in
Oct 02 12:29:30 compute-0 ceph-mon[73668]: pgmap v1568: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 596 KiB/s rd, 7.2 MiB/s wr, 155 op/s
Oct 02 12:29:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2314630718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.418 2 DEBUG nova.scheduler.client.report [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.474 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.475 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.557 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.557 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.576 2 INFO nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.604 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.765 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.766 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.766 2 INFO nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Creating image(s)
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.791 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.820 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.845 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.849 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.883 2 DEBUG nova.policy [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fcf5a512c0c24bebbd413916d07e5b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4026877484004ce08dcaf59271d4309e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.924 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.925 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.926 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.926 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.952 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:30 compute-0 nova_compute[256940]: 2025-10-02 12:29:30.957 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5a441f33-2ccc-4111-880f-c2c184d6c078_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 02 12:29:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 385 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Oct 02 12:29:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 02 12:29:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.594 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5a441f33-2ccc-4111-880f-c2c184d6c078_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.696 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] resizing rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.833 2 DEBUG nova.objects.instance [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lazy-loading 'migration_context' on Instance uuid 5a441f33-2ccc-4111-880f-c2c184d6c078 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.905 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.906 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Ensure instance console log exists: /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.907 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.907 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:31 compute-0 nova_compute[256940]: 2025-10-02 12:29:31.907 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:32 compute-0 nova_compute[256940]: 2025-10-02 12:29:32.272 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Successfully created port: d7ff9261-f76d-4480-97fa-10fe3308acd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:29:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:32 compute-0 ceph-mon[73668]: pgmap v1569: 305 pgs: 305 active+clean; 385 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Oct 02 12:29:32 compute-0 ceph-mon[73668]: osdmap e226: 3 total, 3 up, 3 in
Oct 02 12:29:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 02 12:29:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 02 12:29:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 02 12:29:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 484 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 298 op/s
Oct 02 12:29:33 compute-0 nova_compute[256940]: 2025-10-02 12:29:33.312 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Successfully updated port: d7ff9261-f76d-4480-97fa-10fe3308acd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:29:33 compute-0 nova_compute[256940]: 2025-10-02 12:29:33.339 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:33 compute-0 nova_compute[256940]: 2025-10-02 12:29:33.340 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquired lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:33 compute-0 nova_compute[256940]: 2025-10-02 12:29:33.340 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:33 compute-0 nova_compute[256940]: 2025-10-02 12:29:33.523 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:33 compute-0 ceph-mon[73668]: osdmap e227: 3 total, 3 up, 3 in
Oct 02 12:29:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:33.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:34 compute-0 nova_compute[256940]: 2025-10-02 12:29:34.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:34 compute-0 sudo[300308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:34 compute-0 sudo[300308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:34 compute-0 sudo[300308]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:34 compute-0 sudo[300333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:34 compute-0 sudo[300333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:34 compute-0 sudo[300333]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:34 compute-0 ceph-mon[73668]: pgmap v1572: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 484 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 298 op/s
Oct 02 12:29:34 compute-0 nova_compute[256940]: 2025-10-02 12:29:34.923 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-changed-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:34 compute-0 nova_compute[256940]: 2025-10-02 12:29:34.924 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Refreshing instance network info cache due to event network-changed-d7ff9261-f76d-4480-97fa-10fe3308acd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:34 compute-0 nova_compute[256940]: 2025-10-02 12:29:34.924 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 491 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 346 op/s
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.285 2 DEBUG nova.network.neutron [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Updating instance_info_cache with network_info: [{"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.304 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Releasing lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.305 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Instance network_info: |[{"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.305 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.306 2 DEBUG nova.network.neutron [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Refreshing network info cache for port d7ff9261-f76d-4480-97fa-10fe3308acd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.309 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Start _get_guest_xml network_info=[{"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.314 2 WARNING nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.319 2 DEBUG nova.virt.libvirt.host [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.320 2 DEBUG nova.virt.libvirt.host [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.327 2 DEBUG nova.virt.libvirt.host [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.328 2 DEBUG nova.virt.libvirt.host [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.329 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.330 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.330 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.330 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.331 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.331 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.331 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.331 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.331 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.332 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.332 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.332 2 DEBUG nova.virt.hardware [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.335 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654852695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.780 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.811 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:35 compute-0 nova_compute[256940]: 2025-10-02 12:29:35.816 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:35.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1493501614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2654852695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.246 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.247 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.265 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:29:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972595771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.297 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.299 2 DEBUG nova.virt.libvirt.vif [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-850216628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-850216628',id=68,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4026877484004ce08dcaf59271d4309e',ramdisk_id='',reservation_id='r-tcyjeuc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-2060127866',owner_user_name='tempest-InstanceActionsV221TestJSON-2060127866-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:30Z,user_data=None,user_id='fcf5a512c0c24bebbd413916d07e5b77',uuid=5a441f33-2ccc-4111-880f-c2c184d6c078,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.299 2 DEBUG nova.network.os_vif_util [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converting VIF {"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.300 2 DEBUG nova.network.os_vif_util [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.301 2 DEBUG nova.objects.instance [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a441f33-2ccc-4111-880f-c2c184d6c078 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.329 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <uuid>5a441f33-2ccc-4111-880f-c2c184d6c078</uuid>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <name>instance-00000044</name>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-850216628</nova:name>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:29:35</nova:creationTime>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:user uuid="fcf5a512c0c24bebbd413916d07e5b77">tempest-InstanceActionsV221TestJSON-2060127866-project-member</nova:user>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:project uuid="4026877484004ce08dcaf59271d4309e">tempest-InstanceActionsV221TestJSON-2060127866</nova:project>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <nova:port uuid="d7ff9261-f76d-4480-97fa-10fe3308acd8">
Oct 02 12:29:36 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <system>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="serial">5a441f33-2ccc-4111-880f-c2c184d6c078</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="uuid">5a441f33-2ccc-4111-880f-c2c184d6c078</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </system>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <os>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </os>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <features>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </features>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/5a441f33-2ccc-4111-880f-c2c184d6c078_disk">
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config">
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:29:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:1d:6a:79"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <target dev="tapd7ff9261-f7"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/console.log" append="off"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <video>
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </video>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:29:36 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:29:36 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:29:36 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:29:36 compute-0 nova_compute[256940]: </domain>
Oct 02 12:29:36 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.330 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Preparing to wait for external event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.330 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.330 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.330 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.331 2 DEBUG nova.virt.libvirt.vif [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-850216628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-850216628',id=68,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4026877484004ce08dcaf59271d4309e',ramdisk_id='',reservation_id='r-tcyjeuc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-2060127866',owner_user_name='tempest-InstanceActionsV221TestJSON-2060127866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:30Z,user_data=None,user_id='fcf5a512c0c24bebbd413916d07e5b77',uuid=5a441f33-2ccc-4111-880f-c2c184d6c078,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.331 2 DEBUG nova.network.os_vif_util [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converting VIF {"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.332 2 DEBUG nova.network.os_vif_util [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.332 2 DEBUG os_vif [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7ff9261-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7ff9261-f7, col_values=(('external_ids', {'iface-id': 'd7ff9261-f76d-4480-97fa-10fe3308acd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:6a:79', 'vm-uuid': '5a441f33-2ccc-4111-880f-c2c184d6c078'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:36 compute-0 NetworkManager[44981]: <info>  [1759408176.3462] manager: (tapd7ff9261-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.355 2 INFO os_vif [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7')
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.359 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.359 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.375 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.376 2 INFO nova.compute.claims [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:29:36 compute-0 podman[300420]: 2025-10-02 12:29:36.396953928 +0000 UTC m=+0.060727083 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:29:36 compute-0 podman[300421]: 2025-10-02 12:29:36.42343233 +0000 UTC m=+0.084721382 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.625 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.706 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.706 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.707 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] No VIF found with MAC fa:16:3e:1d:6a:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.707 2 INFO nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Using config drive
Oct 02 12:29:36 compute-0 nova_compute[256940]: 2025-10-02 12:29:36.735 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268243486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:37 compute-0 ceph-mon[73668]: pgmap v1573: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 491 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 346 op/s
Oct 02 12:29:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1972595771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.064 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.070 2 DEBUG nova.compute.provider_tree [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.089 2 DEBUG nova.scheduler.client.report [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.115 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.115 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.160 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.160 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.183 2 INFO nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.204 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:29:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 8.5 MiB/s wr, 321 op/s
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.306 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.307 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.308 2 INFO nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating image(s)
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.332 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.359 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.388 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.392 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.462 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.463 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.464 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.464 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.489 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.492 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a059e667-9c76-4f71-92d2-9490d75ce24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.525 2 DEBUG nova.network.neutron [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Updated VIF entry in instance network info cache for port d7ff9261-f76d-4480-97fa-10fe3308acd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.526 2 DEBUG nova.network.neutron [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Updating instance_info_cache with network_info: [{"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.587 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5a441f33-2ccc-4111-880f-c2c184d6c078" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.609 2 DEBUG nova.policy [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.862 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a059e667-9c76-4f71-92d2-9490d75ce24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:37.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.899 2 INFO nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Creating config drive at /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.904 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpei5nyaa7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:37 compute-0 nova_compute[256940]: 2025-10-02 12:29:37.978 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.042 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpei5nyaa7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.071 2 DEBUG nova.storage.rbd_utils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] rbd image 5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3268243486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.076 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config 5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.183 2 DEBUG nova.objects.instance [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.206 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.206 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Ensure instance console log exists: /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.207 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.207 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.207 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.493 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Successfully created port: 555fa830-9397-48fc-a495-4e130004193f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.598 2 DEBUG oslo_concurrency.processutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config 5a441f33-2ccc-4111-880f-c2c184d6c078_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.599 2 INFO nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Deleting local config drive /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078/disk.config because it was imported into RBD.
Oct 02 12:29:38 compute-0 kernel: tapd7ff9261-f7: entered promiscuous mode
Oct 02 12:29:38 compute-0 NetworkManager[44981]: <info>  [1759408178.6662] manager: (tapd7ff9261-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct 02 12:29:38 compute-0 ovn_controller[148123]: 2025-10-02T12:29:38Z|00202|binding|INFO|Claiming lport d7ff9261-f76d-4480-97fa-10fe3308acd8 for this chassis.
Oct 02 12:29:38 compute-0 ovn_controller[148123]: 2025-10-02T12:29:38Z|00203|binding|INFO|d7ff9261-f76d-4480-97fa-10fe3308acd8: Claiming fa:16:3e:1d:6a:79 10.100.0.11
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.696 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:6a:79 10.100.0.11'], port_security=['fa:16:3e:1d:6a:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5a441f33-2ccc-4111-880f-c2c184d6c078', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4026877484004ce08dcaf59271d4309e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c9b9ef8-23b8-4d8e-bc64-a9505510c3e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bba6d732-0f3d-4c41-b93f-840c0aae6110, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d7ff9261-f76d-4480-97fa-10fe3308acd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.698 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d7ff9261-f76d-4480-97fa-10fe3308acd8 in datapath d1e8107f-faf5-4b56-8380-50000a8fe5e3 bound to our chassis
Oct 02 12:29:38 compute-0 systemd-machined[210927]: New machine qemu-28-instance-00000044.
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.700 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d1e8107f-faf5-4b56-8380-50000a8fe5e3
Oct 02 12:29:38 compute-0 systemd-udevd[300728]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.716 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[baff8b6c-566e-4455-ac67-461e5a82fbb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.717 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd1e8107f-f1 in ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.719 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd1e8107f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.719 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[736c89c0-51ab-41a0-9937-930ea80d24a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.720 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f9724d-09be-4163-bd2d-7d3b12aa7be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000044.
Oct 02 12:29:38 compute-0 NetworkManager[44981]: <info>  [1759408178.7274] device (tapd7ff9261-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:29:38 compute-0 NetworkManager[44981]: <info>  [1759408178.7286] device (tapd7ff9261-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.731 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a2af5010-a304-43d0-91ba-844a8c2c98b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_controller[148123]: 2025-10-02T12:29:38Z|00204|binding|INFO|Setting lport d7ff9261-f76d-4480-97fa-10fe3308acd8 ovn-installed in OVS
Oct 02 12:29:38 compute-0 ovn_controller[148123]: 2025-10-02T12:29:38Z|00205|binding|INFO|Setting lport d7ff9261-f76d-4480-97fa-10fe3308acd8 up in Southbound
Oct 02 12:29:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:38 compute-0 nova_compute[256940]: 2025-10-02 12:29:38.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.802 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61621cd3-61be-47ec-8340-5d6ebb4af8ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.832 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[80bd96bc-b9cc-4cb9-bcbd-2dc7140d13cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 NetworkManager[44981]: <info>  [1759408178.8382] manager: (tapd1e8107f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.836 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c98175b5-eec4-4c47-ab0d-662676d428ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.879 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5aa23a-8f58-4301-9741-e48e0d120adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.884 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad2cb0f-d3c1-4b66-9deb-d551ae991062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 NetworkManager[44981]: <info>  [1759408178.9078] device (tapd1e8107f-f0): carrier: link connected
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.914 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[645b81d9-0d50-4b3e-bb04-be8247cc2cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.932 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[af866430-5d89-4292-ad31-d6c04f500c20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e8107f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9f:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607439, 'reachable_time': 32532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300760, 'error': None, 'target': 'ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.953 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2b01d8-160b-4e38-9210-b27593e23ab0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9f64'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607439, 'tstamp': 607439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300761, 'error': None, 'target': 'ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:38.972 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eea0a7fc-f43f-4731-bc7f-12e20a098d9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e8107f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9f:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607439, 'reachable_time': 32532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300762, 'error': None, 'target': 'ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.010 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[365fe2d4-5174-4e7a-8b93-c30bcd8798e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.078 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[137abde9-177d-4a2e-bc7f-3580ac0ebd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.080 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e8107f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.080 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.081 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e8107f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:39 compute-0 NetworkManager[44981]: <info>  [1759408179.0833] manager: (tapd1e8107f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Oct 02 12:29:39 compute-0 kernel: tapd1e8107f-f0: entered promiscuous mode
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.088 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd1e8107f-f0, col_values=(('external_ids', {'iface-id': '62c1fef6-6589-4ebc-a1b4-6799d7e26935'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:39 compute-0 ovn_controller[148123]: 2025-10-02T12:29:39Z|00206|binding|INFO|Releasing lport 62c1fef6-6589-4ebc-a1b4-6799d7e26935 from this chassis (sb_readonly=0)
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.091 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d1e8107f-faf5-4b56-8380-50000a8fe5e3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d1e8107f-faf5-4b56-8380-50000a8fe5e3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.096 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[acc286cb-1902-4e97-9e30-2c039c78c29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.097 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-d1e8107f-faf5-4b56-8380-50000a8fe5e3
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/d1e8107f-faf5-4b56-8380-50000a8fe5e3.pid.haproxy
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID d1e8107f-faf5-4b56-8380-50000a8fe5e3
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:29:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:39.098 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'env', 'PROCESS_TAG=haproxy-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d1e8107f-faf5-4b56-8380-50000a8fe5e3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:39 compute-0 ceph-mon[73668]: pgmap v1574: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 8.5 MiB/s wr, 321 op/s
Oct 02 12:29:39 compute-0 ceph-mon[73668]: osdmap e228: 3 total, 3 up, 3 in
Oct 02 12:29:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.9 MiB/s wr, 296 op/s
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.254 2 DEBUG nova.compute.manager [req-91426172-72a1-43a6-a452-f8e9cd727652 req-a5dc470f-da0e-458b-acbc-377c186e7fcd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.255 2 DEBUG oslo_concurrency.lockutils [req-91426172-72a1-43a6-a452-f8e9cd727652 req-a5dc470f-da0e-458b-acbc-377c186e7fcd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.255 2 DEBUG oslo_concurrency.lockutils [req-91426172-72a1-43a6-a452-f8e9cd727652 req-a5dc470f-da0e-458b-acbc-377c186e7fcd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.255 2 DEBUG oslo_concurrency.lockutils [req-91426172-72a1-43a6-a452-f8e9cd727652 req-a5dc470f-da0e-458b-acbc-377c186e7fcd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.255 2 DEBUG nova.compute.manager [req-91426172-72a1-43a6-a452-f8e9cd727652 req-a5dc470f-da0e-458b-acbc-377c186e7fcd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Processing event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.449 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Successfully updated port: 555fa830-9397-48fc-a495-4e130004193f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.465 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.466 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.466 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.553 2 DEBUG nova.compute.manager [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-changed-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.554 2 DEBUG nova.compute.manager [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Refreshing instance network info cache due to event network-changed-555fa830-9397-48fc-a495-4e130004193f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.554 2 DEBUG oslo_concurrency.lockutils [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:39 compute-0 podman[300794]: 2025-10-02 12:29:39.478355704 +0000 UTC m=+0.024909393 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:29:39 compute-0 nova_compute[256940]: 2025-10-02 12:29:39.694 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:39 compute-0 podman[300794]: 2025-10-02 12:29:39.98923511 +0000 UTC m=+0.535788779 container create ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:29:40 compute-0 systemd[1]: Started libpod-conmon-ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338.scope.
Oct 02 12:29:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b59e1d5b2e4b0f61404493a95e054b071a069829b19a3988f23fd4686e6ffb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005321929380699795 of space, bias 1.0, pg target 1.5965788142099384 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00720780552065885 of space, bias 1.0, pg target 2.155133850676996 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:29:40 compute-0 podman[300794]: 2025-10-02 12:29:40.280904986 +0000 UTC m=+0.827458735 container init ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:29:40 compute-0 podman[300794]: 2025-10-02 12:29:40.290975621 +0000 UTC m=+0.837529270 container start ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:29:40 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [NOTICE]   (300856) : New worker (300858) forked
Oct 02 12:29:40 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [NOTICE]   (300856) : Loading success.
Oct 02 12:29:40 compute-0 ceph-mon[73668]: pgmap v1576: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.9 MiB/s wr, 296 op/s
Oct 02 12:29:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.530 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.532 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408180.5297155, 5a441f33-2ccc-4111-880f-c2c184d6c078 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.532 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] VM Started (Lifecycle Event)
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.535 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.539 2 INFO nova.virt.libvirt.driver [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Instance spawned successfully.
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.539 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.555 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.562 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.567 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.567 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.568 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.568 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.568 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.569 2 DEBUG nova.virt.libvirt.driver [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.582 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.583 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408180.5299382, 5a441f33-2ccc-4111-880f-c2c184d6c078 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.583 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] VM Paused (Lifecycle Event)
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.601 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.607 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408180.5344117, 5a441f33-2ccc-4111-880f-c2c184d6c078 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.607 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] VM Resumed (Lifecycle Event)
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.630 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.636 2 INFO nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Took 9.87 seconds to spawn the instance on the hypervisor.
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.637 2 DEBUG nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.637 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.671 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:40 compute-0 nova_compute[256940]: 2025-10-02 12:29:40.706 2 INFO nova.compute.manager [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Took 11.22 seconds to build instance.
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.001 2 DEBUG oslo_concurrency.lockutils [None req-dd813c15-2d3b-4e07-8437-604fa4c3722a fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 475 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 1.6 MiB/s wr, 149 op/s
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.360 2 DEBUG nova.compute.manager [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.361 2 DEBUG oslo_concurrency.lockutils [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.361 2 DEBUG oslo_concurrency.lockutils [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.361 2 DEBUG oslo_concurrency.lockutils [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.362 2 DEBUG nova.compute.manager [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] No waiting events found dispatching network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.362 2 WARNING nova.compute.manager [req-81b10378-1bff-4d79-a6e9-9bad52f0b32b req-e87bc06f-71f8-45be-ac8d-05b75fb36ecc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received unexpected event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 for instance with vm_state active and task_state None.
Oct 02 12:29:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:41.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:41 compute-0 nova_compute[256940]: 2025-10-02 12:29:41.921 2 DEBUG nova.network.neutron [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updating instance_info_cache with network_info: [{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.061 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.061 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance network_info: |[{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.062 2 DEBUG oslo_concurrency.lockutils [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.062 2 DEBUG nova.network.neutron [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Refreshing network info cache for port 555fa830-9397-48fc-a495-4e130004193f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.064 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start _get_guest_xml network_info=[{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.068 2 WARNING nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.076 2 DEBUG nova.virt.libvirt.host [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.077 2 DEBUG nova.virt.libvirt.host [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.084 2 DEBUG nova.virt.libvirt.host [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.084 2 DEBUG nova.virt.libvirt.host [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.085 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.086 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.086 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.086 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.086 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.087 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.087 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.087 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.087 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.087 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.088 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.088 2 DEBUG nova.virt.hardware [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.090 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.143 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.144 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.144 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.144 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.144 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.146 2 INFO nova.compute.manager [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Terminating instance
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.147 2 DEBUG nova.compute.manager [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:29:42 compute-0 kernel: tapd7ff9261-f7 (unregistering): left promiscuous mode
Oct 02 12:29:42 compute-0 NetworkManager[44981]: <info>  [1759408182.3251] device (tapd7ff9261-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:29:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:42 compute-0 ovn_controller[148123]: 2025-10-02T12:29:42Z|00207|binding|INFO|Releasing lport d7ff9261-f76d-4480-97fa-10fe3308acd8 from this chassis (sb_readonly=0)
Oct 02 12:29:42 compute-0 ovn_controller[148123]: 2025-10-02T12:29:42Z|00208|binding|INFO|Setting lport d7ff9261-f76d-4480-97fa-10fe3308acd8 down in Southbound
Oct 02 12:29:42 compute-0 ovn_controller[148123]: 2025-10-02T12:29:42Z|00209|binding|INFO|Removing iface tapd7ff9261-f7 ovn-installed in OVS
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:42.390 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:6a:79 10.100.0.11'], port_security=['fa:16:3e:1d:6a:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5a441f33-2ccc-4111-880f-c2c184d6c078', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4026877484004ce08dcaf59271d4309e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c9b9ef8-23b8-4d8e-bc64-a9505510c3e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bba6d732-0f3d-4c41-b93f-840c0aae6110, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=d7ff9261-f76d-4480-97fa-10fe3308acd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:42.392 158104 INFO neutron.agent.ovn.metadata.agent [-] Port d7ff9261-f76d-4480-97fa-10fe3308acd8 in datapath d1e8107f-faf5-4b56-8380-50000a8fe5e3 unbound from our chassis
Oct 02 12:29:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:42.394 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d1e8107f-faf5-4b56-8380-50000a8fe5e3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:29:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:42.395 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa19fcb-e28a-4581-98e9-cddc2d9f0f43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:42.396 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3 namespace which is not needed anymore
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000044.scope: Deactivated successfully.
Oct 02 12:29:42 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000044.scope: Consumed 3.237s CPU time.
Oct 02 12:29:42 compute-0 systemd-machined[210927]: Machine qemu-28-instance-00000044 terminated.
Oct 02 12:29:42 compute-0 ceph-mon[73668]: pgmap v1577: 305 pgs: 305 active+clean; 475 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 1.6 MiB/s wr, 149 op/s
Oct 02 12:29:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32196474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:42 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [NOTICE]   (300856) : haproxy version is 2.8.14-c23fe91
Oct 02 12:29:42 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [NOTICE]   (300856) : path to executable is /usr/sbin/haproxy
Oct 02 12:29:42 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [ALERT]    (300856) : Current worker (300858) exited with code 143 (Terminated)
Oct 02 12:29:42 compute-0 neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3[300852]: [WARNING]  (300856) : All workers exited. Exiting... (0)
Oct 02 12:29:42 compute-0 systemd[1]: libpod-ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338.scope: Deactivated successfully.
Oct 02 12:29:42 compute-0 podman[300912]: 2025-10-02 12:29:42.550753242 +0000 UTC m=+0.049562879 container died ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.555 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.608 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338-userdata-shm.mount: Deactivated successfully.
Oct 02 12:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b59e1d5b2e4b0f61404493a95e054b071a069829b19a3988f23fd4686e6ffb6-merged.mount: Deactivated successfully.
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.618 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.661 2 INFO nova.virt.libvirt.driver [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Instance destroyed successfully.
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.662 2 DEBUG nova.objects.instance [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lazy-loading 'resources' on Instance uuid 5a441f33-2ccc-4111-880f-c2c184d6c078 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.682 2 DEBUG nova.virt.libvirt.vif [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-850216628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-850216628',id=68,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4026877484004ce08dcaf59271d4309e',ramdisk_id='',reservation_id='r-tcyjeuc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-2060127866',owner_user_name='tempest-InstanceActionsV221TestJSON-2060127866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:40Z,user_data=None,user_id='fcf5a512c0c24bebbd413916d07e5b77',uuid=5a441f33-2ccc-4111-880f-c2c184d6c078,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.683 2 DEBUG nova.network.os_vif_util [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converting VIF {"id": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "address": "fa:16:3e:1d:6a:79", "network": {"id": "d1e8107f-faf5-4b56-8380-50000a8fe5e3", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1576027428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4026877484004ce08dcaf59271d4309e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7ff9261-f7", "ovs_interfaceid": "d7ff9261-f76d-4480-97fa-10fe3308acd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.684 2 DEBUG nova.network.os_vif_util [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.684 2 DEBUG os_vif [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7ff9261-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:42 compute-0 nova_compute[256940]: 2025-10-02 12:29:42.692 2 INFO os_vif [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:6a:79,bridge_name='br-int',has_traffic_filtering=True,id=d7ff9261-f76d-4480-97fa-10fe3308acd8,network=Network(d1e8107f-faf5-4b56-8380-50000a8fe5e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7ff9261-f7')
Oct 02 12:29:42 compute-0 podman[300912]: 2025-10-02 12:29:42.776473992 +0000 UTC m=+0.275283619 container cleanup ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:29:42 compute-0 systemd[1]: libpod-conmon-ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338.scope: Deactivated successfully.
Oct 02 12:29:43 compute-0 podman[301009]: 2025-10-02 12:29:43.005361582 +0000 UTC m=+0.202985894 container remove ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.013 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f289cc6-8a04-4012-ac47-7a4de8dc360a]: (4, ('Thu Oct  2 12:29:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3 (ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338)\nae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338\nThu Oct  2 12:29:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3 (ae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338)\nae761f5ae89b4d1c6c6da083c1ad2a71e88bd9b55877beb6627bb5b827db6338\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f19720-30a1-442c-995c-399cc4176248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.016 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e8107f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:43 compute-0 kernel: tapd1e8107f-f0: left promiscuous mode
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.039 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eece8b06-758b-44b6-8098-3241cbfcf838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.079 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee2b44-0729-471f-b4a6-81678fa0302d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.081 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fecd3d1e-5878-49be-8c38-7bd173c2edea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.102 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[76bdd69c-21a3-4e1f-a330-5d3ce0d27d9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607431, 'reachable_time': 30036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301024, 'error': None, 'target': 'ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dd1e8107f\x2dfaf5\x2d4b56\x2d8380\x2d50000a8fe5e3.mount: Deactivated successfully.
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.108 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d1e8107f-faf5-4b56-8380-50000a8fe5e3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:29:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:43.109 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[83f5838f-86f4-484d-83c5-bfcd60edb2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535058869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.201 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.203 2 DEBUG nova.virt.libvirt.vif [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:37Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.204 2 DEBUG nova.network.os_vif_util [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.205 2 DEBUG nova.network.os_vif_util [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.207 2 DEBUG nova.objects.instance [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.221 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <uuid>a059e667-9c76-4f71-92d2-9490d75ce24d</uuid>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <name>instance-00000045</name>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-536456714</nova:name>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:29:42</nova:creationTime>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <nova:port uuid="555fa830-9397-48fc-a495-4e130004193f">
Oct 02 12:29:43 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <system>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="serial">a059e667-9c76-4f71-92d2-9490d75ce24d</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="uuid">a059e667-9c76-4f71-92d2-9490d75ce24d</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </system>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <os>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </os>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <features>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </features>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a059e667-9c76-4f71-92d2-9490d75ce24d_disk">
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config">
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:29:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d9:e4:e0"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <target dev="tap555fa830-93"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/console.log" append="off"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <video>
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </video>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:29:43 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:29:43 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:29:43 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:29:43 compute-0 nova_compute[256940]: </domain>
Oct 02 12:29:43 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.223 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Preparing to wait for external event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.223 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.223 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.223 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.224 2 DEBUG nova.virt.libvirt.vif [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:37Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.224 2 DEBUG nova.network.os_vif_util [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.225 2 DEBUG nova.network.os_vif_util [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.225 2 DEBUG os_vif [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap555fa830-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap555fa830-93, col_values=(('external_ids', {'iface-id': '555fa830-9397-48fc-a495-4e130004193f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:e4:e0', 'vm-uuid': 'a059e667-9c76-4f71-92d2-9490d75ce24d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 NetworkManager[44981]: <info>  [1759408183.2334] manager: (tap555fa830-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 02 12:29:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 497 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 181 op/s
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.242 2 INFO os_vif [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93')
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.409 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.409 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.410 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:d9:e4:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.410 2 INFO nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Using config drive
Oct 02 12:29:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.603 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.615 2 DEBUG nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-unplugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.616 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.616 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.617 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.617 2 DEBUG nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] No waiting events found dispatching network-vif-unplugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.617 2 DEBUG nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-unplugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.618 2 DEBUG nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.618 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.618 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.619 2 DEBUG oslo_concurrency.lockutils [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.619 2 DEBUG nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] No waiting events found dispatching network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.619 2 WARNING nova.compute.manager [req-8229820c-e305-4300-9517-78bd7c6389e1 req-5a7cbadb-6b35-4163-855b-bc75eaee0a48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received unexpected event network-vif-plugged-d7ff9261-f76d-4480-97fa-10fe3308acd8 for instance with vm_state active and task_state deleting.
Oct 02 12:29:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/32196474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3535058869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:29:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:43.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.940 2 INFO nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating config drive at /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config
Oct 02 12:29:43 compute-0 nova_compute[256940]: 2025-10-02 12:29:43.945 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7j828ohn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 02 12:29:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.078 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7j828ohn" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.225 2 DEBUG nova.storage.rbd_utils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.230 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.260 2 DEBUG nova.network.neutron [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updated VIF entry in instance network info cache for port 555fa830-9397-48fc-a495-4e130004193f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.261 2 DEBUG nova.network.neutron [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updating instance_info_cache with network_info: [{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:44 compute-0 nova_compute[256940]: 2025-10-02 12:29:44.277 2 DEBUG oslo_concurrency.lockutils [req-8dd58584-04e9-48a2-8c7f-525861fad846 req-646a82cb-2e2e-4cf9-b146-4ca6acee1238 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:45 compute-0 ceph-mon[73668]: pgmap v1578: 305 pgs: 305 active+clean; 497 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 181 op/s
Oct 02 12:29:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4200825112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-0 ceph-mon[73668]: osdmap e229: 3 total, 3 up, 3 in
Oct 02 12:29:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4124496307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3910498834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-0 nova_compute[256940]: 2025-10-02 12:29:45.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:45 compute-0 nova_compute[256940]: 2025-10-02 12:29:45.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:29:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 484 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Oct 02 12:29:45 compute-0 nova_compute[256940]: 2025-10-02 12:29:45.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:45.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:45 compute-0 nova_compute[256940]: 2025-10-02 12:29:45.971 2 DEBUG oslo_concurrency.processutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:45 compute-0 nova_compute[256940]: 2025-10-02 12:29:45.972 2 INFO nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deleting local config drive /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config because it was imported into RBD.
Oct 02 12:29:46 compute-0 kernel: tap555fa830-93: entered promiscuous mode
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.0458] manager: (tap555fa830-93): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct 02 12:29:46 compute-0 ovn_controller[148123]: 2025-10-02T12:29:46Z|00210|binding|INFO|Claiming lport 555fa830-9397-48fc-a495-4e130004193f for this chassis.
Oct 02 12:29:46 compute-0 ovn_controller[148123]: 2025-10-02T12:29:46Z|00211|binding|INFO|555fa830-9397-48fc-a495-4e130004193f: Claiming fa:16:3e:d9:e4:e0 10.100.0.14
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.054 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e4:e0 10.100.0.14'], port_security=['fa:16:3e:d9:e4:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a059e667-9c76-4f71-92d2-9490d75ce24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=555fa830-9397-48fc-a495-4e130004193f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.055 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 555fa830-9397-48fc-a495-4e130004193f in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.056 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.074 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7aaa4cae-1852-41ec-8cd2-70e6a7778924]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.077 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.079 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.079 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d2d138-9184-4d71-a884-b2481fe462a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.080 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[819f4687-709d-4378-a08b-b62b3e549145]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 systemd-udevd[301103]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:46 compute-0 systemd-machined[210927]: New machine qemu-29-instance-00000045.
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.095 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[82e4b033-01aa-4204-b620-4eae76cffb32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.0988] device (tap555fa830-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.0998] device (tap555fa830-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:29:46 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000045.
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.122 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[69d32e14-bca7-4c35-81b8-77019bb8972e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_controller[148123]: 2025-10-02T12:29:46Z|00212|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f ovn-installed in OVS
Oct 02 12:29:46 compute-0 ovn_controller[148123]: 2025-10-02T12:29:46Z|00213|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f up in Southbound
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.156 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a4f9ee-54a7-4295-b57f-65212a2a197f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.1676] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.166 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c03ac2-6615-402d-9fd3-cf204d0a0721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 systemd-udevd[301107]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.208 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[90305238-dc98-4e90-bf32-07902d8bc441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.211 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee69acf-2b85-4bc2-a314-491ba310f6e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.2428] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.255 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dc00ef-a44e-4a2c-ace1-9e1b9e893b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.277 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2722d469-4eff-4f51-8d8a-c5f7d82370b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608172, 'reachable_time': 16116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301135, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.294 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1160a6d0-ab3d-4581-9ab0-db21e4c7dd9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608172, 'tstamp': 608172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301136, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.312 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1647be55-b396-4349-b6ae-8d61c8315541]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608172, 'reachable_time': 16116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301137, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.352 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[73935a46-006c-49b3-b351-12c6ef28c25d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.418 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[992dee86-8344-4b5a-8f85-b610b296db27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.419 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.420 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.420 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:46 compute-0 NetworkManager[44981]: <info>  [1759408186.4229] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 02 12:29:46 compute-0 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.424 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:46 compute-0 ovn_controller[148123]: 2025-10-02T12:29:46Z|00214|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:29:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4172126751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.446 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:29:46 compute-0 nova_compute[256940]: 2025-10-02 12:29:46.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.447 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[00a41eeb-b6e1-48c0-a545-3809f3d1b1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.448 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:29:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:46.449 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:29:46 compute-0 podman[301206]: 2025-10-02 12:29:46.799158272 +0000 UTC m=+0.030922306 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 226 op/s
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.278 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408187.2780328, a059e667-9c76-4f71-92d2-9490d75ce24d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.279 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Started (Lifecycle Event)
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.304 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.309 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408187.278206, a059e667-9c76-4f71-92d2-9490d75ce24d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.310 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Paused (Lifecycle Event)
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.330 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.333 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.356 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:47 compute-0 podman[301206]: 2025-10-02 12:29:47.540766377 +0000 UTC m=+0.772530371 container create a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:47 compute-0 ceph-mon[73668]: pgmap v1580: 305 pgs: 305 active+clean; 484 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Oct 02 12:29:47 compute-0 systemd[1]: Started libpod-conmon-a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b.scope.
Oct 02 12:29:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07a7962f2f24abebc366bd531083efe657a322693c7b739bdee558d709db750/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:47 compute-0 podman[301206]: 2025-10-02 12:29:47.765610514 +0000 UTC m=+0.997374538 container init a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:47 compute-0 podman[301206]: 2025-10-02 12:29:47.772650593 +0000 UTC m=+1.004414597 container start a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:29:47 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [NOTICE]   (301232) : New worker (301234) forked
Oct 02 12:29:47 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [NOTICE]   (301232) : Loading success.
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.816 2 INFO nova.virt.libvirt.driver [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Deleting instance files /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078_del
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.817 2 INFO nova.virt.libvirt.driver [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Deletion of /var/lib/nova/instances/5a441f33-2ccc-4111-880f-c2c184d6c078_del complete
Oct 02 12:29:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.904 2 INFO nova.compute.manager [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Took 5.76 seconds to destroy the instance on the hypervisor.
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.904 2 DEBUG oslo.service.loopingcall [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.905 2 DEBUG nova.compute.manager [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:29:47 compute-0 nova_compute[256940]: 2025-10-02 12:29:47.905 2 DEBUG nova.network.neutron [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:29:48 compute-0 nova_compute[256940]: 2025-10-02 12:29:48.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 02 12:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 02 12:29:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 02 12:29:48 compute-0 ceph-mon[73668]: pgmap v1581: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 226 op/s
Oct 02 12:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 02 12:29:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 02 12:29:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.063 2 DEBUG nova.network.neutron [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.085 2 DEBUG nova.compute.manager [req-0878ba6c-c82f-4346-8e21-be6e195b9ed2 req-8e43db4b-e214-4300-91df-030ad0bf1fe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Received event network-vif-deleted-d7ff9261-f76d-4480-97fa-10fe3308acd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.086 2 INFO nova.compute.manager [req-0878ba6c-c82f-4346-8e21-be6e195b9ed2 req-8e43db4b-e214-4300-91df-030ad0bf1fe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Neutron deleted interface d7ff9261-f76d-4480-97fa-10fe3308acd8; detaching it from the instance and deleting it from the info cache
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.086 2 DEBUG nova.network.neutron [req-0878ba6c-c82f-4346-8e21-be6e195b9ed2 req-8e43db4b-e214-4300-91df-030ad0bf1fe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.235 2 INFO nova.compute.manager [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Took 1.33 seconds to deallocate network for instance.
Oct 02 12:29:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 34 KiB/s wr, 175 op/s
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.244 2 DEBUG nova.compute.manager [req-0878ba6c-c82f-4346-8e21-be6e195b9ed2 req-8e43db4b-e214-4300-91df-030ad0bf1fe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Detach interface failed, port_id=d7ff9261-f76d-4480-97fa-10fe3308acd8, reason: Instance 5a441f33-2ccc-4111-880f-c2c184d6c078 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.386 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.387 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.387 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.388 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.388 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.427 2 DEBUG nova.compute.manager [req-7f05041c-afab-410c-ab74-e785a7989da9 req-67b90d40-6740-4c90-8cf5-57dbc87325f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.428 2 DEBUG oslo_concurrency.lockutils [req-7f05041c-afab-410c-ab74-e785a7989da9 req-67b90d40-6740-4c90-8cf5-57dbc87325f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.429 2 DEBUG oslo_concurrency.lockutils [req-7f05041c-afab-410c-ab74-e785a7989da9 req-67b90d40-6740-4c90-8cf5-57dbc87325f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.429 2 DEBUG oslo_concurrency.lockutils [req-7f05041c-afab-410c-ab74-e785a7989da9 req-67b90d40-6740-4c90-8cf5-57dbc87325f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.429 2 DEBUG nova.compute.manager [req-7f05041c-afab-410c-ab74-e785a7989da9 req-67b90d40-6740-4c90-8cf5-57dbc87325f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Processing event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.430 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.435 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408189.4352305, a059e667-9c76-4f71-92d2-9490d75ce24d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.436 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Resumed (Lifecycle Event)
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.439 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.443 2 INFO nova.virt.libvirt.driver [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance spawned successfully.
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.444 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.771 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.772 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.787 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.794 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.797 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.798 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.798 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.799 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.799 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.799 2 DEBUG nova.virt.libvirt.driver [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.843 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.877 2 INFO nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Took 12.57 seconds to spawn the instance on the hypervisor.
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.878 2 DEBUG nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.884 2 DEBUG oslo_concurrency.processutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216246306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.938 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:49 compute-0 nova_compute[256940]: 2025-10-02 12:29:49.978 2 INFO nova.compute.manager [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Took 13.64 seconds to build instance.
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.005 2 DEBUG oslo_concurrency.lockutils [None req-413665c8-2424-493e-b02a-7bf998ecc533 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.025 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.026 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:50 compute-0 ceph-mon[73668]: osdmap e230: 3 total, 3 up, 3 in
Oct 02 12:29:50 compute-0 ceph-mon[73668]: osdmap e231: 3 total, 3 up, 3 in
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.214 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.215 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4521MB free_disk=20.876293182373047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.216 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497082779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.408 2 DEBUG oslo_concurrency.processutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.414 2 DEBUG nova.compute.provider_tree [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.429 2 DEBUG nova.scheduler.client.report [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.451 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.454 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.482 2 INFO nova.scheduler.client.report [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Deleted allocations for instance 5a441f33-2ccc-4111-880f-c2c184d6c078
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.519 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance a059e667-9c76-4f71-92d2-9490d75ce24d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.519 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.520 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.551 2 DEBUG oslo_concurrency.lockutils [None req-71a37249-2a7e-4f6a-9973-5bd1daec35ed fcf5a512c0c24bebbd413916d07e5b77 4026877484004ce08dcaf59271d4309e - - default default] Lock "5a441f33-2ccc-4111-880f-c2c184d6c078" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.554 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:50.839 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:50 compute-0 nova_compute[256940]: 2025-10-02 12:29:50.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:50.841 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:29:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103223059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.027 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.033 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:51 compute-0 ceph-mon[73668]: pgmap v1584: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 34 KiB/s wr, 175 op/s
Oct 02 12:29:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2216246306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1497082779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/103223059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.054 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.076 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.077 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 02 12:29:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 372 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 30 KiB/s wr, 156 op/s
Oct 02 12:29:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 02 12:29:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.494 2 DEBUG nova.compute.manager [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.495 2 DEBUG oslo_concurrency.lockutils [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.496 2 DEBUG oslo_concurrency.lockutils [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.496 2 DEBUG oslo_concurrency.lockutils [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.496 2 DEBUG nova.compute.manager [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:51 compute-0 nova_compute[256940]: 2025-10-02 12:29:51.496 2 WARNING nova.compute.manager [req-e32fd36e-f13f-4c01-9d7f-7c908a84419c req-e03ecdee-d6d5-4622-8fb3-6f7c3fa8c1c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received unexpected event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f for instance with vm_state active and task_state None.
Oct 02 12:29:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:51.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:52 compute-0 nova_compute[256940]: 2025-10-02 12:29:52.078 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:52 compute-0 nova_compute[256940]: 2025-10-02 12:29:52.078 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:52 compute-0 ceph-mon[73668]: pgmap v1585: 305 pgs: 305 active+clean; 372 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 30 KiB/s wr, 156 op/s
Oct 02 12:29:52 compute-0 ceph-mon[73668]: osdmap e232: 3 total, 3 up, 3 in
Oct 02 12:29:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:29:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:52.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:29:52 compute-0 podman[301312]: 2025-10-02 12:29:52.396507313 +0000 UTC m=+0.059867391 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:29:52 compute-0 podman[301313]: 2025-10-02 12:29:52.403795828 +0000 UTC m=+0.064878978 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 316 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 KiB/s wr, 140 op/s
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.444 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.445 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.445 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:29:53 compute-0 nova_compute[256940]: 2025-10-02 12:29:53.445 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:29:53.844 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:53.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 02 12:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 02 12:29:54 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 02 12:29:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:54.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.519 2 INFO nova.compute.manager [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Rebuilding instance
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.763 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.764 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:54 compute-0 ovn_controller[148123]: 2025-10-02T12:29:54Z|00215|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.795 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.826 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.832 2 DEBUG nova.compute.manager [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:54 compute-0 sudo[301354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:54 compute-0 sudo[301354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:54 compute-0 sudo[301354]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:54 compute-0 nova_compute[256940]: 2025-10-02 12:29:54.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:54 compute-0 sudo[301379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:54 compute-0 sudo[301379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:54 compute-0 sudo[301379]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:55 compute-0 ceph-mon[73668]: pgmap v1587: 305 pgs: 305 active+clean; 316 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 KiB/s wr, 140 op/s
Oct 02 12:29:55 compute-0 ceph-mon[73668]: osdmap e233: 3 total, 3 up, 3 in
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.098 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.099 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.108 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.108 2 INFO nova.compute.claims [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:29:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 231 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.1 KiB/s wr, 226 op/s
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.257 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_requests' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.354 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updating instance_info_cache with network_info: [{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.410 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.493 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-a059e667-9c76-4f71-92d2-9490d75ce24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.493 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.493 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.499 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.569 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.597 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.636 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:29:55 compute-0 nova_compute[256940]: 2025-10-02 12:29:55.641 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:29:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613187794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.016 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.021 2 DEBUG nova.compute.provider_tree [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.049 2 DEBUG nova.scheduler.client.report [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.079 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.079 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.123 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.124 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.142 2 INFO nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.158 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.243 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.245 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.245 2 INFO nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Creating image(s)
Oct 02 12:29:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:56.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.422 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2737000721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3613187794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.540 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.577 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.583 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.621 2 DEBUG nova.policy [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.672 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.673 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.673 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.674 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.706 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:56 compute-0 nova_compute[256940]: 2025-10-02 12:29:56.711 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.1 KiB/s wr, 211 op/s
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.266 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.338 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] resizing rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.456 2 DEBUG nova.objects.instance [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.473 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.473 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Ensure instance console log exists: /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.474 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.474 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.474 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:57 compute-0 ceph-mon[73668]: pgmap v1589: 305 pgs: 305 active+clean; 231 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.1 KiB/s wr, 226 op/s
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.589 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Successfully created port: fff1abf1-c689-4533-86c7-a3d1075ea6de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.658 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408182.5778887, 5a441f33-2ccc-4111-880f-c2c184d6c078 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.658 2 INFO nova.compute.manager [-] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] VM Stopped (Lifecycle Event)
Oct 02 12:29:57 compute-0 nova_compute[256940]: 2025-10-02 12:29:57.680 2 DEBUG nova.compute.manager [None req-a514c6c3-0c7b-4684-99a1-448b6fabe4a3 - - - - - -] [instance: 5a441f33-2ccc-4111-880f-c2c184d6c078] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.374 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Successfully updated port: fff1abf1-c689-4533-86c7-a3d1075ea6de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:29:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:58.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.395 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.395 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.395 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.461 2 DEBUG nova.compute.manager [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-changed-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.461 2 DEBUG nova.compute.manager [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing instance network info cache due to event network-changed-fff1abf1-c689-4533-86c7-a3d1075ea6de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:29:58 compute-0 nova_compute[256940]: 2025-10-02 12:29:58.461 2 DEBUG oslo_concurrency.lockutils [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:58 compute-0 ceph-mon[73668]: pgmap v1590: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.1 KiB/s wr, 211 op/s
Oct 02 12:29:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 02 12:29:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 02 12:29:59 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 02 12:29:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 KiB/s wr, 204 op/s
Oct 02 12:29:59 compute-0 nova_compute[256940]: 2025-10-02 12:29:59.362 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:29:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:29:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:30:00 compute-0 ceph-mon[73668]: osdmap e234: 3 total, 3 up, 3 in
Oct 02 12:30:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:30:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:00 compute-0 nova_compute[256940]: 2025-10-02 12:30:00.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.062 2 DEBUG nova.network.neutron [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.096 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.097 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Instance network_info: |[{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.098 2 DEBUG oslo_concurrency.lockutils [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.099 2 DEBUG nova.network.neutron [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing network info cache for port fff1abf1-c689-4533-86c7-a3d1075ea6de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.104 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Start _get_guest_xml network_info=[{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.110 2 WARNING nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.117 2 DEBUG nova.virt.libvirt.host [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.119 2 DEBUG nova.virt.libvirt.host [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.122 2 DEBUG nova.virt.libvirt.host [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.123 2 DEBUG nova.virt.libvirt.host [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.125 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.125 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.126 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.126 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.126 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.127 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.127 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.127 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.128 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.128 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.128 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.128 2 DEBUG nova.virt.hardware [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.131 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 162 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:30:01 compute-0 ceph-mon[73668]: pgmap v1592: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 KiB/s wr, 204 op/s
Oct 02 12:30:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/653846976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196927650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.653 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.684 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:01 compute-0 nova_compute[256940]: 2025-10-02 12:30:01.689 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720487963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.155 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.157 2 DEBUG nova.virt.libvirt.vif [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.158 2 DEBUG nova.network.os_vif_util [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.159 2 DEBUG nova.network.os_vif_util [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.160 2 DEBUG nova.objects.instance [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.176 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <uuid>878c4166-4a08-41bd-9263-c5d4b4c7a280</uuid>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <name>instance-00000046</name>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:30:01</nova:creationTime>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:02 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="serial">878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="uuid">878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk">
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config">
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:30:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:10:8a:e3"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <target dev="tapfff1abf1-c6"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log" append="off"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:30:02 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:30:02 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:02 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:02 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:02 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.177 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Preparing to wait for external event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.178 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.178 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.178 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.179 2 DEBUG nova.virt.libvirt.vif [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.179 2 DEBUG nova.network.os_vif_util [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.180 2 DEBUG nova.network.os_vif_util [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.180 2 DEBUG os_vif [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff1abf1-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfff1abf1-c6, col_values=(('external_ids', {'iface-id': 'fff1abf1-c689-4533-86c7-a3d1075ea6de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:8a:e3', 'vm-uuid': '878c4166-4a08-41bd-9263-c5d4b4c7a280'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:02 compute-0 NetworkManager[44981]: <info>  [1759408202.1883] manager: (tapfff1abf1-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.200 2 INFO os_vif [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6')
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.245 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.245 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.246 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:10:8a:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.246 2 INFO nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Using config drive
Oct 02 12:30:02 compute-0 nova_compute[256940]: 2025-10-02 12:30:02.271 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:02 compute-0 ceph-mon[73668]: pgmap v1593: 305 pgs: 305 active+clean; 162 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:30:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2196927650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3720487963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 02 12:30:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 02 12:30:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 02 12:30:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 137 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 02 12:30:03 compute-0 nova_compute[256940]: 2025-10-02 12:30:03.325 2 INFO nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Creating config drive at /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config
Oct 02 12:30:03 compute-0 nova_compute[256940]: 2025-10-02 12:30:03.331 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc3zcfbt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:03 compute-0 ovn_controller[148123]: 2025-10-02T12:30:03Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:e4:e0 10.100.0.14
Oct 02 12:30:03 compute-0 ovn_controller[148123]: 2025-10-02T12:30:03Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:e4:e0 10.100.0.14
Oct 02 12:30:03 compute-0 nova_compute[256940]: 2025-10-02 12:30:03.465 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc3zcfbt" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:03 compute-0 nova_compute[256940]: 2025-10-02 12:30:03.507 2 DEBUG nova.storage.rbd_utils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:03 compute-0 nova_compute[256940]: 2025-10-02 12:30:03.514 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:03 compute-0 ceph-mon[73668]: osdmap e235: 3 total, 3 up, 3 in
Oct 02 12:30:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.103 2 DEBUG oslo_concurrency.processutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config 878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.104 2 INFO nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Deleting local config drive /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/disk.config because it was imported into RBD.
Oct 02 12:30:04 compute-0 kernel: tapfff1abf1-c6: entered promiscuous mode
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.1630] manager: (tapfff1abf1-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Oct 02 12:30:04 compute-0 ovn_controller[148123]: 2025-10-02T12:30:04Z|00216|binding|INFO|Claiming lport fff1abf1-c689-4533-86c7-a3d1075ea6de for this chassis.
Oct 02 12:30:04 compute-0 ovn_controller[148123]: 2025-10-02T12:30:04Z|00217|binding|INFO|fff1abf1-c689-4533-86c7-a3d1075ea6de: Claiming fa:16:3e:10:8a:e3 10.100.0.3
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.177 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:8a:e3 10.100.0.3'], port_security=['fa:16:3e:10:8a:e3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45fed08c-57a9-4001-9934-7fb62d17f115', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fff1abf1-c689-4533-86c7-a3d1075ea6de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.179 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fff1abf1-c689-4533-86c7-a3d1075ea6de in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.180 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.195 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[694939cf-45a9-4daa-92dc-14697c348fc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.196 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a187d8a-71 in ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:04 compute-0 systemd-udevd[301731]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.199 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a187d8a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.199 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c94fb1-7cb3-46c9-ae7d-1f24958751f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.200 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[448b0a94-61d4-41e4-83f2-c2d4eec2b8e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 systemd-machined[210927]: New machine qemu-30-instance-00000046.
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.212 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[50ce9b7f-e3a7-40ec-b12d-82e6b1e58254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.2185] device (tapfff1abf1-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.2198] device (tapfff1abf1-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:04 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000046.
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.239 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d99206e5-8b4c-4340-947b-3e08cc0f546b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:04 compute-0 ovn_controller[148123]: 2025-10-02T12:30:04Z|00218|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de ovn-installed in OVS
Oct 02 12:30:04 compute-0 ovn_controller[148123]: 2025-10-02T12:30:04Z|00219|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de up in Southbound
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.271 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c6feba9f-317e-4b69-a4e2-494ebdeeb57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.276 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7f69a50d-278f-444f-b312-f3d72857d4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.2779] manager: (tap6a187d8a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.311 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3f99f2-89b9-434d-8642-4ffe0cf850b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.315 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3a218802-4de6-4038-9b3e-1191138f3826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.3374] device (tap6a187d8a-70): carrier: link connected
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.344 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfbba4a-2284-4aa4-a861-83a79e8ade40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.364 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[daa6319a-8d91-426b-97a3-689763261ab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609982, 'reachable_time': 32665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301764, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.385 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0ff5ab-9cc5-4823-8cb5-fbc86d32915e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:e868'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609982, 'tstamp': 609982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301765, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:04.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.406 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5294b9fe-6927-4592-853d-dfc13817a7dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609982, 'reachable_time': 32665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301766, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.445 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf0498c-96b8-4736-b12f-14e89334fa25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.514 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e3d61e-0fc9-46b6-ac16-5d7344cb438a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.516 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.516 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.516 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:04 compute-0 NetworkManager[44981]: <info>  [1759408204.5196] manager: (tap6a187d8a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 02 12:30:04 compute-0 kernel: tap6a187d8a-70: entered promiscuous mode
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.522 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:04 compute-0 ovn_controller[148123]: 2025-10-02T12:30:04Z|00220|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.540 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.541 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[044ef65f-30d0-43c0-bd2c-447fab4ccf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.542 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:04.542 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'env', 'PROCESS_TAG=haproxy-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a187d8a-77c6-4b27-bb13-654f471c1faf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.547 2 DEBUG nova.network.neutron [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updated VIF entry in instance network info cache for port fff1abf1-c689-4533-86c7-a3d1075ea6de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.548 2 DEBUG nova.network.neutron [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:04 compute-0 nova_compute[256940]: 2025-10-02 12:30:04.563 2 DEBUG oslo_concurrency.lockutils [req-dfaa1cd9-f02e-4107-8498-4e20a8c0d8af req-cda2cdca-2d9b-48df-a145-4dd7075c12d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 02 12:30:05 compute-0 podman[301840]: 2025-10-02 12:30:04.910295926 +0000 UTC m=+0.025242351 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.100 2 DEBUG nova.compute.manager [req-9b009315-6359-487e-b8c1-8f802c4b9e34 req-fba27e67-6e78-4b1a-b2d4-1d317524881c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.101 2 DEBUG oslo_concurrency.lockutils [req-9b009315-6359-487e-b8c1-8f802c4b9e34 req-fba27e67-6e78-4b1a-b2d4-1d317524881c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.102 2 DEBUG oslo_concurrency.lockutils [req-9b009315-6359-487e-b8c1-8f802c4b9e34 req-fba27e67-6e78-4b1a-b2d4-1d317524881c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.102 2 DEBUG oslo_concurrency.lockutils [req-9b009315-6359-487e-b8c1-8f802c4b9e34 req-fba27e67-6e78-4b1a-b2d4-1d317524881c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.102 2 DEBUG nova.compute.manager [req-9b009315-6359-487e-b8c1-8f802c4b9e34 req-fba27e67-6e78-4b1a-b2d4-1d317524881c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Processing event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:05 compute-0 podman[301840]: 2025-10-02 12:30:05.110440357 +0000 UTC m=+0.225386762 container create a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:30:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 02 12:30:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 02 12:30:05 compute-0 ceph-mon[73668]: pgmap v1595: 305 pgs: 305 active+clean; 137 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 02 12:30:05 compute-0 systemd[1]: Started libpod-conmon-a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c.scope.
Oct 02 12:30:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:30:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:30:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:30:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:30:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f9d67856d7a3990f55c22f6cce9909beb88cc4d499bb6b131eba653dac5543/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:05 compute-0 podman[301840]: 2025-10-02 12:30:05.230156125 +0000 UTC m=+0.345102610 container init a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:30:05 compute-0 podman[301840]: 2025-10-02 12:30:05.236958528 +0000 UTC m=+0.351904963 container start a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:30:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 156 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 470 KiB/s rd, 6.6 MiB/s wr, 219 op/s
Oct 02 12:30:05 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [NOTICE]   (301860) : New worker (301862) forked
Oct 02 12:30:05 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [NOTICE]   (301860) : Loading success.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.371 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408205.3711727, 878c4166-4a08-41bd-9263-c5d4b4c7a280 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.372 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] VM Started (Lifecycle Event)
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.374 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.378 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.382 2 INFO nova.virt.libvirt.driver [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Instance spawned successfully.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.382 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.401 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.409 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.410 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.410 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.411 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.411 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.411 2 DEBUG nova.virt.libvirt.driver [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.415 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.444 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.445 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408205.3713832, 878c4166-4a08-41bd-9263-c5d4b4c7a280 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.446 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] VM Paused (Lifecycle Event)
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.472 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.476 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408205.3772812, 878c4166-4a08-41bd-9263-c5d4b4c7a280 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.476 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] VM Resumed (Lifecycle Event)
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.481 2 INFO nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Took 9.24 seconds to spawn the instance on the hypervisor.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.481 2 DEBUG nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.507 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.512 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.570 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.593 2 INFO nova.compute.manager [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Took 10.51 seconds to build instance.
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.622 2 DEBUG oslo_concurrency.lockutils [None req-d25701fe-afb9-4026-b50a-44d4dbf4c91a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:05 compute-0 nova_compute[256940]: 2025-10-02 12:30:05.739 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:30:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:06 compute-0 ceph-mon[73668]: osdmap e236: 3 total, 3 up, 3 in
Oct 02 12:30:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:30:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:30:06 compute-0 ceph-mon[73668]: pgmap v1597: 305 pgs: 305 active+clean; 156 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 470 KiB/s rd, 6.6 MiB/s wr, 219 op/s
Oct 02 12:30:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:30:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:06.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.198 2 DEBUG nova.compute.manager [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.199 2 DEBUG oslo_concurrency.lockutils [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.199 2 DEBUG oslo_concurrency.lockutils [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.199 2 DEBUG oslo_concurrency.lockutils [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.200 2 DEBUG nova.compute.manager [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.200 2 WARNING nova.compute.manager [req-f5b3def3-6289-4fd8-8a15-58fa82003ca5 req-21d8e6ef-faae-452e-9247-aa64aebdea2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state active and task_state None.
Oct 02 12:30:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 5.9 MiB/s wr, 244 op/s
Oct 02 12:30:07 compute-0 podman[301872]: 2025-10-02 12:30:07.381815042 +0000 UTC m=+0.051131299 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:30:07 compute-0 podman[301873]: 2025-10-02 12:30:07.425253504 +0000 UTC m=+0.089618565 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:07 compute-0 NetworkManager[44981]: <info>  [1759408207.5097] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Oct 02 12:30:07 compute-0 NetworkManager[44981]: <info>  [1759408207.5106] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 ovn_controller[148123]: 2025-10-02T12:30:07Z|00221|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:30:07 compute-0 ovn_controller[148123]: 2025-10-02T12:30:07Z|00222|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 nova_compute[256940]: 2025-10-02 12:30:07.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:08.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:08 compute-0 ceph-mon[73668]: pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 5.9 MiB/s wr, 244 op/s
Oct 02 12:30:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 02 12:30:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 4.1 MiB/s wr, 208 op/s
Oct 02 12:30:09 compute-0 nova_compute[256940]: 2025-10-02 12:30:09.283 2 DEBUG nova.compute.manager [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-changed-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:09 compute-0 nova_compute[256940]: 2025-10-02 12:30:09.283 2 DEBUG nova.compute.manager [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing instance network info cache due to event network-changed-fff1abf1-c689-4533-86c7-a3d1075ea6de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:09 compute-0 nova_compute[256940]: 2025-10-02 12:30:09.283 2 DEBUG oslo_concurrency.lockutils [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:09 compute-0 nova_compute[256940]: 2025-10-02 12:30:09.284 2 DEBUG oslo_concurrency.lockutils [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:09 compute-0 nova_compute[256940]: 2025-10-02 12:30:09.284 2 DEBUG nova.network.neutron [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing network info cache for port fff1abf1-c689-4533-86c7-a3d1075ea6de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 02 12:30:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 02 12:30:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:10 compute-0 nova_compute[256940]: 2025-10-02 12:30:10.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:10 compute-0 ceph-mon[73668]: pgmap v1599: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 4.1 MiB/s wr, 208 op/s
Oct 02 12:30:10 compute-0 ceph-mon[73668]: osdmap e237: 3 total, 3 up, 3 in
Oct 02 12:30:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Oct 02 12:30:11 compute-0 kernel: tap555fa830-93 (unregistering): left promiscuous mode
Oct 02 12:30:11 compute-0 NetworkManager[44981]: <info>  [1759408211.5923] device (tap555fa830-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:11 compute-0 ovn_controller[148123]: 2025-10-02T12:30:11Z|00223|binding|INFO|Releasing lport 555fa830-9397-48fc-a495-4e130004193f from this chassis (sb_readonly=0)
Oct 02 12:30:11 compute-0 ovn_controller[148123]: 2025-10-02T12:30:11Z|00224|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f down in Southbound
Oct 02 12:30:11 compute-0 ovn_controller[148123]: 2025-10-02T12:30:11Z|00225|binding|INFO|Removing iface tap555fa830-93 ovn-installed in OVS
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.651 2 DEBUG nova.network.neutron [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updated VIF entry in instance network info cache for port fff1abf1-c689-4533-86c7-a3d1075ea6de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.651 2 DEBUG nova.network.neutron [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:11.659 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e4:e0 10.100.0.14'], port_security=['fa:16:3e:d9:e4:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a059e667-9c76-4f71-92d2-9490d75ce24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=555fa830-9397-48fc-a495-4e130004193f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:11.660 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 555fa830-9397-48fc-a495-4e130004193f in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:30:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:11.662 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:11.663 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[945e0ec3-ec12-46a0-9808-093216416d3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:11.664 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.678 2 DEBUG oslo_concurrency.lockutils [req-31526dca-003c-4a59-ac27-93df7115c8fb req-25c6fe0e-4e5c-42cc-b896-f2c5e7420a1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:11 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 02 12:30:11 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000045.scope: Consumed 13.741s CPU time.
Oct 02 12:30:11 compute-0 systemd-machined[210927]: Machine qemu-29-instance-00000045 terminated.
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.892 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance shutdown successfully after 16 seconds.
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.898 2 INFO nova.virt.libvirt.driver [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance destroyed successfully.
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.904 2 INFO nova.virt.libvirt.driver [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance destroyed successfully.
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.905 2 DEBUG nova.virt.libvirt.vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},
tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:53Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.906 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.906 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.907 2 DEBUG os_vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap555fa830-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:11 compute-0 nova_compute[256940]: 2025-10-02 12:30:11.918 2 INFO os_vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93')
Oct 02 12:30:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 02 12:30:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:12 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [NOTICE]   (301232) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:12 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [NOTICE]   (301232) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:12 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [WARNING]  (301232) : Exiting Master process...
Oct 02 12:30:12 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [ALERT]    (301232) : Current worker (301234) exited with code 143 (Terminated)
Oct 02 12:30:12 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[301228]: [WARNING]  (301232) : All workers exited. Exiting... (0)
Oct 02 12:30:12 compute-0 systemd[1]: libpod-a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b.scope: Deactivated successfully.
Oct 02 12:30:12 compute-0 podman[301941]: 2025-10-02 12:30:12.552183024 +0000 UTC m=+0.777001364 container died a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:30:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.877 2 DEBUG nova.compute.manager [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.879 2 DEBUG oslo_concurrency.lockutils [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.880 2 DEBUG oslo_concurrency.lockutils [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.880 2 DEBUG oslo_concurrency.lockutils [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.880 2 DEBUG nova.compute.manager [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:12 compute-0 nova_compute[256940]: 2025-10-02 12:30:12.881 2 WARNING nova.compute.manager [req-c71eb3c8-85bd-40a7-b767-67cbe68ac0dd req-0d869418-b501-4fbd-a6ff-a80992456e89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received unexpected event network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f for instance with vm_state active and task_state rebuilding.
Oct 02 12:30:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 871 KiB/s wr, 188 op/s
Oct 02 12:30:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 02 12:30:13 compute-0 nova_compute[256940]: 2025-10-02 12:30:13.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.993 2 DEBUG nova.compute.manager [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.996 2 DEBUG oslo_concurrency.lockutils [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.997 2 DEBUG oslo_concurrency.lockutils [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.997 2 DEBUG oslo_concurrency.lockutils [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.998 2 DEBUG nova.compute.manager [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:14 compute-0 nova_compute[256940]: 2025-10-02 12:30:14.999 2 WARNING nova.compute.manager [req-fe40460d-1315-455f-bcc0-6df57e57dfd7 req-8c36ffea-91e2-487c-860a-c02568088f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received unexpected event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f for instance with vm_state active and task_state rebuilding.
Oct 02 12:30:15 compute-0 ceph-mon[73668]: pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Oct 02 12:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07a7962f2f24abebc366bd531083efe657a322693c7b739bdee558d709db750-merged.mount: Deactivated successfully.
Oct 02 12:30:15 compute-0 sudo[302002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:15 compute-0 sudo[302002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:15 compute-0 sudo[302002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:15 compute-0 sudo[302027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:15 compute-0 sudo[302027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:15 compute-0 sudo[302027]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 124 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 134 op/s
Oct 02 12:30:15 compute-0 nova_compute[256940]: 2025-10-02 12:30:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:15 compute-0 podman[301941]: 2025-10-02 12:30:15.727492865 +0000 UTC m=+3.952311235 container cleanup a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:30:15 compute-0 systemd[1]: libpod-conmon-a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b.scope: Deactivated successfully.
Oct 02 12:30:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:16 compute-0 ceph-mon[73668]: pgmap v1602: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 871 KiB/s wr, 188 op/s
Oct 02 12:30:16 compute-0 ceph-mon[73668]: osdmap e238: 3 total, 3 up, 3 in
Oct 02 12:30:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:16.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:16 compute-0 nova_compute[256940]: 2025-10-02 12:30:16.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:17 compute-0 podman[302055]: 2025-10-02 12:30:17.091822176 +0000 UTC m=+1.325546898 container remove a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.101 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[faadc8bf-49f1-476f-a560-40b743a1011e]: (4, ('Thu Oct  2 12:30:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b)\na0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b\nThu Oct  2 12:30:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (a0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b)\na0966df96a0b905e3280b001bc1b03d41c67440bc8a107b5dc4184aa8057de7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.103 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[03dd9209-efd7-484f-bbaf-035260cee198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.104 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:17 compute-0 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:30:17 compute-0 nova_compute[256940]: 2025-10-02 12:30:17.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:17 compute-0 nova_compute[256940]: 2025-10-02 12:30:17.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.131 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[635b3827-4397-4e45-8300-cc83477961da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.160 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[59fa71ea-d1fb-4d86-bb25-1291cc3aab6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.162 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1240607d-e9dd-485d-aa2b-3852369a756f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.187 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d1679ca3-3c4a-443f-adce-2183f958b7c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608163, 'reachable_time': 27116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302070, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.190 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:17.190 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e81f5c-aa3e-472b-bc51-178c0c2a728a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Oct 02 12:30:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:18 compute-0 ceph-mon[73668]: pgmap v1604: 305 pgs: 305 active+clean; 124 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 134 op/s
Oct 02 12:30:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 669 KiB/s rd, 19 KiB/s wr, 77 op/s
Oct 02 12:30:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 02 12:30:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:20 compute-0 ceph-mon[73668]: pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Oct 02 12:30:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3677734845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2337983430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1106284102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-0 nova_compute[256940]: 2025-10-02 12:30:20.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 02 12:30:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 121 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 02 12:30:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 02 12:30:21 compute-0 nova_compute[256940]: 2025-10-02 12:30:21.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:22 compute-0 ceph-mon[73668]: pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 669 KiB/s rd, 19 KiB/s wr, 77 op/s
Oct 02 12:30:22 compute-0 ceph-mon[73668]: osdmap e239: 3 total, 3 up, 3 in
Oct 02 12:30:22 compute-0 sudo[302077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:22 compute-0 sudo[302077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-0 sudo[302077]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-0 sudo[302120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:22 compute-0 sudo[302120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-0 sudo[302120]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-0 podman[302102]: 2025-10-02 12:30:22.650206137 +0000 UTC m=+0.109152742 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 02 12:30:22 compute-0 podman[302103]: 2025-10-02 12:30:22.66648792 +0000 UTC m=+0.117601236 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:30:22 compute-0 sudo[302167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:22 compute-0 sudo[302167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-0 sudo[302167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-0 sudo[302192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:30:22 compute-0 sudo[302192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 155 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Oct 02 12:30:23 compute-0 ceph-mon[73668]: pgmap v1607: 305 pgs: 305 active+clean; 121 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 02 12:30:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1752376986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2545413803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:23 compute-0 podman[302288]: 2025-10-02 12:30:23.979297204 +0000 UTC m=+0.610940859 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:30:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:30:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:24 compute-0 podman[302288]: 2025-10-02 12:30:24.509669137 +0000 UTC m=+1.141312802 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:30:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 6.3 MiB/s wr, 103 op/s
Oct 02 12:30:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:30:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:25 compute-0 nova_compute[256940]: 2025-10-02 12:30:25.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:25 compute-0 ceph-mon[73668]: pgmap v1609: 305 pgs: 305 active+clean; 155 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Oct 02 12:30:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:26.465 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:26.467 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:26.468 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.894 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408211.892558, a059e667-9c76-4f71-92d2-9490d75ce24d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.895 2 INFO nova.compute.manager [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Stopped (Lifecycle Event)
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.919 2 DEBUG nova.compute.manager [None req-b0fd6395-7340-4a45-98f1-363499ee49c3 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.923 2 DEBUG nova.compute.manager [None req-b0fd6395-7340-4a45-98f1-363499ee49c3 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuilding, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:26 compute-0 nova_compute[256940]: 2025-10-02 12:30:26.942 2 INFO nova.compute.manager [None req-b0fd6395-7340-4a45-98f1-363499ee49c3 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] During sync_power_state the instance has a pending task (rebuilding). Skip.
Oct 02 12:30:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:27 compute-0 ceph-mon[73668]: pgmap v1610: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 6.3 MiB/s wr, 103 op/s
Oct 02 12:30:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:27 compute-0 podman[302423]: 2025-10-02 12:30:27.995582471 +0000 UTC m=+1.250841832 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:30:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:28 compute-0 podman[302423]: 2025-10-02 12:30:28.421314827 +0000 UTC m=+1.676574178 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:30:28
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control']
Oct 02 12:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:30:28 compute-0 ceph-mon[73668]: pgmap v1611: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:30:29 compute-0 ovn_controller[148123]: 2025-10-02T12:30:29Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:8a:e3 10.100.0.3
Oct 02 12:30:29 compute-0 ovn_controller[148123]: 2025-10-02T12:30:29Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:8a:e3 10.100.0.3
Oct 02 12:30:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:30 compute-0 podman[302491]: 2025-10-02 12:30:30.35653715 +0000 UTC m=+0.525043338 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2)
Oct 02 12:30:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:30.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:30 compute-0 nova_compute[256940]: 2025-10-02 12:30:30.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:30 compute-0 podman[302491]: 2025-10-02 12:30:30.707311754 +0000 UTC m=+0.875817852 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, version=2.2.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Oct 02 12:30:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 253 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.9 MiB/s wr, 199 op/s
Oct 02 12:30:31 compute-0 ceph-mon[73668]: pgmap v1612: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1719555084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2808893995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:31 compute-0 sudo[302192]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:31 compute-0 nova_compute[256940]: 2025-10-02 12:30:31.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:30:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.688 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deleting instance files /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d_del
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.690 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deletion of /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d_del complete
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.868 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.869 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating image(s)
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.947 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:32 compute-0 nova_compute[256940]: 2025-10-02 12:30:32.990 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.026 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.037 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:33 compute-0 sudo[302598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:33 compute-0 sudo[302598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:33 compute-0 sudo[302598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.138 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.139 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.140 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.141 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:33 compute-0 sudo[302626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:33 compute-0 sudo[302626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:33 compute-0 sudo[302626]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 253 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 197 op/s
Oct 02 12:30:33 compute-0 sudo[302658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:33 compute-0 sudo[302658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:33 compute-0 sudo[302658]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:33 compute-0 sudo[302683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:30:33 compute-0 sudo[302683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.356 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.361 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 a059e667-9c76-4f71-92d2-9490d75ce24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:33 compute-0 ceph-mon[73668]: pgmap v1613: 305 pgs: 305 active+clean; 253 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.9 MiB/s wr, 199 op/s
Oct 02 12:30:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3003137044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:33 compute-0 nova_compute[256940]: 2025-10-02 12:30:33.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:33 compute-0 sudo[302683]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:30:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:30:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:30:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:30:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 18269d5f-ec7c-46d4-96e7-7799ce6252a0 does not exist
Oct 02 12:30:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 82103176-7edd-41ab-aa8e-3112330a43c3 does not exist
Oct 02 12:30:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e0381276-d260-41ed-b89d-6df4f17d8a48 does not exist
Oct 02 12:30:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:30:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:30:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:30:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:34 compute-0 sudo[302769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:34 compute-0 sudo[302769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:34 compute-0 sudo[302769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/750541670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:34 compute-0 ceph-mon[73668]: pgmap v1614: 305 pgs: 305 active+clean; 253 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 197 op/s
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:30:34 compute-0 sudo[302795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:34 compute-0 sudo[302795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:34 compute-0 sudo[302795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:34 compute-0 nova_compute[256940]: 2025-10-02 12:30:34.678 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 a059e667-9c76-4f71-92d2-9490d75ce24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:34 compute-0 sudo[302820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:34 compute-0 sudo[302820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:34 compute-0 sudo[302820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:34 compute-0 sudo[302861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:30:34 compute-0 sudo[302861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:34 compute-0 nova_compute[256940]: 2025-10-02 12:30:34.828 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.064 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.065 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Ensure instance console log exists: /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.066 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.067 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.067 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.070 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start _get_guest_xml network_info=[{"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.075 2 WARNING nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.081 2 DEBUG nova.virt.libvirt.host [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.081 2 DEBUG nova.virt.libvirt.host [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.086 2 DEBUG nova.virt.libvirt.host [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.087 2 DEBUG nova.virt.libvirt.host [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.088 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.088 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.089 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.090 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.090 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.090 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.090 2 DEBUG nova.virt.hardware [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.090 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.108 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:35 compute-0 sudo[302993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:35 compute-0 sudo[302993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:35 compute-0 sudo[302993]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:35 compute-0 podman[302984]: 2025-10-02 12:30:35.154467088 +0000 UTC m=+0.024018880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 269 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Oct 02 12:30:35 compute-0 sudo[303024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:35 compute-0 sudo[303024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:35 compute-0 sudo[303024]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:35 compute-0 podman[302984]: 2025-10-02 12:30:35.366523011 +0000 UTC m=+0.236074783 container create 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:30:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232359257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.537 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.581 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.586 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:35 compute-0 systemd[1]: Started libpod-conmon-499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1.scope.
Oct 02 12:30:35 compute-0 nova_compute[256940]: 2025-10-02 12:30:35.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:30:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/232359257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:36 compute-0 podman[302984]: 2025-10-02 12:30:36.012535118 +0000 UTC m=+0.882086910 container init 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:36 compute-0 podman[302984]: 2025-10-02 12:30:36.021569948 +0000 UTC m=+0.891121720 container start 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:36 compute-0 naughty_einstein[303090]: 167 167
Oct 02 12:30:36 compute-0 systemd[1]: libpod-499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1.scope: Deactivated successfully.
Oct 02 12:30:36 compute-0 podman[302984]: 2025-10-02 12:30:36.126918932 +0000 UTC m=+0.996470724 container attach 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:30:36 compute-0 podman[302984]: 2025-10-02 12:30:36.127395514 +0000 UTC m=+0.996947276 container died 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:30:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315921504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.200 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.202 2 DEBUG nova.virt.libvirt.vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:32Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.203 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.204 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.206 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <uuid>a059e667-9c76-4f71-92d2-9490d75ce24d</uuid>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <name>instance-00000045</name>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-536456714</nova:name>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:30:35</nova:creationTime>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <nova:port uuid="555fa830-9397-48fc-a495-4e130004193f">
Oct 02 12:30:36 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="serial">a059e667-9c76-4f71-92d2-9490d75ce24d</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="uuid">a059e667-9c76-4f71-92d2-9490d75ce24d</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a059e667-9c76-4f71-92d2-9490d75ce24d_disk">
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config">
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:30:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d9:e4:e0"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <target dev="tap555fa830-93"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/console.log" append="off"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:30:36 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:30:36 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:36 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:36 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:36 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.207 2 DEBUG nova.compute.manager [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Preparing to wait for external event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.207 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.207 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.208 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.208 2 DEBUG nova.virt.libvirt.vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:32Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.208 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.209 2 DEBUG nova.network.os_vif_util [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.209 2 DEBUG os_vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap555fa830-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap555fa830-93, col_values=(('external_ids', {'iface-id': '555fa830-9397-48fc-a495-4e130004193f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:e4:e0', 'vm-uuid': 'a059e667-9c76-4f71-92d2-9490d75ce24d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:36 compute-0 NetworkManager[44981]: <info>  [1759408236.2169] manager: (tap555fa830-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.227 2 INFO os_vif [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93')
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.415 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.415 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.416 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:d9:e4:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.416 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Using config drive
Oct 02 12:30:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:36.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.446 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.468 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:36 compute-0 nova_compute[256940]: 2025-10-02 12:30:36.511 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'keypairs' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5087a82a85f1888c252c6804205acc196866fa98ce267d34aacb43afeb6f43c3-merged.mount: Deactivated successfully.
Oct 02 12:30:37 compute-0 ceph-mon[73668]: pgmap v1615: 305 pgs: 305 active+clean; 269 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Oct 02 12:30:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/315921504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:37 compute-0 nova_compute[256940]: 2025-10-02 12:30:37.114 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Creating config drive at /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config
Oct 02 12:30:37 compute-0 nova_compute[256940]: 2025-10-02 12:30:37.121 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5p7s6k1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:37 compute-0 nova_compute[256940]: 2025-10-02 12:30:37.259 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5p7s6k1s" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct 02 12:30:37 compute-0 nova_compute[256940]: 2025-10-02 12:30:37.293 2 DEBUG nova.storage.rbd_utils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:37 compute-0 nova_compute[256940]: 2025-10-02 12:30:37.298 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:37 compute-0 podman[302984]: 2025-10-02 12:30:37.697997732 +0000 UTC m=+2.567549494 container remove 499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_einstein, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:37 compute-0 systemd[1]: libpod-conmon-499fa591a076114921571bc13c2330babe232f18c7b1424e85744493bd4884f1.scope: Deactivated successfully.
Oct 02 12:30:37 compute-0 podman[303190]: 2025-10-02 12:30:37.857905531 +0000 UTC m=+0.088694952 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:30:37 compute-0 podman[303194]: 2025-10-02 12:30:37.877048027 +0000 UTC m=+0.102738779 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:30:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:37.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:37 compute-0 podman[303237]: 2025-10-02 12:30:37.892783676 +0000 UTC m=+0.026806351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:38 compute-0 podman[303237]: 2025-10-02 12:30:38.125407981 +0000 UTC m=+0.259430636 container create 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 systemd[1]: Started libpod-conmon-697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c.scope.
Oct 02 12:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:38 compute-0 podman[303237]: 2025-10-02 12:30:38.531854259 +0000 UTC m=+0.665877014 container init 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:30:38 compute-0 podman[303237]: 2025-10-02 12:30:38.539184405 +0000 UTC m=+0.673207100 container start 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.587 2 DEBUG oslo_concurrency.processutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config a059e667-9c76-4f71-92d2-9490d75ce24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.588 2 INFO nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deleting local config drive /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d/disk.config because it was imported into RBD.
Oct 02 12:30:38 compute-0 kernel: tap555fa830-93: entered promiscuous mode
Oct 02 12:30:38 compute-0 NetworkManager[44981]: <info>  [1759408238.6617] manager: (tap555fa830-93): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct 02 12:30:38 compute-0 ovn_controller[148123]: 2025-10-02T12:30:38Z|00226|binding|INFO|Claiming lport 555fa830-9397-48fc-a495-4e130004193f for this chassis.
Oct 02 12:30:38 compute-0 ovn_controller[148123]: 2025-10-02T12:30:38Z|00227|binding|INFO|555fa830-9397-48fc-a495-4e130004193f: Claiming fa:16:3e:d9:e4:e0 10.100.0.14
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.668 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e4:e0 10.100.0.14'], port_security=['fa:16:3e:d9:e4:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a059e667-9c76-4f71-92d2-9490d75ce24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=555fa830-9397-48fc-a495-4e130004193f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.669 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 555fa830-9397-48fc-a495-4e130004193f in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.672 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:30:38 compute-0 ovn_controller[148123]: 2025-10-02T12:30:38Z|00228|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f ovn-installed in OVS
Oct 02 12:30:38 compute-0 ovn_controller[148123]: 2025-10-02T12:30:38Z|00229|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f up in Southbound
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.689 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d592743c-6434-4111-a08b-725b309e1210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.692 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.695 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.695 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4e108255-5177-4f65-b7c4-c21cde19b807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e8264c5f-8a61-437d-8a2f-8a255aef4039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 nova_compute[256940]: 2025-10-02 12:30:38.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-0 systemd-udevd[303277]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:38 compute-0 systemd-machined[210927]: New machine qemu-31-instance-00000045.
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.724 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[5c76c88f-63ad-4294-8103-5c9a0dcc454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 NetworkManager[44981]: <info>  [1759408238.7280] device (tap555fa830-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:38 compute-0 NetworkManager[44981]: <info>  [1759408238.7295] device (tap555fa830-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:38 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-00000045.
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.750 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd447dd-9cbd-42ac-af38-f6c09398325c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.784 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6649df7b-2868-4fd7-a50a-6e8fdbb14025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 systemd-udevd[303279]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.793 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c804cea5-640a-49f2-83ee-86c136c95232]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 NetworkManager[44981]: <info>  [1759408238.7985] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.837 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c3e0dc-4019-47b9-9ae2-d330ec4d0506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.840 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[837ad849-c833-41bd-a45a-3ed7214653b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 podman[303237]: 2025-10-02 12:30:38.840174865 +0000 UTC m=+0.974197550 container attach 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:38 compute-0 NetworkManager[44981]: <info>  [1759408238.8678] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.880 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[70a2a70c-d7ce-4952-93eb-23a743c50bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.902 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12cfd682-ff25-4184-b5a5-654088d82b89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 42287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303308, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.924 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11280a66-9738-4340-be79-a2fbf5965ba5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613435, 'tstamp': 613435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303309, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.949 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc33928-03e6-459f-87c3-4863e5654c96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 42287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303310, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:38.993 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3b236727-edb8-4f14-8f0c-3ea129dd3993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.041 2 DEBUG nova.compute.manager [req-8b76de06-17a9-4f20-a6cc-be825369e445 req-0b6f64ce-78ce-44e3-8a99-bf78ac85d387 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.042 2 DEBUG oslo_concurrency.lockutils [req-8b76de06-17a9-4f20-a6cc-be825369e445 req-0b6f64ce-78ce-44e3-8a99-bf78ac85d387 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.042 2 DEBUG oslo_concurrency.lockutils [req-8b76de06-17a9-4f20-a6cc-be825369e445 req-0b6f64ce-78ce-44e3-8a99-bf78ac85d387 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.043 2 DEBUG oslo_concurrency.lockutils [req-8b76de06-17a9-4f20-a6cc-be825369e445 req-0b6f64ce-78ce-44e3-8a99-bf78ac85d387 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.043 2 DEBUG nova.compute.manager [req-8b76de06-17a9-4f20-a6cc-be825369e445 req-0b6f64ce-78ce-44e3-8a99-bf78ac85d387 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Processing event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.068 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[34145cc6-1636-4f1d-a552-12419d809b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.070 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.070 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.070 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 NetworkManager[44981]: <info>  [1759408239.0739] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.075 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.078 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.079 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe5fd86-d3f8-4e55-a61b-dbcb0545d1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:39 compute-0 ovn_controller[148123]: 2025-10-02T12:30:39Z|00230|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.082 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:39.083 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:39 compute-0 nova_compute[256940]: 2025-10-02 12:30:39.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 934 KiB/s rd, 3.1 MiB/s wr, 138 op/s
Oct 02 12:30:39 compute-0 laughing_mclaren[303256]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:30:39 compute-0 laughing_mclaren[303256]: --> relative data size: 1.0
Oct 02 12:30:39 compute-0 laughing_mclaren[303256]: --> All data devices are unavailable
Oct 02 12:30:39 compute-0 systemd[1]: libpod-697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c.scope: Deactivated successfully.
Oct 02 12:30:39 compute-0 podman[303356]: 2025-10-02 12:30:39.455661237 +0000 UTC m=+0.033974153 container died 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:30:39 compute-0 ceph-mon[73668]: pgmap v1616: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct 02 12:30:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a71ff6b445df2cf9341d2f1f3b5f5abd0668fb188f687ce459e42cd7363a84d-merged.mount: Deactivated successfully.
Oct 02 12:30:40 compute-0 podman[303356]: 2025-10-02 12:30:40.153538772 +0000 UTC m=+0.731851698 container remove 697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:40 compute-0 systemd[1]: libpod-conmon-697cd39318f5c5a8badd7df2512159ea3a02a5267b88e15d3b24515195e9c23c.scope: Deactivated successfully.
Oct 02 12:30:40 compute-0 sudo[302861]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0067637812907128236 of space, bias 1.0, pg target 2.029134387213847 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:30:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:40 compute-0 podman[303373]: 2025-10-02 12:30:40.214208112 +0000 UTC m=+0.763894561 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:40 compute-0 sudo[303417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:40 compute-0 sudo[303417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:40 compute-0 sudo[303417]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:40 compute-0 podman[303373]: 2025-10-02 12:30:40.370601982 +0000 UTC m=+0.920288421 container create aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:30:40 compute-0 sudo[303443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:40 compute-0 sudo[303443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:40 compute-0 sudo[303443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:40 compute-0 sudo[303468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:40 compute-0 sudo[303468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:40 compute-0 sudo[303468]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:40 compute-0 systemd[1]: Started libpod-conmon-aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9.scope.
Oct 02 12:30:40 compute-0 sudo[303493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:30:40 compute-0 sudo[303493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd75dc6c49da4aaa4cce32f3211529f1fa4c14b5623143722b29656599b4a306/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.614 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408240.6144469, a059e667-9c76-4f71-92d2-9490d75ce24d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.615 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Started (Lifecycle Event)
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.617 2 DEBUG nova.compute.manager [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.620 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.630 2 INFO nova.virt.libvirt.driver [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance spawned successfully.
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.630 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.636 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.639 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.648 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.648 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.649 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.649 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.649 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.650 2 DEBUG nova.virt.libvirt.driver [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.657 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.657 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408240.616666, a059e667-9c76-4f71-92d2-9490d75ce24d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.657 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Paused (Lifecycle Event)
Oct 02 12:30:40 compute-0 podman[303373]: 2025-10-02 12:30:40.667998741 +0000 UTC m=+1.217685220 container init aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:30:40 compute-0 podman[303373]: 2025-10-02 12:30:40.683426813 +0000 UTC m=+1.233113252 container start aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.682 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.687 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408240.619397, a059e667-9c76-4f71-92d2-9490d75ce24d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.687 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Resumed (Lifecycle Event)
Oct 02 12:30:40 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [NOTICE]   (303532) : New worker (303540) forked
Oct 02 12:30:40 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [NOTICE]   (303532) : Loading success.
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.716 2 DEBUG nova.compute.manager [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.723 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.725 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.762 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.852 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.853 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.853 2 DEBUG nova.objects.instance [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:30:40 compute-0 ceph-mon[73668]: pgmap v1617: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 934 KiB/s rd, 3.1 MiB/s wr, 138 op/s
Oct 02 12:30:40 compute-0 nova_compute[256940]: 2025-10-02 12:30:40.924 2 DEBUG oslo_concurrency.lockutils [None req-4f3c14ab-7c25-4473-9f92-5b810a319c35 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:40.950631835 +0000 UTC m=+0.030827343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.054551023 +0000 UTC m=+0.134746501 container create cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:41 compute-0 systemd[1]: Started libpod-conmon-cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0.scope.
Oct 02 12:30:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.161 2 DEBUG nova.compute.manager [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.162 2 DEBUG oslo_concurrency.lockutils [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.162 2 DEBUG oslo_concurrency.lockutils [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.162 2 DEBUG oslo_concurrency.lockutils [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.162 2 DEBUG nova.compute.manager [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.162 2 WARNING nova.compute.manager [req-84701d18-45d1-42cd-8b82-b044d8860434 req-6a3a2816-41ff-42fe-a707-e3421f220f45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received unexpected event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f for instance with vm_state active and task_state None.
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 336 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 221 op/s
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.281449923 +0000 UTC m=+0.361645421 container init cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.293218951 +0000 UTC m=+0.373414429 container start cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:30:41 compute-0 recursing_feynman[303592]: 167 167
Oct 02 12:30:41 compute-0 systemd[1]: libpod-cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0.scope: Deactivated successfully.
Oct 02 12:30:41 compute-0 conmon[303592]: conmon cc3b2235f7bd7477bcb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0.scope/container/memory.events
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.319504309 +0000 UTC m=+0.399699817 container attach cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.321501679 +0000 UTC m=+0.401697157 container died cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d95d766b1e4cd185c035ae4215353023a8c19b9483e3a5fc64e12561135c9d-merged.mount: Deactivated successfully.
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.617 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.617 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:41 compute-0 nova_compute[256940]: 2025-10-02 12:30:41.618 2 DEBUG nova.objects.instance [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:41 compute-0 podman[303576]: 2025-10-02 12:30:41.716566027 +0000 UTC m=+0.796761505 container remove cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:41 compute-0 systemd[1]: libpod-conmon-cc3b2235f7bd7477bcb15823a270bd05e7d831a8ca653f8c8d27c8c9f7ecb8a0.scope: Deactivated successfully.
Oct 02 12:30:41 compute-0 podman[303616]: 2025-10-02 12:30:41.974441883 +0000 UTC m=+0.097864625 container create 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:41.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:42 compute-0 podman[303616]: 2025-10-02 12:30:41.906427377 +0000 UTC m=+0.029850139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:42 compute-0 nova_compute[256940]: 2025-10-02 12:30:42.082 2 DEBUG nova.objects.instance [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:42 compute-0 systemd[1]: Started libpod-conmon-1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192.scope.
Oct 02 12:30:42 compute-0 nova_compute[256940]: 2025-10-02 12:30:42.119 2 DEBUG nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:30:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748e6705dc331fbee1e63b8ab13d6469e478f9a27b9478a9e408a54726f348e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748e6705dc331fbee1e63b8ab13d6469e478f9a27b9478a9e408a54726f348e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748e6705dc331fbee1e63b8ab13d6469e478f9a27b9478a9e408a54726f348e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748e6705dc331fbee1e63b8ab13d6469e478f9a27b9478a9e408a54726f348e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:42 compute-0 podman[303616]: 2025-10-02 12:30:42.254618265 +0000 UTC m=+0.378041027 container init 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 12:30:42 compute-0 podman[303616]: 2025-10-02 12:30:42.26307902 +0000 UTC m=+0.386501762 container start 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:42 compute-0 podman[303616]: 2025-10-02 12:30:42.276360187 +0000 UTC m=+0.399782939 container attach 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:42 compute-0 nova_compute[256940]: 2025-10-02 12:30:42.304 2 DEBUG nova.policy [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:30:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:42.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:43 compute-0 amazing_poincare[303630]: {
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:     "1": [
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:         {
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "devices": [
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "/dev/loop3"
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             ],
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "lv_name": "ceph_lv0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "lv_size": "7511998464",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "name": "ceph_lv0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "tags": {
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.cluster_name": "ceph",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.crush_device_class": "",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.encrypted": "0",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.osd_id": "1",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.type": "block",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:                 "ceph.vdo": "0"
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             },
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "type": "block",
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:             "vg_name": "ceph_vg0"
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:         }
Oct 02 12:30:43 compute-0 amazing_poincare[303630]:     ]
Oct 02 12:30:43 compute-0 amazing_poincare[303630]: }
Oct 02 12:30:43 compute-0 systemd[1]: libpod-1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192.scope: Deactivated successfully.
Oct 02 12:30:43 compute-0 podman[303616]: 2025-10-02 12:30:43.046486625 +0000 UTC m=+1.169909357 container died 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:43 compute-0 ceph-mon[73668]: pgmap v1618: 305 pgs: 305 active+clean; 336 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 221 op/s
Oct 02 12:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-748e6705dc331fbee1e63b8ab13d6469e478f9a27b9478a9e408a54726f348e8-merged.mount: Deactivated successfully.
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.251 2 DEBUG nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Successfully created port: 62fb85d7-3242-47b1-8854-4bf595996914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:30:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 298 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.1 MiB/s wr, 329 op/s
Oct 02 12:30:43 compute-0 podman[303616]: 2025-10-02 12:30:43.348897081 +0000 UTC m=+1.472319843 container remove 1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:43 compute-0 systemd[1]: libpod-conmon-1df013fcfccff270162893b303e3908b4fd1327e795a5e64db7bcfecd1ee3192.scope: Deactivated successfully.
Oct 02 12:30:43 compute-0 sudo[303493]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:43 compute-0 sudo[303658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:43 compute-0 sudo[303658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:43 compute-0 sudo[303658]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:43 compute-0 sudo[303683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:43 compute-0 sudo[303683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:43 compute-0 sudo[303683]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:43 compute-0 sudo[303708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:43 compute-0 sudo[303708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:43 compute-0 sudo[303708]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:43 compute-0 sudo[303733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:30:43 compute-0 sudo[303733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.867 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.869 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.869 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.869 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.870 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.871 2 INFO nova.compute.manager [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Terminating instance
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.873 2 DEBUG nova.compute.manager [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:30:43 compute-0 kernel: tap555fa830-93 (unregistering): left promiscuous mode
Oct 02 12:30:43 compute-0 NetworkManager[44981]: <info>  [1759408243.9439] device (tap555fa830-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:43 compute-0 ovn_controller[148123]: 2025-10-02T12:30:43Z|00231|binding|INFO|Releasing lport 555fa830-9397-48fc-a495-4e130004193f from this chassis (sb_readonly=0)
Oct 02 12:30:43 compute-0 ovn_controller[148123]: 2025-10-02T12:30:43Z|00232|binding|INFO|Setting lport 555fa830-9397-48fc-a495-4e130004193f down in Southbound
Oct 02 12:30:43 compute-0 ovn_controller[148123]: 2025-10-02T12:30:43Z|00233|binding|INFO|Removing iface tap555fa830-93 ovn-installed in OVS
Oct 02 12:30:43 compute-0 nova_compute[256940]: 2025-10-02 12:30:43.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.019 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e4:e0 10.100.0.14'], port_security=['fa:16:3e:d9:e4:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a059e667-9c76-4f71-92d2-9490d75ce24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=555fa830-9397-48fc-a495-4e130004193f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.021 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 555fa830-9397-48fc-a495-4e130004193f in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.023 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.025 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d4511128-14ae-4f7a-85fa-a9152a1e8d9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.026 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:30:44 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 02 12:30:44 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000045.scope: Consumed 4.519s CPU time.
Oct 02 12:30:44 compute-0 systemd-machined[210927]: Machine qemu-31-instance-00000045 terminated.
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.083 2 DEBUG nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Successfully updated port: 62fb85d7-3242-47b1-8854-4bf595996914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.112 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.112 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.113 2 DEBUG nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.115 2 INFO nova.virt.libvirt.driver [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Instance destroyed successfully.
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.115 2 DEBUG nova.objects.instance [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid a059e667-9c76-4f71-92d2-9490d75ce24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.12729932 +0000 UTC m=+0.068746596 container create df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.148 2 DEBUG nova.virt.libvirt.vif [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-536456714',display_name='tempest-ServerDiskConfigTestJSON-server-536456714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-536456714',id=69,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-zvu4uxk3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:40Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=a059e667-9c76-4f71-92d2-9490d75ce24d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.149 2 DEBUG nova.network.os_vif_util [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "555fa830-9397-48fc-a495-4e130004193f", "address": "fa:16:3e:d9:e4:e0", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap555fa830-93", "ovs_interfaceid": "555fa830-9397-48fc-a495-4e130004193f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.150 2 DEBUG nova.network.os_vif_util [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.150 2 DEBUG os_vif [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap555fa830-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.161 2 INFO os_vif [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e4:e0,bridge_name='br-int',has_traffic_filtering=True,id=555fa830-9397-48fc-a495-4e130004193f,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap555fa830-93')
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.086233637 +0000 UTC m=+0.027680933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.186 2 DEBUG nova.compute.manager [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-changed-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.186 2 DEBUG nova.compute.manager [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing instance network info cache due to event network-changed-62fb85d7-3242-47b1-8854-4bf595996914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.187 2 DEBUG oslo_concurrency.lockutils [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3250871919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:44 compute-0 systemd[1]: Started libpod-conmon-df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec.scope.
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.287 2 DEBUG nova.compute.manager [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.287 2 DEBUG oslo_concurrency.lockutils [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.288 2 DEBUG oslo_concurrency.lockutils [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.288 2 DEBUG oslo_concurrency.lockutils [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.288 2 DEBUG nova.compute.manager [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.288 2 DEBUG nova.compute.manager [req-a3bd42f0-aab6-4d3d-9aad-b2b83131130b req-66765688-e09d-4b97-9979-7e1953d2f965 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-unplugged-555fa830-9397-48fc-a495-4e130004193f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.300 2 WARNING nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:30:44 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [NOTICE]   (303532) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:44 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [NOTICE]   (303532) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:44 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [WARNING]  (303532) : Exiting Master process...
Oct 02 12:30:44 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [ALERT]    (303532) : Current worker (303540) exited with code 143 (Terminated)
Oct 02 12:30:44 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[303520]: [WARNING]  (303532) : All workers exited. Exiting... (0)
Oct 02 12:30:44 compute-0 systemd[1]: libpod-aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9.scope: Deactivated successfully.
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.333047443 +0000 UTC m=+0.274494749 container init df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:44 compute-0 podman[303867]: 2025-10-02 12:30:44.33493576 +0000 UTC m=+0.072554032 container died aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.344408621 +0000 UTC m=+0.285855897 container start df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:30:44 compute-0 gallant_ride[303868]: 167 167
Oct 02 12:30:44 compute-0 systemd[1]: libpod-df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec.scope: Deactivated successfully.
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.380765694 +0000 UTC m=+0.322212980 container attach df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.38140625 +0000 UTC m=+0.322853526 container died df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:30:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8894683b02f20bb5fa677074a4a515e5dea0e209d6027abc680e0232dcddbb-merged.mount: Deactivated successfully.
Oct 02 12:30:44 compute-0 podman[303806]: 2025-10-02 12:30:44.531407088 +0000 UTC m=+0.472854364 container remove df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd75dc6c49da4aaa4cce32f3211529f1fa4c14b5623143722b29656599b4a306-merged.mount: Deactivated successfully.
Oct 02 12:30:44 compute-0 podman[303867]: 2025-10-02 12:30:44.648011787 +0000 UTC m=+0.385630049 container cleanup aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:30:44 compute-0 systemd[1]: libpod-conmon-aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9.scope: Deactivated successfully.
Oct 02 12:30:44 compute-0 systemd[1]: libpod-conmon-df996e7c1a93b84e0f884f26de068cc8d7d1ed841da2a097e55cbb8efc4d61ec.scope: Deactivated successfully.
Oct 02 12:30:44 compute-0 podman[303922]: 2025-10-02 12:30:44.791246373 +0000 UTC m=+0.108315480 container remove aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.799 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a0c8fd-2044-44ef-bf92-f3e881c25df5]: (4, ('Thu Oct  2 12:30:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9)\naecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9\nThu Oct  2 12:30:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (aecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9)\naecbdad3f544f0bb3ca182161cb1ea48a7cf85a052c8af37aa8df3a5e7f82df9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.802 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[85681d27-b064-4ebf-8ce1-c4d4880f651e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.804 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:30:44 compute-0 podman[303924]: 2025-10-02 12:30:44.716177618 +0000 UTC m=+0.028021823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:30:44 compute-0 podman[303924]: 2025-10-02 12:30:44.817419748 +0000 UTC m=+0.129263953 container create 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:30:44 compute-0 nova_compute[256940]: 2025-10-02 12:30:44.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.830 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dde395c9-6539-44cd-a18e-b1be6a0fee9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.857 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[27895867-89b6-495f-ad0e-d58b8fa5441b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.860 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3e180e5e-513f-4f2d-9676-e4b1b2f73f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 systemd[1]: Started libpod-conmon-2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59.scope.
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.880 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc479fd1-de59-467a-96e9-a7a939e6fdb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613426, 'reachable_time': 37463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303954, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.882 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:44.883 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[11c967f0-1d51-469b-95c6-7dd6fa1f945d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:44 compute-0 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:30:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69948d524eae0f5fbc9ab7694b07c02f621814a403eaa35df1dba81e27d359da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69948d524eae0f5fbc9ab7694b07c02f621814a403eaa35df1dba81e27d359da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69948d524eae0f5fbc9ab7694b07c02f621814a403eaa35df1dba81e27d359da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69948d524eae0f5fbc9ab7694b07c02f621814a403eaa35df1dba81e27d359da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:44 compute-0 podman[303924]: 2025-10-02 12:30:44.983954505 +0000 UTC m=+0.295798740 container init 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:30:44 compute-0 podman[303924]: 2025-10-02 12:30:44.994465642 +0000 UTC m=+0.306309847 container start 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:30:45 compute-0 podman[303924]: 2025-10-02 12:30:45.019320783 +0000 UTC m=+0.331164988 container attach 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:45 compute-0 ceph-mon[73668]: pgmap v1619: 305 pgs: 305 active+clean; 298 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.1 MiB/s wr, 329 op/s
Oct 02 12:30:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/744951610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/97662532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.231 2 INFO nova.virt.libvirt.driver [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deleting instance files /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d_del
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.232 2 INFO nova.virt.libvirt.driver [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deletion of /var/lib/nova/instances/a059e667-9c76-4f71-92d2-9490d75ce24d_del complete
Oct 02 12:30:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 248 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.0 MiB/s wr, 337 op/s
Oct 02 12:30:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.284 2 INFO nova.compute.manager [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.285 2 DEBUG oslo.service.loopingcall [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.285 2 DEBUG nova.compute.manager [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.285 2 DEBUG nova.network.neutron [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.287201) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245287249, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2247, "num_deletes": 264, "total_data_size": 3776155, "memory_usage": 3833104, "flush_reason": "Manual Compaction"}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245320256, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3707008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34129, "largest_seqno": 36375, "table_properties": {"data_size": 3696471, "index_size": 6775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22145, "raw_average_key_size": 21, "raw_value_size": 3675363, "raw_average_value_size": 3487, "num_data_blocks": 291, "num_entries": 1054, "num_filter_entries": 1054, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408059, "oldest_key_time": 1759408059, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 33112 microseconds, and 7927 cpu microseconds.
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.320308) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3707008 bytes OK
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.320336) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323343) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323416) EVENT_LOG_v1 {"time_micros": 1759408245323401, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3766738, prev total WAL file size 3766738, number of live WAL files 2.
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3620KB)], [74(8322KB)]
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245325041, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12228743, "oldest_snapshot_seqno": -1}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6407 keys, 12064058 bytes, temperature: kUnknown
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245414346, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12064058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12018148, "index_size": 28776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 163743, "raw_average_key_size": 25, "raw_value_size": 11900324, "raw_average_value_size": 1857, "num_data_blocks": 1163, "num_entries": 6407, "num_filter_entries": 6407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.414701) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064058 bytes
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.419322) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.8 rd, 134.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.3) OK, records in: 6949, records dropped: 542 output_compression: NoCompression
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.419379) EVENT_LOG_v1 {"time_micros": 1759408245419357, "job": 42, "event": "compaction_finished", "compaction_time_micros": 89397, "compaction_time_cpu_micros": 26481, "output_level": 6, "num_output_files": 1, "total_output_size": 12064058, "num_input_records": 6949, "num_output_records": 6407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245420215, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245422120, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.422198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.422204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.422206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.422208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:45.422210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-0 nova_compute[256940]: 2025-10-02 12:30:45.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]: {
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:         "osd_id": 1,
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:         "type": "bluestore"
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]:     }
Oct 02 12:30:45 compute-0 upbeat_ritchie[303955]: }
Oct 02 12:30:45 compute-0 systemd[1]: libpod-2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59.scope: Deactivated successfully.
Oct 02 12:30:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:45.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:46 compute-0 podman[303976]: 2025-10-02 12:30:46.000718064 +0000 UTC m=+0.027199201 container died 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.043 2 DEBUG nova.network.neutron [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.061 2 INFO nova.compute.manager [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Took 0.78 seconds to deallocate network for instance.
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.109 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.109 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-69948d524eae0f5fbc9ab7694b07c02f621814a403eaa35df1dba81e27d359da-merged.mount: Deactivated successfully.
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.165 2 DEBUG nova.compute.manager [req-64d590b5-d198-4ee5-8288-8dd91ceac016 req-7600e2e1-fb2d-4f75-bf12-81b0fce01ba5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-deleted-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.176 2 DEBUG oslo_concurrency.processutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:46 compute-0 podman[303976]: 2025-10-02 12:30:46.205264176 +0000 UTC m=+0.231745303 container remove 2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:46 compute-0 systemd[1]: libpod-conmon-2498f4839bfa0f1025580d6fa7da93fc6c69fdaddd5bc0139385b7fc48cd9f59.scope: Deactivated successfully.
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.220 2 DEBUG nova.network.neutron [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:46 compute-0 sudo[303733]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.276 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.277 2 DEBUG oslo_concurrency.lockutils [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.278 2 DEBUG nova.network.neutron [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Refreshing network info cache for port 62fb85d7-3242-47b1-8854-4bf595996914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.281 2 DEBUG nova.virt.libvirt.vif [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.282 2 DEBUG nova.network.os_vif_util [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.283 2 DEBUG nova.network.os_vif_util [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.283 2 DEBUG os_vif [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62fb85d7-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62fb85d7-32, col_values=(('external_ids', {'iface-id': '62fb85d7-3242-47b1-8854-4bf595996914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:c8:d9', 'vm-uuid': '878c4166-4a08-41bd-9263-c5d4b4c7a280'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 NetworkManager[44981]: <info>  [1759408246.2939] manager: (tap62fb85d7-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 02 12:30:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:30:46 compute-0 ceph-mon[73668]: pgmap v1620: 305 pgs: 305 active+clean; 248 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.0 MiB/s wr, 337 op/s
Oct 02 12:30:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/829913293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/737162632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.368 2 INFO os_vif [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32')
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.369 2 DEBUG nova.virt.libvirt.vif [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.369 2 DEBUG nova.network.os_vif_util [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.369 2 DEBUG nova.network.os_vif_util [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.373 2 DEBUG nova.virt.libvirt.guest [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:3f:c8:d9"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <target dev="tap62fb85d7-32"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]: </interface>
Oct 02 12:30:46 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.377 2 DEBUG nova.compute.manager [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.377 2 DEBUG oslo_concurrency.lockutils [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.378 2 DEBUG oslo_concurrency.lockutils [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.378 2 DEBUG oslo_concurrency.lockutils [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.378 2 DEBUG nova.compute.manager [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] No waiting events found dispatching network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.378 2 WARNING nova.compute.manager [req-6a98c425-e443-4ac4-ad9e-4269da61ed63 req-d9f6de60-dbee-4ce7-ad2c-24b21613304b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Received unexpected event network-vif-plugged-555fa830-9397-48fc-a495-4e130004193f for instance with vm_state deleted and task_state None.
Oct 02 12:30:46 compute-0 kernel: tap62fb85d7-32: entered promiscuous mode
Oct 02 12:30:46 compute-0 NetworkManager[44981]: <info>  [1759408246.3893] manager: (tap62fb85d7-32): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct 02 12:30:46 compute-0 systemd-udevd[303789]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:46 compute-0 ovn_controller[148123]: 2025-10-02T12:30:46Z|00234|binding|INFO|Claiming lport 62fb85d7-3242-47b1-8854-4bf595996914 for this chassis.
Oct 02 12:30:46 compute-0 ovn_controller[148123]: 2025-10-02T12:30:46Z|00235|binding|INFO|62fb85d7-3242-47b1-8854-4bf595996914: Claiming fa:16:3e:3f:c8:d9 10.100.0.14
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.398 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:c8:d9 10.100.0.14'], port_security=['fa:16:3e:3f:c8:d9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=62fb85d7-3242-47b1-8854-4bf595996914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.399 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 62fb85d7-3242-47b1-8854-4bf595996914 in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.400 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:30:46 compute-0 NetworkManager[44981]: <info>  [1759408246.4053] device (tap62fb85d7-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:46 compute-0 NetworkManager[44981]: <info>  [1759408246.4066] device (tap62fb85d7-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:46 compute-0 ovn_controller[148123]: 2025-10-02T12:30:46Z|00236|binding|INFO|Setting lport 62fb85d7-3242-47b1-8854-4bf595996914 ovn-installed in OVS
Oct 02 12:30:46 compute-0 ovn_controller[148123]: 2025-10-02T12:30:46Z|00237|binding|INFO|Setting lport 62fb85d7-3242-47b1-8854-4bf595996914 up in Southbound
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 475f33b2-5b1b-472e-8c64-8b6d5ae332fc does not exist
Oct 02 12:30:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c5e729a7-1895-4b6b-821f-985c2d195e52 does not exist
Oct 02 12:30:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fa8cdad0-35fb-4189-8d61-b2a32df95e28 does not exist
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.428 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[074a8c60-ecdc-4613-a4d9-25ce611f0a8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:30:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:46.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.471 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2d422d9e-fab5-46a2-ae2b-20c29d96fdfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.474 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c79d924c-8e86-4af2-96ac-bd8c25f612a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 sudo[304018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.503 2 DEBUG nova.virt.libvirt.driver [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.504 2 DEBUG nova.virt.libvirt.driver [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.504 2 DEBUG nova.virt.libvirt.driver [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:10:8a:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.504 2 DEBUG nova.virt.libvirt.driver [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:3f:c8:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:46 compute-0 sudo[304018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:46 compute-0 sudo[304018]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.513 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e633d9-476d-4655-a2fc-c8192b68206c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.532 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[164cc1a0-c026-476b-bdbb-29f8a11a7105]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609982, 'reachable_time': 32665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304048, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.552 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5e65808f-90a1-4f74-a1d3-674bf927d92a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609995, 'tstamp': 609995}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304060, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609999, 'tstamp': 609999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304060, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.555 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.558 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.558 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.559 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:46.559 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:46 compute-0 sudo[304049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:30:46 compute-0 sudo[304049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:46 compute-0 sudo[304049]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.621 2 DEBUG nova.virt.libvirt.guest [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:46</nova:creationTime>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:46 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     <nova:port uuid="62fb85d7-3242-47b1-8854-4bf595996914">
Oct 02 12:30:46 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:30:46 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:46 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:46 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:46 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:30:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763093652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.701 2 DEBUG oslo_concurrency.lockutils [None req-63dd2150-1caa-4b2b-9f9d-092cfcbd72a1 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.714 2 DEBUG oslo_concurrency.processutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.720 2 DEBUG nova.compute.provider_tree [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.746 2 DEBUG nova.scheduler.client.report [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.811 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:46 compute-0 nova_compute[256940]: 2025-10-02 12:30:46.893 2 INFO nova.scheduler.client.report [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance a059e667-9c76-4f71-92d2-9490d75ce24d
Oct 02 12:30:47 compute-0 nova_compute[256940]: 2025-10-02 12:30:47.064 2 DEBUG oslo_concurrency.lockutils [None req-fd7f6572-c870-4a37-a7a0-6812bba3b67e 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "a059e667-9c76-4f71-92d2-9490d75ce24d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 358 op/s
Oct 02 12:30:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1763093652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:47 compute-0 nova_compute[256940]: 2025-10-02 12:30:47.707 2 DEBUG nova.network.neutron [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updated VIF entry in instance network info cache for port 62fb85d7-3242-47b1-8854-4bf595996914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:47 compute-0 nova_compute[256940]: 2025-10-02 12:30:47.708 2 DEBUG nova.network.neutron [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:47 compute-0 nova_compute[256940]: 2025-10-02 12:30:47.745 2 DEBUG oslo_concurrency.lockutils [req-c5230f3f-c642-4410-b581-35a4d6f325c7 req-60933cec-6490-4435-8152-ced0aa332848 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:30:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:30:48 compute-0 ovn_controller[148123]: 2025-10-02T12:30:48Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:c8:d9 10.100.0.14
Oct 02 12:30:48 compute-0 ovn_controller[148123]: 2025-10-02T12:30:48Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:c8:d9 10.100.0.14
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.333 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-62fb85d7-3242-47b1-8854-4bf595996914" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.333 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-62fb85d7-3242-47b1-8854-4bf595996914" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.416 2 DEBUG nova.objects.instance [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:48 compute-0 ceph-mon[73668]: pgmap v1621: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 358 op/s
Oct 02 12:30:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4286409731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1935961188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.465282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248465326, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 312, "num_deletes": 251, "total_data_size": 130824, "memory_usage": 138096, "flush_reason": "Manual Compaction"}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248468477, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 130207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36376, "largest_seqno": 36687, "table_properties": {"data_size": 128097, "index_size": 274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5379, "raw_average_key_size": 18, "raw_value_size": 123927, "raw_average_value_size": 433, "num_data_blocks": 11, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408246, "oldest_key_time": 1759408246, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3236 microseconds, and 1011 cpu microseconds.
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.468511) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 130207 bytes OK
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.468538) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470358) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470375) EVENT_LOG_v1 {"time_micros": 1759408248470370, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 128593, prev total WAL file size 128593, number of live WAL files 2.
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(127KB)], [77(11MB)]
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248470896, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12194265, "oldest_snapshot_seqno": -1}
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.541 2 DEBUG nova.virt.libvirt.vif [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.541 2 DEBUG nova.network.os_vif_util [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.542 2 DEBUG nova.network.os_vif_util [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.548 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.552 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.555 2 DEBUG nova.virt.libvirt.driver [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Attempting to detach device tap62fb85d7-32 from instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6180 keys, 10187626 bytes, temperature: kUnknown
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248555920, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10187626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10145068, "index_size": 26000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 159735, "raw_average_key_size": 25, "raw_value_size": 10032907, "raw_average_value_size": 1623, "num_data_blocks": 1038, "num_entries": 6180, "num_filter_entries": 6180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.555 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:3f:c8:d9"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <target dev="tap62fb85d7-32"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </interface>
Oct 02 12:30:48 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.556349) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10187626 bytes
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.563289) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 119.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(171.9) write-amplify(78.2) OK, records in: 6693, records dropped: 513 output_compression: NoCompression
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.563338) EVENT_LOG_v1 {"time_micros": 1759408248563322, "job": 44, "event": "compaction_finished", "compaction_time_micros": 85146, "compaction_time_cpu_micros": 22582, "output_level": 6, "num_output_files": 1, "total_output_size": 10187626, "num_input_records": 6693, "num_output_records": 6180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248563711, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248565870, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.566034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.566043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.566045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.566048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:30:48.566049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.568 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.573 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface>not found in domain: <domain type='kvm' id='30'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <name>instance-00000046</name>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <uuid>878c4166-4a08-41bd-9263-c5d4b4c7a280</uuid>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:46</nova:creationTime>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:port uuid="62fb85d7-3242-47b1-8854-4bf595996914">
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='serial'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='uuid'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk' index='2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config' index='1'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:10:8a:e3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='tapfff1abf1-c6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:3f:c8:d9'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='tap62fb85d7-32'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='net1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </target>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </console>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c273,c337</label>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c337</imagelabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:48 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.574 2 INFO nova.virt.libvirt.driver [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap62fb85d7-32 from instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 from the persistent domain config.
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.574 2 DEBUG nova.virt.libvirt.driver [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] (1/8): Attempting to detach device tap62fb85d7-32 with device alias net1 from instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.575 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:3f:c8:d9"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <target dev="tap62fb85d7-32"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </interface>
Oct 02 12:30:48 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:30:48 compute-0 kernel: tap62fb85d7-32 (unregistering): left promiscuous mode
Oct 02 12:30:48 compute-0 NetworkManager[44981]: <info>  [1759408248.6728] device (tap62fb85d7-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:48 compute-0 ovn_controller[148123]: 2025-10-02T12:30:48Z|00238|binding|INFO|Releasing lport 62fb85d7-3242-47b1-8854-4bf595996914 from this chassis (sb_readonly=0)
Oct 02 12:30:48 compute-0 ovn_controller[148123]: 2025-10-02T12:30:48Z|00239|binding|INFO|Setting lport 62fb85d7-3242-47b1-8854-4bf595996914 down in Southbound
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 ovn_controller[148123]: 2025-10-02T12:30:48Z|00240|binding|INFO|Removing iface tap62fb85d7-32 ovn-installed in OVS
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.709 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408248.709212, 878c4166-4a08-41bd-9263-c5d4b4c7a280 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.711 2 DEBUG nova.virt.libvirt.driver [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Start waiting for the detach event from libvirt for device tap62fb85d7-32 with device alias net1 for instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.712 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.718 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface>not found in domain: <domain type='kvm' id='30'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <name>instance-00000046</name>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <uuid>878c4166-4a08-41bd-9263-c5d4b4c7a280</uuid>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:46</nova:creationTime>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:port uuid="62fb85d7-3242-47b1-8854-4bf595996914">
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='serial'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='uuid'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk' index='2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config' index='1'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:10:8a:e3'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target dev='tapfff1abf1-c6'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       </target>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </console>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c273,c337</label>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c337</imagelabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:48 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.718 2 INFO nova.virt.libvirt.driver [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap62fb85d7-32 from instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 from the live domain config.
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.719 2 DEBUG nova.virt.libvirt.vif [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.719 2 DEBUG nova.network.os_vif_util [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.720 2 DEBUG nova.network.os_vif_util [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.721 2 DEBUG os_vif [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62fb85d7-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.728 2 INFO os_vif [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32')
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.729 2 DEBUG nova.virt.libvirt.guest [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:48</nova:creationTime>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:48 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:48 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:48 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:48 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:48 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.772 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:c8:d9 10.100.0.14'], port_security=['fa:16:3e:3f:c8:d9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=62fb85d7-3242-47b1-8854-4bf595996914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.773 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 62fb85d7-3242-47b1-8854-4bf595996914 in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.776 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.795 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[21fdd51c-b090-4229-b514-08be525593bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.839 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[15abed78-41b3-4ef2-ba31-76ee1882942f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.842 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[51228f72-c2f6-43f1-9100-ba6735285ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.886 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fca33cba-8232-4b44-96d9-58b76dfcc6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.906 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[908f11cc-f439-4bfd-9eb4-6a0077ea7b99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609982, 'reachable_time': 32665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304089, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.927 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e48f55a6-3c37-44f7-8d66-760af3915d23]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609995, 'tstamp': 609995}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304090, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609999, 'tstamp': 609999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304090, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.929 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.932 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.933 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.933 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:48.933 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.993 2 DEBUG nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.994 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.994 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.994 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.995 2 DEBUG nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.995 2 WARNING nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 for instance with vm_state active and task_state None.
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.995 2 DEBUG nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.995 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.996 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.996 2 DEBUG oslo_concurrency.lockutils [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.996 2 DEBUG nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:48 compute-0 nova_compute[256940]: 2025-10-02 12:30:48.996 2 WARNING nova.compute.manager [req-c3d13ef2-75d3-43f0-9d6c-eab5173b410b req-ca411be3-68ae-4ba6-bff1-ed9a3c399ac7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 for instance with vm_state active and task_state None.
Oct 02 12:30:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.0 MiB/s wr, 307 op/s
Oct 02 12:30:49 compute-0 nova_compute[256940]: 2025-10-02 12:30:49.406 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:49 compute-0 nova_compute[256940]: 2025-10-02 12:30:49.406 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:49 compute-0 nova_compute[256940]: 2025-10-02 12:30:49.406 2 DEBUG nova.network.neutron [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.249 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.249 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:50 compute-0 ceph-mon[73668]: pgmap v1622: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.0 MiB/s wr, 307 op/s
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128164467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.756 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.881 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:50 compute-0 nova_compute[256940]: 2025-10-02 12:30:50.881 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.016 2 DEBUG nova.compute.manager [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-deleted-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.016 2 INFO nova.compute.manager [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Neutron deleted interface 62fb85d7-3242-47b1-8854-4bf595996914; detaching it from the instance and deleting it from the info cache
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.016 2 DEBUG nova.network.neutron [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.058 2 DEBUG nova.objects.instance [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.078 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.079 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4402MB free_disk=20.889118194580078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.079 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.080 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.109 2 DEBUG nova.objects.instance [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.134 2 DEBUG nova.virt.libvirt.vif [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.134 2 DEBUG nova.network.os_vif_util [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.135 2 DEBUG nova.network.os_vif_util [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.138 2 DEBUG nova.virt.libvirt.guest [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.144 2 DEBUG nova.virt.libvirt.guest [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface>not found in domain: <domain type='kvm' id='30'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <name>instance-00000046</name>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <uuid>878c4166-4a08-41bd-9263-c5d4b4c7a280</uuid>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:48</nova:creationTime>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='serial'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='uuid'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk' index='2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config' index='1'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:10:8a:e3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='tapfff1abf1-c6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </target>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </console>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c273,c337</label>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c337</imagelabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:51 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.145 2 DEBUG nova.virt.libvirt.guest [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.149 2 DEBUG nova.virt.libvirt.guest [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3f:c8:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62fb85d7-32"/></interface>not found in domain: <domain type='kvm' id='30'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <name>instance-00000046</name>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <uuid>878c4166-4a08-41bd-9263-c5d4b4c7a280</uuid>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:48</nova:creationTime>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <system>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='serial'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='uuid'>878c4166-4a08-41bd-9263-c5d4b4c7a280</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </system>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <os>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </os>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <features>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </features>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk' index='2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/878c4166-4a08-41bd-9263-c5d4b4c7a280_disk.config' index='1'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </source>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:10:8a:e3'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target dev='tapfff1abf1-c6'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       </target>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280/console.log' append='off'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </console>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </input>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <video>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </video>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c273,c337</label>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c337</imagelabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:30:51 compute-0 nova_compute[256940]: </domain>
Oct 02 12:30:51 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.150 2 WARNING nova.virt.libvirt.driver [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Detaching interface fa:16:3e:3f:c8:d9 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap62fb85d7-32' not found.
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.150 2 DEBUG nova.virt.libvirt.vif [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.151 2 DEBUG nova.network.os_vif_util [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "62fb85d7-3242-47b1-8854-4bf595996914", "address": "fa:16:3e:3f:c8:d9", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62fb85d7-32", "ovs_interfaceid": "62fb85d7-3242-47b1-8854-4bf595996914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.151 2 DEBUG nova.network.os_vif_util [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.152 2 DEBUG os_vif [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62fb85d7-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.158 2 INFO os_vif [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:c8:d9,bridge_name='br-int',has_traffic_filtering=True,id=62fb85d7-3242-47b1-8854-4bf595996914,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62fb85d7-32')
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.160 2 DEBUG nova.virt.libvirt.guest [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-854521210</nova:name>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:30:51</nova:creationTime>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     <nova:port uuid="fff1abf1-c689-4533-86c7-a3d1075ea6de">
Oct 02 12:30:51 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:30:51 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:30:51 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:30:51 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:30:51 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.167 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 878c4166-4a08-41bd-9263-c5d4b4c7a280 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.168 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.168 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.239 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.239 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.239 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.240 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.240 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.241 2 INFO nova.compute.manager [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Terminating instance
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.242 2 DEBUG nova.compute.manager [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.256 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:30:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 230 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.4 MiB/s wr, 395 op/s
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.275 2 DEBUG nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-unplugged-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.276 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-unplugged-62fb85d7-3242-47b1-8854-4bf595996914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-unplugged-62fb85d7-3242-47b1-8854-4bf595996914 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.277 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.278 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.278 2 DEBUG oslo_concurrency.lockutils [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.278 2 DEBUG nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.278 2 WARNING nova.compute.manager [req-2528e233-c372-4533-945b-d1bd463d1d0e req-f4769cf3-1dbf-40ee-9e31-eb38a3272280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-62fb85d7-3242-47b1-8854-4bf595996914 for instance with vm_state active and task_state deleting.
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.287 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.289 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.356 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.357 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:30:51 compute-0 kernel: tapfff1abf1-c6 (unregistering): left promiscuous mode
Oct 02 12:30:51 compute-0 NetworkManager[44981]: <info>  [1759408251.3626] device (tapfff1abf1-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00241|binding|INFO|Releasing lport fff1abf1-c689-4533-86c7-a3d1075ea6de from this chassis (sb_readonly=0)
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00242|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de down in Southbound
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00243|binding|INFO|Removing iface tapfff1abf1-c6 ovn-installed in OVS
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.381 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:8a:e3 10.100.0.3'], port_security=['fa:16:3e:10:8a:e3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45fed08c-57a9-4001-9934-7fb62d17f115', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fff1abf1-c689-4533-86c7-a3d1075ea6de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.382 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fff1abf1-c689-4533-86c7-a3d1075ea6de in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.383 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.383 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.384 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2cee5d-9031-42d0-b1c3-b0d289e37cce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.384 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace which is not needed anymore
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.405 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:30:51 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 02 12:30:51 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000046.scope: Consumed 15.727s CPU time.
Oct 02 12:30:51 compute-0 systemd-machined[210927]: Machine qemu-30-instance-00000046 terminated.
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.450 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:51 compute-0 NetworkManager[44981]: <info>  [1759408251.4990] manager: (tapfff1abf1-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Oct 02 12:30:51 compute-0 kernel: tapfff1abf1-c6: entered promiscuous mode
Oct 02 12:30:51 compute-0 systemd-udevd[304080]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:51 compute-0 kernel: tapfff1abf1-c6 (unregistering): left promiscuous mode
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00244|binding|INFO|Claiming lport fff1abf1-c689-4533-86c7-a3d1075ea6de for this chassis.
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00245|binding|INFO|fff1abf1-c689-4533-86c7-a3d1075ea6de: Claiming fa:16:3e:10:8a:e3 10.100.0.3
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.526 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:8a:e3 10.100.0.3'], port_security=['fa:16:3e:10:8a:e3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45fed08c-57a9-4001-9934-7fb62d17f115', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fff1abf1-c689-4533-86c7-a3d1075ea6de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.536 2 INFO nova.virt.libvirt.driver [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Instance destroyed successfully.
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.536 2 DEBUG nova.objects.instance [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'resources' on Instance uuid 878c4166-4a08-41bd-9263-c5d4b4c7a280 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00246|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de ovn-installed in OVS
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00247|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de up in Southbound
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00248|binding|INFO|Releasing lport fff1abf1-c689-4533-86c7-a3d1075ea6de from this chassis (sb_readonly=1)
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00249|if_status|INFO|Dropped 2 log messages in last 950 seconds (most recently, 950 seconds ago) due to excessive rate
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00250|if_status|INFO|Not setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de down as sb is readonly
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00251|binding|INFO|Removing iface tapfff1abf1-c6 ovn-installed in OVS
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00252|binding|INFO|Releasing lport fff1abf1-c689-4533-86c7-a3d1075ea6de from this chassis (sb_readonly=0)
Oct 02 12:30:51 compute-0 ovn_controller[148123]: 2025-10-02T12:30:51Z|00253|binding|INFO|Setting lport fff1abf1-c689-4533-86c7-a3d1075ea6de down in Southbound
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.557 2 DEBUG nova.virt.libvirt.vif [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-854521210',display_name='tempest-AttachInterfacesTestJSON-server-854521210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-854521210',id=70,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPj7xQU8Ssh/prNw7K45lI4BeRRWzKjgcv5C7lo7ldDUWEJrCyD5mpH6mnDgy7u5LwfAJ4iQxn2soM6NN9veB7o6VpRH2Ib/k1cUSUwweC2mdJQOxv/1o1UXyrYf3O1JQQ==',key_name='tempest-keypair-1581898372',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-02vn4ivd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=878c4166-4a08-41bd-9263-c5d4b4c7a280,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.557 2 DEBUG nova.network.os_vif_util [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.558 2 DEBUG nova.network.os_vif_util [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.559 2 DEBUG os_vif [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff1abf1-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.562 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:8a:e3 10.100.0.3'], port_security=['fa:16:3e:10:8a:e3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '878c4166-4a08-41bd-9263-c5d4b4c7a280', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45fed08c-57a9-4001-9934-7fb62d17f115', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fff1abf1-c689-4533-86c7-a3d1075ea6de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [NOTICE]   (301860) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [NOTICE]   (301860) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [WARNING]  (301860) : Exiting Master process...
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [WARNING]  (301860) : Exiting Master process...
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [ALERT]    (301860) : Current worker (301862) exited with code 143 (Terminated)
Oct 02 12:30:51 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[301856]: [WARNING]  (301860) : All workers exited. Exiting... (0)
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.572 2 INFO os_vif [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:8a:e3,bridge_name='br-int',has_traffic_filtering=True,id=fff1abf1-c689-4533-86c7-a3d1075ea6de,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfff1abf1-c6')
Oct 02 12:30:51 compute-0 systemd[1]: libpod-a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c.scope: Deactivated successfully.
Oct 02 12:30:51 compute-0 podman[304138]: 2025-10-02 12:30:51.583667188 +0000 UTC m=+0.081819898 container died a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:30:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/128164467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3f9d67856d7a3990f55c22f6cce9909beb88cc4d499bb6b131eba653dac5543-merged.mount: Deactivated successfully.
Oct 02 12:30:51 compute-0 podman[304138]: 2025-10-02 12:30:51.777487308 +0000 UTC m=+0.275640018 container cleanup a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:30:51 compute-0 systemd[1]: libpod-conmon-a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c.scope: Deactivated successfully.
Oct 02 12:30:51 compute-0 podman[304211]: 2025-10-02 12:30:51.901460105 +0000 UTC m=+0.093433783 container remove a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.908 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7200d326-e6b1-4fee-9229-2dabc4b6fc53]: (4, ('Thu Oct  2 12:30:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c)\na96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c\nThu Oct  2 12:30:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (a96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c)\na96c2eff81ac43950c47a19462fba7d56bf5d01bc233cab196bae35fad67788c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.910 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[575a6e9e-bd3a-4e50-a4be-734d20591181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.911 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564061492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:51 compute-0 kernel: tap6a187d8a-70: left promiscuous mode
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.955 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a28cfd26-8639-4adf-94a8-7f074c0e8328]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-0 nova_compute[256940]: 2025-10-02 12:30:51.984 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.986 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a696a7d1-52c2-489c-9306-e39e121ae6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:51.987 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60468f23-07ad-4b71-b0bd-ccb6822d93ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.001 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.006 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b9898c0c-0ba4-4b81-9f35-82c77d663b6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609975, 'reachable_time': 43506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304229, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a187d8a\x2d77c6\x2d4b27\x2dbb13\x2d654f471c1faf.mount: Deactivated successfully.
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.010 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.010 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[63e681bb-dd36-4dcb-8a1c-bc64e1e58874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.011 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fff1abf1-c689-4533-86c7-a3d1075ea6de in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.013 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.014 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[85623736-9c93-44cc-9ece-5f2e0c41ce75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.014 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fff1abf1-c689-4533-86c7-a3d1075ea6de in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.015 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:52.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c997876a-127c-4755-9fe5-4288ebd57315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.022 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.048 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.048 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:30:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.452 2 INFO nova.network.neutron [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Port 62fb85d7-3242-47b1-8854-4bf595996914 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.453 2 DEBUG nova.network.neutron [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [{"id": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "address": "fa:16:3e:10:8a:e3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfff1abf1-c6", "ovs_interfaceid": "fff1abf1-c689-4533-86c7-a3d1075ea6de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.474 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-878c4166-4a08-41bd-9263-c5d4b4c7a280" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:52 compute-0 nova_compute[256940]: 2025-10-02 12:30:52.717 2 DEBUG oslo_concurrency.lockutils [None req-3e5a6169-d437-4ee7-98e2-78d67f434b66 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-878c4166-4a08-41bd-9263-c5d4b4c7a280-62fb85d7-3242-47b1-8854-4bf595996914" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:52 compute-0 ceph-mon[73668]: pgmap v1623: 305 pgs: 305 active+clean; 230 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.4 MiB/s wr, 395 op/s
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4223918663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/564061492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/753036913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4246083289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1154353251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3813447493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.191 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.191 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.191 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.192 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 WARNING nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state active and task_state deleting.
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.193 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 WARNING nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state active and task_state deleting.
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.194 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.195 2 DEBUG oslo_concurrency.lockutils [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.195 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.195 2 WARNING nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state active and task_state deleting.
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.228 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.242 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.242 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:30:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 213 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 333 op/s
Oct 02 12:30:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:30:53.292 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:53 compute-0 podman[304234]: 2025-10-02 12:30:53.398042823 +0000 UTC m=+0.063791520 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 12:30:53 compute-0 podman[304235]: 2025-10-02 12:30:53.399534531 +0000 UTC m=+0.065176476 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.440 2 INFO nova.virt.libvirt.driver [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Deleting instance files /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280_del
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.440 2 INFO nova.virt.libvirt.driver [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Deletion of /var/lib/nova/instances/878c4166-4a08-41bd-9263-c5d4b4c7a280_del complete
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.497 2 INFO nova.compute.manager [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Took 2.26 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.498 2 DEBUG oslo.service.loopingcall [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.498 2 DEBUG nova.compute.manager [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:53 compute-0 nova_compute[256940]: 2025-10-02 12:30:53.498 2 DEBUG nova.network.neutron [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2493142074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:55 compute-0 ceph-mon[73668]: pgmap v1624: 305 pgs: 305 active+clean; 213 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 333 op/s
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.024 2 DEBUG nova.network.neutron [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.044 2 INFO nova.compute.manager [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Took 1.55 seconds to deallocate network for instance.
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.084 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.085 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.138 2 DEBUG oslo_concurrency.processutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.252 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-deleted-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 165 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 12:30:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.308 2 DEBUG nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.308 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.309 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.309 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.309 2 DEBUG nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.310 2 WARNING nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-unplugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state deleted and task_state None.
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.310 2 DEBUG nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.310 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.310 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.311 2 DEBUG oslo_concurrency.lockutils [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.311 2 DEBUG nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] No waiting events found dispatching network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.311 2 WARNING nova.compute.manager [req-4ceff7c9-bab0-45ea-9196-e813a74aae7c req-18883494-43ff-4f38-a2fb-8d94a82c8e94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Received unexpected event network-vif-plugged-fff1abf1-c689-4533-86c7-a3d1075ea6de for instance with vm_state deleted and task_state None.
Oct 02 12:30:55 compute-0 sudo[304291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:55 compute-0 sudo[304291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:55 compute-0 sudo[304291]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:55 compute-0 sudo[304316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:55 compute-0 sudo[304316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:55 compute-0 sudo[304316]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2281957984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.605 2 DEBUG oslo_concurrency.processutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.611 2 DEBUG nova.compute.provider_tree [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.631 2 DEBUG nova.scheduler.client.report [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.662 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.692 2 INFO nova.scheduler.client.report [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Deleted allocations for instance 878c4166-4a08-41bd-9263-c5d4b4c7a280
Oct 02 12:30:55 compute-0 nova_compute[256940]: 2025-10-02 12:30:55.789 2 DEBUG oslo_concurrency.lockutils [None req-79d6cb38-4a7d-4ea5-baec-8792997f8583 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "878c4166-4a08-41bd-9263-c5d4b4c7a280" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:56.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2281957984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:56.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:56 compute-0 nova_compute[256940]: 2025-10-02 12:30:56.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:57 compute-0 nova_compute[256940]: 2025-10-02 12:30:57.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:57 compute-0 ceph-mon[73668]: pgmap v1625: 305 pgs: 305 active+clean; 165 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 12:30:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Oct 02 12:30:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:30:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:58.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:30:58 compute-0 ceph-mon[73668]: pgmap v1626: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:30:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:30:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:58.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:59 compute-0 nova_compute[256940]: 2025-10-02 12:30:59.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:59 compute-0 nova_compute[256940]: 2025-10-02 12:30:59.111 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408244.1100693, a059e667-9c76-4f71-92d2-9490d75ce24d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:59 compute-0 nova_compute[256940]: 2025-10-02 12:30:59.111 2 INFO nova.compute.manager [-] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] VM Stopped (Lifecycle Event)
Oct 02 12:30:59 compute-0 nova_compute[256940]: 2025-10-02 12:30:59.165 2 DEBUG nova.compute.manager [None req-9eb44110-af4b-44a4-98c3-4ef3088525ac - - - - - -] [instance: a059e667-9c76-4f71-92d2-9490d75ce24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 177 op/s
Oct 02 12:30:59 compute-0 nova_compute[256940]: 2025-10-02 12:30:59.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:31:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:00.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:31:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:00 compute-0 ceph-mon[73668]: pgmap v1627: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 177 op/s
Oct 02 12:31:00 compute-0 nova_compute[256940]: 2025-10-02 12:31:00.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Oct 02 12:31:01 compute-0 nova_compute[256940]: 2025-10-02 12:31:01.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:02 compute-0 ceph-mon[73668]: pgmap v1628: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Oct 02 12:31:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 167 KiB/s wr, 196 op/s
Oct 02 12:31:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:04.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:05 compute-0 ceph-mon[73668]: pgmap v1629: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 167 KiB/s wr, 196 op/s
Oct 02 12:31:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 175 op/s
Oct 02 12:31:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:05 compute-0 nova_compute[256940]: 2025-10-02 12:31:05.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:06.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:31:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:31:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:06.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:06 compute-0 nova_compute[256940]: 2025-10-02 12:31:06.535 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408251.5335286, 878c4166-4a08-41bd-9263-c5d4b4c7a280 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:06 compute-0 nova_compute[256940]: 2025-10-02 12:31:06.535 2 INFO nova.compute.manager [-] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] VM Stopped (Lifecycle Event)
Oct 02 12:31:06 compute-0 nova_compute[256940]: 2025-10-02 12:31:06.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:06 compute-0 nova_compute[256940]: 2025-10-02 12:31:06.590 2 DEBUG nova.compute.manager [None req-46c9ddcb-ead3-4800-9602-203f3476eb8d - - - - - -] [instance: 878c4166-4a08-41bd-9263-c5d4b4c7a280] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.147 2 DEBUG nova.compute.manager [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:31:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 197 KiB/s wr, 163 op/s
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.301 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.302 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.366 2 DEBUG nova.objects.instance [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_requests' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.439 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.440 2 INFO nova.compute.claims [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.440 2 DEBUG nova.objects.instance [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:07 compute-0 ceph-mon[73668]: pgmap v1630: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 175 op/s
Oct 02 12:31:07 compute-0 nova_compute[256940]: 2025-10-02 12:31:07.606 2 DEBUG nova.objects.instance [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:08 compute-0 podman[304350]: 2025-10-02 12:31:08.394401702 +0000 UTC m=+0.060230620 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:08 compute-0 podman[304351]: 2025-10-02 12:31:08.413017484 +0000 UTC m=+0.077735574 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.432 2 INFO nova.compute.resource_tracker [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating resource usage from migration 2151dc52-824f-4888-8c0d-1661bb797446
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.432 2 DEBUG nova.compute.resource_tracker [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Starting to track incoming migration 2151dc52-824f-4888-8c0d-1661bb797446 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.456 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.457 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:08.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.620 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:08 compute-0 nova_compute[256940]: 2025-10-02 12:31:08.703 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.018 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19304654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.059 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.066 2 DEBUG nova.compute.provider_tree [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.116 2 DEBUG nova.scheduler.client.report [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:09 compute-0 ceph-mon[73668]: pgmap v1631: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 197 KiB/s wr, 163 op/s
Oct 02 12:31:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 170 KiB/s wr, 108 op/s
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.443 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.444 2 INFO nova.compute.manager [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Migrating
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.450 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.458 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:31:09 compute-0 nova_compute[256940]: 2025-10-02 12:31:09.458 2 INFO nova.compute.claims [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:31:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:10 compute-0 nova_compute[256940]: 2025-10-02 12:31:10.290 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/19304654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:31:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:31:10 compute-0 nova_compute[256940]: 2025-10-02 12:31:10.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768776911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:10 compute-0 nova_compute[256940]: 2025-10-02 12:31:10.762 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:10 compute-0 nova_compute[256940]: 2025-10-02 12:31:10.769 2 DEBUG nova.compute.provider_tree [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:11 compute-0 nova_compute[256940]: 2025-10-02 12:31:11.270 2 DEBUG nova.scheduler.client.report [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 156 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:31:11 compute-0 ceph-mon[73668]: pgmap v1632: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 170 KiB/s wr, 108 op/s
Oct 02 12:31:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2768776911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:11 compute-0 nova_compute[256940]: 2025-10-02 12:31:11.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:11 compute-0 nova_compute[256940]: 2025-10-02 12:31:11.753 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:11 compute-0 nova_compute[256940]: 2025-10-02 12:31:11.754 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:31:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:12 compute-0 ceph-mon[73668]: pgmap v1633: 305 pgs: 305 active+clean; 156 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:31:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 165 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 3.2 MiB/s wr, 52 op/s
Oct 02 12:31:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:14 compute-0 nova_compute[256940]: 2025-10-02 12:31:14.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:14 compute-0 nova_compute[256940]: 2025-10-02 12:31:14.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:31:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:14.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:14 compute-0 ceph-mon[73668]: pgmap v1634: 305 pgs: 305 active+clean; 165 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 3.2 MiB/s wr, 52 op/s
Oct 02 12:31:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 177 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 4.1 MiB/s wr, 71 op/s
Oct 02 12:31:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:15 compute-0 sudo[304439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:15 compute-0 sudo[304439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:15 compute-0 sudo[304439]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:15 compute-0 sudo[304464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:15 compute-0 sudo[304464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:15 compute-0 sudo[304464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:15 compute-0 nova_compute[256940]: 2025-10-02 12:31:15.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:31:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:16.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:31:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:16.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:16 compute-0 nova_compute[256940]: 2025-10-02 12:31:16.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:17 compute-0 ceph-mon[73668]: pgmap v1635: 305 pgs: 305 active+clean; 177 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 4.1 MiB/s wr, 71 op/s
Oct 02 12:31:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 704 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct 02 12:31:17 compute-0 nova_compute[256940]: 2025-10-02 12:31:17.449 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:31:17 compute-0 nova_compute[256940]: 2025-10-02 12:31:17.450 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:31:17 compute-0 nova_compute[256940]: 2025-10-02 12:31:17.461 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:18 compute-0 nova_compute[256940]: 2025-10-02 12:31:18.071 2 INFO nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:31:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:18 compute-0 nova_compute[256940]: 2025-10-02 12:31:18.582 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:31:18 compute-0 ceph-mon[73668]: pgmap v1636: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 704 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct 02 12:31:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 700 KiB/s rd, 4.0 MiB/s wr, 111 op/s
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.507 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.508 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.509 2 INFO nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Creating image(s)
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.538 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.568 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.603 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.610 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.698 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.699 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.700 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.700 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.731 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:19 compute-0 nova_compute[256940]: 2025-10-02 12:31:19.736 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:20 compute-0 nova_compute[256940]: 2025-10-02 12:31:20.385 2 DEBUG nova.policy [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:31:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 12:31:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:20.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 12:31:20 compute-0 nova_compute[256940]: 2025-10-02 12:31:20.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:20 compute-0 ceph-mon[73668]: pgmap v1637: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 700 KiB/s rd, 4.0 MiB/s wr, 111 op/s
Oct 02 12:31:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 4.1 MiB/s wr, 129 op/s
Oct 02 12:31:21 compute-0 nova_compute[256940]: 2025-10-02 12:31:21.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:22.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:22.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:23 compute-0 ceph-mon[73668]: pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 4.1 MiB/s wr, 129 op/s
Oct 02 12:31:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:31:23 compute-0 nova_compute[256940]: 2025-10-02 12:31:23.658 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully created port: 7592f97e-0ed9-49d9-a997-1e691974110b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:31:24 compute-0 sshd-session[304587]: Accepted publickey for nova from 192.168.122.102 port 45682 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:31:24 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:31:24 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:31:24 compute-0 systemd-logind[820]: New session 57 of user nova.
Oct 02 12:31:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:24.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:24 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:31:24 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:31:24 compute-0 podman[304591]: 2025-10-02 12:31:24.123244746 +0000 UTC m=+0.103850337 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 12:31:24 compute-0 podman[304589]: 2025-10-02 12:31:24.12221724 +0000 UTC m=+0.102818771 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:31:24 compute-0 systemd[304626]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:24 compute-0 systemd[304626]: Queued start job for default target Main User Target.
Oct 02 12:31:24 compute-0 systemd[304626]: Created slice User Application Slice.
Oct 02 12:31:24 compute-0 systemd[304626]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:31:24 compute-0 systemd[304626]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:31:24 compute-0 systemd[304626]: Reached target Paths.
Oct 02 12:31:24 compute-0 systemd[304626]: Reached target Timers.
Oct 02 12:31:24 compute-0 systemd[304626]: Starting D-Bus User Message Bus Socket...
Oct 02 12:31:24 compute-0 systemd[304626]: Starting Create User's Volatile Files and Directories...
Oct 02 12:31:24 compute-0 systemd[304626]: Finished Create User's Volatile Files and Directories.
Oct 02 12:31:24 compute-0 systemd[304626]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:31:24 compute-0 systemd[304626]: Reached target Sockets.
Oct 02 12:31:24 compute-0 systemd[304626]: Reached target Basic System.
Oct 02 12:31:24 compute-0 systemd[304626]: Reached target Main User Target.
Oct 02 12:31:24 compute-0 systemd[304626]: Startup finished in 141ms.
Oct 02 12:31:24 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:31:24 compute-0 systemd[1]: Started Session 57 of User nova.
Oct 02 12:31:24 compute-0 sshd-session[304587]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:24 compute-0 nova_compute[256940]: 2025-10-02 12:31:24.322 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:24 compute-0 sshd-session[304641]: Received disconnect from 192.168.122.102 port 45682:11: disconnected by user
Oct 02 12:31:24 compute-0 sshd-session[304641]: Disconnected from user nova 192.168.122.102 port 45682
Oct 02 12:31:24 compute-0 sshd-session[304587]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:31:24 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 02 12:31:24 compute-0 systemd-logind[820]: Session 57 logged out. Waiting for processes to exit.
Oct 02 12:31:24 compute-0 systemd-logind[820]: Removed session 57.
Oct 02 12:31:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:24.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:24 compute-0 nova_compute[256940]: 2025-10-02 12:31:24.528 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] resizing rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:31:24 compute-0 sshd-session[304650]: Accepted publickey for nova from 192.168.122.102 port 45684 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:31:24 compute-0 systemd-logind[820]: New session 59 of user nova.
Oct 02 12:31:24 compute-0 systemd[1]: Started Session 59 of User nova.
Oct 02 12:31:24 compute-0 sshd-session[304650]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:31:24 compute-0 sshd-session[304701]: Received disconnect from 192.168.122.102 port 45684:11: disconnected by user
Oct 02 12:31:24 compute-0 sshd-session[304701]: Disconnected from user nova 192.168.122.102 port 45684
Oct 02 12:31:24 compute-0 sshd-session[304650]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:31:24 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 02 12:31:24 compute-0 systemd-logind[820]: Session 59 logged out. Waiting for processes to exit.
Oct 02 12:31:24 compute-0 systemd-logind[820]: Removed session 59.
Oct 02 12:31:24 compute-0 ceph-mon[73668]: pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:31:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 212 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 1.2 MiB/s wr, 95 op/s
Oct 02 12:31:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.491 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully updated port: 7592f97e-0ed9-49d9-a997-1e691974110b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.578 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.579 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.579 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.678 2 DEBUG nova.compute.manager [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-changed-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.679 2 DEBUG nova.compute.manager [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing instance network info cache due to event network-changed-7592f97e-0ed9-49d9-a997-1e691974110b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.679 2 DEBUG oslo_concurrency.lockutils [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:25 compute-0 nova_compute[256940]: 2025-10-02 12:31:25.980 2 DEBUG nova.objects.instance [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.049 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.049 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Ensure instance console log exists: /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.050 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.050 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.050 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:26.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.456 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:26.467 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:26.467 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:26.467 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:26 compute-0 nova_compute[256940]: 2025-10-02 12:31:26.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:26 compute-0 ceph-mon[73668]: pgmap v1640: 305 pgs: 305 active+clean; 212 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 1.2 MiB/s wr, 95 op/s
Oct 02 12:31:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.448 2 DEBUG nova.network.neutron [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.619 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.620 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Instance network_info: |[{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.620 2 DEBUG oslo_concurrency.lockutils [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.621 2 DEBUG nova.network.neutron [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing network info cache for port 7592f97e-0ed9-49d9-a997-1e691974110b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.624 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Start _get_guest_xml network_info=[{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.627 2 WARNING nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.636 2 DEBUG nova.virt.libvirt.host [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.636 2 DEBUG nova.virt.libvirt.host [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.640 2 DEBUG nova.virt.libvirt.host [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.640 2 DEBUG nova.virt.libvirt.host [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.641 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.641 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.642 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.642 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.642 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.642 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.642 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.643 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.643 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.643 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.643 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.644 2 DEBUG nova.virt.hardware [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:31:27 compute-0 nova_compute[256940]: 2025-10-02 12:31:27.646 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274566321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.098 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.130 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.134 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3274566321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49731173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.584 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.586 2 DEBUG nova.virt.libvirt.vif [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.587 2 DEBUG nova.network.os_vif_util [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.587 2 DEBUG nova.network.os_vif_util [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.589 2 DEBUG nova.objects.instance [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.675 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:31:27</nova:creationTime>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:31:28 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <system>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="serial">27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="uuid">27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </system>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <os>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </os>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <features>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </features>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk">
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </source>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config">
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </source>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:31:28 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d2:be:9d"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <target dev="tap7592f97e-0e"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log" append="off"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <video>
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </video>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:31:28 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:31:28 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:31:28 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:31:28 compute-0 nova_compute[256940]: </domain>
Oct 02 12:31:28 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.677 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Preparing to wait for external event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.678 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.678 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.679 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.679 2 DEBUG nova.virt.libvirt.vif [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.680 2 DEBUG nova.network.os_vif_util [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.680 2 DEBUG nova.network.os_vif_util [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.681 2 DEBUG os_vif [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7592f97e-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7592f97e-0e, col_values=(('external_ids', {'iface-id': '7592f97e-0ed9-49d9-a997-1e691974110b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:be:9d', 'vm-uuid': '27ae4ec8-c996-4a05-a592-2dc8cd23d759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:28 compute-0 NetworkManager[44981]: <info>  [1759408288.6939] manager: (tap7592f97e-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:28 compute-0 nova_compute[256940]: 2025-10-02 12:31:28.707 2 INFO os_vif [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e')
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:31:28
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control']
Oct 02 12:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:31:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.432 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.433 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.433 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:d2:be:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.433 2 INFO nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Using config drive
Oct 02 12:31:29 compute-0 ceph-mon[73668]: pgmap v1641: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct 02 12:31:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/49731173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.573 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.580 2 DEBUG nova.network.neutron [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated VIF entry in instance network info cache for port 7592f97e-0ed9-49d9-a997-1e691974110b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.580 2 DEBUG nova.network.neutron [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.671 2 DEBUG oslo_concurrency.lockutils [req-2142f25d-5fe0-4fb3-9974-874715bca2d4 req-47be5fe9-970e-4ffe-8409-0d8eda8200fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.856 2 DEBUG nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.857 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.858 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.858 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.859 2 DEBUG nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:29 compute-0 nova_compute[256940]: 2025-10-02 12:31:29.859 2 WARNING nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_migrated.
Oct 02 12:31:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:30 compute-0 nova_compute[256940]: 2025-10-02 12:31:30.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:30 compute-0 nova_compute[256940]: 2025-10-02 12:31:30.660 2 INFO nova.network.neutron [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating port baa241e6-fa7d-4fea-9e14-0af61693406b with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:31:30 compute-0 ceph-mon[73668]: pgmap v1642: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 02 12:31:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Oct 02 12:31:31 compute-0 nova_compute[256940]: 2025-10-02 12:31:31.666 2 INFO nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Creating config drive at /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config
Oct 02 12:31:31 compute-0 nova_compute[256940]: 2025-10-02 12:31:31.673 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_zq585b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:31 compute-0 nova_compute[256940]: 2025-10-02 12:31:31.811 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_zq585b" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:31 compute-0 nova_compute[256940]: 2025-10-02 12:31:31.856 2 DEBUG nova.storage.rbd_utils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:31 compute-0 nova_compute[256940]: 2025-10-02 12:31:31.862 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.336 2 DEBUG oslo_concurrency.processutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config 27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.337 2 INFO nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Deleting local config drive /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/disk.config because it was imported into RBD.
Oct 02 12:31:32 compute-0 kernel: tap7592f97e-0e: entered promiscuous mode
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.4053] manager: (tap7592f97e-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 ovn_controller[148123]: 2025-10-02T12:31:32Z|00254|binding|INFO|Claiming lport 7592f97e-0ed9-49d9-a997-1e691974110b for this chassis.
Oct 02 12:31:32 compute-0 ovn_controller[148123]: 2025-10-02T12:31:32Z|00255|binding|INFO|7592f97e-0ed9-49d9-a997-1e691974110b: Claiming fa:16:3e:d2:be:9d 10.100.0.9
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 systemd-machined[210927]: New machine qemu-32-instance-0000004c.
Oct 02 12:31:32 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000004c.
Oct 02 12:31:32 compute-0 systemd-udevd[304860]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.473 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:be:9d 10.100.0.9'], port_security=['fa:16:3e:d2:be:9d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07d16472-ba47-4252-b39c-3a10f45e3158', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7592f97e-0ed9-49d9-a997-1e691974110b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.474 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7592f97e-0ed9-49d9-a997-1e691974110b in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.476 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:31:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:32.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.4887] device (tap7592f97e-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.4918] device (tap7592f97e-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 ovn_controller[148123]: 2025-10-02T12:31:32Z|00256|binding|INFO|Setting lport 7592f97e-0ed9-49d9-a997-1e691974110b ovn-installed in OVS
Oct 02 12:31:32 compute-0 ovn_controller[148123]: 2025-10-02T12:31:32Z|00257|binding|INFO|Setting lport 7592f97e-0ed9-49d9-a997-1e691974110b up in Southbound
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.493 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13836498-e319-40b3-9703-a3666c5c8126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.495 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a187d8a-71 in ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.499 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a187d8a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.499 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c75ab8b9-9a92-4116-97cf-dd308417a928]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.501 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[91720824-542e-495a-8d87-d405f5e3b261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.515 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e8938b-4c76-481f-9c68-c126787472d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.535 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aa8157-5930-4a2b-b1f3-75c9a30c9a25]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.569 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce7ffab-efd9-42c6-a7e2-49404c1995e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.579 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[91c9c59a-220e-4c98-aba1-d925f8641c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.5809] manager: (tap6a187d8a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/128)
Oct 02 12:31:32 compute-0 systemd-udevd[304863]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.626 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3e85a347-c194-4de1-b6ee-38f0a70a5d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.630 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8df3e28f-f8ff-4470-9255-85ccd94fa8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.6543] device (tap6a187d8a-70): carrier: link connected
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.662 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd95edd-6b3e-4bf7-8164-71848792b96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.683 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2e08e69c-3f95-4091-9234-a09f1c80e992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304897, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.701 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4d601e-8695-4d78-a393-5bebe4518014]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:e868'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618813, 'tstamp': 618813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304898, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.721 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26053676-58a2-49e5-9c71-607af3fd103a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304899, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.757 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd590b26-9935-469f-953d-3b9574a3db52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.804 2 DEBUG nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.806 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.806 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.806 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.806 2 DEBUG nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.806 2 WARNING nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_migrated.
Oct 02 12:31:32 compute-0 ceph-mon[73668]: pgmap v1643: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.837 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d1d718-5a3c-47f9-a633-225b765eda6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.839 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.839 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.840 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:32 compute-0 kernel: tap6a187d8a-70: entered promiscuous mode
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 NetworkManager[44981]: <info>  [1759408292.8439] manager: (tap6a187d8a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.850 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:32 compute-0 ovn_controller[148123]: 2025-10-02T12:31:32Z|00258|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.853 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.855 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7ba00b-37ef-44f9-b545-f2993779a2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.856 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:31:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:32.857 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'env', 'PROCESS_TAG=haproxy-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a187d8a-77c6-4b27-bb13-654f471c1faf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.970 2 DEBUG nova.compute.manager [req-2f8bf5b1-14aa-45c8-a65b-687b6dc230e8 req-c5cd4f8f-b20a-4141-9334-6903c9b11555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.970 2 DEBUG oslo_concurrency.lockutils [req-2f8bf5b1-14aa-45c8-a65b-687b6dc230e8 req-c5cd4f8f-b20a-4141-9334-6903c9b11555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.970 2 DEBUG oslo_concurrency.lockutils [req-2f8bf5b1-14aa-45c8-a65b-687b6dc230e8 req-c5cd4f8f-b20a-4141-9334-6903c9b11555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.971 2 DEBUG oslo_concurrency.lockutils [req-2f8bf5b1-14aa-45c8-a65b-687b6dc230e8 req-c5cd4f8f-b20a-4141-9334-6903c9b11555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:32 compute-0 nova_compute[256940]: 2025-10-02 12:31:32.971 2 DEBUG nova.compute.manager [req-2f8bf5b1-14aa-45c8-a65b-687b6dc230e8 req-c5cd4f8f-b20a-4141-9334-6903c9b11555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Processing event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:31:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 12:31:33 compute-0 podman[304973]: 2025-10-02 12:31:33.23373528 +0000 UTC m=+0.030776872 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.443 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408293.4432733, 27ae4ec8-c996-4a05-a592-2dc8cd23d759 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.444 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] VM Started (Lifecycle Event)
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.447 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.451 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.456 2 INFO nova.virt.libvirt.driver [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Instance spawned successfully.
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.456 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.477 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.480 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.490 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.490 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.491 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.491 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.492 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.492 2 DEBUG nova.virt.libvirt.driver [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.501 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.501 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408293.4460886, 27ae4ec8-c996-4a05-a592-2dc8cd23d759 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.501 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] VM Paused (Lifecycle Event)
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.533 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.537 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408293.4506211, 27ae4ec8-c996-4a05-a592-2dc8cd23d759 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.538 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] VM Resumed (Lifecycle Event)
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.583 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.589 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.596 2 INFO nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Took 14.09 seconds to spawn the instance on the hypervisor.
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.597 2 DEBUG nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.611 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.684 2 INFO nova.compute.manager [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Took 24.68 seconds to build instance.
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.701 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.701 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.701 2 DEBUG nova.network.neutron [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:33 compute-0 nova_compute[256940]: 2025-10-02 12:31:33.750 2 DEBUG oslo_concurrency.lockutils [None req-a46d5fb6-7346-45d1-aa53-52f5644428af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:33 compute-0 podman[304973]: 2025-10-02 12:31:33.794044103 +0000 UTC m=+0.591085695 container create a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:31:33 compute-0 systemd[1]: Started libpod-conmon-a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03.scope.
Oct 02 12:31:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed57b3c0d469af6aadf483b8ea620b175290654bf99652762369341f8889122/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:34 compute-0 podman[304973]: 2025-10-02 12:31:34.059486991 +0000 UTC m=+0.856528653 container init a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:31:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:34 compute-0 podman[304973]: 2025-10-02 12:31:34.071532457 +0000 UTC m=+0.868574039 container start a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:34 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [NOTICE]   (304992) : New worker (304994) forked
Oct 02 12:31:34 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [NOTICE]   (304992) : Loading success.
Oct 02 12:31:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:34 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:31:34 compute-0 systemd[304626]: Activating special unit Exit the Session...
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped target Main User Target.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped target Basic System.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped target Paths.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped target Sockets.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped target Timers.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:31:34 compute-0 systemd[304626]: Closed D-Bus User Message Bus Socket.
Oct 02 12:31:34 compute-0 systemd[304626]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:31:34 compute-0 systemd[304626]: Removed slice User Application Slice.
Oct 02 12:31:34 compute-0 systemd[304626]: Reached target Shutdown.
Oct 02 12:31:34 compute-0 systemd[304626]: Finished Exit the Session.
Oct 02 12:31:34 compute-0 systemd[304626]: Reached target Exit the Session.
Oct 02 12:31:34 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:31:34 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:31:34 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:31:34 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:31:34 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:31:34 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:31:34 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:31:34 compute-0 ceph-mon[73668]: pgmap v1644: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 12:31:34 compute-0 nova_compute[256940]: 2025-10-02 12:31:34.943 2 DEBUG nova.compute.manager [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:34 compute-0 nova_compute[256940]: 2025-10-02 12:31:34.944 2 DEBUG nova.compute.manager [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing instance network info cache due to event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:34 compute-0 nova_compute[256940]: 2025-10-02 12:31:34.944 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.077 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.077 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.078 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.078 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.078 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.078 2 WARNING nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b for instance with vm_state active and task_state None.
Oct 02 12:31:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 220 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct 02 12:31:35 compute-0 nova_compute[256940]: 2025-10-02 12:31:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:35 compute-0 sudo[305005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:35 compute-0 sudo[305005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:35 compute-0 sudo[305005]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:35 compute-0 sudo[305030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:35 compute-0 sudo[305030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:35 compute-0 sudo[305030]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:36 compute-0 NetworkManager[44981]: <info>  [1759408296.2112] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Oct 02 12:31:36 compute-0 NetworkManager[44981]: <info>  [1759408296.2121] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.325 2 DEBUG nova.network.neutron [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:36 compute-0 ovn_controller[148123]: 2025-10-02T12:31:36Z|00259|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.426 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.429 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.429 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:36.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.606 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.610 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.611 2 INFO nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Creating image(s)
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.670 2 DEBUG nova.storage.rbd_utils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] creating snapshot(nova-resize) on rbd image(b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.834 2 DEBUG nova.compute.manager [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-changed-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.835 2 DEBUG nova.compute.manager [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing instance network info cache due to event network-changed-7592f97e-0ed9-49d9-a997-1e691974110b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.835 2 DEBUG oslo_concurrency.lockutils [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.835 2 DEBUG oslo_concurrency.lockutils [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:36 compute-0 nova_compute[256940]: 2025-10-02 12:31:36.836 2 DEBUG nova.network.neutron [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing network info cache for port 7592f97e-0ed9-49d9-a997-1e691974110b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 02 12:31:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 02 12:31:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 117 op/s
Oct 02 12:31:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 02 12:31:37 compute-0 ceph-mon[73668]: pgmap v1645: 305 pgs: 305 active+clean; 220 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.053 2 DEBUG nova.objects.instance [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:38.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.206 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.207 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Ensure instance console log exists: /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.208 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.208 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.209 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.211 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start _get_guest_xml network_info=[{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.215 2 WARNING nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.221 2 DEBUG nova.virt.libvirt.host [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.222 2 DEBUG nova.virt.libvirt.host [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.225 2 DEBUG nova.virt.libvirt.host [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.226 2 DEBUG nova.virt.libvirt.host [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.227 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.227 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.228 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.228 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.228 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.229 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.229 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.229 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.230 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.230 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.230 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.231 2 DEBUG nova.virt.hardware [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.231 2 DEBUG nova.objects.instance [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.291 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118957239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.814 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.865 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updated VIF entry in instance network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.866 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:38 compute-0 ceph-mon[73668]: pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 117 op/s
Oct 02 12:31:38 compute-0 ceph-mon[73668]: osdmap e240: 3 total, 3 up, 3 in
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.873 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:38 compute-0 nova_compute[256940]: 2025-10-02 12:31:38.955 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 110 op/s
Oct 02 12:31:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940299997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-0 podman[305190]: 2025-10-02 12:31:39.408217371 +0000 UTC m=+0.074310867 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.424 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.426 2 DEBUG nova.virt.libvirt.vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_
hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:29Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.426 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.427 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.430 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <uuid>b20c27bc-0af3-4e54-a5ab-51d9d5afce82</uuid>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <name>instance-0000004b</name>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1329955703</nova:name>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:31:38</nova:creationTime>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <nova:port uuid="baa241e6-fa7d-4fea-9e14-0af61693406b">
Oct 02 12:31:39 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <system>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="serial">b20c27bc-0af3-4e54-a5ab-51d9d5afce82</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="uuid">b20c27bc-0af3-4e54-a5ab-51d9d5afce82</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </system>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <os>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </os>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <features>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </features>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk">
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </source>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config">
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </source>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:31:39 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:f4:93:64"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <target dev="tapbaa241e6-fa"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/console.log" append="off"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <video>
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </video>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:31:39 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:31:39 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:31:39 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:31:39 compute-0 nova_compute[256940]: </domain>
Oct 02 12:31:39 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.431 2 DEBUG nova.virt.libvirt.vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:29Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.431 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.431 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.432 2 DEBUG os_vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbaa241e6-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbaa241e6-fa, col_values=(('external_ids', {'iface-id': 'baa241e6-fa7d-4fea-9e14-0af61693406b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:93:64', 'vm-uuid': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.4402] manager: (tapbaa241e6-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.449 2 INFO os_vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa')
Oct 02 12:31:39 compute-0 podman[305191]: 2025-10-02 12:31:39.464295095 +0000 UTC m=+0.126580794 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.535 2 DEBUG nova.network.neutron [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated VIF entry in instance network info cache for port 7592f97e-0ed9-49d9-a997-1e691974110b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.535 2 DEBUG nova.network.neutron [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.633 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.633 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.633 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:f4:93:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.634 2 INFO nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Using config drive
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.682 2 DEBUG oslo_concurrency.lockutils [req-2b74841e-bb13-4f60-958f-42e31166a5ae req-6fed313b-b045-4409-885e-2a4f0570a0cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:39 compute-0 kernel: tapbaa241e6-fa: entered promiscuous mode
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.7358] manager: (tapbaa241e6-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 ovn_controller[148123]: 2025-10-02T12:31:39Z|00260|binding|INFO|Claiming lport baa241e6-fa7d-4fea-9e14-0af61693406b for this chassis.
Oct 02 12:31:39 compute-0 ovn_controller[148123]: 2025-10-02T12:31:39Z|00261|binding|INFO|baa241e6-fa7d-4fea-9e14-0af61693406b: Claiming fa:16:3e:f4:93:64 10.100.0.14
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.750 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:93:64 10.100.0.14'], port_security=['fa:16:3e:f4:93:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=baa241e6-fa7d-4fea-9e14-0af61693406b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.755 158104 INFO neutron.agent.ovn.metadata.agent [-] Port baa241e6-fa7d-4fea-9e14-0af61693406b in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:31:39 compute-0 ovn_controller[148123]: 2025-10-02T12:31:39Z|00262|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b ovn-installed in OVS
Oct 02 12:31:39 compute-0 ovn_controller[148123]: 2025-10-02T12:31:39Z|00263|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b up in Southbound
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.759 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:31:39 compute-0 nova_compute[256940]: 2025-10-02 12:31:39.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.779 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60d6b0b2-4bbb-4a7d-ad36-5e92e38c087a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.781 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:31:39 compute-0 systemd-udevd[305267]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.783 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:31:39 compute-0 systemd-machined[210927]: New machine qemu-33-instance-0000004b.
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.783 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7b704afe-b7e8-40c8-9806-8e811e86b2d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.791 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9de34e27-de9f-4885-8405-5a57d9234201]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.7987] device (tapbaa241e6-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.7994] device (tapbaa241e6-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:31:39 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000004b.
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.811 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[4263f85f-8225-4a23-a4ea-aa61a9d7433c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.842 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0bf0ae-c8f5-4ca1-9262-eaecaf515b80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.884 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[173eb4e4-b8a1-44d8-a64e-d9472d5e7b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.891 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[15861e6b-cdd5-4bb3-90e0-890c4964b06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.8934] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/134)
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.937 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[191199a7-8941-4de8-9f53-d325ed3a3ab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.941 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[75febbf0-f75e-4e44-878e-29c427cfbaf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2118957239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2973912813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2940299997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-0 NetworkManager[44981]: <info>  [1759408299.9749] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:31:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:39.983 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[250519ce-6ac1-4250-a61c-c2fb212e1187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.005 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3fcfdb-31f4-4b3d-816c-9626a469647e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619545, 'reachable_time': 25545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305299, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.024 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe8cd0f-c6d4-4f3c-9909-cc57073d0c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619545, 'tstamp': 619545}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305300, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.045 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[461eec0d-865f-4cf3-b657-8f5f3b86482f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619545, 'reachable_time': 25545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305301, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:40.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.087 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[972f68cd-86f1-4590-84d6-1c35ec25540a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.169 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1de5a0b7-e028-472d-9f5c-7cf42f7c9a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.171 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.171 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 NetworkManager[44981]: <info>  [1759408300.1880] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Oct 02 12:31:40 compute-0 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.191 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.195 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:31:40 compute-0 ovn_controller[148123]: 2025-10-02T12:31:40Z|00264|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.196 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[44438f8f-2c0c-49a1-9707-7b19a3f0a850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.197 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:31:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:40.199 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003167057800576959 of space, bias 1.0, pg target 0.9501173401730877 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:31:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:40.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:40 compute-0 podman[305377]: 2025-10-02 12:31:40.565971198 +0000 UTC m=+0.025977630 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.680 2 DEBUG nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.681 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.681 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.681 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.681 2 DEBUG nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.681 2 WARNING nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_finish.
Oct 02 12:31:40 compute-0 podman[305377]: 2025-10-02 12:31:40.701129779 +0000 UTC m=+0.161136191 container create 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:31:40 compute-0 systemd[1]: Started libpod-conmon-29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c.scope.
Oct 02 12:31:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff514fd745289539d0d14c2b77f5fd2f5cfee1e3f3c9f90c751d8a3fe2b0c1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:40 compute-0 podman[305377]: 2025-10-02 12:31:40.824738117 +0000 UTC m=+0.284744549 container init 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:31:40 compute-0 podman[305377]: 2025-10-02 12:31:40.831296713 +0000 UTC m=+0.291303125 container start 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:31:40 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [NOTICE]   (305395) : New worker (305397) forked
Oct 02 12:31:40 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [NOTICE]   (305395) : Loading success.
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.975 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408300.9745607, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.975 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Resumed (Lifecycle Event)
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.979 2 DEBUG nova.compute.manager [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.983 2 INFO nova.virt.libvirt.driver [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance running successfully.
Oct 02 12:31:40 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.987 2 DEBUG nova.virt.libvirt.guest [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:31:40 compute-0 nova_compute[256940]: 2025-10-02 12:31:40.987 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.008 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.012 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:41 compute-0 ceph-mon[73668]: pgmap v1648: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 110 op/s
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.082 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.083 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408300.9787638, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.083 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Started (Lifecycle Event)
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.116 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:41 compute-0 nova_compute[256940]: 2025-10-02 12:31:41.122 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 145 op/s
Oct 02 12:31:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:42.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.832 2 DEBUG nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.833 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.834 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.834 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.835 2 DEBUG nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:42 compute-0 nova_compute[256940]: 2025-10-02 12:31:42.835 2 WARNING nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state resized and task_state None.
Oct 02 12:31:43 compute-0 ceph-mon[73668]: pgmap v1649: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 145 op/s
Oct 02 12:31:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 19 KiB/s wr, 151 op/s
Oct 02 12:31:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:44.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:44 compute-0 ceph-mon[73668]: pgmap v1650: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 19 KiB/s wr, 151 op/s
Oct 02 12:31:44 compute-0 nova_compute[256940]: 2025-10-02 12:31:44.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 KiB/s wr, 177 op/s
Oct 02 12:31:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3767337922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/231615588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:45 compute-0 nova_compute[256940]: 2025-10-02 12:31:45.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:45 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 12:31:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:46.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 02 12:31:46 compute-0 ceph-mon[73668]: pgmap v1651: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 KiB/s wr, 177 op/s
Oct 02 12:31:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3674039429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 02 12:31:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 02 12:31:46 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:31:46 compute-0 sudo[305409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:46 compute-0 sudo[305409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:46 compute-0 sudo[305409]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-0 sudo[305435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:47 compute-0 ovn_controller[148123]: 2025-10-02T12:31:47Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:be:9d 10.100.0.9
Oct 02 12:31:47 compute-0 ovn_controller[148123]: 2025-10-02T12:31:47Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:be:9d 10.100.0.9
Oct 02 12:31:47 compute-0 sudo[305435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-0 sudo[305435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-0 sudo[305460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:47 compute-0 sudo[305460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-0 sudo[305460]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-0 sudo[305485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:31:47 compute-0 sudo[305485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:47 compute-0 ovn_controller[148123]: 2025-10-02T12:31:47Z|00265|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:31:47 compute-0 ovn_controller[148123]: 2025-10-02T12:31:47Z|00266|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:31:47 compute-0 nova_compute[256940]: 2025-10-02 12:31:47.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:47 compute-0 sudo[305485]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:47 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f5665391-6f51-42e2-ad96-c52a3ecf49a5 does not exist
Oct 02 12:31:47 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 34bf73dd-80fc-4010-ae09-095b7693013c does not exist
Oct 02 12:31:47 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6784f60c-9eb3-4780-9f45-a40a8a744af1 does not exist
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:31:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:31:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:48 compute-0 ceph-mon[73668]: osdmap e241: 3 total, 3 up, 3 in
Oct 02 12:31:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3148228124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:31:48 compute-0 sudo[305541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:48 compute-0 sudo[305541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:48 compute-0 sudo[305541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:48 compute-0 sudo[305566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:48 compute-0 sudo[305566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:48 compute-0 sudo[305566]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:48 compute-0 sudo[305591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:48 compute-0 sudo[305591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:48 compute-0 sudo[305591]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:48 compute-0 sudo[305616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:31:48 compute-0 sudo[305616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.676057561 +0000 UTC m=+0.046602834 container create 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:31:48 compute-0 systemd[1]: Started libpod-conmon-42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200.scope.
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.654646238 +0000 UTC m=+0.025191531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.77881908 +0000 UTC m=+0.149364373 container init 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.788780312 +0000 UTC m=+0.159325585 container start 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.795614266 +0000 UTC m=+0.166159539 container attach 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:31:48 compute-0 focused_wiles[305697]: 167 167
Oct 02 12:31:48 compute-0 systemd[1]: libpod-42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200.scope: Deactivated successfully.
Oct 02 12:31:48 compute-0 conmon[305697]: conmon 42fe4309a752b9ef5f6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200.scope/container/memory.events
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.800048198 +0000 UTC m=+0.170593481 container died 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-60fc5154e411aaa372705995d77b13fb7b3751c9d181b90503b8e1f25f686211-merged.mount: Deactivated successfully.
Oct 02 12:31:48 compute-0 podman[305680]: 2025-10-02 12:31:48.877354351 +0000 UTC m=+0.247899624 container remove 42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:31:48 compute-0 systemd[1]: libpod-conmon-42fe4309a752b9ef5f6d6dff2d41846b8bbe45934dedc1c08cf4740a6f0f7200.scope: Deactivated successfully.
Oct 02 12:31:49 compute-0 ceph-mon[73668]: pgmap v1653: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4138466693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-0 podman[305722]: 2025-10-02 12:31:49.089449285 +0000 UTC m=+0.055357257 container create 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:31:49 compute-0 systemd[1]: Started libpod-conmon-9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412.scope.
Oct 02 12:31:49 compute-0 podman[305722]: 2025-10-02 12:31:49.062762017 +0000 UTC m=+0.028670019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:49 compute-0 podman[305722]: 2025-10-02 12:31:49.228652978 +0000 UTC m=+0.194560970 container init 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:31:49 compute-0 podman[305722]: 2025-10-02 12:31:49.237617296 +0000 UTC m=+0.203525268 container start 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:31:49 compute-0 podman[305722]: 2025-10-02 12:31:49.243677569 +0000 UTC m=+0.209585541 container attach 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:49 compute-0 nova_compute[256940]: 2025-10-02 12:31:49.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:50.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2173979604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3674890317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/443590527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-0 dreamy_banach[305739]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:31:50 compute-0 dreamy_banach[305739]: --> relative data size: 1.0
Oct 02 12:31:50 compute-0 dreamy_banach[305739]: --> All data devices are unavailable
Oct 02 12:31:50 compute-0 systemd[1]: libpod-9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412.scope: Deactivated successfully.
Oct 02 12:31:50 compute-0 podman[305754]: 2025-10-02 12:31:50.273852079 +0000 UTC m=+0.033213414 container died 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:31:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 12:31:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 12:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-605dc83e7b398e7fcb4cb1357fd52dc9e4eb98a3b90545e15a9903d87c3fda65-merged.mount: Deactivated successfully.
Oct 02 12:31:50 compute-0 podman[305754]: 2025-10-02 12:31:50.600328725 +0000 UTC m=+0.359690030 container remove 9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:31:50 compute-0 systemd[1]: libpod-conmon-9e2a6d0011ce81b9b8422ec3ec88ed408a50645312dad18a01c7844b2092e412.scope: Deactivated successfully.
Oct 02 12:31:50 compute-0 nova_compute[256940]: 2025-10-02 12:31:50.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:50 compute-0 sudo[305616]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:50 compute-0 sudo[305771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:50 compute-0 sudo[305771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:50 compute-0 sudo[305771]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:50 compute-0 sudo[305796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:50 compute-0 sudo[305796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:50 compute-0 sudo[305796]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:50 compute-0 sudo[305821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:50 compute-0 sudo[305821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:50 compute-0 sudo[305821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:50 compute-0 sudo[305846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:31:50 compute-0 sudo[305846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:51 compute-0 ceph-mon[73668]: pgmap v1654: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 243 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.321413229 +0000 UTC m=+0.055032698 container create 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:51 compute-0 systemd[1]: Started libpod-conmon-94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81.scope.
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.29897572 +0000 UTC m=+0.032595219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.434284294 +0000 UTC m=+0.167903783 container init 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.443252222 +0000 UTC m=+0.176871691 container start 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:31:51 compute-0 quirky_mclean[305925]: 167 167
Oct 02 12:31:51 compute-0 systemd[1]: libpod-94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81.scope: Deactivated successfully.
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.461252139 +0000 UTC m=+0.194871608 container attach 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.463375193 +0000 UTC m=+0.196994672 container died 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdcb7bf1cc927e883524e285c7d5d38c32a6c168cbb13b49792bfdfdb00da4f5-merged.mount: Deactivated successfully.
Oct 02 12:31:51 compute-0 podman[305909]: 2025-10-02 12:31:51.552237008 +0000 UTC m=+0.285856477 container remove 94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:31:51 compute-0 systemd[1]: libpod-conmon-94bacba063aa489b29cde6f3aeb24ad5b47a69b5d2f3514d786daa1fbcedfb81.scope: Deactivated successfully.
Oct 02 12:31:51 compute-0 podman[305950]: 2025-10-02 12:31:51.740885257 +0000 UTC m=+0.047400914 container create 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:31:51 compute-0 systemd[1]: Started libpod-conmon-0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b.scope.
Oct 02 12:31:51 compute-0 podman[305950]: 2025-10-02 12:31:51.721900505 +0000 UTC m=+0.028416192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2eab7dee46515f2910a3c50aa08163747da0f043cd039d4a1b3e4c6144b5011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2eab7dee46515f2910a3c50aa08163747da0f043cd039d4a1b3e4c6144b5011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2eab7dee46515f2910a3c50aa08163747da0f043cd039d4a1b3e4c6144b5011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2eab7dee46515f2910a3c50aa08163747da0f043cd039d4a1b3e4c6144b5011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:51 compute-0 podman[305950]: 2025-10-02 12:31:51.850600912 +0000 UTC m=+0.157116599 container init 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:31:51 compute-0 podman[305950]: 2025-10-02 12:31:51.857488897 +0000 UTC m=+0.164004554 container start 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:51 compute-0 podman[305950]: 2025-10-02 12:31:51.867699586 +0000 UTC m=+0.174215273 container attach 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:51 compute-0 nova_compute[256940]: 2025-10-02 12:31:51.947 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:51 compute-0 nova_compute[256940]: 2025-10-02 12:31:51.950 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:51 compute-0 nova_compute[256940]: 2025-10-02 12:31:51.950 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:51 compute-0 nova_compute[256940]: 2025-10-02 12:31:51.951 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:31:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:52.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.233 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.234 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.235 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.293 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.294 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.295 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.295 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.295 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.297 2 INFO nova.compute.manager [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Terminating instance
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.298 2 DEBUG nova.compute.manager [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:31:52 compute-0 kernel: tapbaa241e6-fa (unregistering): left promiscuous mode
Oct 02 12:31:52 compute-0 NetworkManager[44981]: <info>  [1759408312.3981] device (tapbaa241e6-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:31:52 compute-0 ovn_controller[148123]: 2025-10-02T12:31:52Z|00267|binding|INFO|Releasing lport baa241e6-fa7d-4fea-9e14-0af61693406b from this chassis (sb_readonly=0)
Oct 02 12:31:52 compute-0 ovn_controller[148123]: 2025-10-02T12:31:52Z|00268|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b down in Southbound
Oct 02 12:31:52 compute-0 ovn_controller[148123]: 2025-10-02T12:31:52Z|00269|binding|INFO|Removing iface tapbaa241e6-fa ovn-installed in OVS
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.435 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:93:64 10.100.0.14'], port_security=['fa:16:3e:f4:93:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=baa241e6-fa7d-4fea-9e14-0af61693406b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.436 158104 INFO neutron.agent.ovn.metadata.agent [-] Port baa241e6-fa7d-4fea-9e14-0af61693406b in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.438 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.439 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[79530d48-c594-4afd-a201-41fa044e2d4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.440 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:31:52 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct 02 12:31:52 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004b.scope: Consumed 12.302s CPU time.
Oct 02 12:31:52 compute-0 systemd-machined[210927]: Machine qemu-33-instance-0000004b terminated.
Oct 02 12:31:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.544 2 INFO nova.virt.libvirt.driver [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance destroyed successfully.
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.547 2 DEBUG nova.objects.instance [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.562 2 DEBUG nova.virt.libvirt.vif [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:48Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.564 2 DEBUG nova.network.os_vif_util [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.565 2 DEBUG nova.network.os_vif_util [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.566 2 DEBUG os_vif [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbaa241e6-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.577 2 INFO os_vif [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa')
Oct 02 12:31:52 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [NOTICE]   (305395) : haproxy version is 2.8.14-c23fe91
Oct 02 12:31:52 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [NOTICE]   (305395) : path to executable is /usr/sbin/haproxy
Oct 02 12:31:52 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [WARNING]  (305395) : Exiting Master process...
Oct 02 12:31:52 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [ALERT]    (305395) : Current worker (305397) exited with code 143 (Terminated)
Oct 02 12:31:52 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[305391]: [WARNING]  (305395) : All workers exited. Exiting... (0)
Oct 02 12:31:52 compute-0 systemd[1]: libpod-29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c.scope: Deactivated successfully.
Oct 02 12:31:52 compute-0 podman[306026]: 2025-10-02 12:31:52.629548455 +0000 UTC m=+0.062903878 container died 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff514fd745289539d0d14c2b77f5fd2f5cfee1e3f3c9f90c751d8a3fe2b0c1a-merged.mount: Deactivated successfully.
Oct 02 12:31:52 compute-0 vigorous_greider[305966]: {
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:     "1": [
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:         {
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "devices": [
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "/dev/loop3"
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             ],
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "lv_name": "ceph_lv0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "lv_size": "7511998464",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "name": "ceph_lv0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "tags": {
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.cluster_name": "ceph",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.crush_device_class": "",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.encrypted": "0",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.osd_id": "1",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.type": "block",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:                 "ceph.vdo": "0"
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             },
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "type": "block",
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:             "vg_name": "ceph_vg0"
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:         }
Oct 02 12:31:52 compute-0 vigorous_greider[305966]:     ]
Oct 02 12:31:52 compute-0 vigorous_greider[305966]: }
Oct 02 12:31:52 compute-0 podman[306026]: 2025-10-02 12:31:52.715435975 +0000 UTC m=+0.148791398 container cleanup 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:52 compute-0 systemd[1]: libpod-conmon-29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c.scope: Deactivated successfully.
Oct 02 12:31:52 compute-0 systemd[1]: libpod-0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b.scope: Deactivated successfully.
Oct 02 12:31:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:52 compute-0 podman[305950]: 2025-10-02 12:31:52.756750844 +0000 UTC m=+1.063266521 container died 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/660504455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2eab7dee46515f2910a3c50aa08163747da0f043cd039d4a1b3e4c6144b5011-merged.mount: Deactivated successfully.
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.790 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:52 compute-0 podman[305950]: 2025-10-02 12:31:52.838242092 +0000 UTC m=+1.144757749 container remove 0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:52 compute-0 podman[306078]: 2025-10-02 12:31:52.830041474 +0000 UTC m=+0.086574239 container remove 29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.848 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82205824-ff51-462f-b998-1f67117f8d66]: (4, ('Thu Oct  2 12:31:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c)\n29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c\nThu Oct  2 12:31:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c)\n29db01b342dd37bdb449dcb82965ed6c4df25402fc6dd76cafdd19730b99704c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.852 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8aa6ef-a0ed-4419-8947-fc113d35c905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.853 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:31:52 compute-0 systemd[1]: libpod-conmon-0fc392c2df8d5ca279f4b1701c780eb6a4c5fec6e88c7b930d38ff6b8105e28b.scope: Deactivated successfully.
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.875 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd08c9fa-48a3-4360-a6ac-ee83f32850c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 sudo[305846]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.896 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.897 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.902 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:52 compute-0 nova_compute[256940]: 2025-10-02 12:31:52.902 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.901 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3cde1521-a9b3-4488-bd96-7501efd56a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2de25857-2f6a-45c3-ba65-3444c26d0056]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.922 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d2240e37-82f5-440f-8e27-aa70d742037f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619535, 'reachable_time': 36136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306106, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.928 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:31:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:52.928 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfede7b-487b-40e6-bc7b-868043aa453b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:52 compute-0 sudo[306105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:52 compute-0 sudo[306105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:52 compute-0 sudo[306105]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:53 compute-0 sudo[306132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:53 compute-0 sudo[306132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:53 compute-0 sudo[306132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.069 2 DEBUG nova.compute.manager [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.070 2 DEBUG oslo_concurrency.lockutils [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.070 2 DEBUG oslo_concurrency.lockutils [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.070 2 DEBUG oslo_concurrency.lockutils [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.070 2 DEBUG nova.compute.manager [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.071 2 DEBUG nova.compute.manager [req-fd073405-32cf-49cc-a380-f875a2da92fb req-dc330274-ae34-4ed6-a337-18cf4626c296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:31:53 compute-0 sudo[306157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:53 compute-0 sudo[306157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:53 compute-0 sudo[306157]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.135 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.136 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4384MB free_disk=20.876590728759766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.137 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.137 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:53 compute-0 ceph-mon[73668]: pgmap v1655: 305 pgs: 305 active+clean; 243 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 02 12:31:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/660504455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:53 compute-0 sudo[306182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:31:53 compute-0 sudo[306182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.221 2 INFO nova.virt.libvirt.driver [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Deleting instance files /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_del
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.222 2 INFO nova.virt.libvirt.driver [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Deletion of /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_del complete
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.246 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance b20c27bc-0af3-4e54-a5ab-51d9d5afce82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:31:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 246 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.323 2 INFO nova.compute.manager [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.324 2 DEBUG oslo.service.loopingcall [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.324 2 DEBUG nova.compute.manager [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.324 2 DEBUG nova.network.neutron [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.421 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.587769388 +0000 UTC m=+0.096725446 container create 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.515136334 +0000 UTC m=+0.024092422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:53 compute-0 systemd[1]: Started libpod-conmon-197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84.scope.
Oct 02 12:31:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.710808191 +0000 UTC m=+0.219764309 container init 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.720443056 +0000 UTC m=+0.229399114 container start 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:31:53 compute-0 upbeat_merkle[306281]: 167 167
Oct 02 12:31:53 compute-0 systemd[1]: libpod-197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84.scope: Deactivated successfully.
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.7702691 +0000 UTC m=+0.279225158 container attach 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.772798415 +0000 UTC m=+0.281754473 container died 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:31:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718085045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5210b970ba6818a585944da9abde73acb5f06376992e818d7508922f43f289f5-merged.mount: Deactivated successfully.
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.906 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.914 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.935 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:53 compute-0 podman[306247]: 2025-10-02 12:31:53.942512373 +0000 UTC m=+0.451468431 container remove 197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:53 compute-0 systemd[1]: libpod-conmon-197fa104c8c16f1a58ce0e38cd170a1651e30ff0d5d9219696cec1c6f8a63f84.scope: Deactivated successfully.
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.970 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:31:53 compute-0 nova_compute[256940]: 2025-10-02 12:31:53.971 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:54.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:54 compute-0 podman[306310]: 2025-10-02 12:31:54.120641824 +0000 UTC m=+0.043035353 container create 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:54 compute-0 systemd[1]: Started libpod-conmon-17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6.scope.
Oct 02 12:31:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dffbdd1e2885701055ca3fe0d3cd1516badd5814c53ff5f21f14c74c348f2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dffbdd1e2885701055ca3fe0d3cd1516badd5814c53ff5f21f14c74c348f2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dffbdd1e2885701055ca3fe0d3cd1516badd5814c53ff5f21f14c74c348f2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dffbdd1e2885701055ca3fe0d3cd1516badd5814c53ff5f21f14c74c348f2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:54 compute-0 podman[306310]: 2025-10-02 12:31:54.10155562 +0000 UTC m=+0.023949179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:31:54 compute-0 podman[306310]: 2025-10-02 12:31:54.20087878 +0000 UTC m=+0.123272329 container init 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:31:54 compute-0 podman[306310]: 2025-10-02 12:31:54.209434117 +0000 UTC m=+0.131827646 container start 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:31:54 compute-0 podman[306310]: 2025-10-02 12:31:54.213805898 +0000 UTC m=+0.136199447 container attach 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/718085045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:54 compute-0 podman[306327]: 2025-10-02 12:31:54.267183553 +0000 UTC m=+0.079000796 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:31:54 compute-0 podman[306329]: 2025-10-02 12:31:54.2887305 +0000 UTC m=+0.100812050 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.507 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.508 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.509 2 DEBUG nova.objects.instance [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.528 2 DEBUG nova.objects.instance [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.540 2 DEBUG nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.972 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.972 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.972 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:31:54 compute-0 nova_compute[256940]: 2025-10-02 12:31:54.998 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.119 2 DEBUG nova.network.neutron [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:55 compute-0 jovial_ellis[306326]: {
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:         "osd_id": 1,
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:         "type": "bluestore"
Oct 02 12:31:55 compute-0 jovial_ellis[306326]:     }
Oct 02 12:31:55 compute-0 jovial_ellis[306326]: }
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.157 2 INFO nova.compute.manager [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Took 1.83 seconds to deallocate network for instance.
Oct 02 12:31:55 compute-0 systemd[1]: libpod-17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6.scope: Deactivated successfully.
Oct 02 12:31:55 compute-0 podman[306310]: 2025-10-02 12:31:55.166804369 +0000 UTC m=+1.089197898 container died 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-24dffbdd1e2885701055ca3fe0d3cd1516badd5814c53ff5f21f14c74c348f2c-merged.mount: Deactivated successfully.
Oct 02 12:31:55 compute-0 podman[306310]: 2025-10-02 12:31:55.227825018 +0000 UTC m=+1.150218567 container remove 17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:31:55 compute-0 systemd[1]: libpod-conmon-17a95c269f229ff994fe5d3d4671da5dc97adf2532641e19c26a2332d66484b6.scope: Deactivated successfully.
Oct 02 12:31:55 compute-0 ceph-mon[73668]: pgmap v1656: 305 pgs: 305 active+clean; 246 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Oct 02 12:31:55 compute-0 sudo[306182]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.266 2 DEBUG nova.compute.manager [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.267 2 DEBUG oslo_concurrency.lockutils [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.267 2 DEBUG oslo_concurrency.lockutils [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.267 2 DEBUG oslo_concurrency.lockutils [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.268 2 DEBUG nova.compute.manager [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.268 2 WARNING nova.compute.manager [req-0ce8d755-5aaa-4b18-b6b1-be4742b0a58d req-c7a7e381-8612-4bff-9306-0e6a14a70466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state deleting.
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.274 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.275 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:31:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0189a048-0466-4b73-bcb1-48054073ac16 does not exist
Oct 02 12:31:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 64f28f33-a09e-468a-b8e5-7d45b61a40e0 does not exist
Oct 02 12:31:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ecfb9214-be21-414a-a4ed-ea437a55ca6b does not exist
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 02 12:31:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 203 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 186 op/s
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 02 12:31:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 02 12:31:55 compute-0 sudo[306403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:55 compute-0 sudo[306403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.376 2 DEBUG oslo_concurrency.processutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:55 compute-0 sudo[306403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.415 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.416 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.416 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.416 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:55 compute-0 sudo[306429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:31:55 compute-0 sudo[306429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-0 sudo[306429]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.462 2 DEBUG nova.compute.manager [req-558bb544-ca94-4899-8d4c-2010e7f5a96d req-55a8d937-e0d6-4a9b-9ba3-26b1a596e0d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-deleted-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.658 2 DEBUG nova.policy [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:31:55 compute-0 sudo[306473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:55 compute-0 sudo[306473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-0 sudo[306473]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214102434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.858 2 DEBUG oslo_concurrency.processutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.863 2 DEBUG nova.compute.provider_tree [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:55 compute-0 sudo[306498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:55 compute-0 sudo[306498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-0 sudo[306498]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.882 2 DEBUG nova.scheduler.client.report [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.908 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:55 compute-0 nova_compute[256940]: 2025-10-02 12:31:55.969 2 INFO nova.scheduler.client.report [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance b20c27bc-0af3-4e54-a5ab-51d9d5afce82
Oct 02 12:31:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:31:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:56.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:31:56 compute-0 nova_compute[256940]: 2025-10-02 12:31:56.120 2 DEBUG oslo_concurrency.lockutils [None req-7ec708be-c2b6-4220-a16d-72043bd89fda 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:56 compute-0 ceph-mon[73668]: pgmap v1657: 305 pgs: 305 active+clean; 203 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 186 op/s
Oct 02 12:31:56 compute-0 ceph-mon[73668]: osdmap e242: 3 total, 3 up, 3 in
Oct 02 12:31:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1214102434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:56 compute-0 nova_compute[256940]: 2025-10-02 12:31:56.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:31:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:31:56 compute-0 nova_compute[256940]: 2025-10-02 12:31:56.997 2 DEBUG nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully created port: 20f71e08-5b04-4fd1-bf99-5a684d024adc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:31:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:57.077 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:57.078 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:31:57 compute-0 nova_compute[256940]: 2025-10-02 12:31:57.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:31:57.079 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:31:57 compute-0 nova_compute[256940]: 2025-10-02 12:31:57.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:58.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:58 compute-0 ceph-mon[73668]: pgmap v1659: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:31:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1408219479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:31:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:31:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:31:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.540 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.706 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.706 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.707 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.707 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:58 compute-0 nova_compute[256940]: 2025-10-02 12:31:58.707 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:32:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:00.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.205 2 DEBUG nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully updated port: 20f71e08-5b04-4fd1-bf99-5a684d024adc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.236 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.236 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.236 2 DEBUG nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.352 2 DEBUG nova.compute.manager [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-changed-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.353 2 DEBUG nova.compute.manager [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing instance network info cache due to event network-changed-20f71e08-5b04-4fd1-bf99-5a684d024adc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.353 2 DEBUG oslo_concurrency.lockutils [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.450 2 WARNING nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:00 compute-0 ceph-mon[73668]: pgmap v1660: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:32:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:00.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:00 compute-0 nova_compute[256940]: 2025-10-02 12:32:00.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 192 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 134 op/s
Oct 02 12:32:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:02.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:02 compute-0 nova_compute[256940]: 2025-10-02 12:32:02.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:02 compute-0 ceph-mon[73668]: pgmap v1661: 305 pgs: 305 active+clean; 192 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 134 op/s
Oct 02 12:32:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 213 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:32:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:04.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:04 compute-0 ceph-mon[73668]: pgmap v1662: 305 pgs: 305 active+clean; 213 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:32:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 227 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Oct 02 12:32:05 compute-0 nova_compute[256940]: 2025-10-02 12:32:05.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:05 compute-0 nova_compute[256940]: 2025-10-02 12:32:05.942 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:06.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.516 2 DEBUG nova.network.neutron [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:06.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.535 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.536 2 DEBUG oslo_concurrency.lockutils [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.536 2 DEBUG nova.network.neutron [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing network info cache for port 20f71e08-5b04-4fd1-bf99-5a684d024adc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.541 2 DEBUG nova.virt.libvirt.vif [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.541 2 DEBUG nova.network.os_vif_util [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.542 2 DEBUG nova.network.os_vif_util [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.542 2 DEBUG os_vif [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20f71e08-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20f71e08-5b, col_values=(('external_ids', {'iface-id': '20f71e08-5b04-4fd1-bf99-5a684d024adc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:bf:20', 'vm-uuid': '27ae4ec8-c996-4a05-a592-2dc8cd23d759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 NetworkManager[44981]: <info>  [1759408326.5509] manager: (tap20f71e08-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.559 2 INFO os_vif [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b')
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.560 2 DEBUG nova.virt.libvirt.vif [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.560 2 DEBUG nova.network.os_vif_util [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.561 2 DEBUG nova.network.os_vif_util [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.565 2 DEBUG nova.virt.libvirt.guest [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:48:bf:20"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <target dev="tap20f71e08-5b"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:06 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:32:06 compute-0 kernel: tap20f71e08-5b: entered promiscuous mode
Oct 02 12:32:06 compute-0 NetworkManager[44981]: <info>  [1759408326.5831] manager: (tap20f71e08-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Oct 02 12:32:06 compute-0 ovn_controller[148123]: 2025-10-02T12:32:06Z|00270|binding|INFO|Claiming lport 20f71e08-5b04-4fd1-bf99-5a684d024adc for this chassis.
Oct 02 12:32:06 compute-0 ovn_controller[148123]: 2025-10-02T12:32:06Z|00271|binding|INFO|20f71e08-5b04-4fd1-bf99-5a684d024adc: Claiming fa:16:3e:48:bf:20 10.100.0.7
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.594 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:bf:20 10.100.0.7'], port_security=['fa:16:3e:48:bf:20 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20f71e08-5b04-4fd1-bf99-5a684d024adc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.595 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20f71e08-5b04-4fd1-bf99-5a684d024adc in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.597 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:06 compute-0 ovn_controller[148123]: 2025-10-02T12:32:06Z|00272|binding|INFO|Setting lport 20f71e08-5b04-4fd1-bf99-5a684d024adc ovn-installed in OVS
Oct 02 12:32:06 compute-0 ovn_controller[148123]: 2025-10-02T12:32:06Z|00273|binding|INFO|Setting lport 20f71e08-5b04-4fd1-bf99-5a684d024adc up in Southbound
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 systemd-udevd[306538]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.622 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5b19f638-a940-432b-929f-21f09e14efc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 NetworkManager[44981]: <info>  [1759408326.6407] device (tap20f71e08-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:06 compute-0 NetworkManager[44981]: <info>  [1759408326.6418] device (tap20f71e08-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.667 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f6429585-4d3d-4827-907b-f301082c8f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.673 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[979ee878-281d-4dfd-8788-a3bc809cbf57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.690 2 DEBUG nova.virt.libvirt.driver [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.691 2 DEBUG nova.virt.libvirt.driver [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.691 2 DEBUG nova.virt.libvirt.driver [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:d2:be:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.691 2 DEBUG nova.virt.libvirt.driver [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:48:bf:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.711 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bce9972f-64d1-46a0-9d18-bc67730f9da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.724 2 DEBUG nova.virt.libvirt.guest [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:06</nova:creationTime>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:06 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     <nova:port uuid="20f71e08-5b04-4fd1-bf99-5a684d024adc">
Oct 02 12:32:06 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:32:06 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:06 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:06 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:06 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[befa0459-1cc5-4b3c-89dd-bb419109812b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306545, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.750 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10de3d9b-4082-4c4c-bfcb-55119ddfeca4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306546, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306546, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.752 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.755 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.755 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.756 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:06.756 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:06 compute-0 nova_compute[256940]: 2025-10-02 12:32:06.770 2 DEBUG oslo_concurrency.lockutils [None req-9ad5503c-460d-447f-9a6f-599abded9851 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 12.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:07 compute-0 ceph-mon[73668]: pgmap v1663: 305 pgs: 305 active+clean; 227 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Oct 02 12:32:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.420 2 DEBUG nova.compute.manager [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.420 2 DEBUG oslo_concurrency.lockutils [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.420 2 DEBUG oslo_concurrency.lockutils [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.420 2 DEBUG oslo_concurrency.lockutils [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.421 2 DEBUG nova.compute.manager [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.421 2 WARNING nova.compute.manager [req-4f25a181-07a3-4af7-98c5-428ed7362974 req-06bcac51-d68f-48fb-825a-29e2b5fb1ad3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc for instance with vm_state active and task_state None.
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.541 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408312.5406108, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.542 2 INFO nova.compute.manager [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Stopped (Lifecycle Event)
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.585 2 DEBUG nova.compute.manager [None req-b46a3a61-6e9d-439b-aed5-5e5508787fb7 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:07 compute-0 nova_compute[256940]: 2025-10-02 12:32:07.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:07 compute-0 ovn_controller[148123]: 2025-10-02T12:32:07Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:bf:20 10.100.0.7
Oct 02 12:32:07 compute-0 ovn_controller[148123]: 2025-10-02T12:32:07Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:bf:20 10.100.0.7
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.056 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.056 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.057 2 DEBUG nova.objects.instance [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:08.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.684 2 DEBUG nova.objects.instance [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.708 2 DEBUG nova.network.neutron [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated VIF entry in instance network info cache for port 20f71e08-5b04-4fd1-bf99-5a684d024adc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.709 2 DEBUG nova.network.neutron [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.710 2 DEBUG nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:32:08 compute-0 nova_compute[256940]: 2025-10-02 12:32:08.761 2 DEBUG oslo_concurrency.lockutils [req-a687ae9d-446a-4997-b862-5ba4ec73c843 req-340f9976-c7e2-496f-bb79-95f84a06a807 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 12:32:09 compute-0 ceph-mon[73668]: pgmap v1664: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.419 2 DEBUG nova.policy [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.610 2 DEBUG nova.compute.manager [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.611 2 DEBUG oslo_concurrency.lockutils [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.611 2 DEBUG oslo_concurrency.lockutils [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.611 2 DEBUG oslo_concurrency.lockutils [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.612 2 DEBUG nova.compute.manager [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:09 compute-0 nova_compute[256940]: 2025-10-02 12:32:09.612 2 WARNING nova.compute.manager [req-b3f5c2ec-518b-4555-af5a-05ac45849b17 req-fa395c7a-615c-4f5d-884f-14fd1bf9bc65 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc for instance with vm_state active and task_state None.
Oct 02 12:32:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:10.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:10 compute-0 nova_compute[256940]: 2025-10-02 12:32:10.443 2 DEBUG nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully created port: 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:32:10 compute-0 podman[306548]: 2025-10-02 12:32:10.456516706 +0000 UTC m=+0.109654585 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:32:10 compute-0 podman[306549]: 2025-10-02 12:32:10.494723976 +0000 UTC m=+0.147668930 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:32:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:10 compute-0 nova_compute[256940]: 2025-10-02 12:32:10.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:10 compute-0 ceph-mon[73668]: pgmap v1665: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 12:32:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2211173841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.527 2 DEBUG nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully updated port: 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.573 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.574 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.574 2 DEBUG nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.650 2 DEBUG nova.compute.manager [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-changed-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.650 2 DEBUG nova.compute.manager [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing instance network info cache due to event network-changed-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.651 2 DEBUG oslo_concurrency.lockutils [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.792 2 WARNING nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:11 compute-0 nova_compute[256940]: 2025-10-02 12:32:11.793 2 WARNING nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:12.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3524789732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 84 op/s
Oct 02 12:32:13 compute-0 ceph-mon[73668]: pgmap v1666: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:32:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:14 compute-0 nova_compute[256940]: 2025-10-02 12:32:14.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:14.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:15 compute-0 ceph-mon[73668]: pgmap v1667: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 84 op/s
Oct 02 12:32:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:32:15 compute-0 nova_compute[256940]: 2025-10-02 12:32:15.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:15 compute-0 sudo[306599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:15 compute-0 sudo[306599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:15 compute-0 sudo[306599]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:16 compute-0 sudo[306624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:16 compute-0 sudo[306624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:16 compute-0 sudo[306624]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000076s ======
Oct 02 12:32:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:16.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Oct 02 12:32:16 compute-0 ceph-mon[73668]: pgmap v1668: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:32:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:16.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.732 2 DEBUG nova.network.neutron [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.772 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.775 2 DEBUG oslo_concurrency.lockutils [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.776 2 DEBUG nova.network.neutron [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing network info cache for port 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.780 2 DEBUG nova.virt.libvirt.vif [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.781 2 DEBUG nova.network.os_vif_util [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.782 2 DEBUG nova.network.os_vif_util [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.783 2 DEBUG os_vif [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.785 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.785 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap130ec7c9-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap130ec7c9-b8, col_values=(('external_ids', {'iface-id': '130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:d5:97', 'vm-uuid': '27ae4ec8-c996-4a05-a592-2dc8cd23d759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 NetworkManager[44981]: <info>  [1759408336.7945] manager: (tap130ec7c9-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.803 2 INFO os_vif [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8')
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.804 2 DEBUG nova.virt.libvirt.vif [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.804 2 DEBUG nova.network.os_vif_util [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.804 2 DEBUG nova.network.os_vif_util [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.807 2 DEBUG nova.virt.libvirt.guest [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:32:16 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:97:d5:97"/>
Oct 02 12:32:16 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:16 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:16 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:16 compute-0 nova_compute[256940]:   <target dev="tap130ec7c9-b8"/>
Oct 02 12:32:16 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:16 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:32:16 compute-0 kernel: tap130ec7c9-b8: entered promiscuous mode
Oct 02 12:32:16 compute-0 NetworkManager[44981]: <info>  [1759408336.8208] manager: (tap130ec7c9-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Oct 02 12:32:16 compute-0 ovn_controller[148123]: 2025-10-02T12:32:16Z|00274|binding|INFO|Claiming lport 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f for this chassis.
Oct 02 12:32:16 compute-0 ovn_controller[148123]: 2025-10-02T12:32:16Z|00275|binding|INFO|130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f: Claiming fa:16:3e:97:d5:97 10.100.0.12
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.840 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d5:97 10.100.0.12'], port_security=['fa:16:3e:97:d5:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.842 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:32:16 compute-0 ovn_controller[148123]: 2025-10-02T12:32:16Z|00276|binding|INFO|Setting lport 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f ovn-installed in OVS
Oct 02 12:32:16 compute-0 ovn_controller[148123]: 2025-10-02T12:32:16Z|00277|binding|INFO|Setting lport 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f up in Southbound
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.845 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-0 systemd-udevd[306657]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.867 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60755455-3766-4895-acfd-67a12dabb12d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-0 NetworkManager[44981]: <info>  [1759408336.8766] device (tap130ec7c9-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:16 compute-0 NetworkManager[44981]: <info>  [1759408336.8788] device (tap130ec7c9-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.921 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4021bd57-d81c-420e-9af1-a8eb066750f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.924 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[aecf08fa-1470-4228-8c4d-b5967ac73750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.963 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[12271268-f620-41f4-afb5-f4637629e6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.975 2 DEBUG nova.virt.libvirt.driver [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.976 2 DEBUG nova.virt.libvirt.driver [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.976 2 DEBUG nova.virt.libvirt.driver [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:d2:be:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.976 2 DEBUG nova.virt.libvirt.driver [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:48:bf:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:16 compute-0 nova_compute[256940]: 2025-10-02 12:32:16.976 2 DEBUG nova.virt.libvirt.driver [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:97:d5:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:16.991 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[177121d0-1b7d-4a77-b161-e9ddb4bc94f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306664, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-0 nova_compute[256940]: 2025-10-02 12:32:17.012 2 DEBUG nova.virt.libvirt.guest [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:17</nova:creationTime>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:17 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:port uuid="20f71e08-5b04-4fd1-bf99-5a684d024adc">
Oct 02 12:32:17 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:17 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:17 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:17 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:17 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:17 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.014 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a7b8b1-2f66-4e8a-a1b0-067948e7041a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306665, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306665, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.017 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-0 nova_compute[256940]: 2025-10-02 12:32:17.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.020 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.021 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.021 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:17.022 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:17 compute-0 nova_compute[256940]: 2025-10-02 12:32:17.050 2 DEBUG oslo_concurrency.lockutils [None req-ed2af099-d68e-418e-9d92-16e9b937b2af 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Oct 02 12:32:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2350958497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3187127440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.201 2 DEBUG nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.201 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.201 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.201 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.202 2 DEBUG nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.202 2 WARNING nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f for instance with vm_state active and task_state None.
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.202 2 DEBUG nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.202 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.203 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.203 2 DEBUG oslo_concurrency.lockutils [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.203 2 DEBUG nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:18 compute-0 nova_compute[256940]: 2025-10-02 12:32:18.203 2 WARNING nova.compute.manager [req-cfbd282b-38e6-4bb2-8add-88b425726b9f req-9442316c-bf7f-4b77-8b74-4d2b3db06ae5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f for instance with vm_state active and task_state None.
Oct 02 12:32:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:18.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.071 2 DEBUG nova.compute.manager [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.160 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.160 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.192 2 DEBUG nova.objects.instance [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_requests' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.226 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.226 2 INFO nova.compute.claims [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.227 2 DEBUG nova.objects.instance [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.248 2 DEBUG nova.objects.instance [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.306 2 INFO nova.compute.resource_tracker [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating resource usage from migration 86dbf808-c068-4f96-81df-2af5c322181a
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.307 2 DEBUG nova.compute.resource_tracker [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Starting to track incoming migration 86dbf808-c068-4f96-81df-2af5c322181a with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:32:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 37 KiB/s wr, 50 op/s
Oct 02 12:32:19 compute-0 ceph-mon[73668]: pgmap v1669: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.406 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.456 2 DEBUG nova.network.neutron [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated VIF entry in instance network info cache for port 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.458 2 DEBUG nova.network.neutron [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.496 2 DEBUG oslo_concurrency.lockutils [req-5fe567d8-3e16-49bd-93eb-22726b8de5ae req-18068c47-182a-4d3a-a144-83d7f8c3ebdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.620 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.621 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.621 2 DEBUG nova.objects.instance [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:19 compute-0 ovn_controller[148123]: 2025-10-02T12:32:19Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:d5:97 10.100.0.12
Oct 02 12:32:19 compute-0 ovn_controller[148123]: 2025-10-02T12:32:19Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:d5:97 10.100.0.12
Oct 02 12:32:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908620508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.919 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.929 2 DEBUG nova.compute.provider_tree [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.957 2 DEBUG nova.scheduler.client.report [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:19 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.999 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:20 compute-0 nova_compute[256940]: 2025-10-02 12:32:19.999 2 INFO nova.compute.manager [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Migrating
Oct 02 12:32:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:20.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:20.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:20 compute-0 nova_compute[256940]: 2025-10-02 12:32:20.546 2 DEBUG nova.objects.instance [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:20 compute-0 nova_compute[256940]: 2025-10-02 12:32:20.562 2 DEBUG nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:32:20 compute-0 nova_compute[256940]: 2025-10-02 12:32:20.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:20 compute-0 ceph-mon[73668]: pgmap v1670: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 37 KiB/s wr, 50 op/s
Oct 02 12:32:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2908620508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:21 compute-0 nova_compute[256940]: 2025-10-02 12:32:21.282 2 DEBUG nova.policy [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:32:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 107 op/s
Oct 02 12:32:21 compute-0 nova_compute[256940]: 2025-10-02 12:32:21.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:22.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.283 2 DEBUG nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Successfully updated port: b15a5cc5-f350-4edd-a0b1-caa8458fdd6a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:22 compute-0 sshd-session[306690]: Accepted publickey for nova from 192.168.122.102 port 40348 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.306 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.307 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.307 2 DEBUG nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:22 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:32:22 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:32:22 compute-0 systemd-logind[820]: New session 60 of user nova.
Oct 02 12:32:22 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:32:22 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:32:22 compute-0 systemd[306694]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.401 2 DEBUG nova.compute.manager [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-changed-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.404 2 DEBUG nova.compute.manager [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing instance network info cache due to event network-changed-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.404 2 DEBUG oslo_concurrency.lockutils [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.497 2 WARNING nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.499 2 WARNING nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:22 compute-0 nova_compute[256940]: 2025-10-02 12:32:22.499 2 WARNING nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:32:22 compute-0 systemd[306694]: Queued start job for default target Main User Target.
Oct 02 12:32:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:22.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:22 compute-0 systemd[306694]: Created slice User Application Slice.
Oct 02 12:32:22 compute-0 systemd[306694]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:32:22 compute-0 systemd[306694]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:32:22 compute-0 systemd[306694]: Reached target Paths.
Oct 02 12:32:22 compute-0 systemd[306694]: Reached target Timers.
Oct 02 12:32:22 compute-0 systemd[306694]: Starting D-Bus User Message Bus Socket...
Oct 02 12:32:22 compute-0 systemd[306694]: Starting Create User's Volatile Files and Directories...
Oct 02 12:32:22 compute-0 systemd[306694]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:32:22 compute-0 systemd[306694]: Reached target Sockets.
Oct 02 12:32:22 compute-0 systemd[306694]: Finished Create User's Volatile Files and Directories.
Oct 02 12:32:22 compute-0 systemd[306694]: Reached target Basic System.
Oct 02 12:32:22 compute-0 systemd[306694]: Reached target Main User Target.
Oct 02 12:32:22 compute-0 systemd[306694]: Startup finished in 205ms.
Oct 02 12:32:22 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:32:22 compute-0 systemd[1]: Started Session 60 of User nova.
Oct 02 12:32:22 compute-0 sshd-session[306690]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:32:22 compute-0 sshd-session[306710]: Received disconnect from 192.168.122.102 port 40348:11: disconnected by user
Oct 02 12:32:22 compute-0 sshd-session[306710]: Disconnected from user nova 192.168.122.102 port 40348
Oct 02 12:32:22 compute-0 sshd-session[306690]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:32:22 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct 02 12:32:22 compute-0 systemd-logind[820]: Session 60 logged out. Waiting for processes to exit.
Oct 02 12:32:22 compute-0 systemd-logind[820]: Removed session 60.
Oct 02 12:32:22 compute-0 sshd-session[306712]: Accepted publickey for nova from 192.168.122.102 port 40364 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:32:22 compute-0 systemd-logind[820]: New session 62 of user nova.
Oct 02 12:32:22 compute-0 systemd[1]: Started Session 62 of User nova.
Oct 02 12:32:22 compute-0 sshd-session[306712]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:32:23 compute-0 sshd-session[306715]: Received disconnect from 192.168.122.102 port 40364:11: disconnected by user
Oct 02 12:32:23 compute-0 sshd-session[306715]: Disconnected from user nova 192.168.122.102 port 40364
Oct 02 12:32:23 compute-0 sshd-session[306712]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:32:23 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Oct 02 12:32:23 compute-0 systemd-logind[820]: Session 62 logged out. Waiting for processes to exit.
Oct 02 12:32:23 compute-0 systemd-logind[820]: Removed session 62.
Oct 02 12:32:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 117 op/s
Oct 02 12:32:23 compute-0 ceph-mon[73668]: pgmap v1671: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 107 op/s
Oct 02 12:32:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:24.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:24 compute-0 podman[306718]: 2025-10-02 12:32:24.428322069 +0000 UTC m=+0.084337092 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:32:24 compute-0 podman[306717]: 2025-10-02 12:32:24.431742386 +0000 UTC m=+0.085898372 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 12:32:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 135 op/s
Oct 02 12:32:25 compute-0 ceph-mon[73668]: pgmap v1672: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 117 op/s
Oct 02 12:32:25 compute-0 nova_compute[256940]: 2025-10-02 12:32:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:26.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:26.468 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:26.469 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:26.470 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:26 compute-0 ceph-mon[73668]: pgmap v1673: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 135 op/s
Oct 02 12:32:26 compute-0 nova_compute[256940]: 2025-10-02 12:32:26.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 704 KiB/s wr, 155 op/s
Oct 02 12:32:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:28.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:32:28
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr']
Oct 02 12:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:32:29 compute-0 ceph-mon[73668]: pgmap v1674: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 704 KiB/s wr, 155 op/s
Oct 02 12:32:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 680 KiB/s wr, 116 op/s
Oct 02 12:32:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:30.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:30 compute-0 ceph-mon[73668]: pgmap v1675: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 680 KiB/s wr, 116 op/s
Oct 02 12:32:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:30.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:30 compute-0 nova_compute[256940]: 2025-10-02 12:32:30.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.301 2 DEBUG nova.network.neutron [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.327 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.328 2 DEBUG oslo_concurrency.lockutils [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.328 2 DEBUG nova.network.neutron [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Refreshing network info cache for port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 268 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.333 2 DEBUG nova.virt.libvirt.vif [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.334 2 DEBUG nova.network.os_vif_util [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.335 2 DEBUG nova.network.os_vif_util [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.335 2 DEBUG os_vif [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb15a5cc5-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb15a5cc5-f3, col_values=(('external_ids', {'iface-id': 'b15a5cc5-f350-4edd-a0b1-caa8458fdd6a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:a2:d3', 'vm-uuid': '27ae4ec8-c996-4a05-a592-2dc8cd23d759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 NetworkManager[44981]: <info>  [1759408351.3449] manager: (tapb15a5cc5-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.350 2 INFO os_vif [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3')
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.351 2 DEBUG nova.virt.libvirt.vif [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.351 2 DEBUG nova.network.os_vif_util [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.352 2 DEBUG nova.network.os_vif_util [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.354 2 DEBUG nova.virt.libvirt.guest [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:c3:a2:d3"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <target dev="tapb15a5cc5-f3"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:31 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:32:31 compute-0 kernel: tapb15a5cc5-f3: entered promiscuous mode
Oct 02 12:32:31 compute-0 NetworkManager[44981]: <info>  [1759408351.3696] manager: (tapb15a5cc5-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Oct 02 12:32:31 compute-0 ovn_controller[148123]: 2025-10-02T12:32:31Z|00278|binding|INFO|Claiming lport b15a5cc5-f350-4edd-a0b1-caa8458fdd6a for this chassis.
Oct 02 12:32:31 compute-0 ovn_controller[148123]: 2025-10-02T12:32:31Z|00279|binding|INFO|b15a5cc5-f350-4edd-a0b1-caa8458fdd6a: Claiming fa:16:3e:c3:a2:d3 10.100.0.11
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.377 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:a2:d3 10.100.0.11'], port_security=['fa:16:3e:c3:a2:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-113983688', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-113983688', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.378 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.380 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:31 compute-0 ovn_controller[148123]: 2025-10-02T12:32:31Z|00280|binding|INFO|Setting lport b15a5cc5-f350-4edd-a0b1-caa8458fdd6a ovn-installed in OVS
Oct 02 12:32:31 compute-0 ovn_controller[148123]: 2025-10-02T12:32:31Z|00281|binding|INFO|Setting lport b15a5cc5-f350-4edd-a0b1-caa8458fdd6a up in Southbound
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.404 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[57a27408-1018-4467-ad67-33e11af4d4ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 systemd-udevd[306765]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:31 compute-0 NetworkManager[44981]: <info>  [1759408351.4369] device (tapb15a5cc5-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:31 compute-0 NetworkManager[44981]: <info>  [1759408351.4377] device (tapb15a5cc5-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.445 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa142c7-b2ae-471e-8758-13460c867403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.449 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4958a2c2-9475-46e3-86f7-4b8bd1d306e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.478 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.478 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.479 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:d2:be:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.479 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:48:bf:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.479 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:97:d5:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.479 2 DEBUG nova.virt.libvirt.driver [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:c3:a2:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.490 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bd64c8d2-7e6b-46c7-be35-90c00137d3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.512 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf6b746-d7b8-45c9-a8fe-af36dc84fb1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306772, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.514 2 DEBUG nova.virt.libvirt.guest [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:31</nova:creationTime>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:31 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:port uuid="20f71e08-5b04-4fd1-bf99-5a684d024adc">
Oct 02 12:32:31 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:31 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:31 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:31 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:31 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:31 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:31 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.534 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ba432184-e6bc-49b9-9c8f-2b762e1db642]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306773, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306773, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.537 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:31.541 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.543 2 DEBUG oslo_concurrency.lockutils [None req-bc5023bb-903a-40b4-8e38-6feed28a5d44 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.645 2 DEBUG nova.compute.manager [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.645 2 DEBUG oslo_concurrency.lockutils [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.645 2 DEBUG oslo_concurrency.lockutils [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.645 2 DEBUG oslo_concurrency.lockutils [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.645 2 DEBUG nova.compute.manager [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:31 compute-0 nova_compute[256940]: 2025-10-02 12:32:31.646 2 WARNING nova.compute.manager [req-69a23016-ae2b-49e5-aa35-0bc6304acb17 req-447d3255-797c-49c6-8aaf-b2c90b2b4636 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a for instance with vm_state active and task_state None.
Oct 02 12:32:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:32 compute-0 ceph-mon[73668]: pgmap v1676: 305 pgs: 305 active+clean; 268 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Oct 02 12:32:33 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:32:33 compute-0 systemd[306694]: Activating special unit Exit the Session...
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped target Main User Target.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped target Basic System.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped target Paths.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped target Sockets.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped target Timers.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:32:33 compute-0 systemd[306694]: Closed D-Bus User Message Bus Socket.
Oct 02 12:32:33 compute-0 systemd[306694]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:32:33 compute-0 systemd[306694]: Removed slice User Application Slice.
Oct 02 12:32:33 compute-0 systemd[306694]: Reached target Shutdown.
Oct 02 12:32:33 compute-0 systemd[306694]: Finished Exit the Session.
Oct 02 12:32:33 compute-0 systemd[306694]: Reached target Exit the Session.
Oct 02 12:32:33 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:32:33 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:32:33 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:32:33 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:32:33 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:32:33 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:32:33 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:32:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 275 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:32:33 compute-0 ovn_controller[148123]: 2025-10-02T12:32:33Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:a2:d3 10.100.0.11
Oct 02 12:32:33 compute-0 ovn_controller[148123]: 2025-10-02T12:32:33Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:a2:d3 10.100.0.11
Oct 02 12:32:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:34.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:34 compute-0 ceph-mon[73668]: pgmap v1677: 305 pgs: 305 active+clean; 275 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:32:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 276 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.692 2 DEBUG nova.compute.manager [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.693 2 DEBUG oslo_concurrency.lockutils [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.693 2 DEBUG oslo_concurrency.lockutils [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.693 2 DEBUG oslo_concurrency.lockutils [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.693 2 DEBUG nova.compute.manager [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.693 2 WARNING nova.compute.manager [req-cebb0f83-ed8c-46a4-a8d4-e5fc458a2193 req-67e5fe50-0908-4c6c-81c8-9959db6e0893 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a for instance with vm_state active and task_state None.
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.894 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-20f71e08-5b04-4fd1-bf99-5a684d024adc" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.894 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-20f71e08-5b04-4fd1-bf99-5a684d024adc" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.918 2 DEBUG nova.objects.instance [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.942 2 DEBUG nova.virt.libvirt.vif [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.942 2 DEBUG nova.network.os_vif_util [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.943 2 DEBUG nova.network.os_vif_util [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.946 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.949 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.951 2 DEBUG nova.virt.libvirt.driver [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Attempting to detach device tap20f71e08-5b from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.951 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:48:bf:20"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <target dev="tap20f71e08-5b"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.968 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.972 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface>not found in domain: <domain type='kvm' id='32'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:31</nova:creationTime>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:port uuid="20f71e08-5b04-4fd1-bf99-5a684d024adc">
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:35 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='serial'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='uuid'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk' index='2'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config' index='1'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:d2:be:9d'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='tap7592f97e-0e'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:48:bf:20'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='tap20f71e08-5b'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='net1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:97:d5:97'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='tap130ec7c9-b8'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='net2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:c3:a2:d3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target dev='tapb15a5cc5-f3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='net3'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       </target>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </console>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c794,c953</label>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c794,c953</imagelabel>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:32:35 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:35 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:35 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.972 2 INFO nova.virt.libvirt.driver [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap20f71e08-5b from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the persistent domain config.
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.972 2 DEBUG nova.virt.libvirt.driver [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] (1/8): Attempting to detach device tap20f71e08-5b with device alias net1 from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:32:35 compute-0 nova_compute[256940]: 2025-10-02 12:32:35.973 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:48:bf:20"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]:   <target dev="tap20f71e08-5b"/>
Oct 02 12:32:35 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:35 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:32:36 compute-0 kernel: tap20f71e08-5b (unregistering): left promiscuous mode
Oct 02 12:32:36 compute-0 NetworkManager[44981]: <info>  [1759408356.0796] device (tap20f71e08-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.089 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408356.08874, 27ae4ec8-c996-4a05-a592-2dc8cd23d759 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.090 2 DEBUG nova.virt.libvirt.driver [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Start waiting for the detach event from libvirt for device tap20f71e08-5b with device alias net1 for instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.091 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:36 compute-0 ovn_controller[148123]: 2025-10-02T12:32:36Z|00282|binding|INFO|Releasing lport 20f71e08-5b04-4fd1-bf99-5a684d024adc from this chassis (sb_readonly=0)
Oct 02 12:32:36 compute-0 ovn_controller[148123]: 2025-10-02T12:32:36Z|00283|binding|INFO|Setting lport 20f71e08-5b04-4fd1-bf99-5a684d024adc down in Southbound
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_controller[148123]: 2025-10-02T12:32:36Z|00284|binding|INFO|Removing iface tap20f71e08-5b ovn-installed in OVS
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.098 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface>not found in domain: <domain type='kvm' id='32'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:31</nova:creationTime>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="20f71e08-5b04-4fd1-bf99-5a684d024adc">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:36 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='serial'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='uuid'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk' index='2'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config' index='1'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:d2:be:9d'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target dev='tap7592f97e-0e'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:97:d5:97'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target dev='tap130ec7c9-b8'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='net2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:c3:a2:d3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target dev='tapb15a5cc5-f3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='net3'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       </target>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </console>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c794,c953</label>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c794,c953</imagelabel>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:36 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:36 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.099 2 INFO nova.virt.libvirt.driver [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap20f71e08-5b from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the live domain config.
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.100 2 DEBUG nova.virt.libvirt.vif [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.100 2 DEBUG nova.network.os_vif_util [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.101 2 DEBUG nova.network.os_vif_util [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.101 2 DEBUG os_vif [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.103 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20f71e08-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.105 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:bf:20 10.100.0.7'], port_security=['fa:16:3e:48:bf:20 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20f71e08-5b04-4fd1-bf99-5a684d024adc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.108 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20f71e08-5b04-4fd1-bf99-5a684d024adc in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.111 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.119 2 INFO os_vif [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b')
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.120 2 DEBUG nova.virt.libvirt.guest [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:36</nova:creationTime>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:36 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:36 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:36 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:36 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:36 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.129 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[59c4b8e0-6159-46a7-bf35-938de95d2811]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:36 compute-0 sudo[306781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.167 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cc922846-4060-4161-a475-bbab6154d18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.170 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d309be60-a4b5-410a-b03b-d4578fa1ea7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 sudo[306781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:36 compute-0 sudo[306781]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.202 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[887dd141-6c5d-4fa9-8ecf-e5c56d8ecbb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.223 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaf16a3-63b5-4f49-98d1-8868f5a20a75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306833, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 sudo[306815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:36 compute-0 sudo[306815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.242 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b45661-6700-480d-8334-5267d7043d47]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306839, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306839, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:36 compute-0 sudo[306815]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.244 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.247 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.247 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.247 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:36.248 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.520 2 DEBUG nova.network.neutron [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updated VIF entry in instance network info cache for port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.521 2 DEBUG nova.network.neutron [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.553 2 DEBUG oslo_concurrency.lockutils [req-e6031778-8b05-44f8-8ac4-8e05fc5e685e req-5bec2970-08d7-4f94-9b23-fffc30ab24fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:36.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.886 2 DEBUG nova.compute.manager [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-unplugged-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.886 2 DEBUG oslo_concurrency.lockutils [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.887 2 DEBUG oslo_concurrency.lockutils [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.887 2 DEBUG oslo_concurrency.lockutils [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.887 2 DEBUG nova.compute.manager [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-unplugged-20f71e08-5b04-4fd1-bf99-5a684d024adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:36 compute-0 nova_compute[256940]: 2025-10-02 12:32:36.887 2 WARNING nova.compute.manager [req-4fb83cfe-3962-4993-a5b3-57c75d9be42a req-1c4a059c-b52c-4516-b78c-7c327dcad16e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-unplugged-20f71e08-5b04-4fd1-bf99-5a684d024adc for instance with vm_state active and task_state None.
Oct 02 12:32:37 compute-0 ceph-mon[73668]: pgmap v1678: 305 pgs: 305 active+clean; 276 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.167 2 INFO nova.network.neutron [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating port 2ce66710-95c3-4fa0-999b-b7cf0b722cac with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:32:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.408 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.409 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.409 2 DEBUG nova.network.neutron [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.481 2 DEBUG nova.compute.manager [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-deleted-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.482 2 INFO nova.compute.manager [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Neutron deleted interface 20f71e08-5b04-4fd1-bf99-5a684d024adc; detaching it from the instance and deleting it from the info cache
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.482 2 DEBUG nova.network.neutron [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.527 2 DEBUG nova.objects.instance [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.572 2 DEBUG nova.objects.instance [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.596 2 DEBUG nova.virt.libvirt.vif [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.597 2 DEBUG nova.network.os_vif_util [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.598 2 DEBUG nova.network.os_vif_util [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.603 2 DEBUG nova.virt.libvirt.guest [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.606 2 DEBUG nova.virt.libvirt.guest [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface>not found in domain: <domain type='kvm' id='32'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:36</nova:creationTime>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='serial'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='uuid'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk' index='2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config' index='1'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:d2:be:9d'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tap7592f97e-0e'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:97:d5:97'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tap130ec7c9-b8'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:c3:a2:d3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tapb15a5cc5-f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </target>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </console>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c794,c953</label>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c794,c953</imagelabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:37 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.606 2 DEBUG nova.virt.libvirt.guest [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.609 2 DEBUG nova.virt.libvirt.guest [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:bf:20"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap20f71e08-5b"/></interface>not found in domain: <domain type='kvm' id='32'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:36</nova:creationTime>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='serial'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='uuid'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk' index='2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config' index='1'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:d2:be:9d'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tap7592f97e-0e'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:97:d5:97'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tap130ec7c9-b8'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:c3:a2:d3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target dev='tapb15a5cc5-f3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='net3'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       </target>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </console>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c794,c953</label>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c794,c953</imagelabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:32:37 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:37 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.610 2 WARNING nova.virt.libvirt.driver [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Detaching interface fa:16:3e:48:bf:20 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap20f71e08-5b' not found.
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.610 2 DEBUG nova.virt.libvirt.vif [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.610 2 DEBUG nova.network.os_vif_util [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "address": "fa:16:3e:48:bf:20", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f71e08-5b", "ovs_interfaceid": "20f71e08-5b04-4fd1-bf99-5a684d024adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.611 2 DEBUG nova.network.os_vif_util [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.611 2 DEBUG os_vif [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20f71e08-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.614 2 INFO os_vif [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:bf:20,bridge_name='br-int',has_traffic_filtering=True,id=20f71e08-5b04-4fd1-bf99-5a684d024adc,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f71e08-5b')
Oct 02 12:32:37 compute-0 nova_compute[256940]: 2025-10-02 12:32:37.614 2 DEBUG nova.virt.libvirt.guest [req-c7245017-bc65-4ab5-ad25-9dda2f0ba517 req-03f97cd4-3be3-4dc3-900a-bb7dbff1df1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:37</nova:creationTime>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     <nova:port uuid="b15a5cc5-f350-4edd-a0b1-caa8458fdd6a">
Oct 02 12:32:37 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:32:37 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:37 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:37 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:37 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.004 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.004 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.004 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.005 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.005 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.005 2 WARNING nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_migrated.
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.006 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.006 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.006 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.006 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.006 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.007 2 WARNING nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_migrated.
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.083 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.083 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.084 2 DEBUG nova.network.neutron [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:38.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:38 compute-0 ceph-mon[73668]: pgmap v1679: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.374 158104 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 7f9fc2ac-cbe3-439d-9930-137d7ab9d0aa with type ""
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.375 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:a2:d3 10.100.0.11'], port_security=['fa:16:3e:c3:a2:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-113983688', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-113983688', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.376 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.378 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:38 compute-0 ovn_controller[148123]: 2025-10-02T12:32:38Z|00285|binding|INFO|Removing iface tapb15a5cc5-f3 ovn-installed in OVS
Oct 02 12:32:38 compute-0 ovn_controller[148123]: 2025-10-02T12:32:38Z|00286|binding|INFO|Removing lport b15a5cc5-f350-4edd-a0b1-caa8458fdd6a ovn-installed in OVS
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.396 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e14d65e6-bba4-49ea-9ec5-f77b119234f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.429 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b89a67-bf60-43a4-89a3-089f4600605a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.432 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f5274bfb-14eb-410c-9bc9-468b9c568dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.462 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[07492777-f496-43e4-b17e-bedff3d18ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.484 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c289b5c7-ca9f-4a79-a677-eeef2f7b9352]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306848, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.506 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f89dfb-10f5-4b0e-8283-67f585394f5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306849, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306849, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.508 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.511 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.511 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.511 2 DEBUG nova.compute.manager [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-deleted-b15a5cc5-f350-4edd-a0b1-caa8458fdd6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.512 2 INFO nova.compute.manager [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Neutron deleted interface b15a5cc5-f350-4edd-a0b1-caa8458fdd6a; detaching it from the instance and deleting it from the info cache
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.512 2 DEBUG nova.network.neutron [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.512 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:38.512 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.556 2 DEBUG nova.objects.instance [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:38.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.568 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.569 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.569 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.569 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.569 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.570 2 INFO nova.compute.manager [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Terminating instance
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.571 2 DEBUG nova.compute.manager [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.577 2 DEBUG nova.objects.instance [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.601 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.602 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.603 2 DEBUG nova.virt.libvirt.vif [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.604 2 DEBUG nova.network.os_vif_util [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.604 2 DEBUG nova.network.os_vif_util [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.617 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.679 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.680 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.689 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.689 2 INFO nova.compute.claims [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:32:38 compute-0 nova_compute[256940]: 2025-10-02 12:32:38.888 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:38 compute-0 kernel: tap7592f97e-0e (unregistering): left promiscuous mode
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.0035] device (tap7592f97e-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.012 2 DEBUG nova.compute.manager [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.013 2 DEBUG oslo_concurrency.lockutils [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.013 2 DEBUG oslo_concurrency.lockutils [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.013 2 DEBUG oslo_concurrency.lockutils [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.013 2 DEBUG nova.compute.manager [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00287|binding|INFO|Releasing lport 7592f97e-0ed9-49d9-a997-1e691974110b from this chassis (sb_readonly=0)
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00288|binding|INFO|Setting lport 7592f97e-0ed9-49d9-a997-1e691974110b down in Southbound
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00289|binding|INFO|Removing iface tap7592f97e-0e ovn-installed in OVS
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.018 2 WARNING nova.compute.manager [req-ca3ee753-3f9f-4966-81c5-01852fe59ba7 req-35f7533e-4ad7-402e-bcfc-5d9d4f6f26c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-20f71e08-5b04-4fd1-bf99-5a684d024adc for instance with vm_state active and task_state deleting.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.024 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:be:9d 10.100.0.9'], port_security=['fa:16:3e:d2:be:9d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07d16472-ba47-4252-b39c-3a10f45e3158', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7592f97e-0ed9-49d9-a997-1e691974110b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.025 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7592f97e-0ed9-49d9-a997-1e691974110b in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.027 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:32:39 compute-0 kernel: tap130ec7c9-b8 (unregistering): left promiscuous mode
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.0465] device (tap130ec7c9-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.053 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa01f37-e753-4f42-b2d5-e23f6eb9ad24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00290|binding|INFO|Releasing lport 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f from this chassis (sb_readonly=0)
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00291|binding|INFO|Setting lport 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f down in Southbound
Oct 02 12:32:39 compute-0 ovn_controller[148123]: 2025-10-02T12:32:39Z|00292|binding|INFO|Removing iface tap130ec7c9-b8 ovn-installed in OVS
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.063 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d5:97 10.100.0.12'], port_security=['fa:16:3e:97:d5:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '27ae4ec8-c996-4a05-a592-2dc8cd23d759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 kernel: tapb15a5cc5-f3 (unregistering): left promiscuous mode
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.0764] device (tapb15a5cc5-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.093 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0a4eab-be07-419b-a3a0-4a06f940bfec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.096 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[808c1d33-daa7-4516-8749-62d05bbd7c8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.125 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[baab2c31-1654-4dd3-a242-5698c1e67d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.144 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9009fd-e21d-46f7-9e97-4b5ccc2effc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618813, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306892, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Consumed 16.721s CPU time.
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.163 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0afde9ef-2338-4917-8a3d-34fd689ebbcf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618827, 'tstamp': 618827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306893, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618831, 'tstamp': 618831}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306893, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.164 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 systemd-machined[210927]: Machine qemu-32-instance-0000004c terminated.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.178 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.180 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.181 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[40771bf9-a3de-4fb4-ba51-6982c911b97c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:39.181 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace which is not needed anymore
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.1905] manager: (tap7592f97e-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.2037] manager: (tap130ec7c9-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 02 12:32:39 compute-0 NetworkManager[44981]: <info>  [1759408359.2118] manager: (tapb15a5cc5-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.232 2 DEBUG nova.virt.libvirt.guest [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c3:a2:d3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb15a5cc5-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.237 2 DEBUG nova.virt.libvirt.driver [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Attempting to detach device tapb15a5cc5-f3 from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.237 2 DEBUG nova.virt.libvirt.guest [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:c3:a2:d3"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <target dev="tapb15a5cc5-f3"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]: </interface>
Oct 02 12:32:39 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.239 2 INFO nova.virt.libvirt.driver [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Instance destroyed successfully.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.241 2 DEBUG nova.objects.instance [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'resources' on Instance uuid 27ae4ec8-c996-4a05-a592-2dc8cd23d759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.257 2 DEBUG nova.virt.libvirt.vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.258 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.259 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.259 2 DEBUG os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7592f97e-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.274 2 INFO os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:be:9d,bridge_name='br-int',has_traffic_filtering=True,id=7592f97e-0ed9-49d9-a997-1e691974110b,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7592f97e-0e')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.275 2 DEBUG nova.virt.libvirt.vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.275 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.276 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.276 2 DEBUG os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap130ec7c9-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.284 2 INFO os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:d5:97,bridge_name='br-int',has_traffic_filtering=True,id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130ec7c9-b8')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.285 2 DEBUG nova.virt.libvirt.vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.285 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.286 2 DEBUG nova.network.os_vif_util [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.286 2 DEBUG os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb15a5cc5-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.293 2 INFO os_vif [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.316 2 DEBUG nova.virt.libvirt.guest [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c3:a2:d3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb15a5cc5-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.321 2 DEBUG nova.virt.libvirt.guest [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c3:a2:d3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb15a5cc5-f3"/></interface>not found in domain: <domain type='kvm'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <name>instance-0000004c</name>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <uuid>27ae4ec8-c996-4a05-a592-2dc8cd23d759</uuid>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:31:27</nova:creationTime>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:39 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='serial'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='uuid'>27ae4ec8-c996-4a05-a592-2dc8cd23d759</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='partial'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <model fallback='allow'>Nehalem</model>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/27ae4ec8-c996-4a05-a592-2dc8cd23d759_disk.config'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:d2:be:9d'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target dev='tap7592f97e-0e'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:97:d5:97'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target dev='tap130ec7c9-b8'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       </target>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <console type='pty'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759/console.log' append='off'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </console>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </input>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:39 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:39 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.321 2 INFO nova.virt.libvirt.driver [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully detached device tapb15a5cc5-f3 from instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 from the persistent domain config.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.322 2 DEBUG nova.virt.libvirt.vif [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1890219272',display_name='tempest-AttachInterfacesTestJSON-server-1890219272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1890219272',id=76,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL3W8KohejdB5LDWGdZTAGf/bQ9tzwK1Ab528R98mWJnR29IpaP7AKHpaYJNyWlqOYjEBzyDBJwv2Ox2S677v4/2tRoqXekoDYb6eSe8pZgpx/nb4LZ2AJBbShQpUMf9Ww==',key_name='tempest-keypair-1020104003',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-koulwguc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=27ae4ec8-c996-4a05-a592-2dc8cd23d759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.322 2 DEBUG nova.network.os_vif_util [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.322 2 DEBUG nova.network.os_vif_util [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.323 2 DEBUG os_vif [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb15a5cc5-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.327 2 INFO os_vif [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a2:d3,bridge_name='br-int',has_traffic_filtering=True,id=b15a5cc5-f350-4edd-a0b1-caa8458fdd6a,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb15a5cc5-f3')
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.328 2 DEBUG nova.virt.libvirt.guest [req-a5bb3edf-926e-4414-810e-b1653e468016 req-59fdeda6-132f-44e8-8dee-4aa34b0a94e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1890219272</nova:name>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:32:39</nova:creationTime>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:port uuid="7592f97e-0ed9-49d9-a997-1e691974110b">
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     <nova:port uuid="130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f">
Oct 02 12:32:39 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:32:39 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:32:39 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:32:39 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:32:39 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:32:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 786 KiB/s rd, 1.5 MiB/s wr, 95 op/s
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.399 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.408 2 DEBUG nova.compute.provider_tree [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.424 2 DEBUG nova.scheduler.client.report [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.441 2 INFO nova.network.neutron [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Port 20f71e08-5b04-4fd1-bf99-5a684d024adc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.449 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.449 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.502 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.503 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.531 2 INFO nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.558 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:32:39 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [NOTICE]   (304992) : haproxy version is 2.8.14-c23fe91
Oct 02 12:32:39 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [NOTICE]   (304992) : path to executable is /usr/sbin/haproxy
Oct 02 12:32:39 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [WARNING]  (304992) : Exiting Master process...
Oct 02 12:32:39 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [ALERT]    (304992) : Current worker (304994) exited with code 143 (Terminated)
Oct 02 12:32:39 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[304988]: [WARNING]  (304992) : All workers exited. Exiting... (0)
Oct 02 12:32:39 compute-0 systemd[1]: libpod-a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03.scope: Deactivated successfully.
Oct 02 12:32:39 compute-0 podman[306954]: 2025-10-02 12:32:39.671352442 +0000 UTC m=+0.364137325 container died a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.691 2 DEBUG nova.compute.manager [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.692 2 DEBUG nova.compute.manager [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing instance network info cache due to event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.692 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.728 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.730 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.730 2 INFO nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Creating image(s)
Oct 02 12:32:39 compute-0 nova_compute[256940]: 2025-10-02 12:32:39.962 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.006 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.049 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.055 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.099 2 DEBUG nova.policy [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c8eed6cb806403ca545bb7b2820714e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7846694bb70143aa984e235126fbe15c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.109 2 DEBUG nova.compute.manager [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-unplugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.110 2 DEBUG oslo_concurrency.lockutils [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.110 2 DEBUG oslo_concurrency.lockutils [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.110 2 DEBUG oslo_concurrency.lockutils [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.111 2 DEBUG nova.compute.manager [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-unplugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.111 2 DEBUG nova.compute.manager [req-51fb5be7-7e4e-45c3-8bdf-97aacfd1e57c req-dde7814a-3904-436a-ad1e-d6004a9e8428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-unplugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.144 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.145 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.146 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.146 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3695580193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:40.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065169597989949745 of space, bias 1.0, pg target 1.9550879396984924 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:32:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:32:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03-userdata-shm.mount: Deactivated successfully.
Oct 02 12:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed57b3c0d469af6aadf483b8ea620b175290654bf99652762369341f8889122-merged.mount: Deactivated successfully.
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.451 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.465 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:40.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:40 compute-0 podman[306954]: 2025-10-02 12:32:40.625675436 +0000 UTC m=+1.318460349 container cleanup a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:32:40 compute-0 systemd[1]: libpod-conmon-a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03.scope: Deactivated successfully.
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.758 2 DEBUG nova.network.neutron [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.798 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.805 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.805 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing network info cache for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.896 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.897 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.898 2 INFO nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Creating image(s)
Oct 02 12:32:40 compute-0 nova_compute[256940]: 2025-10-02 12:32:40.952 2 DEBUG nova.storage.rbd_utils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] creating snapshot(nova-resize) on rbd image(b6c2a016-125f-4f83-a284-5a2d50805121_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:32:41 compute-0 podman[307096]: 2025-10-02 12:32:41.081604308 +0000 UTC m=+0.402376645 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.099 2 DEBUG nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-unplugged-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.100 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.100 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.101 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.101 2 DEBUG nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-unplugged-7592f97e-0ed9-49d9-a997-1e691974110b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.101 2 DEBUG nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-unplugged-7592f97e-0ed9-49d9-a997-1e691974110b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.102 2 DEBUG nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.102 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.102 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.103 2 DEBUG oslo_concurrency.lockutils [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.103 2 DEBUG nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.103 2 WARNING nova.compute.manager [req-b6a66734-dc12-427d-a34f-b6bf97e8dec8 req-836e2a59-72b7-4128-b522-a7203483bc24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-7592f97e-0ed9-49d9-a997-1e691974110b for instance with vm_state active and task_state deleting.
Oct 02 12:32:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 281 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 790 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 02 12:32:41 compute-0 ceph-mon[73668]: pgmap v1680: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 786 KiB/s rd, 1.5 MiB/s wr, 95 op/s
Oct 02 12:32:41 compute-0 podman[307095]: 2025-10-02 12:32:41.613708945 +0000 UTC m=+0.951421171 container remove a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.621 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[06a64e21-81de-4956-84d5-c56e5a59cef6]: (4, ('Thu Oct  2 12:32:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03)\na2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03\nThu Oct  2 12:32:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (a2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03)\na2a0bc571e4f3838163289a374cc1a09e752f18567fb8478be81303b20016a03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.623 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7bf213-c131-41c6-ba0b-a7ee03113ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.625 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:41 compute-0 kernel: tap6a187d8a-70: left promiscuous mode
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.656 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[94597ebd-0ffd-4a32-8226-e5cfc5bcd0c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9552709b-14b9-4208-b6d9-fd556afa6f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.698 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2bcabf-1faa-472a-84b9-bb5d856f6ca8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 podman[307102]: 2025-10-02 12:32:41.70609362 +0000 UTC m=+1.020388262 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.724 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6068fa2-3b2e-468b-9c94-9a19df0bf17d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618804, 'reachable_time': 38430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307192, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a187d8a\x2d77c6\x2d4b27\x2dbb13\x2d654f471c1faf.mount: Deactivated successfully.
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.730 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:32:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:41.730 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[925c8c3f-925a-44e6-b940-b3557a08cc72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.871 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:41 compute-0 nova_compute[256940]: 2025-10-02 12:32:41.944 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] resizing rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:32:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:42.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.180 2 DEBUG nova.objects.instance [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'migration_context' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.185 2 DEBUG nova.compute.manager [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.186 2 DEBUG oslo_concurrency.lockutils [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.186 2 DEBUG oslo_concurrency.lockutils [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.187 2 DEBUG oslo_concurrency.lockutils [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.187 2 DEBUG nova.compute.manager [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] No waiting events found dispatching network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.188 2 WARNING nova.compute.manager [req-fa8fa8e0-25a1-43d6-80be-d99d025646c7 req-bf6e51f7-0667-4fe6-9e95-537c30eca56c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received unexpected event network-vif-plugged-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f for instance with vm_state active and task_state deleting.
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.194 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.194 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Ensure instance console log exists: /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.195 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.195 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.196 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 02 12:32:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.720 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Successfully created port: 6a8e44c2-0e49-44ff-815d-00dfc9de3337 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:32:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 02 12:32:42 compute-0 ceph-mon[73668]: pgmap v1681: 305 pgs: 305 active+clean; 281 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 790 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 02 12:32:42 compute-0 nova_compute[256940]: 2025-10-02 12:32:42.967 2 DEBUG nova.objects.instance [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.091 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.092 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Ensure instance console log exists: /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.092 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.093 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.093 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.095 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start _get_guest_xml network_info=[{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.101 2 WARNING nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.107 2 DEBUG nova.virt.libvirt.host [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.108 2 DEBUG nova.virt.libvirt.host [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.110 2 DEBUG nova.virt.libvirt.host [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.111 2 DEBUG nova.virt.libvirt.host [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.112 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.113 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.113 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.113 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.114 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.114 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.114 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.115 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.115 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.115 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.115 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.116 2 DEBUG nova.virt.hardware [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.116 2 DEBUG nova.objects.instance [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.132 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 296 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 813 KiB/s wr, 78 op/s
Oct 02 12:32:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887869052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.599 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.644 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.987 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updated VIF entry in instance network info cache for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:43 compute-0 nova_compute[256940]: 2025-10-02 12:32:43.989 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.017 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:44 compute-0 ceph-mon[73668]: osdmap e243: 3 total, 3 up, 3 in
Oct 02 12:32:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2887869052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705393334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.209 2 DEBUG nova.network.neutron [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "address": "fa:16:3e:97:d5:97", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130ec7c9-b8", "ovs_interfaceid": "130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.211 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.212 2 DEBUG nova.virt.libvirt.vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_h
w_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:36Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.212 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.213 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.215 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <uuid>b6c2a016-125f-4f83-a284-5a2d50805121</uuid>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <name>instance-0000004e</name>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1932213760</nova:name>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:32:43</nova:creationTime>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <nova:port uuid="2ce66710-95c3-4fa0-999b-b7cf0b722cac">
Oct 02 12:32:44 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="serial">b6c2a016-125f-4f83-a284-5a2d50805121</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="uuid">b6c2a016-125f-4f83-a284-5a2d50805121</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b6c2a016-125f-4f83-a284-5a2d50805121_disk">
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b6c2a016-125f-4f83-a284-5a2d50805121_disk.config">
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:32:44 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:35:b2:fa"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <target dev="tap2ce66710-95"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/console.log" append="off"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:32:44 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:32:44 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:44 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:44 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:44 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.216 2 DEBUG nova.virt.libvirt.vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:36Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.217 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.217 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.217 2 DEBUG os_vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ce66710-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ce66710-95, col_values=(('external_ids', {'iface-id': '2ce66710-95c3-4fa0-999b-b7cf0b722cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:b2:fa', 'vm-uuid': 'b6c2a016-125f-4f83-a284-5a2d50805121'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.2320] manager: (tap2ce66710-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.238 2 INFO os_vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95')
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.267 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-27ae4ec8-c996-4a05-a592-2dc8cd23d759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.290 2 DEBUG oslo_concurrency.lockutils [None req-c2166709-d928-4194-9730-8d0ba9d31b64 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-27ae4ec8-c996-4a05-a592-2dc8cd23d759-20f71e08-5b04-4fd1-bf99-5a684d024adc" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.356 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Successfully updated port: 6a8e44c2-0e49-44ff-815d-00dfc9de3337 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.379 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.379 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.380 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.416 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.417 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.417 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:35:b2:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.418 2 INFO nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Using config drive
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.487 2 DEBUG nova.compute.manager [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-changed-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.488 2 DEBUG nova.compute.manager [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing instance network info cache due to event network-changed-6a8e44c2-0e49-44ff-815d-00dfc9de3337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.488 2 DEBUG oslo_concurrency.lockutils [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:44 compute-0 kernel: tap2ce66710-95: entered promiscuous mode
Oct 02 12:32:44 compute-0 ovn_controller[148123]: 2025-10-02T12:32:44Z|00293|binding|INFO|Claiming lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac for this chassis.
Oct 02 12:32:44 compute-0 ovn_controller[148123]: 2025-10-02T12:32:44Z|00294|binding|INFO|2ce66710-95c3-4fa0-999b-b7cf0b722cac: Claiming fa:16:3e:35:b2:fa 10.100.0.14
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.5292] manager: (tap2ce66710-95): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.534 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b2:fa 10.100.0.14'], port_security=['fa:16:3e:35:b2:fa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b6c2a016-125f-4f83-a284-5a2d50805121', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2ce66710-95c3-4fa0-999b-b7cf0b722cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.535 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce66710-95c3-4fa0-999b-b7cf0b722cac in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.537 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.539 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:32:44 compute-0 ovn_controller[148123]: 2025-10-02T12:32:44Z|00295|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac ovn-installed in OVS
Oct 02 12:32:44 compute-0 ovn_controller[148123]: 2025-10-02T12:32:44Z|00296|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac up in Southbound
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.552 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bc69eb80-5d3c-4287-8d63-06d201b68ffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.553 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.559 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.559 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83f93554-da62-4d43-9721-e73a7611a026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.560 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c44b73-235d-4fa5-81e2-765b1b4ece78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 systemd-udevd[307398]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:44 compute-0 systemd-machined[210927]: New machine qemu-34-instance-0000004e.
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.572 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[39042da8-8a3a-4b39-87a9-68889243f0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.5847] device (tap2ce66710-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.5854] device (tap2ce66710-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:44 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000004e.
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.598 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b42a2b4d-fc32-4bbe-b764-953d03b215c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.623 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[278b1828-6f12-4d17-a297-a8912aea1419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.628 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5515cd26-8fff-4ce7-a29d-abeb31871fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.6311] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.669 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5f7a64-d3bb-41d5-9307-7063c9df00d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.676 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5033e926-8323-4fa3-bba5-9d26f85d87dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.7021] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.710 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6f76a96e-d643-4421-ac3c-1b3e85579c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f03e7b4b-b487-4b0c-a2f5-0f9cf89813a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626018, 'reachable_time': 37120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307431, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.751 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60e04d9c-619f-4361-aa32-3eb769c15544]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626018, 'tstamp': 626018}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307432, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.768 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e8d7bb-0d4b-4c59-8f67-dea3b8bf6e52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626018, 'reachable_time': 37120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307433, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.801 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7108c4-a71c-4ee1-af30-6235d6ee0c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.838 2 DEBUG nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.838 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.839 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.839 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.839 2 DEBUG nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.839 2 WARNING nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_finish.
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.870 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[704e250b-c922-4be2-998a-dd5395f0ac73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.871 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.871 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.872 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 NetworkManager[44981]: <info>  [1759408364.8745] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct 02 12:32:44 compute-0 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.879 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:44 compute-0 ovn_controller[148123]: 2025-10-02T12:32:44Z|00297|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.881 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.882 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[659d4330-5e0f-4b6f-8051-5ec096a3fc53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.882 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:32:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:44.883 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:32:44 compute-0 nova_compute[256940]: 2025-10-02 12:32:44.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:45 compute-0 ceph-mon[73668]: pgmap v1683: 305 pgs: 305 active+clean; 296 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 813 KiB/s wr, 78 op/s
Oct 02 12:32:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3705393334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:45 compute-0 podman[307507]: 2025-10-02 12:32:45.215955373 +0000 UTC m=+0.021690431 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:32:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 290 MiB data, 834 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:32:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.578 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408365.5772808, b6c2a016-125f-4f83-a284-5a2d50805121 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.579 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Resumed (Lifecycle Event)
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.581 2 DEBUG nova.compute.manager [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.585 2 INFO nova.virt.libvirt.driver [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance running successfully.
Oct 02 12:32:45 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.588 2 DEBUG nova.virt.libvirt.guest [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.588 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:32:45 compute-0 podman[307507]: 2025-10-02 12:32:45.595654041 +0000 UTC m=+0.401389069 container create 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.612 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.617 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.655 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.656 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408365.5790832, b6c2a016-125f-4f83-a284-5a2d50805121 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.656 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Started (Lifecycle Event)
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.682 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:45 compute-0 nova_compute[256940]: 2025-10-02 12:32:45.692 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:45 compute-0 systemd[1]: Started libpod-conmon-2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78.scope.
Oct 02 12:32:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9174c7d095b1e37ac549eb392ac7d76507851ec4e80ee6c9e729669c1b3bafc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:45 compute-0 podman[307507]: 2025-10-02 12:32:45.91154992 +0000 UTC m=+0.717284978 container init 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:32:45 compute-0 podman[307507]: 2025-10-02 12:32:45.917598724 +0000 UTC m=+0.723333772 container start 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:32:45 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [NOTICE]   (307527) : New worker (307529) forked
Oct 02 12:32:45 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [NOTICE]   (307527) : Loading success.
Oct 02 12:32:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:46.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:46 compute-0 ceph-mon[73668]: pgmap v1684: 305 pgs: 305 active+clean; 290 MiB data, 834 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.716 2 DEBUG nova.network.neutron [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.738 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.738 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Instance network_info: |[{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.738 2 DEBUG oslo_concurrency.lockutils [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.739 2 DEBUG nova.network.neutron [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing network info cache for port 6a8e44c2-0e49-44ff-815d-00dfc9de3337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.742 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Start _get_guest_xml network_info=[{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.747 2 WARNING nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.753 2 DEBUG nova.virt.libvirt.host [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.755 2 DEBUG nova.virt.libvirt.host [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.762 2 DEBUG nova.virt.libvirt.host [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.764 2 DEBUG nova.virt.libvirt.host [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.765 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.765 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.766 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.766 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.766 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.767 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.767 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.767 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.767 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.768 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.768 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.768 2 DEBUG nova.virt.hardware [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.771 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.919 2 INFO nova.virt.libvirt.driver [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Deleting instance files /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759_del
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.920 2 INFO nova.virt.libvirt.driver [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Deletion of /var/lib/nova/instances/27ae4ec8-c996-4a05-a592-2dc8cd23d759_del complete
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.961 2 DEBUG nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.962 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.962 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.962 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.963 2 DEBUG nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.963 2 WARNING nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state resized and task_state None.
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.984 2 INFO nova.compute.manager [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Took 8.41 seconds to destroy the instance on the hypervisor.
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.986 2 DEBUG oslo.service.loopingcall [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.987 2 DEBUG nova.compute.manager [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:32:46 compute-0 nova_compute[256940]: 2025-10-02 12:32:46.987 2 DEBUG nova.network.neutron [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:32:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189214268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.220 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.245 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.251 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554296819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.686 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.687 2 DEBUG nova.network.neutron [-] Unable to show port b15a5cc5-f350-4edd-a0b1-caa8458fdd6a as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.707 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.708 2 DEBUG nova.virt.libvirt.vif [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.709 2 DEBUG nova.network.os_vif_util [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.710 2 DEBUG nova.network.os_vif_util [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.711 2 DEBUG nova.objects.instance [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.725 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <uuid>6c768447-ca45-42c7-aee4-09dfe7406a2d</uuid>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <name>instance-0000004f</name>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:32:46</nova:creationTime>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:32:47 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <system>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="serial">6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="uuid">6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </system>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <os>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </os>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <features>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </features>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk">
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config">
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:32:47 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9c:f6:4a"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <target dev="tap6a8e44c2-0e"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log" append="off"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <video>
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </video>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:32:47 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:32:47 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:32:47 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:32:47 compute-0 nova_compute[256940]: </domain>
Oct 02 12:32:47 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.731 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Preparing to wait for external event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.732 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.732 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.732 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.733 2 DEBUG nova.virt.libvirt.vif [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.733 2 DEBUG nova.network.os_vif_util [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.733 2 DEBUG nova.network.os_vif_util [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.734 2 DEBUG os_vif [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a8e44c2-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a8e44c2-0e, col_values=(('external_ids', {'iface-id': '6a8e44c2-0e49-44ff-815d-00dfc9de3337', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:f6:4a', 'vm-uuid': '6c768447-ca45-42c7-aee4-09dfe7406a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-0 NetworkManager[44981]: <info>  [1759408367.7432] manager: (tap6a8e44c2-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.752 2 INFO os_vif [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e')
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.867 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.868 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.869 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No VIF found with MAC fa:16:3e:9c:f6:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.869 2 INFO nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Using config drive
Oct 02 12:32:47 compute-0 nova_compute[256940]: 2025-10-02 12:32:47.896 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/513984406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/189214268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4025293043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/899398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/554296819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:48.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.350 2 INFO nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Creating config drive at /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.355 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g83jn2x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.455 2 DEBUG nova.compute.manager [req-c8368059-db7c-455c-af52-9210973c8e9b req-00ccad2e-b7b7-4219-9173-28d59293e59f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-deleted-130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.456 2 INFO nova.compute.manager [req-c8368059-db7c-455c-af52-9210973c8e9b req-00ccad2e-b7b7-4219-9173-28d59293e59f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Neutron deleted interface 130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f; detaching it from the instance and deleting it from the info cache
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.457 2 DEBUG nova.network.neutron [req-c8368059-db7c-455c-af52-9210973c8e9b req-00ccad2e-b7b7-4219-9173-28d59293e59f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [{"id": "7592f97e-0ed9-49d9-a997-1e691974110b", "address": "fa:16:3e:d2:be:9d", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7592f97e-0e", "ovs_interfaceid": "7592f97e-0ed9-49d9-a997-1e691974110b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "address": "fa:16:3e:c3:a2:d3", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb15a5cc5-f3", "ovs_interfaceid": "b15a5cc5-f350-4edd-a0b1-caa8458fdd6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.478 2 DEBUG nova.compute.manager [req-c8368059-db7c-455c-af52-9210973c8e9b req-00ccad2e-b7b7-4219-9173-28d59293e59f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Detach interface failed, port_id=130ec7c9-b839-4a4b-8bcd-0d84a1e23c7f, reason: Instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.491 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g83jn2x" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.531 2 DEBUG nova.storage.rbd_utils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] rbd image 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:32:48 compute-0 nova_compute[256940]: 2025-10-02 12:32:48.536 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:49 compute-0 ceph-mon[73668]: pgmap v1685: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/381182074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/586986182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1878121547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.165 2 DEBUG nova.network.neutron [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updated VIF entry in instance network info cache for port 6a8e44c2-0e49-44ff-815d-00dfc9de3337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.167 2 DEBUG nova.network.neutron [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.186 2 DEBUG oslo_concurrency.lockutils [req-97095c6d-41df-432f-ae3c-c1890f5a442a req-7afbe444-1004-473e-a060-a461964e9aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.461 2 DEBUG nova.network.neutron [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.475 2 DEBUG oslo_concurrency.processutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config 6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.476 2 INFO nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Deleting local config drive /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/disk.config because it was imported into RBD.
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.490 2 INFO nova.compute.manager [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Took 2.50 seconds to deallocate network for instance.
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.5279] manager: (tap6a8e44c2-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Oct 02 12:32:49 compute-0 kernel: tap6a8e44c2-0e: entered promiscuous mode
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-0 ovn_controller[148123]: 2025-10-02T12:32:49Z|00298|binding|INFO|Claiming lport 6a8e44c2-0e49-44ff-815d-00dfc9de3337 for this chassis.
Oct 02 12:32:49 compute-0 ovn_controller[148123]: 2025-10-02T12:32:49Z|00299|binding|INFO|6a8e44c2-0e49-44ff-815d-00dfc9de3337: Claiming fa:16:3e:9c:f6:4a 10.100.0.9
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.554 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:f6:4a 10.100.0.9'], port_security=['fa:16:3e:9c:f6:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6c768447-ca45-42c7-aee4-09dfe7406a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7846694bb70143aa984e235126fbe15c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85d62b65-e393-4974-9b22-1cd71d8260a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa8948c0-154b-4a53-b454-06726e639845, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6a8e44c2-0e49-44ff-815d-00dfc9de3337) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:49 compute-0 ovn_controller[148123]: 2025-10-02T12:32:49Z|00300|binding|INFO|Setting lport 6a8e44c2-0e49-44ff-815d-00dfc9de3337 up in Southbound
Oct 02 12:32:49 compute-0 ovn_controller[148123]: 2025-10-02T12:32:49Z|00301|binding|INFO|Setting lport 6a8e44c2-0e49-44ff-815d-00dfc9de3337 ovn-installed in OVS
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.556 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8e44c2-0e49-44ff-815d-00dfc9de3337 in datapath 2c5e6ab4-0e76-42fe-b8a1-d7157f964326 bound to our chassis
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.557 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c5e6ab4-0e76-42fe-b8a1-d7157f964326
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.562 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.562 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:49 compute-0 systemd-machined[210927]: New machine qemu-35-instance-0000004f.
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.572 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd64ea-1b66-4ca5-a507-3991c2794689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.573 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2c5e6ab4-01 in ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.576 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2c5e6ab4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.576 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[111611b7-68dc-4505-a2ef-ddc5df13696f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.577 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37df18af-0650-4d43-a0e7-e482ab6abc90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000004f.
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.593 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[63487ade-96fd-4d0b-b7c2-3c4baa6424aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 systemd-udevd[307679]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.6213] device (tap6a8e44c2-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.6221] device (tap6a8e44c2-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.621 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[68952499-e3ab-46b4-839a-13276a211f8f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.668 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ec48c73e-205e-4d04-9a89-bf0ea86505a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.674 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d745130-e033-4eeb-b8e6-681b8f6b1802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.6760] manager: (tap2c5e6ab4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Oct 02 12:32:49 compute-0 systemd-udevd[307682]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.692 2 DEBUG oslo_concurrency.processutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.714 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9b928758-a22c-47a2-81a1-7a12aa0428c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.720 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[49f8cf11-b6c4-4cd1-a50d-0027e634c694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.7440] device (tap2c5e6ab4-00): carrier: link connected
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.752 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc8fa87-52d2-43ac-b450-0cbd2b04ec8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.771 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6883fadf-bdcb-4c8c-9931-586761e059ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c5e6ab4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:8c:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626522, 'reachable_time': 35481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307710, 'error': None, 'target': 'ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.792 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6342d0a-1e69-41ba-968b-3525d695e6a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:8c59'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626522, 'tstamp': 626522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307711, 'error': None, 'target': 'ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.813 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[333bbc52-6890-42a1-acae-75c746758f1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c5e6ab4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:8c:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626522, 'reachable_time': 35481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307712, 'error': None, 'target': 'ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.852 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7dc280-412e-44ea-bd59-43cc9a8180c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.894 2 DEBUG nova.compute.manager [req-4b7c8363-1fa5-4fd3-a341-7173927c98f4 req-11daf431-b5a3-4784-92eb-66f08cdc7863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.894 2 DEBUG oslo_concurrency.lockutils [req-4b7c8363-1fa5-4fd3-a341-7173927c98f4 req-11daf431-b5a3-4784-92eb-66f08cdc7863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.896 2 DEBUG oslo_concurrency.lockutils [req-4b7c8363-1fa5-4fd3-a341-7173927c98f4 req-11daf431-b5a3-4784-92eb-66f08cdc7863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.896 2 DEBUG oslo_concurrency.lockutils [req-4b7c8363-1fa5-4fd3-a341-7173927c98f4 req-11daf431-b5a3-4784-92eb-66f08cdc7863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.896 2 DEBUG nova.compute.manager [req-4b7c8363-1fa5-4fd3-a341-7173927c98f4 req-11daf431-b5a3-4784-92eb-66f08cdc7863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Processing event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.947 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c647aad1-ea2c-413b-ab30-868f1d1ad438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.949 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c5e6ab4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.950 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.950 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c5e6ab4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:49 compute-0 kernel: tap2c5e6ab4-00: entered promiscuous mode
Oct 02 12:32:49 compute-0 NetworkManager[44981]: <info>  [1759408369.9541] manager: (tap2c5e6ab4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.957 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c5e6ab4-00, col_values=(('external_ids', {'iface-id': '59209bb4-9a3d-4a28-b9bc-6ae775c6b546'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:49 compute-0 ovn_controller[148123]: 2025-10-02T12:32:49Z|00302|binding|INFO|Releasing lport 59209bb4-9a3d-4a28-b9bc-6ae775c6b546 from this chassis (sb_readonly=0)
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.961 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2c5e6ab4-0e76-42fe-b8a1-d7157f964326.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2c5e6ab4-0e76-42fe-b8a1-d7157f964326.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.966 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[427038dd-7d80-4805-afd0-250e7cdb1efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.967 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-2c5e6ab4-0e76-42fe-b8a1-d7157f964326
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/2c5e6ab4-0e76-42fe-b8a1-d7157f964326.pid.haproxy
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 2c5e6ab4-0e76-42fe-b8a1-d7157f964326
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:32:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:49.969 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'env', 'PROCESS_TAG=haproxy-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2c5e6ab4-0e76-42fe-b8a1-d7157f964326.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:32:49 compute-0 nova_compute[256940]: 2025-10-02 12:32:49.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299185624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.162 2 DEBUG oslo_concurrency.processutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.169 2 DEBUG nova.compute.provider_tree [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:50.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.191 2 DEBUG nova.scheduler.client.report [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.225 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.250 2 INFO nova.scheduler.client.report [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Deleted allocations for instance 27ae4ec8-c996-4a05-a592-2dc8cd23d759
Oct 02 12:32:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2299185624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.325 2 DEBUG oslo_concurrency.lockutils [None req-b8cf09bb-ed9b-45bc-8cb8-0e0c18ce9d5a 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "27ae4ec8-c996-4a05-a592-2dc8cd23d759" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:50 compute-0 podman[307798]: 2025-10-02 12:32:50.403479201 +0000 UTC m=+0.092559330 container create d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:32:50 compute-0 podman[307798]: 2025-10-02 12:32:50.34078512 +0000 UTC m=+0.029865259 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:32:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:50 compute-0 systemd[1]: Started libpod-conmon-d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db.scope.
Oct 02 12:32:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4bd6a298d318d1b5625694d3c014e07eac96f5256c3f8bcb20182f9829165/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:50 compute-0 podman[307798]: 2025-10-02 12:32:50.528969007 +0000 UTC m=+0.218049166 container init d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:32:50 compute-0 podman[307798]: 2025-10-02 12:32:50.536900458 +0000 UTC m=+0.225980597 container start d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.552 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Received event network-vif-deleted-7592f97e-0ed9-49d9-a997-1e691974110b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:50 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [NOTICE]   (307826) : New worker (307829) forked
Oct 02 12:32:50 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [NOTICE]   (307826) : Loading success.
Oct 02 12:32:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:50.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:50 compute-0 nova_compute[256940]: 2025-10-02 12:32:50.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.025 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.026 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408371.0259097, 6c768447-ca45-42c7-aee4-09dfe7406a2d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.026 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] VM Started (Lifecycle Event)
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.030 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.035 2 INFO nova.virt.libvirt.driver [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Instance spawned successfully.
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.036 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.079 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.085 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.089 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.090 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.091 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.091 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.092 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.092 2 DEBUG nova.virt.libvirt.driver [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.118 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.119 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408371.0259907, 6c768447-ca45-42c7-aee4-09dfe7406a2d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.119 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] VM Paused (Lifecycle Event)
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.158 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.162 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408371.02973, 6c768447-ca45-42c7-aee4-09dfe7406a2d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.163 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] VM Resumed (Lifecycle Event)
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.169 2 INFO nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Took 11.44 seconds to spawn the instance on the hypervisor.
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.170 2 DEBUG nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.182 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.188 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.214 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.250 2 INFO nova.compute.manager [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Took 12.60 seconds to build instance.
Oct 02 12:32:51 compute-0 nova_compute[256940]: 2025-10-02 12:32:51.268 2 DEBUG oslo_concurrency.lockutils [None req-beac4d14-846c-4c2f-b338-0eda789cfb7b 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 02 12:32:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Oct 02 12:32:51 compute-0 ceph-mon[73668]: pgmap v1686: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 02 12:32:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.036 2 DEBUG nova.compute.manager [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.037 2 DEBUG oslo_concurrency.lockutils [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.037 2 DEBUG oslo_concurrency.lockutils [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.038 2 DEBUG oslo_concurrency.lockutils [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.038 2 DEBUG nova.compute.manager [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.038 2 WARNING nova.compute.manager [req-dd357082-cae4-45eb-8171-3701deb8dccb req-ce36346a-f911-4a3b-9edd-106e8751dc60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 for instance with vm_state active and task_state None.
Oct 02 12:32:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:52.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:52 compute-0 ceph-mon[73668]: pgmap v1687: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Oct 02 12:32:52 compute-0 ceph-mon[73668]: osdmap e244: 3 total, 3 up, 3 in
Oct 02 12:32:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2482660617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063843622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.701 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.783 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.783 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.789 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.790 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.982 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.984 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4332MB free_disk=20.876224517822266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.984 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:52 compute-0 nova_compute[256940]: 2025-10-02 12:32:52.985 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.052 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance b6c2a016-125f-4f83-a284-5a2d50805121 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.053 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 6c768447-ca45-42c7-aee4-09dfe7406a2d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.054 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.054 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.101 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 193 op/s
Oct 02 12:32:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409830474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.630 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.637 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1063843622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.684 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.770 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:32:53 compute-0 nova_compute[256940]: 2025-10-02 12:32:53.771 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.239 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408359.2352643, 27ae4ec8-c996-4a05-a592-2dc8cd23d759 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.240 2 INFO nova.compute.manager [-] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] VM Stopped (Lifecycle Event)
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.272 2 DEBUG nova.compute.manager [None req-a5f6c367-695d-4afa-86ec-6f5f429062b6 - - - - - -] [instance: 27ae4ec8-c996-4a05-a592-2dc8cd23d759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:54 compute-0 ceph-mon[73668]: pgmap v1689: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 193 op/s
Oct 02 12:32:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1409830474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.766 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.767 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.932 2 DEBUG nova.compute.manager [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-changed-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.933 2 DEBUG nova.compute.manager [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing instance network info cache due to event network-changed-6a8e44c2-0e49-44ff-815d-00dfc9de3337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.933 2 DEBUG oslo_concurrency.lockutils [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.933 2 DEBUG oslo_concurrency.lockutils [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:54 compute-0 nova_compute[256940]: 2025-10-02 12:32:54.934 2 DEBUG nova.network.neutron [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing network info cache for port 6a8e44c2-0e49-44ff-815d-00dfc9de3337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.114 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.114 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.115 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.115 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.115 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.116 2 INFO nova.compute.manager [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Terminating instance
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.117 2 DEBUG nova.compute.manager [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.230 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:32:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 100 KiB/s wr, 254 op/s
Oct 02 12:32:55 compute-0 kernel: tap2ce66710-95 (unregistering): left promiscuous mode
Oct 02 12:32:55 compute-0 NetworkManager[44981]: <info>  [1759408375.3693] device (tap2ce66710-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 ovn_controller[148123]: 2025-10-02T12:32:55Z|00303|binding|INFO|Releasing lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac from this chassis (sb_readonly=0)
Oct 02 12:32:55 compute-0 ovn_controller[148123]: 2025-10-02T12:32:55Z|00304|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac down in Southbound
Oct 02 12:32:55 compute-0 ovn_controller[148123]: 2025-10-02T12:32:55Z|00305|binding|INFO|Removing iface tap2ce66710-95 ovn-installed in OVS
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.397 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b2:fa 10.100.0.14'], port_security=['fa:16:3e:35:b2:fa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b6c2a016-125f-4f83-a284-5a2d50805121', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2ce66710-95c3-4fa0-999b-b7cf0b722cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.398 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce66710-95c3-4fa0-999b-b7cf0b722cac in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.400 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.404 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[440e2d35-68fc-4a28-af21-dec153818f6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.405 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.410 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:55 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 02 12:32:55 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004e.scope: Consumed 10.302s CPU time.
Oct 02 12:32:55 compute-0 systemd-machined[210927]: Machine qemu-34-instance-0000004e terminated.
Oct 02 12:32:55 compute-0 podman[307886]: 2025-10-02 12:32:55.43938939 +0000 UTC m=+0.098320067 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:32:55 compute-0 podman[307885]: 2025-10-02 12:32:55.439641896 +0000 UTC m=+0.099367463 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 02 12:32:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [NOTICE]   (307527) : haproxy version is 2.8.14-c23fe91
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [NOTICE]   (307527) : path to executable is /usr/sbin/haproxy
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [WARNING]  (307527) : Exiting Master process...
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [WARNING]  (307527) : Exiting Master process...
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.556 2 INFO nova.virt.libvirt.driver [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance destroyed successfully.
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [ALERT]    (307527) : Current worker (307529) exited with code 143 (Terminated)
Oct 02 12:32:55 compute-0 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[307523]: [WARNING]  (307527) : All workers exited. Exiting... (0)
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.556 2 DEBUG nova.objects.instance [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:55 compute-0 systemd[1]: libpod-2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78.scope: Deactivated successfully.
Oct 02 12:32:55 compute-0 podman[307948]: 2025-10-02 12:32:55.563819988 +0000 UTC m=+0.053862068 container died 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.575 2 DEBUG nova.virt.libvirt.vif [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:52Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.576 2 DEBUG nova.network.os_vif_util [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.577 2 DEBUG nova.network.os_vif_util [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.578 2 DEBUG os_vif [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ce66710-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78-userdata-shm.mount: Deactivated successfully.
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.598 2 INFO os_vif [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95')
Oct 02 12:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9174c7d095b1e37ac549eb392ac7d76507851ec4e80ee6c9e729669c1b3bafc-merged.mount: Deactivated successfully.
Oct 02 12:32:55 compute-0 podman[307948]: 2025-10-02 12:32:55.609325063 +0000 UTC m=+0.099367143 container cleanup 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:32:55 compute-0 systemd[1]: libpod-conmon-2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78.scope: Deactivated successfully.
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.680 2 DEBUG nova.compute.manager [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.681 2 DEBUG oslo_concurrency.lockutils [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.682 2 DEBUG oslo_concurrency.lockutils [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.682 2 DEBUG oslo_concurrency.lockutils [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.682 2 DEBUG nova.compute.manager [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.683 2 DEBUG nova.compute.manager [req-6afe7dcd-5e61-4500-9013-3586f79b44e6 req-917f038a-21f5-44bb-a4cc-1916c1c68f27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:32:55 compute-0 podman[307997]: 2025-10-02 12:32:55.687244001 +0000 UTC m=+0.050009580 container remove 2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.693 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[81f201d3-d6d9-45e5-a349-4416432b089a]: (4, ('Thu Oct  2 12:32:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78)\n2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78\nThu Oct  2 12:32:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78)\n2de92a958dc441afdd2c18442ac1f443ac6889705227029cbb5353d439b8db78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.695 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[172e7394-4e40-457f-861f-7a9b083b3390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.697 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:55 compute-0 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 nova_compute[256940]: 2025-10-02 12:32:55.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.719 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a73971b-fc83-4d4e-a3e5-b216c6123eaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.752 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2180e775-ea41-4fa3-b70c-15770c5376d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.755 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf066096-36ce-47ac-b33a-260986fa518e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.774 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0539f9d7-1ed5-48bf-ae68-9e381373b65b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626010, 'reachable_time': 21208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308016, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.779 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:32:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:32:55.779 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[fa807a48-71ea-4e88-b307-aef86f48f54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:55 compute-0 sudo[308017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:55 compute-0 sudo[308017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-0 sudo[308017]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:55 compute-0 sudo[308042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:55 compute-0 sudo[308042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-0 sudo[308042]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:55 compute-0 sudo[308067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:55 compute-0 sudo[308067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-0 sudo[308067]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 sudo[308092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:32:56 compute-0 sudo[308092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:32:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:56.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.204 2 DEBUG nova.network.neutron [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updated VIF entry in instance network info cache for port 6a8e44c2-0e49-44ff-815d-00dfc9de3337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.205 2 DEBUG nova.network.neutron [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.234 2 DEBUG oslo_concurrency.lockutils [req-23f4de0e-c6d9-48b7-9b08-7f1ed6ebc49a req-e1e9b0fb-2099-4fae-b8bb-99e7741ac348 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.236 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:32:56 compute-0 nova_compute[256940]: 2025-10-02 12:32:56.236 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:56 compute-0 sudo[308130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:56 compute-0 sudo[308130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 sudo[308130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 sudo[308158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:56 compute-0 sudo[308158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 sudo[308158]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 sudo[308092]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:56.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:32:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4e76b442-f8c2-4e7c-aed4-824fbcc80c1e does not exist
Oct 02 12:32:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 310d8f3d-d6fb-4728-af92-d1868e63f5b5 does not exist
Oct 02 12:32:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 16864586-84f5-44b5-a1c1-dcc1b48b7c48 does not exist
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:32:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:56 compute-0 sudo[308198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:56 compute-0 sudo[308198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 sudo[308198]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 ceph-mon[73668]: pgmap v1690: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 100 KiB/s wr, 254 op/s
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:32:56 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:56 compute-0 sudo[308223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:56 compute-0 sudo[308223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 sudo[308223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-0 sudo[308248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:56 compute-0 sudo[308248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-0 sudo[308248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:57 compute-0 sudo[308273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:32:57 compute-0 sudo[308273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.439845629 +0000 UTC m=+0.063908093 container create 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:32:57 compute-0 systemd[1]: Started libpod-conmon-3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406.scope.
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.416514147 +0000 UTC m=+0.040576661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.545544442 +0000 UTC m=+0.169606936 container init 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.558384158 +0000 UTC m=+0.182446622 container start 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.564120934 +0000 UTC m=+0.188183418 container attach 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:32:57 compute-0 focused_brattain[308355]: 167 167
Oct 02 12:32:57 compute-0 systemd[1]: libpod-3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406.scope: Deactivated successfully.
Oct 02 12:32:57 compute-0 conmon[308355]: conmon 3e1fc07bfe0e49b378ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406.scope/container/memory.events
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.571779848 +0000 UTC m=+0.195842332 container died 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0546c31d001c5d03922a0595fca5e7aab5bddd109d30336b43ba82325a3aaa27-merged.mount: Deactivated successfully.
Oct 02 12:32:57 compute-0 podman[308339]: 2025-10-02 12:32:57.630535469 +0000 UTC m=+0.254597933 container remove 3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:32:57 compute-0 systemd[1]: libpod-conmon-3e1fc07bfe0e49b378cee0695e44b01a61ab5e2fc4e12cd8024db57a7fde3406.scope: Deactivated successfully.
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.796 2 DEBUG nova.compute.manager [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.799 2 DEBUG oslo_concurrency.lockutils [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.799 2 DEBUG oslo_concurrency.lockutils [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.800 2 DEBUG oslo_concurrency.lockutils [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.800 2 DEBUG nova.compute.manager [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.800 2 WARNING nova.compute.manager [req-cd28bd09-c920-407f-9527-8a952e7b08b0 req-29b55bd2-b62e-43a4-b54c-f345e5082f0d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state deleting.
Oct 02 12:32:57 compute-0 podman[308378]: 2025-10-02 12:32:57.841810792 +0000 UTC m=+0.051091268 container create 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.853 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.871 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.872 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:32:57 compute-0 nova_compute[256940]: 2025-10-02 12:32:57.872 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:57 compute-0 systemd[1]: Started libpod-conmon-0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9.scope.
Oct 02 12:32:57 compute-0 podman[308378]: 2025-10-02 12:32:57.822189874 +0000 UTC m=+0.031470380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:57 compute-0 podman[308378]: 2025-10-02 12:32:57.950851593 +0000 UTC m=+0.160132089 container init 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:57 compute-0 podman[308378]: 2025-10-02 12:32:57.960862364 +0000 UTC m=+0.170142840 container start 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:32:57 compute-0 podman[308378]: 2025-10-02 12:32:57.965043323 +0000 UTC m=+0.174323799 container attach 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:32:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:32:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:58.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:32:58 compute-0 nova_compute[256940]: 2025-10-02 12:32:58.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:32:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:32:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:58.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:58 compute-0 pensive_pascal[308394]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:32:58 compute-0 pensive_pascal[308394]: --> relative data size: 1.0
Oct 02 12:32:58 compute-0 pensive_pascal[308394]: --> All data devices are unavailable
Oct 02 12:32:58 compute-0 systemd[1]: libpod-0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9.scope: Deactivated successfully.
Oct 02 12:32:58 compute-0 podman[308378]: 2025-10-02 12:32:58.937652916 +0000 UTC m=+1.146933422 container died 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6592010f4dc8ae2899239e301a8198d999f71a9718465e441e8d70ff4aed62a2-merged.mount: Deactivated successfully.
Oct 02 12:32:59 compute-0 podman[308378]: 2025-10-02 12:32:59.01613828 +0000 UTC m=+1.225418756 container remove 0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:32:59 compute-0 systemd[1]: libpod-conmon-0e508c1f8ba2c0a6a7afb86173c73f677f07a344c884054cd59ba9401326a2c9.scope: Deactivated successfully.
Oct 02 12:32:59 compute-0 sudo[308273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:59 compute-0 sudo[308424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:59 compute-0 sudo[308424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:59 compute-0 sudo[308424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:59 compute-0 sudo[308449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:59 compute-0 sudo[308449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:59 compute-0 sudo[308449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:59 compute-0 nova_compute[256940]: 2025-10-02 12:32:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:59 compute-0 ceph-mon[73668]: pgmap v1691: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:32:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3780027703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:59 compute-0 sudo[308474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:59 compute-0 sudo[308474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:59 compute-0 sudo[308474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:32:59 compute-0 sudo[308499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:32:59 compute-0 sudo[308499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.739355951 +0000 UTC m=+0.047980651 container create 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:32:59 compute-0 systemd[1]: Started libpod-conmon-068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a.scope.
Oct 02 12:32:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.718922269 +0000 UTC m=+0.027546989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.818252825 +0000 UTC m=+0.126877545 container init 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.828515392 +0000 UTC m=+0.137140092 container start 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.832435764 +0000 UTC m=+0.141060454 container attach 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:32:59 compute-0 recursing_goldstine[308582]: 167 167
Oct 02 12:32:59 compute-0 systemd[1]: libpod-068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a.scope: Deactivated successfully.
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.83686945 +0000 UTC m=+0.145494140 container died 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3601199395a89b410c2023f9ee5494ff5a32d2e76c246cefd04a6316734fe01-merged.mount: Deactivated successfully.
Oct 02 12:32:59 compute-0 podman[308566]: 2025-10-02 12:32:59.873175005 +0000 UTC m=+0.181799685 container remove 068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:32:59 compute-0 systemd[1]: libpod-conmon-068888c7c955aadcaa0ebb808834eccb25d4ecf74597e4a1ad23282ecd39674a.scope: Deactivated successfully.
Oct 02 12:33:00 compute-0 podman[308605]: 2025-10-02 12:33:00.072418283 +0000 UTC m=+0.050706982 container create 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:33:00 compute-0 systemd[1]: Started libpod-conmon-86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee.scope.
Oct 02 12:33:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:00 compute-0 podman[308605]: 2025-10-02 12:33:00.048243583 +0000 UTC m=+0.026532312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97afc4a138ae1aa84f35affda236f1dc90e0bf4177796c65fbdae07959817464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97afc4a138ae1aa84f35affda236f1dc90e0bf4177796c65fbdae07959817464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97afc4a138ae1aa84f35affda236f1dc90e0bf4177796c65fbdae07959817464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97afc4a138ae1aa84f35affda236f1dc90e0bf4177796c65fbdae07959817464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:00 compute-0 podman[308605]: 2025-10-02 12:33:00.15719305 +0000 UTC m=+0.135481789 container init 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:33:00 compute-0 podman[308605]: 2025-10-02 12:33:00.168972417 +0000 UTC m=+0.147261146 container start 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:00 compute-0 podman[308605]: 2025-10-02 12:33:00.173082934 +0000 UTC m=+0.151371643 container attach 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 02 12:33:00 compute-0 ceph-mon[73668]: pgmap v1692: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:33:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2511685414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:00.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:00 compute-0 nova_compute[256940]: 2025-10-02 12:33:00.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:00 compute-0 nova_compute[256940]: 2025-10-02 12:33:00.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 02 12:33:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 02 12:33:01 compute-0 focused_gates[308622]: {
Oct 02 12:33:01 compute-0 focused_gates[308622]:     "1": [
Oct 02 12:33:01 compute-0 focused_gates[308622]:         {
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "devices": [
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "/dev/loop3"
Oct 02 12:33:01 compute-0 focused_gates[308622]:             ],
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "lv_name": "ceph_lv0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "lv_size": "7511998464",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "name": "ceph_lv0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "tags": {
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.cluster_name": "ceph",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.crush_device_class": "",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.encrypted": "0",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.osd_id": "1",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.type": "block",
Oct 02 12:33:01 compute-0 focused_gates[308622]:                 "ceph.vdo": "0"
Oct 02 12:33:01 compute-0 focused_gates[308622]:             },
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "type": "block",
Oct 02 12:33:01 compute-0 focused_gates[308622]:             "vg_name": "ceph_vg0"
Oct 02 12:33:01 compute-0 focused_gates[308622]:         }
Oct 02 12:33:01 compute-0 focused_gates[308622]:     ]
Oct 02 12:33:01 compute-0 focused_gates[308622]: }
Oct 02 12:33:01 compute-0 systemd[1]: libpod-86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee.scope: Deactivated successfully.
Oct 02 12:33:01 compute-0 podman[308605]: 2025-10-02 12:33:01.068459337 +0000 UTC m=+1.046748056 container died 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-97afc4a138ae1aa84f35affda236f1dc90e0bf4177796c65fbdae07959817464-merged.mount: Deactivated successfully.
Oct 02 12:33:01 compute-0 podman[308605]: 2025-10-02 12:33:01.137286529 +0000 UTC m=+1.115575238 container remove 86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:33:01 compute-0 sudo[308499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:01 compute-0 systemd[1]: libpod-conmon-86bc6e737dc36cd5de7da6118fc9b997cc45fecb09bc140cdc2c89be4c7bf7ee.scope: Deactivated successfully.
Oct 02 12:33:01 compute-0 sudo[308647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:01 compute-0 sudo[308647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:01 compute-0 sudo[308647]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:01 compute-0 sudo[308672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:33:01 compute-0 sudo[308672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:01 compute-0 sudo[308672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 202 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 927 B/s wr, 191 op/s
Oct 02 12:33:01 compute-0 sudo[308697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:01 compute-0 sudo[308697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:01 compute-0 sudo[308697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:01 compute-0 sudo[308722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:33:01 compute-0 sudo[308722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.842929172 +0000 UTC m=+0.047701243 container create d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:33:01 compute-0 systemd[1]: Started libpod-conmon-d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c.scope.
Oct 02 12:33:01 compute-0 ceph-mon[73668]: osdmap e245: 3 total, 3 up, 3 in
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.821647108 +0000 UTC m=+0.026419219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.945153814 +0000 UTC m=+0.149925885 container init d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.9534503 +0000 UTC m=+0.158222351 container start d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:01 compute-0 stoic_pike[308802]: 167 167
Oct 02 12:33:01 compute-0 systemd[1]: libpod-d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c.scope: Deactivated successfully.
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.960458532 +0000 UTC m=+0.165230593 container attach d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:33:01 compute-0 podman[308786]: 2025-10-02 12:33:01.961718515 +0000 UTC m=+0.166490566 container died d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed417e3fb3605bbd0b651139816ec2c6b19e43d7b5c006caeacc95d677c2bc92-merged.mount: Deactivated successfully.
Oct 02 12:33:02 compute-0 podman[308786]: 2025-10-02 12:33:02.102260625 +0000 UTC m=+0.307032716 container remove d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:33:02 compute-0 systemd[1]: libpod-conmon-d225587a14547dd02b1c5d2a13b916fdddf5bf5422e76024f7f77cdd0412069c.scope: Deactivated successfully.
Oct 02 12:33:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:02.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:02 compute-0 podman[308827]: 2025-10-02 12:33:02.328357801 +0000 UTC m=+0.049658644 container create 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:02 compute-0 systemd[1]: Started libpod-conmon-3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3.scope.
Oct 02 12:33:02 compute-0 podman[308827]: 2025-10-02 12:33:02.310639009 +0000 UTC m=+0.031939872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:33:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ce6a4ecda0c812b9df69f48978a7c76cd7c5da3bcc501b6d119d4e2c90d1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ce6a4ecda0c812b9df69f48978a7c76cd7c5da3bcc501b6d119d4e2c90d1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ce6a4ecda0c812b9df69f48978a7c76cd7c5da3bcc501b6d119d4e2c90d1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ce6a4ecda0c812b9df69f48978a7c76cd7c5da3bcc501b6d119d4e2c90d1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:02 compute-0 podman[308827]: 2025-10-02 12:33:02.45662414 +0000 UTC m=+0.177925083 container init 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:33:02 compute-0 podman[308827]: 2025-10-02 12:33:02.465017129 +0000 UTC m=+0.186317992 container start 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:33:02 compute-0 podman[308827]: 2025-10-02 12:33:02.468994102 +0000 UTC m=+0.190294945 container attach 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:33:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:02.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:03 compute-0 ceph-mon[73668]: pgmap v1694: 305 pgs: 305 active+clean; 202 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 927 B/s wr, 191 op/s
Oct 02 12:33:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 195 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 793 KiB/s wr, 179 op/s
Oct 02 12:33:03 compute-0 beautiful_villani[308844]: {
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:         "osd_id": 1,
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:         "type": "bluestore"
Oct 02 12:33:03 compute-0 beautiful_villani[308844]:     }
Oct 02 12:33:03 compute-0 beautiful_villani[308844]: }
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:03.392 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:03 compute-0 nova_compute[256940]: 2025-10-02 12:33:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:03.396 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:33:03 compute-0 systemd[1]: libpod-3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3.scope: Deactivated successfully.
Oct 02 12:33:03 compute-0 podman[308827]: 2025-10-02 12:33:03.418551487 +0000 UTC m=+1.139852330 container died 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-01ce6a4ecda0c812b9df69f48978a7c76cd7c5da3bcc501b6d119d4e2c90d1d3-merged.mount: Deactivated successfully.
Oct 02 12:33:03 compute-0 podman[308827]: 2025-10-02 12:33:03.500977313 +0000 UTC m=+1.222278156 container remove 3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:33:03 compute-0 systemd[1]: libpod-conmon-3340cc3fc82f57163cb194a36eea929bd2bffe5b00137ae744639c825663d6a3.scope: Deactivated successfully.
Oct 02 12:33:03 compute-0 sudo[308722]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:33:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:33:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6eef1916-a51b-4483-b0f4-f3f266788eb1 does not exist
Oct 02 12:33:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 35ca4cc1-ae52-4c99-8837-f1084523d3b8 does not exist
Oct 02 12:33:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 791a45d6-6873-486e-b3f7-7f9755f1599d does not exist
Oct 02 12:33:03 compute-0 sudo[308876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:03 compute-0 sudo[308876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:03 compute-0 sudo[308876]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:04 compute-0 sudo[308901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:33:04 compute-0 sudo[308901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:04 compute-0 sudo[308901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:04.399 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.496 2 INFO nova.virt.libvirt.driver [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Deleting instance files /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121_del
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.497 2 INFO nova.virt.libvirt.driver [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Deletion of /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121_del complete
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.559 2 INFO nova.compute.manager [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Took 9.44 seconds to destroy the instance on the hypervisor.
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.560 2 DEBUG oslo.service.loopingcall [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.560 2 DEBUG nova.compute.manager [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:33:04 compute-0 nova_compute[256940]: 2025-10-02 12:33:04.561 2 DEBUG nova.network.neutron [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:33:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:04 compute-0 ceph-mon[73668]: pgmap v1695: 305 pgs: 305 active+clean; 195 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 793 KiB/s wr, 179 op/s
Oct 02 12:33:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:33:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:33:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 254 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 148 op/s
Oct 02 12:33:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.843 2 DEBUG nova.network.neutron [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.881 2 INFO nova.compute.manager [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Took 1.32 seconds to deallocate network for instance.
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.927 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.928 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:05 compute-0 nova_compute[256940]: 2025-10-02 12:33:05.973 2 DEBUG nova.compute.manager [req-56b55ffc-8ce2-4c39-8032-2ac0bbb56ac8 req-6add8e75-59cb-433a-b763-37e236503df9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-deleted-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.015 2 DEBUG oslo_concurrency.processutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:06.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:06 compute-0 ovn_controller[148123]: 2025-10-02T12:33:06Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:f6:4a 10.100.0.9
Oct 02 12:33:06 compute-0 ovn_controller[148123]: 2025-10-02T12:33:06Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:f6:4a 10.100.0.9
Oct 02 12:33:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869894575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.532 2 DEBUG oslo_concurrency.processutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.539 2 DEBUG nova.compute.provider_tree [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.586 2 DEBUG nova.scheduler.client.report [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:06.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.617 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.649 2 INFO nova.scheduler.client.report [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance b6c2a016-125f-4f83-a284-5a2d50805121
Oct 02 12:33:06 compute-0 nova_compute[256940]: 2025-10-02 12:33:06.736 2 DEBUG oslo_concurrency.lockutils [None req-6b0ce79d-02a5-42f7-8825-2293e6bbac36 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:07 compute-0 ceph-mon[73668]: pgmap v1696: 305 pgs: 305 active+clean; 254 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 148 op/s
Oct 02 12:33:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3869894575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1683438901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2923262910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3684999600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/531094039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:08.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:09 compute-0 ceph-mon[73668]: pgmap v1697: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:10 compute-0 ceph-mon[73668]: pgmap v1698: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:10 compute-0 nova_compute[256940]: 2025-10-02 12:33:10.553 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408375.5508149, b6c2a016-125f-4f83-a284-5a2d50805121 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:10 compute-0 nova_compute[256940]: 2025-10-02 12:33:10.553 2 INFO nova.compute.manager [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Stopped (Lifecycle Event)
Oct 02 12:33:10 compute-0 nova_compute[256940]: 2025-10-02 12:33:10.593 2 DEBUG nova.compute.manager [None req-cc628d4a-2ba1-4882-84f7-38e30a1c82bb - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:10 compute-0 nova_compute[256940]: 2025-10-02 12:33:10.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:10 compute-0 nova_compute[256940]: 2025-10-02 12:33:10.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.5 MiB/s wr, 217 op/s
Oct 02 12:33:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:12.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:12 compute-0 podman[308952]: 2025-10-02 12:33:12.433598404 +0000 UTC m=+0.092409277 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:12 compute-0 ceph-mon[73668]: pgmap v1699: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.5 MiB/s wr, 217 op/s
Oct 02 12:33:12 compute-0 podman[308953]: 2025-10-02 12:33:12.47722185 +0000 UTC m=+0.135763816 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 227 op/s
Oct 02 12:33:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:14.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:14 compute-0 ceph-mon[73668]: pgmap v1700: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 227 op/s
Oct 02 12:33:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Oct 02 12:33:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:15 compute-0 nova_compute[256940]: 2025-10-02 12:33:15.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:15 compute-0 nova_compute[256940]: 2025-10-02 12:33:15.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:16 compute-0 sudo[309000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:16 compute-0 sudo[309000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:16 compute-0 sudo[309000]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:16 compute-0 sudo[309025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:16 compute-0 sudo[309025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:16 compute-0 sudo[309025]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:17 compute-0 ceph-mon[73668]: pgmap v1701: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Oct 02 12:33:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Oct 02 12:33:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:18 compute-0 ceph-mon[73668]: pgmap v1702: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Oct 02 12:33:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4171719287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3641656552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 190 op/s
Oct 02 12:33:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:20 compute-0 nova_compute[256940]: 2025-10-02 12:33:20.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:20 compute-0 nova_compute[256940]: 2025-10-02 12:33:20.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:20 compute-0 ceph-mon[73668]: pgmap v1703: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 190 op/s
Oct 02 12:33:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 203 op/s
Oct 02 12:33:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:22.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.295 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.296 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.321 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:33:22 compute-0 ceph-mon[73668]: pgmap v1704: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 203 op/s
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.409 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.410 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.417 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.417 2 INFO nova.compute.claims [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:33:22 compute-0 nova_compute[256940]: 2025-10-02 12:33:22.568 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3725152310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.055 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.062 2 DEBUG nova.compute.provider_tree [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.093 2 DEBUG nova.scheduler.client.report [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.133 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.134 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.221 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.222 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.255 2 INFO nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.276 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:33:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 48 KiB/s wr, 172 op/s
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.393 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.395 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.396 2 INFO nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Creating image(s)
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.428 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.463 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.490 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.495 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.558 2 DEBUG nova.policy [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.563 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.564 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.564 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.564 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.593 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:23 compute-0 nova_compute[256940]: 2025-10-02 12:33:23.597 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 588c95ff-a590-46c2-a041-6ee318695ef1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3725152310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:24.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:24 compute-0 nova_compute[256940]: 2025-10-02 12:33:24.383 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 588c95ff-a590-46c2-a041-6ee318695ef1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:24 compute-0 nova_compute[256940]: 2025-10-02 12:33:24.465 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] resizing rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:33:24 compute-0 nova_compute[256940]: 2025-10-02 12:33:24.517 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Successfully created port: 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:33:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:24.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:24 compute-0 nova_compute[256940]: 2025-10-02 12:33:24.712 2 DEBUG nova.objects.instance [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:24 compute-0 ceph-mon[73668]: pgmap v1705: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 48 KiB/s wr, 172 op/s
Oct 02 12:33:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 313 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 181 op/s
Oct 02 12:33:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.712 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.713 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Ensure instance console log exists: /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.714 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.714 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:25 compute-0 nova_compute[256940]: 2025-10-02 12:33:25.714 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:26.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.249 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Successfully updated port: 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.277 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.277 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.278 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.341 2 DEBUG nova.compute.manager [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.342 2 DEBUG nova.compute.manager [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing instance network info cache due to event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.342 2 DEBUG oslo_concurrency.lockutils [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:26 compute-0 podman[309244]: 2025-10-02 12:33:26.40517298 +0000 UTC m=+0.071119303 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:33:26 compute-0 podman[309243]: 2025-10-02 12:33:26.405317563 +0000 UTC m=+0.073586957 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:26.469 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:26.470 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:26.471 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:26 compute-0 nova_compute[256940]: 2025-10-02 12:33:26.509 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:33:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:27 compute-0 ceph-mon[73668]: pgmap v1706: 305 pgs: 305 active+clean; 313 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 181 op/s
Oct 02 12:33:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.373 2 DEBUG nova.network.neutron [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.441 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.442 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Instance network_info: |[{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.442 2 DEBUG oslo_concurrency.lockutils [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.443 2 DEBUG nova.network.neutron [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.446 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Start _get_guest_xml network_info=[{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.454 2 WARNING nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.459 2 DEBUG nova.virt.libvirt.host [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.460 2 DEBUG nova.virt.libvirt.host [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.477 2 DEBUG nova.virt.libvirt.host [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.478 2 DEBUG nova.virt.libvirt.host [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.480 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.480 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.481 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.481 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.482 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.482 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.483 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.483 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.484 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.484 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.484 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.485 2 DEBUG nova.virt.hardware [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.490 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.534 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.535 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.536 2 DEBUG nova.objects.instance [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222925000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:27 compute-0 nova_compute[256940]: 2025-10-02 12:33:27.983 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.037 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.044 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4036348665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1222925000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:28.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.479 2 DEBUG nova.objects.instance [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'pci_requests' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.526 2 DEBUG nova.network.neutron [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:33:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914263883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.567 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.570 2 DEBUG nova.virt.libvirt.vif [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.570 2 DEBUG nova.network.os_vif_util [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.572 2 DEBUG nova.network.os_vif_util [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.573 2 DEBUG nova.objects.instance [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.609 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <uuid>588c95ff-a590-46c2-a041-6ee318695ef1</uuid>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <name>instance-00000052</name>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:name>tempest-tempest.common.compute-instance-463522272</nova:name>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:33:27</nova:creationTime>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <nova:port uuid="6165c9e0-ee4f-4bc6-add1-c234918a73d2">
Oct 02 12:33:28 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <system>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="serial">588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="uuid">588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </system>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <os>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </os>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <features>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </features>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk">
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk.config">
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:33:28 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:ff:b1:14"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <target dev="tap6165c9e0-ee"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log" append="off"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <video>
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </video>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:33:28 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:33:28 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:33:28 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:33:28 compute-0 nova_compute[256940]: </domain>
Oct 02 12:33:28 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.609 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Preparing to wait for external event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.609 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.610 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.610 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.611 2 DEBUG nova.virt.libvirt.vif [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.611 2 DEBUG nova.network.os_vif_util [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.611 2 DEBUG nova.network.os_vif_util [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.612 2 DEBUG os_vif [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6165c9e0-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6165c9e0-ee, col_values=(('external_ids', {'iface-id': '6165c9e0-ee4f-4bc6-add1-c234918a73d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:b1:14', 'vm-uuid': '588c95ff-a590-46c2-a041-6ee318695ef1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:28 compute-0 NetworkManager[44981]: <info>  [1759408408.6216] manager: (tap6165c9e0-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.630 2 INFO os_vif [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee')
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.741 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.742 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.742 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:ff:b1:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.743 2 INFO nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Using config drive
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:33:28
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups']
Oct 02 12:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.778 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:28 compute-0 nova_compute[256940]: 2025-10-02 12:33:28.853 2 DEBUG nova.policy [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c8eed6cb806403ca545bb7b2820714e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7846694bb70143aa984e235126fbe15c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:33:29 compute-0 ceph-mon[73668]: pgmap v1707: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 02 12:33:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2914263883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.245 2 INFO nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Creating config drive at /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.251 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_kph0u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.387 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_kph0u" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.422 2 DEBUG nova.storage.rbd_utils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 588c95ff-a590-46c2-a041-6ee318695ef1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.427 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config 588c95ff-a590-46c2-a041-6ee318695ef1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.523 2 DEBUG nova.network.neutron [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated VIF entry in instance network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.524 2 DEBUG nova.network.neutron [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.552 2 DEBUG oslo_concurrency.lockutils [req-9b22515d-64bf-47d6-b3fe-69554ce1f93c req-70a627bc-1422-42ac-b193-21eb3e7a578b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:29 compute-0 nova_compute[256940]: 2025-10-02 12:33:29.640 2 DEBUG nova.network.neutron [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Successfully created port: afeb2335-1df8-4781-afa0-29b24bf73da6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:33:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.452 2 DEBUG nova.network.neutron [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Successfully updated port: afeb2335-1df8-4781-afa0-29b24bf73da6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:33:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.477 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.477 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.477 2 DEBUG nova.network.neutron [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.662 2 DEBUG nova.compute.manager [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-changed-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.662 2 DEBUG nova.compute.manager [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing instance network info cache due to event network-changed-afeb2335-1df8-4781-afa0-29b24bf73da6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.663 2 DEBUG oslo_concurrency.lockutils [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:30 compute-0 ceph-mon[73668]: pgmap v1708: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.823 2 DEBUG oslo_concurrency.processutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config 588c95ff-a590-46c2-a041-6ee318695ef1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.824 2 INFO nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Deleting local config drive /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/disk.config because it was imported into RBD.
Oct 02 12:33:30 compute-0 kernel: tap6165c9e0-ee: entered promiscuous mode
Oct 02 12:33:30 compute-0 NetworkManager[44981]: <info>  [1759408410.9036] manager: (tap6165c9e0-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:30 compute-0 ovn_controller[148123]: 2025-10-02T12:33:30Z|00306|binding|INFO|Claiming lport 6165c9e0-ee4f-4bc6-add1-c234918a73d2 for this chassis.
Oct 02 12:33:30 compute-0 ovn_controller[148123]: 2025-10-02T12:33:30Z|00307|binding|INFO|6165c9e0-ee4f-4bc6-add1-c234918a73d2: Claiming fa:16:3e:ff:b1:14 10.100.0.14
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.920 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:b1:14 10.100.0.14'], port_security=['fa:16:3e:ff:b1:14 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '588c95ff-a590-46c2-a041-6ee318695ef1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3852fde4-27af-4b26-ab2c-21696f5fd593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6165c9e0-ee4f-4bc6-add1-c234918a73d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.922 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.925 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:33:30 compute-0 ovn_controller[148123]: 2025-10-02T12:33:30Z|00308|binding|INFO|Setting lport 6165c9e0-ee4f-4bc6-add1-c234918a73d2 ovn-installed in OVS
Oct 02 12:33:30 compute-0 ovn_controller[148123]: 2025-10-02T12:33:30Z|00309|binding|INFO|Setting lport 6165c9e0-ee4f-4bc6-add1-c234918a73d2 up in Southbound
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:30 compute-0 nova_compute[256940]: 2025-10-02 12:33:30.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.943 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f270f7f-7a7e-4e82-921a-692d08e4dc02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.944 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a187d8a-71 in ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.946 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a187d8a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.946 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[05cd6dd9-44d6-4df3-9346-fd1af7981dea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.947 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3ba109-4c7b-447b-90fd-de3903c87a2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:30 compute-0 systemd-machined[210927]: New machine qemu-36-instance-00000052.
Oct 02 12:33:30 compute-0 systemd-udevd[309422]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:30 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000052.
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.965 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9b080a-ebfb-4ed6-97ff-92c38110e81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:30 compute-0 NetworkManager[44981]: <info>  [1759408410.9718] device (tap6165c9e0-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:30 compute-0 NetworkManager[44981]: <info>  [1759408410.9730] device (tap6165c9e0-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:30.982 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bf3c50-712d-41c0-a068-8b7fc0e88348]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.015 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bd07e4-a07b-4fe1-b8ad-7fa5c5f4b206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 NetworkManager[44981]: <info>  [1759408411.0249] manager: (tap6a187d8a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.024 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e83ed024-74a2-48a1-9d4d-21edc04de56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.067 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[22f46d9c-7771-41cb-a68e-fcdad45d6b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.076 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[de55727d-10f7-44ec-8dc1-723bf6c551bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 NetworkManager[44981]: <info>  [1759408411.1033] device (tap6a187d8a-70): carrier: link connected
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.113 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5a5214-ee28-460f-b9f1-b56ee851d401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.137 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60149a25-6288-4c14-9490-d280fb44378d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630658, 'reachable_time': 17793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309454, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.158 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0640949b-f104-45fa-a2a5-8f330cd59d92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:e868'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630658, 'tstamp': 630658}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309455, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.186 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6109ccc4-54a5-47e8-996b-4fc1490ab3b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630658, 'reachable_time': 17793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309456, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.228 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea1c97-7fe9-4cbc-bb13-8654c8c2f81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.294 2 DEBUG nova.compute.manager [req-4564933c-fe8c-4104-83d0-bc43d23723ac req-80bfc0ca-1645-4b08-9305-9f27a3fd7c6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.295 2 DEBUG oslo_concurrency.lockutils [req-4564933c-fe8c-4104-83d0-bc43d23723ac req-80bfc0ca-1645-4b08-9305-9f27a3fd7c6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.296 2 DEBUG oslo_concurrency.lockutils [req-4564933c-fe8c-4104-83d0-bc43d23723ac req-80bfc0ca-1645-4b08-9305-9f27a3fd7c6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.296 2 DEBUG oslo_concurrency.lockutils [req-4564933c-fe8c-4104-83d0-bc43d23723ac req-80bfc0ca-1645-4b08-9305-9f27a3fd7c6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.297 2 DEBUG nova.compute.manager [req-4564933c-fe8c-4104-83d0-bc43d23723ac req-80bfc0ca-1645-4b08-9305-9f27a3fd7c6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Processing event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.306 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c843deac-0a05-4171-b2bb-a1fb5922752a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.308 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.308 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.309 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:31 compute-0 NetworkManager[44981]: <info>  [1759408411.3115] manager: (tap6a187d8a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct 02 12:33:31 compute-0 kernel: tap6a187d8a-70: entered promiscuous mode
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.315 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:31 compute-0 ovn_controller[148123]: 2025-10-02T12:33:31Z|00310|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:33:31 compute-0 nova_compute[256940]: 2025-10-02 12:33:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.335 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.336 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f8541e3b-6332-4520-ac2b-82f2ee2129b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.337 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:31.338 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'env', 'PROCESS_TAG=haproxy-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a187d8a-77c6-4b27-bb13-654f471c1faf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 191 op/s
Oct 02 12:33:31 compute-0 podman[309522]: 2025-10-02 12:33:31.758217268 +0000 UTC m=+0.053353910 container create 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:31 compute-0 systemd[1]: Started libpod-conmon-5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b.scope.
Oct 02 12:33:31 compute-0 podman[309522]: 2025-10-02 12:33:31.731075821 +0000 UTC m=+0.026212493 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a398249e539f6b9d299a0871996084f6a713475bff3134cceadb7c19ec15fe97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:31 compute-0 podman[309522]: 2025-10-02 12:33:31.868994202 +0000 UTC m=+0.164130864 container init 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:33:31 compute-0 podman[309522]: 2025-10-02 12:33:31.878748016 +0000 UTC m=+0.173884658 container start 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:31 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [NOTICE]   (309548) : New worker (309550) forked
Oct 02 12:33:31 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [NOTICE]   (309548) : Loading success.
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.240 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.241 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408412.239201, 588c95ff-a590-46c2-a041-6ee318695ef1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.241 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] VM Started (Lifecycle Event)
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.244 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:33:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.249 2 INFO nova.virt.libvirt.driver [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Instance spawned successfully.
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.250 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.362 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.368 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.372 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.373 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.373 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.374 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.374 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.375 2 DEBUG nova.virt.libvirt.driver [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.389 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.390 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408412.2394786, 588c95ff-a590-46c2-a041-6ee318695ef1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.390 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] VM Paused (Lifecycle Event)
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.426 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.431 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408412.2433035, 588c95ff-a590-46c2-a041-6ee318695ef1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.431 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] VM Resumed (Lifecycle Event)
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.451 2 INFO nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Took 9.06 seconds to spawn the instance on the hypervisor.
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.452 2 DEBUG nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.461 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.466 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.538 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:33:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:32 compute-0 ceph-mon[73668]: pgmap v1709: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 191 op/s
Oct 02 12:33:32 compute-0 nova_compute[256940]: 2025-10-02 12:33:32.906 2 INFO nova.compute.manager [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Took 10.53 seconds to build instance.
Oct 02 12:33:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.668 2 DEBUG oslo_concurrency.lockutils [None req-d047f707-955b-479d-9fa9-36f96b644d12 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.671 2 DEBUG nova.compute.manager [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.671 2 DEBUG oslo_concurrency.lockutils [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.672 2 DEBUG oslo_concurrency.lockutils [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.672 2 DEBUG oslo_concurrency.lockutils [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.672 2 DEBUG nova.compute.manager [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:33 compute-0 nova_compute[256940]: 2025-10-02 12:33:33.672 2 WARNING nova.compute.manager [req-bb18f484-2a2a-4b30-9cd8-d73d34cd3c40 req-4ba91b3a-1569-4d1d-a1da-1e965c0304f8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 for instance with vm_state active and task_state None.
Oct 02 12:33:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:34.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:34 compute-0 ceph-mon[73668]: pgmap v1710: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 02 12:33:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 328 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Oct 02 12:33:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.585 2 DEBUG nova.network.neutron [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.613 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.614 2 DEBUG oslo_concurrency.lockutils [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.614 2 DEBUG nova.network.neutron [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Refreshing network info cache for port afeb2335-1df8-4781-afa0-29b24bf73da6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.618 2 DEBUG nova.virt.libvirt.vif [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_
input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.618 2 DEBUG nova.network.os_vif_util [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.620 2 DEBUG nova.network.os_vif_util [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.620 2 DEBUG os_vif [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafeb2335-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapafeb2335-1d, col_values=(('external_ids', {'iface-id': 'afeb2335-1df8-4781-afa0-29b24bf73da6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:e5:68', 'vm-uuid': '6c768447-ca45-42c7-aee4-09dfe7406a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.6276] manager: (tapafeb2335-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.635 2 INFO os_vif [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d')
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.636 2 DEBUG nova.virt.libvirt.vif [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_
input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.636 2 DEBUG nova.network.os_vif_util [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.637 2 DEBUG nova.network.os_vif_util [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.639 2 DEBUG nova.virt.libvirt.guest [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:99:e5:68"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <target dev="tapafeb2335-1d"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]: </interface>
Oct 02 12:33:35 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.6509] manager: (tapafeb2335-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Oct 02 12:33:35 compute-0 kernel: tapafeb2335-1d: entered promiscuous mode
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 ovn_controller[148123]: 2025-10-02T12:33:35Z|00311|binding|INFO|Claiming lport afeb2335-1df8-4781-afa0-29b24bf73da6 for this chassis.
Oct 02 12:33:35 compute-0 ovn_controller[148123]: 2025-10-02T12:33:35Z|00312|binding|INFO|afeb2335-1df8-4781-afa0-29b24bf73da6: Claiming fa:16:3e:99:e5:68 10.10.10.161
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.675 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:e5:68 10.10.10.161'], port_security=['fa:16:3e:99:e5:68 10.10.10.161'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.161/24', 'neutron:device_id': '6c768447-ca45-42c7-aee4-09dfe7406a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7846694bb70143aa984e235126fbe15c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a905958-1c9d-4660-8b81-aa37aa79df93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84e07d2-e3b8-4ca2-b17d-b1af49d38a84, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=afeb2335-1df8-4781-afa0-29b24bf73da6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.676 158104 INFO neutron.agent.ovn.metadata.agent [-] Port afeb2335-1df8-4781-afa0-29b24bf73da6 in datapath 181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 bound to our chassis
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.678 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.692 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[337b415b-12d0-44bb-a52d-2d7925cc39ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.693 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap181ea8ee-21 in ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.695 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap181ea8ee-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.695 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffd7e83-5ec3-4404-a298-312a48735889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e34619a3-bd80-4010-b5a1-0f84297c6209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 systemd-udevd[309568]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.7217] device (tapafeb2335-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.7231] device (tapafeb2335-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.717 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ade53412-41cf-4c49-b9d8-6d9171904f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_controller[148123]: 2025-10-02T12:33:35Z|00313|binding|INFO|Setting lport afeb2335-1df8-4781-afa0-29b24bf73da6 ovn-installed in OVS
Oct 02 12:33:35 compute-0 ovn_controller[148123]: 2025-10-02T12:33:35Z|00314|binding|INFO|Setting lport afeb2335-1df8-4781-afa0-29b24bf73da6 up in Southbound
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.740 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[39bd7404-9d30-4d41-a11b-348f1ebbacad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.776 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4cab16c9-35c2-45dd-bae9-ccb90b452b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.780 2 DEBUG nova.virt.libvirt.driver [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.780 2 DEBUG nova.virt.libvirt.driver [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.780 2 DEBUG nova.virt.libvirt.driver [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No VIF found with MAC fa:16:3e:9c:f6:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.7941] manager: (tap181ea8ee-20): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.794 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1715cb18-6071-4406-a07e-a3cf764a465c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.809 2 DEBUG nova.virt.libvirt.guest [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:35</nova:creationTime>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     <nova:port uuid="afeb2335-1df8-4781-afa0-29b24bf73da6">
Oct 02 12:33:35 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.10.10.161" ipVersion="4"/>
Oct 02 12:33:35 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:35 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:35 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:35 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.848 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c7bc8a-0787-42ea-84b7-7d4ad5d4aa70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 nova_compute[256940]: 2025-10-02 12:33:35.851 2 DEBUG oslo_concurrency.lockutils [None req-3691bd87-1029-4288-8c5a-8578d54eba1f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.855 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[654450e9-05f9-4f03-b284-245b1502b093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 NetworkManager[44981]: <info>  [1759408415.8888] device (tap181ea8ee-20): carrier: link connected
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.894 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5deeb3e7-9568-4696-9e28-0734eec2c280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.916 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[253d7766-96a5-4f90-b9c8-3fc922be293c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181ea8ee-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:05:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631137, 'reachable_time': 27354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309594, 'error': None, 'target': 'ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.939 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c620dea6-1bf9-48a0-9cb7-c13377364b61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 631137, 'tstamp': 631137}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309595, 'error': None, 'target': 'ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.958 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2668063e-d67c-4306-9ef1-9f55bd09bda9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181ea8ee-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:05:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631137, 'reachable_time': 27354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309596, 'error': None, 'target': 'ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:35.996 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[898d2c31-ccfe-4c37-b762-c0ae14ad7d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.062 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5965b6d2-9945-476d-b771-f01187608018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.063 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181ea8ee-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.064 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.064 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap181ea8ee-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:36 compute-0 NetworkManager[44981]: <info>  [1759408416.0677] manager: (tap181ea8ee-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct 02 12:33:36 compute-0 kernel: tap181ea8ee-20: entered promiscuous mode
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.073 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap181ea8ee-20, col_values=(('external_ids', {'iface-id': 'aaa37696-0d29-4dfa-a9e0-4fbb899782e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:36 compute-0 ovn_controller[148123]: 2025-10-02T12:33:36Z|00315|binding|INFO|Releasing lport aaa37696-0d29-4dfa-a9e0-4fbb899782e6 from this chassis (sb_readonly=0)
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.093 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.094 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[448d403c-c5b5-4417-82b2-89058af7fbc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.095 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5.pid.haproxy
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:36.096 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'env', 'PROCESS_TAG=haproxy-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:36 compute-0 podman[309627]: 2025-10-02 12:33:36.48688209 +0000 UTC m=+0.072317194 container create 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:33:36 compute-0 systemd[1]: Started libpod-conmon-433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc.scope.
Oct 02 12:33:36 compute-0 podman[309627]: 2025-10-02 12:33:36.456235972 +0000 UTC m=+0.041671096 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158389fc115f744f23a6a113f5dcc96235bf8181c5c9df869b627ca15885707/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:36 compute-0 podman[309627]: 2025-10-02 12:33:36.573665889 +0000 UTC m=+0.159101013 container init 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:36 compute-0 podman[309627]: 2025-10-02 12:33:36.57982856 +0000 UTC m=+0.165263664 container start 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:36 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [NOTICE]   (309656) : New worker (309673) forked
Oct 02 12:33:36 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [NOTICE]   (309656) : Loading success.
Oct 02 12:33:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:36.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:36 compute-0 sudo[309647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:36 compute-0 sudo[309647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:36 compute-0 sudo[309647]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:36 compute-0 sudo[309684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:36 compute-0 sudo[309684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:36 compute-0 sudo[309684]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.755 2 DEBUG nova.compute.manager [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.757 2 DEBUG oslo_concurrency.lockutils [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.757 2 DEBUG oslo_concurrency.lockutils [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.758 2 DEBUG oslo_concurrency.lockutils [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.758 2 DEBUG nova.compute.manager [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:36 compute-0 nova_compute[256940]: 2025-10-02 12:33:36.758 2 WARNING nova.compute.manager [req-f9d001e0-7698-4806-8ded-2ca43b4d9fa1 req-31e0f650-8d77-4f66-bcf4-d9d39a39b409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 for instance with vm_state active and task_state None.
Oct 02 12:33:37 compute-0 ceph-mon[73668]: pgmap v1711: 305 pgs: 305 active+clean; 328 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.319 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.320 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.358 2 DEBUG nova.objects.instance [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 231 op/s
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.456 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:37 compute-0 ovn_controller[148123]: 2025-10-02T12:33:37Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:e5:68 10.10.10.161
Oct 02 12:33:37 compute-0 ovn_controller[148123]: 2025-10-02T12:33:37Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:e5:68 10.10.10.161
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.731 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.732 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.732 2 INFO nova.compute.manager [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Attaching volume c9770430-1295-4cf9-8ead-017246f69f0a to /dev/vdb
Oct 02 12:33:37 compute-0 ovn_controller[148123]: 2025-10-02T12:33:37Z|00316|binding|INFO|Releasing lport aaa37696-0d29-4dfa-a9e0-4fbb899782e6 from this chassis (sb_readonly=0)
Oct 02 12:33:37 compute-0 ovn_controller[148123]: 2025-10-02T12:33:37Z|00317|binding|INFO|Releasing lport 59209bb4-9a3d-4a28-b9bc-6ae775c6b546 from this chassis (sb_readonly=0)
Oct 02 12:33:37 compute-0 ovn_controller[148123]: 2025-10-02T12:33:37Z|00318|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:33:37 compute-0 nova_compute[256940]: 2025-10-02 12:33:37.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.003 2 DEBUG os_brick.utils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.005 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.020 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.020 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd75298-1d03-4027-918f-5053c0377005]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.021 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.031 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.032 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cad969-9818-4add-8c04-d5d9564c711d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.033 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.048 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.049 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[2001900b-bc47-4f8a-9a22-83e81dfdf604]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.052 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[08db60cf-1f55-4b1b-9150-a4817c929d49]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.053 2 DEBUG oslo_concurrency.processutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.088 2 DEBUG oslo_concurrency.processutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.093 2 DEBUG os_brick.initiator.connectors.lightos [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.094 2 DEBUG os_brick.initiator.connectors.lightos [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.095 2 DEBUG os_brick.initiator.connectors.lightos [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.096 2 DEBUG os_brick.utils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.097 2 DEBUG nova.virt.block_device [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating existing volume attachment record: 1aeb09c5-d42f-4238-8e6e-d89f3a69b10a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.104 2 DEBUG nova.network.neutron [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updated VIF entry in instance network info cache for port afeb2335-1df8-4781-afa0-29b24bf73da6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.105 2 DEBUG nova.network.neutron [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.132 2 DEBUG oslo_concurrency.lockutils [req-c022437b-fb52-488e-84ac-8b9675441a87 req-128e4fe0-1dcb-4006-bcf0-35d66588adf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:38.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.978 2 DEBUG nova.compute.manager [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.978 2 DEBUG oslo_concurrency.lockutils [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.979 2 DEBUG oslo_concurrency.lockutils [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.979 2 DEBUG oslo_concurrency.lockutils [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.980 2 DEBUG nova.compute.manager [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:38 compute-0 nova_compute[256940]: 2025-10-02 12:33:38.980 2 WARNING nova.compute.manager [req-301ac952-55bb-4a7f-ae0c-6caf846ea14b req-4df8d580-e071-43af-ba4f-c74a689a86ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 for instance with vm_state active and task_state None.
Oct 02 12:33:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230650317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:39 compute-0 ceph-mon[73668]: pgmap v1712: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 231 op/s
Oct 02 12:33:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4230650317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:33:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8742 writes, 38K keys, 8742 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 8742 writes, 8742 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1640 writes, 7339 keys, 1640 commit groups, 1.0 writes per commit group, ingest: 10.86 MB, 0.02 MB/s
                                           Interval WAL: 1641 writes, 1641 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.4      1.12              0.15        22    0.051       0      0       0.0       0.0
                                             L6      1/0    9.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0     74.0     61.6      3.05              0.58        21    0.145    112K    12K       0.0       0.0
                                            Sum      1/0    9.72 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     54.2     56.5      4.17              0.73        43    0.097    112K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.9     33.9     34.7      1.68              0.16        10    0.168     33K   3067       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     74.0     61.6      3.05              0.58        21    0.145    112K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.5      1.12              0.15        21    0.053       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.08 MB/s read, 4.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 25.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000277 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1496,24.47 MB,8.04811%) FilterBlock(44,319.36 KB,0.10259%) IndexBlock(44,570.80 KB,0.183361%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.311 2 DEBUG nova.objects.instance [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 344 KiB/s wr, 142 op/s
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.415 2 DEBUG nova.virt.libvirt.driver [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Attempting to attach volume c9770430-1295-4cf9-8ead-017246f69f0a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.418 2 DEBUG nova.virt.libvirt.guest [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:33:39 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-c9770430-1295-4cf9-8ead-017246f69f0a">
Oct 02 12:33:39 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   </source>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:33:39 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:33:39 compute-0 nova_compute[256940]:   <serial>c9770430-1295-4cf9-8ead-017246f69f0a</serial>
Oct 02 12:33:39 compute-0 nova_compute[256940]: </disk>
Oct 02 12:33:39 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.783 2 DEBUG nova.virt.libvirt.driver [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.784 2 DEBUG nova.virt.libvirt.driver [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:33:39 compute-0 nova_compute[256940]: 2025-10-02 12:33:39.784 2 DEBUG nova.virt.libvirt.driver [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] No VIF found with MAC fa:16:3e:9c:f6:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007514969232737763 of space, bias 1.0, pg target 2.254490769821329 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:33:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:33:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:40 compute-0 ceph-mon[73668]: pgmap v1713: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 344 KiB/s wr, 142 op/s
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.436 2 DEBUG oslo_concurrency.lockutils [None req-a747c79b-387e-4c81-b882-15f650e46d79 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.529 2 DEBUG nova.compute.manager [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.530 2 DEBUG nova.compute.manager [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing instance network info cache due to event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.530 2 DEBUG oslo_concurrency.lockutils [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.531 2 DEBUG oslo_concurrency.lockutils [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.531 2 DEBUG nova.network.neutron [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:40 compute-0 nova_compute[256940]: 2025-10-02 12:33:40.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 348 KiB/s wr, 143 op/s
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:41.899 158296 DEBUG eventlet.wsgi.server [-] (158296) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:41.901 158296 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: Accept: */*
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: Connection: close
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: Content-Type: text/plain
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: Host: 169.254.169.254
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: User-Agent: curl/7.84.0
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: X-Forwarded-For: 10.100.0.9
Oct 02 12:33:41 compute-0 ovn_metadata_agent[158078]: X-Ovn-Network-Id: 2c5e6ab4-0e76-42fe-b8a1-d7157f964326 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 12:33:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:42.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.476 2 DEBUG nova.compute.manager [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.476 2 DEBUG nova.compute.manager [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing instance network info cache due to event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.477 2 DEBUG oslo_concurrency.lockutils [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:42 compute-0 ceph-mon[73668]: pgmap v1714: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 348 KiB/s wr, 143 op/s
Oct 02 12:33:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.692 2 DEBUG nova.network.neutron [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated VIF entry in instance network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.692 2 DEBUG nova.network.neutron [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:42.694 158296 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 12:33:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:42.695 158296 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1916 time: 0.7943232
Oct 02 12:33:42 compute-0 haproxy-metadata-proxy-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307829]: 10.100.0.9:43776 [02/Oct/2025:12:33:41.898] listener listener/metadata 0/0/0/797/797 200 1900 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.714 2 DEBUG oslo_concurrency.lockutils [req-eb90abf8-2411-4cc2-8a36-bb967666449f req-508dc9c7-3f69-41bd-9329-be92b76e259f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.716 2 DEBUG oslo_concurrency.lockutils [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:42 compute-0 nova_compute[256940]: 2025-10-02 12:33:42.716 2 DEBUG nova.network.neutron [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.326 2 DEBUG oslo_concurrency.lockutils [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.327 2 DEBUG oslo_concurrency.lockutils [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.345 2 INFO nova.compute.manager [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Detaching volume c9770430-1295-4cf9-8ead-017246f69f0a
Oct 02 12:33:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 122 op/s
Oct 02 12:33:43 compute-0 podman[309739]: 2025-10-02 12:33:43.427283279 +0000 UTC m=+0.093036044 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:33:43 compute-0 podman[309740]: 2025-10-02 12:33:43.461280174 +0000 UTC m=+0.117413008 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.492 2 INFO nova.virt.block_device [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Attempting to driver detach volume c9770430-1295-4cf9-8ead-017246f69f0a from mountpoint /dev/vdb
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.501 2 DEBUG nova.virt.libvirt.driver [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Attempting to detach device vdb from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.502 2 DEBUG nova.virt.libvirt.guest [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-c9770430-1295-4cf9-8ead-017246f69f0a">
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   </source>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <serial>c9770430-1295-4cf9-8ead-017246f69f0a</serial>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]: </disk>
Oct 02 12:33:43 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.509 2 INFO nova.virt.libvirt.driver [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully detached device vdb from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the persistent domain config.
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.509 2 DEBUG nova.virt.libvirt.driver [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.510 2 DEBUG nova.virt.libvirt.guest [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-c9770430-1295-4cf9-8ead-017246f69f0a">
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   </source>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <serial>c9770430-1295-4cf9-8ead-017246f69f0a</serial>
Oct 02 12:33:43 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:33:43 compute-0 nova_compute[256940]: </disk>
Oct 02 12:33:43 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.625 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408423.6246827, 6c768447-ca45-42c7-aee4-09dfe7406a2d => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.626 2 DEBUG nova.virt.libvirt.driver [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6c768447-ca45-42c7-aee4-09dfe7406a2d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.629 2 INFO nova.virt.libvirt.driver [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully detached device vdb from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the live domain config.
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.865 2 DEBUG nova.objects.instance [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:43 compute-0 nova_compute[256940]: 2025-10-02 12:33:43.942 2 DEBUG oslo_concurrency.lockutils [None req-5f081d5d-a00e-4cab-8070-a24aeef4043f 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.047 2 DEBUG nova.network.neutron [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated VIF entry in instance network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.048 2 DEBUG nova.network.neutron [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.083 2 DEBUG oslo_concurrency.lockutils [req-9e0f0ba1-266b-4b42-a934-f932ebfc2d61 req-19f81d90-7bc2-4afc-8cc6-3b0aa6aa8abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:44.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:44 compute-0 ceph-mon[73668]: pgmap v1715: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 122 op/s
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.916 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-afeb2335-1df8-4781-afa0-29b24bf73da6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.916 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-afeb2335-1df8-4781-afa0-29b24bf73da6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.952 2 DEBUG nova.objects.instance [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.978 2 DEBUG nova.virt.libvirt.vif [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.979 2 DEBUG nova.network.os_vif_util [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.980 2 DEBUG nova.network.os_vif_util [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.983 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.985 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.988 2 DEBUG nova.virt.libvirt.driver [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Attempting to detach device tapafeb2335-1d from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.988 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:99:e5:68"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <target dev="tapafeb2335-1d"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]: </interface>
Oct 02 12:33:44 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.994 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:44 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.997 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface>not found in domain: <domain type='kvm' id='35'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <name>instance-0000004f</name>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <uuid>6c768447-ca45-42c7-aee4-09dfe7406a2d</uuid>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:35</nova:creationTime>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <nova:port uuid="afeb2335-1df8-4781-afa0-29b24bf73da6">
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.10.10.161" ipVersion="4"/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:44 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <system>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='serial'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='uuid'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </system>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <os>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </os>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <features>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </features>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:33:44 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk' index='2'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config' index='1'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:9c:f6:4a'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target dev='tap6a8e44c2-0e'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:99:e5:68'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target dev='tapafeb2335-1d'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='net1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       </target>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </console>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <video>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     </video>
Oct 02 12:33:44 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c870,c963</label>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c870,c963</imagelabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]: </domain>
Oct 02 12:33:45 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.998 2 INFO nova.virt.libvirt.driver [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully detached device tapafeb2335-1d from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the persistent domain config.
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.998 2 DEBUG nova.virt.libvirt.driver [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] (1/8): Attempting to detach device tapafeb2335-1d with device alias net1 from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:44.998 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:99:e5:68"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <target dev="tapafeb2335-1d"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]: </interface>
Oct 02 12:33:45 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:33:45 compute-0 kernel: tapafeb2335-1d (unregistering): left promiscuous mode
Oct 02 12:33:45 compute-0 NetworkManager[44981]: <info>  [1759408425.1132] device (tapafeb2335-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:45 compute-0 ovn_controller[148123]: 2025-10-02T12:33:45Z|00319|binding|INFO|Releasing lport afeb2335-1df8-4781-afa0-29b24bf73da6 from this chassis (sb_readonly=0)
Oct 02 12:33:45 compute-0 ovn_controller[148123]: 2025-10-02T12:33:45Z|00320|binding|INFO|Setting lport afeb2335-1df8-4781-afa0-29b24bf73da6 down in Southbound
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 ovn_controller[148123]: 2025-10-02T12:33:45Z|00321|binding|INFO|Removing iface tapafeb2335-1d ovn-installed in OVS
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.139 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:e5:68 10.10.10.161'], port_security=['fa:16:3e:99:e5:68 10.10.10.161'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.161/24', 'neutron:device_id': '6c768447-ca45-42c7-aee4-09dfe7406a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7846694bb70143aa984e235126fbe15c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a905958-1c9d-4660-8b81-aa37aa79df93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84e07d2-e3b8-4ca2-b17d-b1af49d38a84, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=afeb2335-1df8-4781-afa0-29b24bf73da6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.140 158104 INFO neutron.agent.ovn.metadata.agent [-] Port afeb2335-1df8-4781-afa0-29b24bf73da6 in datapath 181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 unbound from our chassis
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.141 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.143 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408425.1436775, 6c768447-ca45-42c7-aee4-09dfe7406a2d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[938ada04-f027-4683-8703-8a5001f44173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.143 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 namespace which is not needed anymore
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.147 2 DEBUG nova.virt.libvirt.driver [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Start waiting for the detach event from libvirt for device tapafeb2335-1d with device alias net1 for instance 6c768447-ca45-42c7-aee4-09dfe7406a2d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.148 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.161 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface>not found in domain: <domain type='kvm' id='35'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <name>instance-0000004f</name>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <uuid>6c768447-ca45-42c7-aee4-09dfe7406a2d</uuid>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:35</nova:creationTime>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:port uuid="afeb2335-1df8-4781-afa0-29b24bf73da6">
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.10.10.161" ipVersion="4"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:45 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <system>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='serial'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='uuid'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </system>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <os>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </os>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <features>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </features>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk' index='2'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config' index='1'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:9c:f6:4a'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target dev='tap6a8e44c2-0e'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       </target>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </console>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <video>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </video>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c870,c963</label>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c870,c963</imagelabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:45 compute-0 nova_compute[256940]: </domain>
Oct 02 12:33:45 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.161 2 INFO nova.virt.libvirt.driver [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully detached device tapafeb2335-1d from instance 6c768447-ca45-42c7-aee4-09dfe7406a2d from the live domain config.
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.162 2 DEBUG nova.virt.libvirt.vif [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.163 2 DEBUG nova.network.os_vif_util [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.164 2 DEBUG nova.network.os_vif_util [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.164 2 DEBUG os_vif [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafeb2335-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.181 2 INFO os_vif [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d')
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.182 2 DEBUG nova.virt.libvirt.guest [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:45</nova:creationTime>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:45 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:45 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:45 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:45 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:45 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:33:45 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [NOTICE]   (309656) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:45 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [NOTICE]   (309656) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:45 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [WARNING]  (309656) : Exiting Master process...
Oct 02 12:33:45 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [ALERT]    (309656) : Current worker (309673) exited with code 143 (Terminated)
Oct 02 12:33:45 compute-0 neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5[309644]: [WARNING]  (309656) : All workers exited. Exiting... (0)
Oct 02 12:33:45 compute-0 systemd[1]: libpod-433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc.scope: Deactivated successfully.
Oct 02 12:33:45 compute-0 podman[309811]: 2025-10-02 12:33:45.326471637 +0000 UTC m=+0.077647882 container died 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.360 2 DEBUG nova.compute.manager [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-unplugged-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.361 2 DEBUG oslo_concurrency.lockutils [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.361 2 DEBUG oslo_concurrency.lockutils [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.361 2 DEBUG oslo_concurrency.lockutils [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.361 2 DEBUG nova.compute.manager [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-unplugged-afeb2335-1df8-4781-afa0-29b24bf73da6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.361 2 WARNING nova.compute.manager [req-534a88eb-1ff8-4a95-923a-b0f0028d3501 req-6424d1b4-5c48-4f2f-a533-82c9d7da63ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-unplugged-afeb2335-1df8-4781-afa0-29b24bf73da6 for instance with vm_state active and task_state None.
Oct 02 12:33:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 340 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.0 MiB/s wr, 113 op/s
Oct 02 12:33:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9158389fc115f744f23a6a113f5dcc96235bf8181c5c9df869b627ca15885707-merged.mount: Deactivated successfully.
Oct 02 12:33:45 compute-0 podman[309811]: 2025-10-02 12:33:45.619829876 +0000 UTC m=+0.371006151 container cleanup 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:33:45 compute-0 systemd[1]: libpod-conmon-433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc.scope: Deactivated successfully.
Oct 02 12:33:45 compute-0 podman[309841]: 2025-10-02 12:33:45.71562679 +0000 UTC m=+0.055974838 container remove 433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.728 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3061d5fc-a6e6-485b-ae81-41a8766f42ad]: (4, ('Thu Oct  2 12:33:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 (433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc)\n433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc\nThu Oct  2 12:33:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 (433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc)\n433547affc2565bdc6535e6bfec7c3a8ca91f9892be20e715dbe028f8dd577fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.730 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[80060bca-c1cf-4288-a6e6-617b2161f79b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.731 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181ea8ee-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:45 compute-0 kernel: tap181ea8ee-20: left promiscuous mode
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.776 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb32348-3b07-4a76-9546-8be0413651a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 nova_compute[256940]: 2025-10-02 12:33:45.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.806 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3878e32f-38de-4734-9ad1-a09a943dd2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.808 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c027106e-a2d4-4762-9c26-ff34332d3704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.827 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f0143ec0-9209-4175-882a-8cbadb095ca4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631125, 'reachable_time': 20286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309856, 'error': None, 'target': 'ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.831 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d181ea8ee\x2d2600\x2d4fad\x2dbf21\x2da5ec5bb6d7d5.mount: Deactivated successfully.
Oct 02 12:33:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:45.831 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[011b95e6-6fa8-4110-954b-4fc0de886ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:46 compute-0 ovn_controller[148123]: 2025-10-02T12:33:46Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:b1:14 10.100.0.14
Oct 02 12:33:46 compute-0 ovn_controller[148123]: 2025-10-02T12:33:46Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:b1:14 10.100.0.14
Oct 02 12:33:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:46 compute-0 nova_compute[256940]: 2025-10-02 12:33:46.449 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:46 compute-0 nova_compute[256940]: 2025-10-02 12:33:46.449 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquired lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:46 compute-0 nova_compute[256940]: 2025-10-02 12:33:46.450 2 DEBUG nova.network.neutron [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:46.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:46 compute-0 ceph-mon[73668]: pgmap v1716: 305 pgs: 305 active+clean; 340 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.0 MiB/s wr, 113 op/s
Oct 02 12:33:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.763 2 DEBUG nova.compute.manager [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.763 2 DEBUG oslo_concurrency.lockutils [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.764 2 DEBUG oslo_concurrency.lockutils [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.765 2 DEBUG oslo_concurrency.lockutils [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.765 2 DEBUG nova.compute.manager [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.767 2 WARNING nova.compute.manager [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-plugged-afeb2335-1df8-4781-afa0-29b24bf73da6 for instance with vm_state active and task_state None.
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.768 2 DEBUG nova.compute.manager [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-deleted-afeb2335-1df8-4781-afa0-29b24bf73da6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.768 2 INFO nova.compute.manager [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Neutron deleted interface afeb2335-1df8-4781-afa0-29b24bf73da6; detaching it from the instance and deleting it from the info cache
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.770 2 DEBUG nova.network.neutron [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.807 2 DEBUG nova.objects.instance [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.842 2 DEBUG nova.objects.instance [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.866 2 DEBUG nova.virt.libvirt.vif [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.867 2 DEBUG nova.network.os_vif_util [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.867 2 DEBUG nova.network.os_vif_util [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.871 2 DEBUG nova.virt.libvirt.guest [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.876 2 DEBUG nova.virt.libvirt.guest [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface>not found in domain: <domain type='kvm' id='35'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <name>instance-0000004f</name>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <uuid>6c768447-ca45-42c7-aee4-09dfe7406a2d</uuid>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:45</nova:creationTime>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <system>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='serial'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='uuid'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </system>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <os>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </os>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <features>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </features>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk' index='2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config' index='1'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:9c:f6:4a'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='tap6a8e44c2-0e'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </target>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </console>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <video>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </video>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c870,c963</label>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c870,c963</imagelabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]: </domain>
Oct 02 12:33:47 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.877 2 DEBUG nova.virt.libvirt.guest [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.883 2 DEBUG nova.virt.libvirt.guest [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:e5:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapafeb2335-1d"/></interface>not found in domain: <domain type='kvm' id='35'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <name>instance-0000004f</name>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <uuid>6c768447-ca45-42c7-aee4-09dfe7406a2d</uuid>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:45</nova:creationTime>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <system>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='serial'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='uuid'>6c768447-ca45-42c7-aee4-09dfe7406a2d</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </system>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <os>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </os>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <features>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </features>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk' index='2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/6c768447-ca45-42c7-aee4-09dfe7406a2d_disk.config' index='1'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </source>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:9c:f6:4a'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target dev='tap6a8e44c2-0e'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       </target>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d/console.log' append='off'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </console>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </input>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <video>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </video>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c870,c963</label>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c870,c963</imagelabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:33:47 compute-0 nova_compute[256940]: </domain>
Oct 02 12:33:47 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.883 2 WARNING nova.virt.libvirt.driver [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Detaching interface fa:16:3e:99:e5:68 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapafeb2335-1d' not found.
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.884 2 DEBUG nova.virt.libvirt.vif [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.884 2 DEBUG nova.network.os_vif_util [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "afeb2335-1df8-4781-afa0-29b24bf73da6", "address": "fa:16:3e:99:e5:68", "network": {"id": "181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1474549431", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.161", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafeb2335-1d", "ovs_interfaceid": "afeb2335-1df8-4781-afa0-29b24bf73da6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.885 2 DEBUG nova.network.os_vif_util [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.886 2 DEBUG os_vif [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafeb2335-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.891 2 INFO os_vif [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:e5:68,bridge_name='br-int',has_traffic_filtering=True,id=afeb2335-1df8-4781-afa0-29b24bf73da6,network=Network(181ea8ee-2600-4fad-bf21-a5ec5bb6d7d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafeb2335-1d')
Oct 02 12:33:47 compute-0 nova_compute[256940]: 2025-10-02 12:33:47.892 2 DEBUG nova.virt.libvirt.guest [req-19657c40-b0ab-4d24-af97-1c69653ff85d req-4d3ce367-ae36-41bf-8b0c-0d680fee0d12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:name>tempest-device-tagging-server-944169650</nova:name>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:33:47</nova:creationTime>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:user uuid="1c8eed6cb806403ca545bb7b2820714e">tempest-TaggedAttachmentsTest-1251797518-project-member</nova:user>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:project uuid="7846694bb70143aa984e235126fbe15c">tempest-TaggedAttachmentsTest-1251797518</nova:project>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     <nova:port uuid="6a8e44c2-0e49-44ff-815d-00dfc9de3337">
Oct 02 12:33:47 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:33:47 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:33:47 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:33:47 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:33:47 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:33:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/84496681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:48 compute-0 nova_compute[256940]: 2025-10-02 12:33:48.160 2 INFO nova.network.neutron [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Port afeb2335-1df8-4781-afa0-29b24bf73da6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:33:48 compute-0 nova_compute[256940]: 2025-10-02 12:33:48.161 2 DEBUG nova.network.neutron [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [{"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:48 compute-0 nova_compute[256940]: 2025-10-02 12:33:48.186 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Releasing lock "refresh_cache-6c768447-ca45-42c7-aee4-09dfe7406a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:48 compute-0 nova_compute[256940]: 2025-10-02 12:33:48.214 2 DEBUG oslo_concurrency.lockutils [None req-ee3c9ae6-a04b-4da7-bdec-cc11236197ad 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "interface-6c768447-ca45-42c7-aee4-09dfe7406a2d-afeb2335-1df8-4781-afa0-29b24bf73da6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.282 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.282 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.283 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.283 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.283 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.285 2 INFO nova.compute.manager [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Terminating instance
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.286 2 DEBUG nova.compute.manager [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:33:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 12:33:49 compute-0 ceph-mon[73668]: pgmap v1717: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 12:33:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2807161124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2927055532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2501477229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-0 kernel: tap6a8e44c2-0e (unregistering): left promiscuous mode
Oct 02 12:33:49 compute-0 NetworkManager[44981]: <info>  [1759408429.4726] device (tap6a8e44c2-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:49 compute-0 ovn_controller[148123]: 2025-10-02T12:33:49Z|00322|binding|INFO|Releasing lport 6a8e44c2-0e49-44ff-815d-00dfc9de3337 from this chassis (sb_readonly=0)
Oct 02 12:33:49 compute-0 ovn_controller[148123]: 2025-10-02T12:33:49Z|00323|binding|INFO|Setting lport 6a8e44c2-0e49-44ff-815d-00dfc9de3337 down in Southbound
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 ovn_controller[148123]: 2025-10-02T12:33:49Z|00324|binding|INFO|Removing iface tap6a8e44c2-0e ovn-installed in OVS
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.495 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:f6:4a 10.100.0.9'], port_security=['fa:16:3e:9c:f6:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6c768447-ca45-42c7-aee4-09dfe7406a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7846694bb70143aa984e235126fbe15c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85d62b65-e393-4974-9b22-1cd71d8260a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa8948c0-154b-4a53-b454-06726e639845, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6a8e44c2-0e49-44ff-815d-00dfc9de3337) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.496 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8e44c2-0e49-44ff-815d-00dfc9de3337 in datapath 2c5e6ab4-0e76-42fe-b8a1-d7157f964326 unbound from our chassis
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.499 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2c5e6ab4-0e76-42fe-b8a1-d7157f964326, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.501 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc94b4c-61b5-4ae6-8fe4-7ecb8e273432]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.502 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326 namespace which is not needed anymore
Oct 02 12:33:49 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 02 12:33:49 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004f.scope: Consumed 16.606s CPU time.
Oct 02 12:33:49 compute-0 systemd-machined[210927]: Machine qemu-35-instance-0000004f terminated.
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [NOTICE]   (307826) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [NOTICE]   (307826) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [WARNING]  (307826) : Exiting Master process...
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [WARNING]  (307826) : Exiting Master process...
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [ALERT]    (307826) : Current worker (307829) exited with code 143 (Terminated)
Oct 02 12:33:49 compute-0 neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326[307816]: [WARNING]  (307826) : All workers exited. Exiting... (0)
Oct 02 12:33:49 compute-0 systemd[1]: libpod-d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db.scope: Deactivated successfully.
Oct 02 12:33:49 compute-0 podman[309882]: 2025-10-02 12:33:49.646773106 +0000 UTC m=+0.052011465 container died d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-42b4bd6a298d318d1b5625694d3c014e07eac96f5256c3f8bcb20182f9829165-merged.mount: Deactivated successfully.
Oct 02 12:33:49 compute-0 podman[309882]: 2025-10-02 12:33:49.687036994 +0000 UTC m=+0.092275373 container cleanup d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:33:49 compute-0 systemd[1]: libpod-conmon-d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db.scope: Deactivated successfully.
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.728 2 INFO nova.virt.libvirt.driver [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Instance destroyed successfully.
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.728 2 DEBUG nova.objects.instance [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lazy-loading 'resources' on Instance uuid 6c768447-ca45-42c7-aee4-09dfe7406a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:49 compute-0 podman[309914]: 2025-10-02 12:33:49.77214885 +0000 UTC m=+0.048170425 container remove d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.778 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[41f0d485-ddf8-482e-bfd3-0cdd5ff2084f]: (4, ('Thu Oct  2 12:33:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326 (d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db)\nd67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db\nThu Oct  2 12:33:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326 (d67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db)\nd67d563916f75bdd592010137edf4945ccf78837d8e839e03ca946feb7f019db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.781 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32506ac8-c724-461e-9a15-2e33d61fabbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.782 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c5e6ab4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 kernel: tap2c5e6ab4-00: left promiscuous mode
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.786 2 DEBUG nova.virt.libvirt.vif [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-944169650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-944169650',id=79,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1OxY2DVEmsYsEDm0Sc5ed587tPIISXcjMxSRr0KEBpstr1yOtJ/VDZcedrDmY3WZYAWWUtd20hrc01JjuxPZyBb3Xm2I2Icjd1tVEa05Vj7ZJoHpBHLMVOaOV09oxQVQ==',key_name='tempest-keypair-1402892331',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7846694bb70143aa984e235126fbe15c',ramdisk_id='',reservation_id='r-0xf4fa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb
',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1251797518',owner_user_name='tempest-TaggedAttachmentsTest-1251797518-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c8eed6cb806403ca545bb7b2820714e',uuid=6c768447-ca45-42c7-aee4-09dfe7406a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.787 2 DEBUG nova.network.os_vif_util [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converting VIF {"id": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "address": "fa:16:3e:9c:f6:4a", "network": {"id": "2c5e6ab4-0e76-42fe-b8a1-d7157f964326", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-408113366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7846694bb70143aa984e235126fbe15c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8e44c2-0e", "ovs_interfaceid": "6a8e44c2-0e49-44ff-815d-00dfc9de3337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.787 2 DEBUG nova.network.os_vif_util [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.788 2 DEBUG os_vif [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8e44c2-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:49 compute-0 nova_compute[256940]: 2025-10-02 12:33:49.808 2 INFO os_vif [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:f6:4a,bridge_name='br-int',has_traffic_filtering=True,id=6a8e44c2-0e49-44ff-815d-00dfc9de3337,network=Network(2c5e6ab4-0e76-42fe-b8a1-d7157f964326),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8e44c2-0e')
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.809 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[241c18e9-5461-44d4-9711-16860e3c8fa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.843 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8a02a93b-bd95-4e60-bc78-4fc1ee5b1dd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.844 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2427dc9f-1184-4f60-9b2f-bba213963ee1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.870 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[577ec6dc-a264-46fe-b479-332deec85aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626514, 'reachable_time': 23586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309955, 'error': None, 'target': 'ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d2c5e6ab4\x2d0e76\x2d42fe\x2db8a1\x2dd7157f964326.mount: Deactivated successfully.
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.873 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2c5e6ab4-0e76-42fe-b8a1-d7157f964326 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:33:49.873 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[563a3dbb-ec37-4692-ae36-1fc457dd4c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.063 2 DEBUG nova.compute.manager [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-unplugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.063 2 DEBUG oslo_concurrency.lockutils [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.064 2 DEBUG oslo_concurrency.lockutils [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.064 2 DEBUG oslo_concurrency.lockutils [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.064 2 DEBUG nova.compute.manager [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-unplugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.064 2 DEBUG nova.compute.manager [req-2a6a8a18-5aa2-420f-b9f6-f76c2cea239f req-65146e7c-4a0b-4148-ac72-f059ecfd14f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-unplugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:33:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:50 compute-0 ceph-mon[73668]: pgmap v1718: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 12:33:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:50 compute-0 nova_compute[256940]: 2025-10-02 12:33:50.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:51 compute-0 nova_compute[256940]: 2025-10-02 12:33:51.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 361 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Oct 02 12:33:51 compute-0 nova_compute[256940]: 2025-10-02 12:33:51.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.160 2 DEBUG nova.compute.manager [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.161 2 DEBUG oslo_concurrency.lockutils [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.162 2 DEBUG oslo_concurrency.lockutils [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.162 2 DEBUG oslo_concurrency.lockutils [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.163 2 DEBUG nova.compute.manager [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] No waiting events found dispatching network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.164 2 WARNING nova.compute.manager [req-d7b1ff8e-8ede-4316-b97f-ee7ffa2a7618 req-2c563306-bc9d-4a01-bbbc-78b9c07b1c29 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received unexpected event network-vif-plugged-6a8e44c2-0e49-44ff-815d-00dfc9de3337 for instance with vm_state active and task_state deleting.
Oct 02 12:33:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:52.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.510 2 INFO nova.virt.libvirt.driver [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Deleting instance files /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d_del
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.511 2 INFO nova.virt.libvirt.driver [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Deletion of /var/lib/nova/instances/6c768447-ca45-42c7-aee4-09dfe7406a2d_del complete
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.591 2 INFO nova.compute.manager [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Took 3.30 seconds to destroy the instance on the hypervisor.
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.592 2 DEBUG oslo.service.loopingcall [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.593 2 DEBUG nova.compute.manager [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:33:52 compute-0 nova_compute[256940]: 2025-10-02 12:33:52.593 2 DEBUG nova.network.neutron [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:33:52 compute-0 ceph-mon[73668]: pgmap v1719: 305 pgs: 305 active+clean; 361 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Oct 02 12:33:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.209 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.259 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.260 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.260 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.261 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.261 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 343 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 12:33:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99379198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.814 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.918 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.918 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:53 compute-0 nova_compute[256940]: 2025-10-02 12:33:53.960 2 DEBUG nova.network.neutron [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.011 2 INFO nova.compute.manager [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Took 1.42 seconds to deallocate network for instance.
Oct 02 12:33:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/99379198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.090 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.090 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.121 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.122 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4409MB free_disk=20.805957794189453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.122 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.216 2 DEBUG oslo_concurrency.processutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.301 2 DEBUG nova.compute.manager [req-dfcebbb7-dd7d-4d1e-a8ce-b9423d6e0085 req-26a9e4c9-5d99-48f0-a7b6-15836cb66c39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Received event network-vif-deleted-6a8e44c2-0e49-44ff-815d-00dfc9de3337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996107139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.655 2 DEBUG oslo_concurrency.processutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.662 2 DEBUG nova.compute.provider_tree [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.682 2 DEBUG nova.scheduler.client.report [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.712 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.715 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.744 2 INFO nova.scheduler.client.report [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Deleted allocations for instance 6c768447-ca45-42c7-aee4-09dfe7406a2d
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.812 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 588c95ff-a590-46c2-a041-6ee318695ef1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.813 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.813 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.860 2 DEBUG oslo_concurrency.lockutils [None req-dd5fbb32-f07f-40c2-9461-af4f6f8d3922 1c8eed6cb806403ca545bb7b2820714e 7846694bb70143aa984e235126fbe15c - - default default] Lock "6c768447-ca45-42c7-aee4-09dfe7406a2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:54 compute-0 nova_compute[256940]: 2025-10-02 12:33:54.911 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:55 compute-0 ceph-mon[73668]: pgmap v1720: 305 pgs: 305 active+clean; 343 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 12:33:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2996107139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696718552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.358 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 294 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.380 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.398 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.429 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.430 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:55 compute-0 nova_compute[256940]: 2025-10-02 12:33:55.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:56.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1696718552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:56 compute-0 nova_compute[256940]: 2025-10-02 12:33:56.430 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:56 compute-0 nova_compute[256940]: 2025-10-02 12:33:56.431 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:56.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:56 compute-0 sudo[310031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:56 compute-0 sudo[310031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:56 compute-0 sudo[310031]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:56 compute-0 podman[310055]: 2025-10-02 12:33:56.878688924 +0000 UTC m=+0.075040955 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:33:56 compute-0 sudo[310072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:56 compute-0 podman[310056]: 2025-10-02 12:33:56.888396477 +0000 UTC m=+0.080510618 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:33:56 compute-0 sudo[310072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:56 compute-0 sudo[310072]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.119 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.120 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.137 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.224 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.224 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.233 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.233 2 INFO nova.compute.claims [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.374 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 1.2 MiB/s wr, 94 op/s
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.408 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.409 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.409 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.409 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:57 compute-0 ceph-mon[73668]: pgmap v1721: 305 pgs: 305 active+clean; 294 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 02 12:33:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500834686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.828 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.835 2 DEBUG nova.compute.provider_tree [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.867 2 DEBUG nova.scheduler.client.report [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.893 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.894 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.945 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.945 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.963 2 INFO nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:33:57 compute-0 nova_compute[256940]: 2025-10-02 12:33:57.979 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.084 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.086 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.087 2 INFO nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Creating image(s)
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.125 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.162 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.198 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.203 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.247 2 DEBUG nova.policy [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9626ead6e742493492143362866a3aa3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c363199c26c141ca8fac3c71b399ad3c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.282 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.283 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.284 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.284 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:33:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:58.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.314 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.318 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9a901125-894a-48b8-8b98-3f5265a97052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:33:58 compute-0 ovn_controller[148123]: 2025-10-02T12:33:58Z|00325|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:33:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:58 compute-0 nova_compute[256940]: 2025-10-02 12:33:58.953 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Successfully created port: 8dcf81ee-c284-4142-99d5-361ce7fc2b58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:33:58 compute-0 ceph-mon[73668]: pgmap v1722: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 1.2 MiB/s wr, 94 op/s
Oct 02 12:33:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2500834686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.283 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.300 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.301 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.302 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 55 KiB/s wr, 52 op/s
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.424 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9a901125-894a-48b8-8b98-3f5265a97052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.535 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] resizing rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.669 2 DEBUG nova.compute.manager [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.669 2 DEBUG nova.compute.manager [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing instance network info cache due to event network-changed-6165c9e0-ee4f-4bc6-add1-c234918a73d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.670 2 DEBUG oslo_concurrency.lockutils [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.670 2 DEBUG oslo_concurrency.lockutils [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.670 2 DEBUG nova.network.neutron [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.678 2 DEBUG nova.objects.instance [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lazy-loading 'migration_context' on Instance uuid 9a901125-894a-48b8-8b98-3f5265a97052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.700 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.701 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Ensure instance console log exists: /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.701 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.702 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.702 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.795 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Successfully updated port: 8dcf81ee-c284-4142-99d5-361ce7fc2b58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.811 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.812 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquired lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.812 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:59 compute-0 nova_compute[256940]: 2025-10-02 12:33:59.982 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:00.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:00 compute-0 nova_compute[256940]: 2025-10-02 12:34:00.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:01 compute-0 ceph-mon[73668]: pgmap v1723: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 55 KiB/s wr, 52 op/s
Oct 02 12:34:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1544477566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 315 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 79 op/s
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.903 2 DEBUG nova.network.neutron [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Updating instance_info_cache with network_info: [{"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.934 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Releasing lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.935 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Instance network_info: |[{"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.940 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Start _get_guest_xml network_info=[{"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.949 2 WARNING nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.955 2 DEBUG nova.virt.libvirt.host [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.956 2 DEBUG nova.virt.libvirt.host [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.964 2 DEBUG nova.virt.libvirt.host [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.965 2 DEBUG nova.virt.libvirt.host [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.967 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.968 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.969 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.969 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.970 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.970 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.970 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.971 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.971 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.972 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.972 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.973 2 DEBUG nova.virt.hardware [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:01 compute-0 nova_compute[256940]: 2025-10-02 12:34:01.978 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.028 2 DEBUG nova.compute.manager [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-changed-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.030 2 DEBUG nova.compute.manager [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Refreshing instance network info cache due to event network-changed-8dcf81ee-c284-4142-99d5-361ce7fc2b58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.030 2 DEBUG oslo_concurrency.lockutils [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.031 2 DEBUG oslo_concurrency.lockutils [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.031 2 DEBUG nova.network.neutron [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Refreshing network info cache for port 8dcf81ee-c284-4142-99d5-361ce7fc2b58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.130 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.131 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.132 2 DEBUG nova.objects.instance [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.008000208s ======
Oct 02 12:34:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:02.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000208s
Oct 02 12:34:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565392405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.495 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.542 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.548 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.583 2 DEBUG nova.network.neutron [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated VIF entry in instance network info cache for port 6165c9e0-ee4f-4bc6-add1-c234918a73d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.585 2 DEBUG nova.network.neutron [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:02 compute-0 nova_compute[256940]: 2025-10-02 12:34:02.614 2 DEBUG oslo_concurrency.lockutils [req-e162f268-5394-42cc-9363-47f86e849e71 req-584669ed-b1be-4eaf-b92e-02ca2bb63a72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227295627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.006 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.007 2 DEBUG nova.virt.libvirt.vif [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1709983847',display_name='tempest-InstanceActionsNegativeTestJSON-server-1709983847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1709983847',id=83,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c363199c26c141ca8fac3c71b399ad3c',ramdisk_id='',reservation_id='r-hf3y3gnk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1324680419',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1324680419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:58Z,user_data=None,user_id='9626ead6e742493492143362866a3aa3',uuid=9a901125-894a-48b8-8b98-3f5265a97052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.008 2 DEBUG nova.network.os_vif_util [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converting VIF {"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.010 2 DEBUG nova.network.os_vif_util [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.017 2 DEBUG nova.objects.instance [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a901125-894a-48b8-8b98-3f5265a97052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.042 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <uuid>9a901125-894a-48b8-8b98-3f5265a97052</uuid>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <name>instance-00000053</name>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1709983847</nova:name>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:34:01</nova:creationTime>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:user uuid="9626ead6e742493492143362866a3aa3">tempest-InstanceActionsNegativeTestJSON-1324680419-project-member</nova:user>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:project uuid="c363199c26c141ca8fac3c71b399ad3c">tempest-InstanceActionsNegativeTestJSON-1324680419</nova:project>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <nova:port uuid="8dcf81ee-c284-4142-99d5-361ce7fc2b58">
Oct 02 12:34:03 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <system>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="serial">9a901125-894a-48b8-8b98-3f5265a97052</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="uuid">9a901125-894a-48b8-8b98-3f5265a97052</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </system>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <os>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </os>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <features>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </features>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/9a901125-894a-48b8-8b98-3f5265a97052_disk">
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/9a901125-894a-48b8-8b98-3f5265a97052_disk.config">
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:aa:3d:e9"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <target dev="tap8dcf81ee-c2"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/console.log" append="off"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <video>
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </video>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:34:03 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:34:03 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:34:03 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:34:03 compute-0 nova_compute[256940]: </domain>
Oct 02 12:34:03 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.044 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Preparing to wait for external event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.046 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.046 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.047 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.048 2 DEBUG nova.virt.libvirt.vif [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1709983847',display_name='tempest-InstanceActionsNegativeTestJSON-server-1709983847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1709983847',id=83,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c363199c26c141ca8fac3c71b399ad3c',ramdisk_id='',reservation_id='r-hf3y3gnk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1324680419',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1324680419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:58Z,user_data=None,user_id='9626ead6e742493492143362866a3aa3',uuid=9a901125-894a-48b8-8b98-3f5265a97052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.049 2 DEBUG nova.network.os_vif_util [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converting VIF {"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.050 2 DEBUG nova.network.os_vif_util [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.051 2 DEBUG os_vif [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8dcf81ee-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8dcf81ee-c2, col_values=(('external_ids', {'iface-id': '8dcf81ee-c284-4142-99d5-361ce7fc2b58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:3d:e9', 'vm-uuid': '9a901125-894a-48b8-8b98-3f5265a97052'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-0 NetworkManager[44981]: <info>  [1759408443.0649] manager: (tap8dcf81ee-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.073 2 INFO os_vif [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2')
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.143 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.144 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.144 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] No VIF found with MAC fa:16:3e:aa:3d:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.145 2 INFO nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Using config drive
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.183 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:03 compute-0 ceph-mon[73668]: pgmap v1724: 305 pgs: 305 active+clean; 315 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 79 op/s
Oct 02 12:34:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/565392405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2227295627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.191 2 DEBUG nova.objects.instance [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.211 2 DEBUG nova.network.neutron [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 345 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.670 2 INFO nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Creating config drive at /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.682 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p24xw_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.727 2 DEBUG nova.network.neutron [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Updated VIF entry in instance network info cache for port 8dcf81ee-c284-4142-99d5-361ce7fc2b58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.729 2 DEBUG nova.network.neutron [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Updating instance_info_cache with network_info: [{"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.736 2 DEBUG nova.policy [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.749 2 DEBUG oslo_concurrency.lockutils [req-a542f403-f63a-40ac-b84d-4b7de2b7b11a req-f6962c75-03a3-4a78-864e-08dab13d31e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9a901125-894a-48b8-8b98-3f5265a97052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.840 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p24xw_r" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.884 2 DEBUG nova.storage.rbd_utils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] rbd image 9a901125-894a-48b8-8b98-3f5265a97052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:03 compute-0 nova_compute[256940]: 2025-10-02 12:34:03.890 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config 9a901125-894a-48b8-8b98-3f5265a97052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.346 2 DEBUG oslo_concurrency.processutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config 9a901125-894a-48b8-8b98-3f5265a97052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.348 2 INFO nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Deleting local config drive /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052/disk.config because it was imported into RBD.
Oct 02 12:34:04 compute-0 kernel: tap8dcf81ee-c2: entered promiscuous mode
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.4368] manager: (tap8dcf81ee-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Oct 02 12:34:04 compute-0 ovn_controller[148123]: 2025-10-02T12:34:04Z|00326|binding|INFO|Claiming lport 8dcf81ee-c284-4142-99d5-361ce7fc2b58 for this chassis.
Oct 02 12:34:04 compute-0 ovn_controller[148123]: 2025-10-02T12:34:04Z|00327|binding|INFO|8dcf81ee-c284-4142-99d5-361ce7fc2b58: Claiming fa:16:3e:aa:3d:e9 10.100.0.9
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.444 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:3d:e9 10.100.0.9'], port_security=['fa:16:3e:aa:3d:e9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9a901125-894a-48b8-8b98-3f5265a97052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-734275fe-fe61-4461-9851-66707dbd8296', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c363199c26c141ca8fac3c71b399ad3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85224cd4-7b89-466b-bfbd-734cb044ae5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f20fe1d-4849-47c7-a264-624ad3bb9a62, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8dcf81ee-c284-4142-99d5-361ce7fc2b58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.447 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8dcf81ee-c284-4142-99d5-361ce7fc2b58 in datapath 734275fe-fe61-4461-9851-66707dbd8296 bound to our chassis
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.451 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 734275fe-fe61-4461-9851-66707dbd8296
Oct 02 12:34:04 compute-0 ovn_controller[148123]: 2025-10-02T12:34:04Z|00328|binding|INFO|Setting lport 8dcf81ee-c284-4142-99d5-361ce7fc2b58 ovn-installed in OVS
Oct 02 12:34:04 compute-0 ovn_controller[148123]: 2025-10-02T12:34:04Z|00329|binding|INFO|Setting lport 8dcf81ee-c284-4142-99d5-361ce7fc2b58 up in Southbound
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.473 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[68a94aea-862d-4d3d-93bc-a7b82337a6ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.475 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap734275fe-f1 in ovnmeta-734275fe-fe61-4461-9851-66707dbd8296 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.480 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap734275fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.481 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cc375658-f8cb-431d-9ac5-3745857d3f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.482 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bc802f-2b66-41bf-a5af-e12bccc42afb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 sudo[310435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:04 compute-0 sudo[310435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.497 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd14470-f3f2-42b8-a8fb-c8e6de15d497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 sudo[310435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-0 systemd-machined[210927]: New machine qemu-37-instance-00000053.
Oct 02 12:34:04 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000053.
Oct 02 12:34:04 compute-0 systemd-udevd[310473]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.526 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7ae905-ea20-4e66-9707-4bd1f5f16c0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.5369] device (tap8dcf81ee-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.5378] device (tap8dcf81ee-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.566 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[66a70e17-af77-4e46-8c7b-2a623c9f1a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 systemd-udevd[310483]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.5769] manager: (tap734275fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.575 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6e58cb-0c5f-4307-af20-728e2084209d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 sudo[310474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:04 compute-0 sudo[310474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-0 sudo[310474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.617 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a93d7e-5d2f-4f69-8005-1f7b37ab72ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.623 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fc16a619-2478-40b0-8e49-3fd6b9aaac0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.6588] device (tap734275fe-f0): carrier: link connected
Oct 02 12:34:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:04 compute-0 sudo[310527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.673 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[51faf6b6-12ef-4060-9fc0-8052ff06bf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 sudo[310527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-0 sudo[310527]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.700 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[43e9701d-e19a-4598-a3f5-ea20cde3702c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap734275fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:d9:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634014, 'reachable_time': 18261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310569, 'error': None, 'target': 'ovnmeta-734275fe-fe61-4461-9851-66707dbd8296', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.723 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2c1203-06d6-4c21-8fbd-38e4d18958fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:d97a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634014, 'tstamp': 634014}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310585, 'error': None, 'target': 'ovnmeta-734275fe-fe61-4461-9851-66707dbd8296', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.723 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408429.7225022, 6c768447-ca45-42c7-aee4-09dfe7406a2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.724 2 INFO nova.compute.manager [-] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] VM Stopped (Lifecycle Event)
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.746 2 DEBUG nova.compute.manager [None req-8a26cff9-c4d6-4a55-8df1-17565272008b - - - - - -] [instance: 6c768447-ca45-42c7-aee4-09dfe7406a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.749 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e81b4813-5ffa-4359-8fc0-94f36faf8286]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap734275fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:d9:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634014, 'reachable_time': 18261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310611, 'error': None, 'target': 'ovnmeta-734275fe-fe61-4461-9851-66707dbd8296', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 sudo[310571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:34:04 compute-0 sudo[310571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.794 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfc4e41-3357-472c-b197-4c2cb1ef49fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.860 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[03bf5430-c5a1-4f15-8836-d3a04f33e3b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.862 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap734275fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.862 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.863 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap734275fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:04 compute-0 kernel: tap734275fe-f0: entered promiscuous mode
Oct 02 12:34:04 compute-0 NetworkManager[44981]: <info>  [1759408444.8660] manager: (tap734275fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.869 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap734275fe-f0, col_values=(('external_ids', {'iface-id': '4c89110c-0ec2-4c41-934e-65fbcc5f8ba0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:04 compute-0 ovn_controller[148123]: 2025-10-02T12:34:04Z|00330|binding|INFO|Releasing lport 4c89110c-0ec2-4c41-934e-65fbcc5f8ba0 from this chassis (sb_readonly=0)
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.873 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/734275fe-fe61-4461-9851-66707dbd8296.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/734275fe-fe61-4461-9851-66707dbd8296.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4266a800-49b6-459b-8f30-47538bc304ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.880 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-734275fe-fe61-4461-9851-66707dbd8296
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/734275fe-fe61-4461-9851-66707dbd8296.pid.haproxy
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 734275fe-fe61-4461-9851-66707dbd8296
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:34:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:04.881 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-734275fe-fe61-4461-9851-66707dbd8296', 'env', 'PROCESS_TAG=haproxy-734275fe-fe61-4461-9851-66707dbd8296', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/734275fe-fe61-4461-9851-66707dbd8296.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.944 2 DEBUG nova.compute.manager [req-f80d6d27-fcdf-495b-8b99-986de3dc2331 req-f6331b9f-aa49-498a-82b7-9242e4b4d19e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.944 2 DEBUG oslo_concurrency.lockutils [req-f80d6d27-fcdf-495b-8b99-986de3dc2331 req-f6331b9f-aa49-498a-82b7-9242e4b4d19e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.945 2 DEBUG oslo_concurrency.lockutils [req-f80d6d27-fcdf-495b-8b99-986de3dc2331 req-f6331b9f-aa49-498a-82b7-9242e4b4d19e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.945 2 DEBUG oslo_concurrency.lockutils [req-f80d6d27-fcdf-495b-8b99-986de3dc2331 req-f6331b9f-aa49-498a-82b7-9242e4b4d19e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:04 compute-0 nova_compute[256940]: 2025-10-02 12:34:04.945 2 DEBUG nova.compute.manager [req-f80d6d27-fcdf-495b-8b99-986de3dc2331 req-f6331b9f-aa49-498a-82b7-9242e4b4d19e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Processing event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:05.068 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:05 compute-0 sudo[310571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:05 compute-0 podman[310678]: 2025-10-02 12:34:05.322075798 +0000 UTC m=+0.062096138 container create 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:34:05 compute-0 ceph-mon[73668]: pgmap v1725: 305 pgs: 305 active+clean; 345 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 02 12:34:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.347 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.348 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408445.3459585, 9a901125-894a-48b8-8b98-3f5265a97052 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.348 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] VM Started (Lifecycle Event)
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.352 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.358 2 INFO nova.virt.libvirt.driver [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Instance spawned successfully.
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.359 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:05 compute-0 systemd[1]: Started libpod-conmon-7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964.scope.
Oct 02 12:34:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 349 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.5 MiB/s wr, 87 op/s
Oct 02 12:34:05 compute-0 podman[310678]: 2025-10-02 12:34:05.289820108 +0000 UTC m=+0.029840468 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.387 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.397 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.397 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.398 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.399 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.399 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.400 2 DEBUG nova.virt.libvirt.driver [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.404 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ac4cb314a75fb322d7508b44db00334dad1e026dace17d7aa5f1821017a984/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:05 compute-0 podman[310678]: 2025-10-02 12:34:05.435705586 +0000 UTC m=+0.175725936 container init 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:05 compute-0 podman[310678]: 2025-10-02 12:34:05.441426345 +0000 UTC m=+0.181446685 container start 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.442 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.443 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408445.3461342, 9a901125-894a-48b8-8b98-3f5265a97052 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.443 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] VM Paused (Lifecycle Event)
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.464 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.469 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408445.3519967, 9a901125-894a-48b8-8b98-3f5265a97052 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.469 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] VM Resumed (Lifecycle Event)
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.472 2 INFO nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Took 7.39 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.473 2 DEBUG nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:05 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [NOTICE]   (310705) : New worker (310707) forked
Oct 02 12:34:05 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [NOTICE]   (310705) : Loading success.
Oct 02 12:34:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.485 2 DEBUG nova.network.neutron [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Successfully updated port: 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.510 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.514 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.524 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.525 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.525 2 DEBUG nova.network.neutron [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:05.542 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.554 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.568 2 INFO nova.compute.manager [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Took 8.37 seconds to build instance.
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.595 2 DEBUG oslo_concurrency.lockutils [None req-115397a9-e4d3-4bf6-98db-1a4c8dbfc70f 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.630 2 DEBUG nova.compute.manager [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-changed-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.630 2 DEBUG nova.compute.manager [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing instance network info cache due to event network-changed-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.630 2 DEBUG oslo_concurrency.lockutils [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.768 2 WARNING nova.network.neutron [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it
Oct 02 12:34:05 compute-0 nova_compute[256940]: 2025-10-02 12:34:05.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:34:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:34:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:06.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.062 2 DEBUG nova.compute.manager [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.063 2 DEBUG oslo_concurrency.lockutils [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.063 2 DEBUG oslo_concurrency.lockutils [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.064 2 DEBUG oslo_concurrency.lockutils [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.064 2 DEBUG nova.compute.manager [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] No waiting events found dispatching network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.064 2 WARNING nova.compute.manager [req-25dbb19a-398d-4409-921b-2bb3764c8866 req-0497607b-16ad-47c7-89ea-f48b737c2fdf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received unexpected event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 for instance with vm_state active and task_state None.
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:34:07 compute-0 ovn_controller[148123]: 2025-10-02T12:34:07Z|00331|binding|INFO|Releasing lport 4c89110c-0ec2-4c41-934e-65fbcc5f8ba0 from this chassis (sb_readonly=0)
Oct 02 12:34:07 compute-0 ovn_controller[148123]: 2025-10-02T12:34:07Z|00332|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.253 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.254 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.254 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.254 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.255 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.256 2 INFO nova.compute.manager [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Terminating instance
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.257 2 DEBUG nova.compute.manager [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:07 compute-0 kernel: tap8dcf81ee-c2 (unregistering): left promiscuous mode
Oct 02 12:34:07 compute-0 NetworkManager[44981]: <info>  [1759408447.3099] device (tap8dcf81ee-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e6c8365e-2132-4161-b93d-aebc0a23e563 does not exist
Oct 02 12:34:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e8f5eb2b-4360-47fe-95f6-27675e01136a does not exist
Oct 02 12:34:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f3755c11-7d03-4ebe-9ca5-e24279bf43ff does not exist
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 ovn_controller[148123]: 2025-10-02T12:34:07Z|00333|binding|INFO|Releasing lport 8dcf81ee-c284-4142-99d5-361ce7fc2b58 from this chassis (sb_readonly=0)
Oct 02 12:34:07 compute-0 ovn_controller[148123]: 2025-10-02T12:34:07Z|00334|binding|INFO|Setting lport 8dcf81ee-c284-4142-99d5-361ce7fc2b58 down in Southbound
Oct 02 12:34:07 compute-0 ovn_controller[148123]: 2025-10-02T12:34:07Z|00335|binding|INFO|Removing iface tap8dcf81ee-c2 ovn-installed in OVS
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.333 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:3d:e9 10.100.0.9'], port_security=['fa:16:3e:aa:3d:e9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9a901125-894a-48b8-8b98-3f5265a97052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-734275fe-fe61-4461-9851-66707dbd8296', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c363199c26c141ca8fac3c71b399ad3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85224cd4-7b89-466b-bfbd-734cb044ae5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f20fe1d-4849-47c7-a264-624ad3bb9a62, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8dcf81ee-c284-4142-99d5-361ce7fc2b58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.335 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8dcf81ee-c284-4142-99d5-361ce7fc2b58 in datapath 734275fe-fe61-4461-9851-66707dbd8296 unbound from our chassis
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.337 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 734275fe-fe61-4461-9851-66707dbd8296, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.338 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c072a214-e3fb-4e3c-b932-461a817423da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.339 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-734275fe-fe61-4461-9851-66707dbd8296 namespace which is not needed anymore
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:34:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:34:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:07 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 02 12:34:07 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000053.scope: Consumed 2.732s CPU time.
Oct 02 12:34:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Oct 02 12:34:07 compute-0 systemd-machined[210927]: Machine qemu-37-instance-00000053 terminated.
Oct 02 12:34:07 compute-0 sudo[310723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:07 compute-0 sudo[310723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 sudo[310723]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:07 compute-0 ceph-mon[73668]: pgmap v1726: 305 pgs: 305 active+clean; 349 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.5 MiB/s wr, 87 op/s
Oct 02 12:34:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/364482634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:34:07 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [NOTICE]   (310705) : haproxy version is 2.8.14-c23fe91
Oct 02 12:34:07 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [NOTICE]   (310705) : path to executable is /usr/sbin/haproxy
Oct 02 12:34:07 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [WARNING]  (310705) : Exiting Master process...
Oct 02 12:34:07 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [ALERT]    (310705) : Current worker (310707) exited with code 143 (Terminated)
Oct 02 12:34:07 compute-0 neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296[310701]: [WARNING]  (310705) : All workers exited. Exiting... (0)
Oct 02 12:34:07 compute-0 systemd[1]: libpod-7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964.scope: Deactivated successfully.
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.502 2 INFO nova.virt.libvirt.driver [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Instance destroyed successfully.
Oct 02 12:34:07 compute-0 sudo[310764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.504 2 DEBUG nova.objects.instance [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lazy-loading 'resources' on Instance uuid 9a901125-894a-48b8-8b98-3f5265a97052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:07 compute-0 podman[310763]: 2025-10-02 12:34:07.507635973 +0000 UTC m=+0.057435926 container died 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:07 compute-0 sudo[310764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 sudo[310764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.525 2 DEBUG nova.virt.libvirt.vif [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1709983847',display_name='tempest-InstanceActionsNegativeTestJSON-server-1709983847',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1709983847',id=83,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c363199c26c141ca8fac3c71b399ad3c',ramdisk_id='',reservation_id='r-hf3y3gnk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1324680419',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1324680419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:05Z,user_data=None,user_id='9626ead6e742493492143362866a3aa3',uuid=9a901125-894a-48b8-8b98-3f5265a97052,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.525 2 DEBUG nova.network.os_vif_util [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converting VIF {"id": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "address": "fa:16:3e:aa:3d:e9", "network": {"id": "734275fe-fe61-4461-9851-66707dbd8296", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1408980355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363199c26c141ca8fac3c71b399ad3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dcf81ee-c2", "ovs_interfaceid": "8dcf81ee-c284-4142-99d5-361ce7fc2b58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.526 2 DEBUG nova.network.os_vif_util [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.527 2 DEBUG os_vif [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8dcf81ee-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.539 2 INFO os_vif [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:3d:e9,bridge_name='br-int',has_traffic_filtering=True,id=8dcf81ee-c284-4142-99d5-361ce7fc2b58,network=Network(734275fe-fe61-4461-9851-66707dbd8296),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dcf81ee-c2')
Oct 02 12:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964-userdata-shm.mount: Deactivated successfully.
Oct 02 12:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-91ac4cb314a75fb322d7508b44db00334dad1e026dace17d7aa5f1821017a984-merged.mount: Deactivated successfully.
Oct 02 12:34:07 compute-0 podman[310763]: 2025-10-02 12:34:07.559192576 +0000 UTC m=+0.108992539 container cleanup 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:34:07 compute-0 systemd[1]: libpod-conmon-7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964.scope: Deactivated successfully.
Oct 02 12:34:07 compute-0 sudo[310813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:07 compute-0 sudo[310813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 sudo[310813]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:07 compute-0 podman[310866]: 2025-10-02 12:34:07.649724723 +0000 UTC m=+0.056114752 container remove 7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.658 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ad3c2d-710f-4636-ab8b-da2fd1f58079]: (4, ('Thu Oct  2 12:34:07 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296 (7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964)\n7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964\nThu Oct  2 12:34:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-734275fe-fe61-4461-9851-66707dbd8296 (7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964)\n7b924ed3db67fd95209f0e6700ce16aee7097c884fe2ad32ebdbefab063f9964\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.662 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[85355836-8da1-49e8-bb60-80a2906e736c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 sudo[310878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.663 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap734275fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 sudo[310878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:07 compute-0 kernel: tap734275fe-f0: left promiscuous mode
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 nova_compute[256940]: 2025-10-02 12:34:07.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.693 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf52dd3-3a4a-415b-b6a5-a876336e2377]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa421da-f72c-4b06-80f3-8df049c89d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.731 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[501405b3-4288-471f-956c-b9b828a78445]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.750 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3959dd75-25a1-48d6-9e10-010c5d17f184]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634004, 'reachable_time': 40587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310910, 'error': None, 'target': 'ovnmeta-734275fe-fe61-4461-9851-66707dbd8296', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.753 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-734275fe-fe61-4461-9851-66707dbd8296 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:34:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:07.753 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[f5db01a2-90a6-4003-8ba2-1faa95096d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d734275fe\x2dfe61\x2d4461\x2d9851\x2d66707dbd8296.mount: Deactivated successfully.
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.017 2 INFO nova.virt.libvirt.driver [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Deleting instance files /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052_del
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.018 2 INFO nova.virt.libvirt.driver [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Deletion of /var/lib/nova/instances/9a901125-894a-48b8-8b98-3f5265a97052_del complete
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.036194396 +0000 UTC m=+0.045261360 container create bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.075 2 INFO nova.compute.manager [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.076 2 DEBUG oslo.service.loopingcall [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.076 2 DEBUG nova.compute.manager [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:08 compute-0 nova_compute[256940]: 2025-10-02 12:34:08.076 2 DEBUG nova.network.neutron [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:08 compute-0 systemd[1]: Started libpod-conmon-bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f.scope.
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.018172886 +0000 UTC m=+0.027239850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.139172117 +0000 UTC m=+0.148239121 container init bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.148631623 +0000 UTC m=+0.157698587 container start bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.152067933 +0000 UTC m=+0.161134947 container attach bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:34:08 compute-0 silly_nash[310968]: 167 167
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.153338206 +0000 UTC m=+0.162405170 container died bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:34:08 compute-0 systemd[1]: libpod-bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f.scope: Deactivated successfully.
Oct 02 12:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b42924119dc0a3570b892340bee772cf7d31c590660bc8ef0ce37fb0331f81-merged.mount: Deactivated successfully.
Oct 02 12:34:08 compute-0 podman[310952]: 2025-10-02 12:34:08.188786059 +0000 UTC m=+0.197853043 container remove bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:08 compute-0 systemd[1]: libpod-conmon-bddd0b1ee18a5652e4e58ac59a9f4536311b5aa1250d64290b5e0d3c0522141f.scope: Deactivated successfully.
Oct 02 12:34:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:08.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:08 compute-0 podman[310992]: 2025-10-02 12:34:08.339715288 +0000 UTC m=+0.032121107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:08 compute-0 podman[310992]: 2025-10-02 12:34:08.531770009 +0000 UTC m=+0.224175808 container create c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:34:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:34:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:34:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:08 compute-0 ceph-mon[73668]: pgmap v1727: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Oct 02 12:34:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2879726199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:08 compute-0 systemd[1]: Started libpod-conmon-c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94.scope.
Oct 02 12:34:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:08 compute-0 podman[310992]: 2025-10-02 12:34:08.982447404 +0000 UTC m=+0.674853263 container init c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:34:08 compute-0 podman[310992]: 2025-10-02 12:34:08.991402287 +0000 UTC m=+0.683808056 container start c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:34:09 compute-0 podman[310992]: 2025-10-02 12:34:09.083618688 +0000 UTC m=+0.776024477 container attach c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.182 2 DEBUG nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-unplugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.184 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.184 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.185 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.185 2 DEBUG nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] No waiting events found dispatching network-vif-unplugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.185 2 DEBUG nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-unplugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.185 2 DEBUG nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.186 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9a901125-894a-48b8-8b98-3f5265a97052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.186 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.186 2 DEBUG oslo_concurrency.lockutils [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.186 2 DEBUG nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] No waiting events found dispatching network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.187 2 WARNING nova.compute.manager [req-90a38d7a-034e-4af3-9f00-1eb8e2791ee7 req-bf6910b6-a711-4d2d-8418-0b122a9c5f46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received unexpected event network-vif-plugged-8dcf81ee-c284-4142-99d5-361ce7fc2b58 for instance with vm_state active and task_state deleting.
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.200 2 DEBUG nova.network.neutron [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.222 2 INFO nova.compute.manager [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Took 1.15 seconds to deallocate network for instance.
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.262 2 DEBUG nova.compute.manager [req-8668390e-29f7-44af-a692-8fe5e265cc9b req-81ee860f-9d1b-430f-95be-e23bb267b95a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Received event network-vif-deleted-8dcf81ee-c284-4142-99d5-361ce7fc2b58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.265 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.266 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.294 2 DEBUG nova.network.neutron [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.313 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.317 2 DEBUG oslo_concurrency.lockutils [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.317 2 DEBUG nova.network.neutron [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Refreshing network info cache for port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.321 2 DEBUG nova.virt.libvirt.vif [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.322 2 DEBUG nova.network.os_vif_util [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.323 2 DEBUG nova.network.os_vif_util [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.323 2 DEBUG os_vif [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7548de71-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7548de71-bb, col_values=(('external_ids', {'iface-id': '7548de71-bb71-4ee1-98c9-ad2fd1f6f61f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:08:b5', 'vm-uuid': '588c95ff-a590-46c2-a041-6ee318695ef1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.338 2 DEBUG oslo_concurrency.processutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:09 compute-0 NetworkManager[44981]: <info>  [1759408449.3392] manager: (tap7548de71-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.390 2 INFO os_vif [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb')
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.391 2 DEBUG nova.virt.libvirt.vif [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.392 2 DEBUG nova.network.os_vif_util [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.393 2 DEBUG nova.network.os_vif_util [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.398 2 DEBUG nova.virt.libvirt.guest [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:f9:08:b5"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <target dev="tap7548de71-bb"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]: </interface>
Oct 02 12:34:09 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:34:09 compute-0 kernel: tap7548de71-bb: entered promiscuous mode
Oct 02 12:34:09 compute-0 NetworkManager[44981]: <info>  [1759408449.4131] manager: (tap7548de71-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Oct 02 12:34:09 compute-0 systemd-udevd[310865]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 ovn_controller[148123]: 2025-10-02T12:34:09Z|00336|binding|INFO|Claiming lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for this chassis.
Oct 02 12:34:09 compute-0 ovn_controller[148123]: 2025-10-02T12:34:09Z|00337|binding|INFO|7548de71-bb71-4ee1-98c9-ad2fd1f6f61f: Claiming fa:16:3e:f9:08:b5 10.100.0.3
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.428 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:b5 10.100.0.3'], port_security=['fa:16:3e:f9:08:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '588c95ff-a590-46c2-a041-6ee318695ef1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.429 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.432 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:34:09 compute-0 NetworkManager[44981]: <info>  [1759408449.4347] device (tap7548de71-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:09 compute-0 NetworkManager[44981]: <info>  [1759408449.4361] device (tap7548de71-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:09 compute-0 ovn_controller[148123]: 2025-10-02T12:34:09Z|00338|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f ovn-installed in OVS
Oct 02 12:34:09 compute-0 ovn_controller[148123]: 2025-10-02T12:34:09Z|00339|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f up in Southbound
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.456 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2874ce52-98ea-4456-9b13-52b8e5cd0b63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.515 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b24d176b-ae15-4cb9-973a-06069a373a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.520 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a73d8519-494a-494f-b1e8-fa095b19620a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.563 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b77fb784-0d2c-4b8f-9f8c-c0d779406a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.586 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32a2e6c4-c4ec-4e56-ab2a-e68256607ba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630658, 'reachable_time': 17793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311046, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.616 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe683a0-0feb-412a-a306-02c6104a6392]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630674, 'tstamp': 630674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311047, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630678, 'tstamp': 630678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311047, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.618 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.622 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.622 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.623 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:09.623 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.662 2 DEBUG nova.virt.libvirt.driver [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.662 2 DEBUG nova.virt.libvirt.driver [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.663 2 DEBUG nova.virt.libvirt.driver [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:ff:b1:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.664 2 DEBUG nova.virt.libvirt.driver [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:f9:08:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.695 2 DEBUG nova.virt.libvirt.guest [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:name>tempest-tempest.common.compute-instance-463522272</nova:name>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:34:09</nova:creationTime>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:port uuid="6165c9e0-ee4f-4bc6-add1-c234918a73d2">
Oct 02 12:34:09 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct 02 12:34:09 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:34:09 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:09 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:34:09 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:34:09 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.718 2 DEBUG nova.compute.manager [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.719 2 DEBUG oslo_concurrency.lockutils [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.719 2 DEBUG oslo_concurrency.lockutils [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.719 2 DEBUG oslo_concurrency.lockutils [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.720 2 DEBUG nova.compute.manager [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.720 2 WARNING nova.compute.manager [req-b26c719b-94dc-4a90-9c5e-136fa8623724 req-f8d838a4-b52a-47a3-8bfd-a510e83a0ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.731 2 DEBUG oslo_concurrency.lockutils [None req-890c80c0-1cd8-49b6-b3f3-beb2fc03d2bd 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4090946625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.836 2 DEBUG oslo_concurrency.processutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.842 2 DEBUG nova.compute.provider_tree [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:09 compute-0 festive_rubin[311009]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:34:09 compute-0 festive_rubin[311009]: --> relative data size: 1.0
Oct 02 12:34:09 compute-0 festive_rubin[311009]: --> All data devices are unavailable
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.861 2 DEBUG nova.scheduler.client.report [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.886 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:09 compute-0 systemd[1]: libpod-c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94.scope: Deactivated successfully.
Oct 02 12:34:09 compute-0 podman[310992]: 2025-10-02 12:34:09.889516982 +0000 UTC m=+1.581922761 container died c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.914 2 INFO nova.scheduler.client.report [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Deleted allocations for instance 9a901125-894a-48b8-8b98-3f5265a97052
Oct 02 12:34:09 compute-0 nova_compute[256940]: 2025-10-02 12:34:09.985 2 DEBUG oslo_concurrency.lockutils [None req-6d236ace-9327-48e1-9928-fe2ffd1b38e4 9626ead6e742493492143362866a3aa3 c363199c26c141ca8fac3c71b399ad3c - - default default] Lock "9a901125-894a-48b8-8b98-3f5265a97052" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4090946625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:10.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9bd6e6631d798d56a29f70b13021c35c79533afbdb2605ae3661130edc1ccf-merged.mount: Deactivated successfully.
Oct 02 12:34:10 compute-0 podman[310992]: 2025-10-02 12:34:10.434739366 +0000 UTC m=+2.127145135 container remove c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:34:10 compute-0 systemd[1]: libpod-conmon-c226476607755147ee4c85351c3260ee4110b81b6d9a1b64bc277adcdb91dc94.scope: Deactivated successfully.
Oct 02 12:34:10 compute-0 sudo[310878]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:10 compute-0 sudo[311072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:10 compute-0 sudo[311072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:10 compute-0 sudo[311072]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:10 compute-0 sudo[311098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:10 compute-0 sudo[311098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:10 compute-0 sudo[311098]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:10.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:10 compute-0 sudo[311123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:10 compute-0 sudo[311123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:10 compute-0 sudo[311123]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:10 compute-0 sudo[311148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:34:10 compute-0 sudo[311148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:10 compute-0 nova_compute[256940]: 2025-10-02 12:34:10.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-0 nova_compute[256940]: 2025-10-02 12:34:10.901 2 DEBUG nova.network.neutron [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updated VIF entry in instance network info cache for port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:10 compute-0 nova_compute[256940]: 2025-10-02 12:34:10.901 2 DEBUG nova.network.neutron [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:10 compute-0 nova_compute[256940]: 2025-10-02 12:34:10.923 2 DEBUG oslo_concurrency.lockutils [req-43e0f54b-c1c8-498b-a428-e54fd9334aec req-c4a62572-1486-44b4-b1a5-ed10352fd9f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:11 compute-0 ceph-mon[73668]: pgmap v1728: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.145802341 +0000 UTC m=+0.064096800 container create 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.107418451 +0000 UTC m=+0.025712980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:11 compute-0 systemd[1]: Started libpod-conmon-295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977.scope.
Oct 02 12:34:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.273832404 +0000 UTC m=+0.192126933 container init 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.28519783 +0000 UTC m=+0.203492279 container start 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.292542791 +0000 UTC m=+0.210837320 container attach 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:34:11 compute-0 romantic_benz[311233]: 167 167
Oct 02 12:34:11 compute-0 systemd[1]: libpod-295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977.scope: Deactivated successfully.
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.297428509 +0000 UTC m=+0.215722988 container died 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:34:11 compute-0 ovn_controller[148123]: 2025-10-02T12:34:11Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:08:b5 10.100.0.3
Oct 02 12:34:11 compute-0 ovn_controller[148123]: 2025-10-02T12:34:11Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:08:b5 10.100.0.3
Oct 02 12:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb98c54199a73ade561ed2e75c64602648abca3028480a93acf2b5a447f02cf8-merged.mount: Deactivated successfully.
Oct 02 12:34:11 compute-0 podman[311216]: 2025-10-02 12:34:11.35894783 +0000 UTC m=+0.277242299 container remove 295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:34:11 compute-0 systemd[1]: libpod-conmon-295a99ee7403f670dcdaf34c844c3e22dd90c1d213ff6457592374aea5467977.scope: Deactivated successfully.
Oct 02 12:34:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 341 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Oct 02 12:34:11 compute-0 podman[311258]: 2025-10-02 12:34:11.601386263 +0000 UTC m=+0.064334466 container create ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:34:11 compute-0 systemd[1]: Started libpod-conmon-ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc.scope.
Oct 02 12:34:11 compute-0 podman[311258]: 2025-10-02 12:34:11.573750283 +0000 UTC m=+0.036698556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8033399b155cedfe07c0acf2bcd47c4093ab8f6f8965a8cbb2b134250c0f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8033399b155cedfe07c0acf2bcd47c4093ab8f6f8965a8cbb2b134250c0f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8033399b155cedfe07c0acf2bcd47c4093ab8f6f8965a8cbb2b134250c0f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8033399b155cedfe07c0acf2bcd47c4093ab8f6f8965a8cbb2b134250c0f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:11 compute-0 podman[311258]: 2025-10-02 12:34:11.702927787 +0000 UTC m=+0.165875990 container init ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:34:11 compute-0 podman[311258]: 2025-10-02 12:34:11.71187133 +0000 UTC m=+0.174819513 container start ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:34:11 compute-0 podman[311258]: 2025-10-02 12:34:11.715469393 +0000 UTC m=+0.178417596 container attach ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.787 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.789 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.805 2 DEBUG nova.objects.instance [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.810 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.811 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.811 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.811 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.812 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.812 2 WARNING nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.845 2 DEBUG nova.virt.libvirt.vif [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.846 2 DEBUG nova.network.os_vif_util [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.847 2 DEBUG nova.network.os_vif_util [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.852 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.855 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.857 2 DEBUG nova.virt.libvirt.driver [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Attempting to detach device tap7548de71-bb from instance 588c95ff-a590-46c2-a041-6ee318695ef1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.858 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:f9:08:b5"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <target dev="tap7548de71-bb"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </interface>
Oct 02 12:34:11 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.866 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.870 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface>not found in domain: <domain type='kvm' id='36'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <name>instance-00000052</name>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <uuid>588c95ff-a590-46c2-a041-6ee318695ef1</uuid>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:name>tempest-tempest.common.compute-instance-463522272</nova:name>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:34:09</nova:creationTime>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:port uuid="6165c9e0-ee4f-4bc6-add1-c234918a73d2">
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <system>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='serial'>588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='uuid'>588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </system>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <os>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </os>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <features>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </features>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk' index='2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk.config' index='1'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:ff:b1:14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='tap6165c9e0-ee'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:f9:08:b5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='tap7548de71-bb'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='net1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log' append='off'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </target>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log' append='off'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </console>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <video>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </video>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c49,c975</label>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c49,c975</imagelabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </domain>
Oct 02 12:34:11 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.885 2 INFO nova.virt.libvirt.driver [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap7548de71-bb from instance 588c95ff-a590-46c2-a041-6ee318695ef1 from the persistent domain config.
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.886 2 DEBUG nova.virt.libvirt.driver [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] (1/8): Attempting to detach device tap7548de71-bb with device alias net1 from instance 588c95ff-a590-46c2-a041-6ee318695ef1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.886 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:f9:08:b5"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <target dev="tap7548de71-bb"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </interface>
Oct 02 12:34:11 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:34:11 compute-0 kernel: tap7548de71-bb (unregistering): left promiscuous mode
Oct 02 12:34:11 compute-0 NetworkManager[44981]: <info>  [1759408451.9458] device (tap7548de71-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:11 compute-0 ovn_controller[148123]: 2025-10-02T12:34:11Z|00340|binding|INFO|Releasing lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f from this chassis (sb_readonly=0)
Oct 02 12:34:11 compute-0 ovn_controller[148123]: 2025-10-02T12:34:11Z|00341|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f down in Southbound
Oct 02 12:34:11 compute-0 ovn_controller[148123]: 2025-10-02T12:34:11Z|00342|binding|INFO|Removing iface tap7548de71-bb ovn-installed in OVS
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:11.966 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:b5 10.100.0.3'], port_security=['fa:16:3e:f9:08:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '588c95ff-a590-46c2-a041-6ee318695ef1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:11.968 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:11.970 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.972 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408451.9674594, 588c95ff-a590-46c2-a041-6ee318695ef1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.972 2 DEBUG nova.virt.libvirt.driver [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Start waiting for the detach event from libvirt for device tap7548de71-bb with device alias net1 for instance 588c95ff-a590-46c2-a041-6ee318695ef1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.973 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.982 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface>not found in domain: <domain type='kvm' id='36'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <name>instance-00000052</name>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <uuid>588c95ff-a590-46c2-a041-6ee318695ef1</uuid>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:name>tempest-tempest.common.compute-instance-463522272</nova:name>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:34:09</nova:creationTime>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:port uuid="6165c9e0-ee4f-4bc6-add1-c234918a73d2">
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <system>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='serial'>588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='uuid'>588c95ff-a590-46c2-a041-6ee318695ef1</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </system>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <os>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </os>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <features>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </features>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk' index='2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/588c95ff-a590-46c2-a041-6ee318695ef1_disk.config' index='1'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:ff:b1:14'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target dev='tap6165c9e0-ee'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log' append='off'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       </target>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <source path='/dev/pts/0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1/console.log' append='off'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </console>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </input>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <video>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </video>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c49,c975</label>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c49,c975</imagelabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:34:11 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:34:11 compute-0 nova_compute[256940]: </domain>
Oct 02 12:34:11 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.986 2 INFO nova.virt.libvirt.driver [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap7548de71-bb from instance 588c95ff-a590-46c2-a041-6ee318695ef1 from the live domain config.
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.986 2 DEBUG nova.virt.libvirt.vif [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.987 2 DEBUG nova.network.os_vif_util [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.988 2 DEBUG nova.network.os_vif_util [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.988 2 DEBUG os_vif [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7548de71-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 nova_compute[256940]: 2025-10-02 12:34:11.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:11.997 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[97186dec-7cf6-4e0d-b874-b510edc4b955]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 nova_compute[256940]: 2025-10-02 12:34:12.000 2 INFO os_vif [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb')
Oct 02 12:34:12 compute-0 nova_compute[256940]: 2025-10-02 12:34:12.001 2 DEBUG nova.virt.libvirt.guest [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:name>tempest-tempest.common.compute-instance-463522272</nova:name>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:34:12</nova:creationTime>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     <nova:port uuid="6165c9e0-ee4f-4bc6-add1-c234918a73d2">
Oct 02 12:34:12 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:34:12 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:34:12 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:34:12 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:34:12 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.042 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fae685de-9021-4b7d-aabc-6cb949a05e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.046 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e94c848c-6bdc-42cc-884f-bfeb48bbd07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.086 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f7082d5b-9cc0-414c-bfd2-39c414882f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.111 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a823745-e16d-4798-9baf-478653ad6741]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630658, 'reachable_time': 17793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311291, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.133 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8867f6-3c30-40aa-810c-ffa1abd15543]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630674, 'tstamp': 630674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311292, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630678, 'tstamp': 630678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311292, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.135 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:12 compute-0 nova_compute[256940]: 2025-10-02 12:34:12.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:12 compute-0 nova_compute[256940]: 2025-10-02 12:34:12.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.139 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.139 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.139 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:12.139 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:12.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:12 compute-0 infallible_williams[311275]: {
Oct 02 12:34:12 compute-0 infallible_williams[311275]:     "1": [
Oct 02 12:34:12 compute-0 infallible_williams[311275]:         {
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "devices": [
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "/dev/loop3"
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             ],
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "lv_name": "ceph_lv0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "lv_size": "7511998464",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "name": "ceph_lv0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "tags": {
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.cluster_name": "ceph",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.crush_device_class": "",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.encrypted": "0",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.osd_id": "1",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.type": "block",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:                 "ceph.vdo": "0"
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             },
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "type": "block",
Oct 02 12:34:12 compute-0 infallible_williams[311275]:             "vg_name": "ceph_vg0"
Oct 02 12:34:12 compute-0 infallible_williams[311275]:         }
Oct 02 12:34:12 compute-0 infallible_williams[311275]:     ]
Oct 02 12:34:12 compute-0 infallible_williams[311275]: }
Oct 02 12:34:12 compute-0 systemd[1]: libpod-ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc.scope: Deactivated successfully.
Oct 02 12:34:12 compute-0 podman[311258]: 2025-10-02 12:34:12.616662558 +0000 UTC m=+1.079610751 container died ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c8033399b155cedfe07c0acf2bcd47c4093ab8f6f8965a8cbb2b134250c0f77-merged.mount: Deactivated successfully.
Oct 02 12:34:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:12 compute-0 podman[311258]: 2025-10-02 12:34:12.691039285 +0000 UTC m=+1.153987478 container remove ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:34:12 compute-0 systemd[1]: libpod-conmon-ea6a210b71cc94dbf1309e910438094db8aa2f757dcffc3201e0b545e3af22fc.scope: Deactivated successfully.
Oct 02 12:34:12 compute-0 sudo[311148]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:12 compute-0 sudo[311310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:12 compute-0 sudo[311310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:12 compute-0 sudo[311310]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:12 compute-0 sudo[311335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:12 compute-0 sudo[311335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:12 compute-0 sudo[311335]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:12 compute-0 sudo[311360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:12 compute-0 sudo[311360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:12 compute-0 sudo[311360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:13 compute-0 ceph-mon[73668]: pgmap v1729: 305 pgs: 305 active+clean; 341 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Oct 02 12:34:13 compute-0 sudo[311385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:34:13 compute-0 sudo[311385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.481166878 +0000 UTC m=+0.050864546 container create 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:34:13 compute-0 systemd[1]: Started libpod-conmon-468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80.scope.
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.456562317 +0000 UTC m=+0.026260015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.606161192 +0000 UTC m=+0.175858920 container init 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:34:13 compute-0 podman[311465]: 2025-10-02 12:34:13.612383424 +0000 UTC m=+0.081836952 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.614037007 +0000 UTC m=+0.183734675 container start 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 12:34:13 compute-0 jovial_elbakyan[311469]: 167 167
Oct 02 12:34:13 compute-0 systemd[1]: libpod-468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80.scope: Deactivated successfully.
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.629830378 +0000 UTC m=+0.199528066 container attach 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.630355902 +0000 UTC m=+0.200053580 container died 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:34:13 compute-0 podman[311466]: 2025-10-02 12:34:13.670030725 +0000 UTC m=+0.138314092 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 12:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-eab23e656698ebdbd0ab9912b363980ee0e2eebb8d68fefbfbbeae6f59715602-merged.mount: Deactivated successfully.
Oct 02 12:34:13 compute-0 podman[311450]: 2025-10-02 12:34:13.799118105 +0000 UTC m=+0.368815763 container remove 468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:34:13 compute-0 systemd[1]: libpod-conmon-468ef7076443c64894445798932551c96f5ab0cdee17fa58f8a060dc7c71ca80.scope: Deactivated successfully.
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.948 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.949 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.949 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.949 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.950 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.950 2 WARNING nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.950 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.951 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.951 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.951 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.951 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:13 compute-0 nova_compute[256940]: 2025-10-02 12:34:13.952 2 WARNING nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.
Oct 02 12:34:14 compute-0 podman[311539]: 2025-10-02 12:34:14.053475858 +0000 UTC m=+0.078985537 container create 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:34:14 compute-0 podman[311539]: 2025-10-02 12:34:14.010842658 +0000 UTC m=+0.036352387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:34:14 compute-0 systemd[1]: Started libpod-conmon-45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e.scope.
Oct 02 12:34:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c638d52424fbb36dfab9aae15a758c6abf7ddcad9fafa1b383fe553830195553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c638d52424fbb36dfab9aae15a758c6abf7ddcad9fafa1b383fe553830195553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c638d52424fbb36dfab9aae15a758c6abf7ddcad9fafa1b383fe553830195553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c638d52424fbb36dfab9aae15a758c6abf7ddcad9fafa1b383fe553830195553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:14 compute-0 podman[311539]: 2025-10-02 12:34:14.202917989 +0000 UTC m=+0.228427688 container init 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:34:14 compute-0 podman[311539]: 2025-10-02 12:34:14.21100119 +0000 UTC m=+0.236510909 container start 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:34:14 compute-0 podman[311539]: 2025-10-02 12:34:14.220436625 +0000 UTC m=+0.245946334 container attach 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:34:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:14.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:14.545 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:14.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.882 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.883 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.884 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.884 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.884 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.886 2 INFO nova.compute.manager [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Terminating instance
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.887 2 DEBUG nova.compute.manager [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.972 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.972 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:14 compute-0 nova_compute[256940]: 2025-10-02 12:34:14.973 2 DEBUG nova.network.neutron [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:15 compute-0 kernel: tap6165c9e0-ee (unregistering): left promiscuous mode
Oct 02 12:34:15 compute-0 NetworkManager[44981]: <info>  [1759408455.0407] device (tap6165c9e0-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:15 compute-0 ovn_controller[148123]: 2025-10-02T12:34:15Z|00343|binding|INFO|Releasing lport 6165c9e0-ee4f-4bc6-add1-c234918a73d2 from this chassis (sb_readonly=0)
Oct 02 12:34:15 compute-0 ovn_controller[148123]: 2025-10-02T12:34:15Z|00344|binding|INFO|Setting lport 6165c9e0-ee4f-4bc6-add1-c234918a73d2 down in Southbound
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 ovn_controller[148123]: 2025-10-02T12:34:15Z|00345|binding|INFO|Removing iface tap6165c9e0-ee ovn-installed in OVS
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.064 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:b1:14 10.100.0.14'], port_security=['fa:16:3e:ff:b1:14 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '588c95ff-a590-46c2-a041-6ee318695ef1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3852fde4-27af-4b26-ab2c-21696f5fd593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6165c9e0-ee4f-4bc6-add1-c234918a73d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.067 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6165c9e0-ee4f-4bc6-add1-c234918a73d2 in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.069 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.071 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bb640ff9-4b9e-468e-9038-3158067c104c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.072 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace which is not needed anymore
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 02 12:34:15 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000052.scope: Consumed 15.466s CPU time.
Oct 02 12:34:15 compute-0 systemd-machined[210927]: Machine qemu-36-instance-00000052 terminated.
Oct 02 12:34:15 compute-0 ceph-mon[73668]: pgmap v1730: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]: {
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:         "osd_id": 1,
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:         "type": "bluestore"
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]:     }
Oct 02 12:34:15 compute-0 stupefied_ishizaka[311556]: }
Oct 02 12:34:15 compute-0 ovn_controller[148123]: 2025-10-02T12:34:15Z|00346|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 systemd[1]: libpod-45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e.scope: Deactivated successfully.
Oct 02 12:34:15 compute-0 podman[311600]: 2025-10-02 12:34:15.25414059 +0000 UTC m=+0.051304376 container died 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [NOTICE]   (309548) : haproxy version is 2.8.14-c23fe91
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [NOTICE]   (309548) : path to executable is /usr/sbin/haproxy
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [WARNING]  (309548) : Exiting Master process...
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [WARNING]  (309548) : Exiting Master process...
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [ALERT]    (309548) : Current worker (309550) exited with code 143 (Terminated)
Oct 02 12:34:15 compute-0 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[309544]: [WARNING]  (309548) : All workers exited. Exiting... (0)
Oct 02 12:34:15 compute-0 systemd[1]: libpod-5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b.scope: Deactivated successfully.
Oct 02 12:34:15 compute-0 podman[311599]: 2025-10-02 12:34:15.275163948 +0000 UTC m=+0.080919388 container died 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c638d52424fbb36dfab9aae15a758c6abf7ddcad9fafa1b383fe553830195553-merged.mount: Deactivated successfully.
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.346 2 INFO nova.virt.libvirt.driver [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Instance destroyed successfully.
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.347 2 DEBUG nova.objects.instance [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'resources' on Instance uuid 588c95ff-a590-46c2-a041-6ee318695ef1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.367 2 DEBUG nova.compute.manager [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-unplugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.369 2 DEBUG oslo_concurrency.lockutils [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.370 2 DEBUG oslo_concurrency.lockutils [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.370 2 DEBUG oslo_concurrency.lockutils [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.370 2 DEBUG nova.compute.manager [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-unplugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.371 2 DEBUG nova.compute.manager [req-a6504939-538c-44b9-92e7-56c924f18545 req-92edcc6d-c62f-4fa8-921a-82af6b11f0cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-unplugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.376 2 DEBUG nova.virt.libvirt.vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.376 2 DEBUG nova.network.os_vif_util [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.377 2 DEBUG nova.network.os_vif_util [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.377 2 DEBUG os_vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6165c9e0-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a398249e539f6b9d299a0871996084f6a713475bff3134cceadb7c19ec15fe97-merged.mount: Deactivated successfully.
Oct 02 12:34:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:15 compute-0 podman[311600]: 2025-10-02 12:34:15.404456324 +0000 UTC m=+0.201620090 container remove 45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:34:15 compute-0 systemd[1]: libpod-conmon-45045bd3a4fb9f475f9313d7114bfbf3176ec38bf12a92ed7fcbe56563592a0e.scope: Deactivated successfully.
Oct 02 12:34:15 compute-0 podman[311599]: 2025-10-02 12:34:15.414114976 +0000 UTC m=+0.219870406 container cleanup 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 systemd[1]: libpod-conmon-5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b.scope: Deactivated successfully.
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.445 2 INFO os_vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:b1:14,bridge_name='br-int',has_traffic_filtering=True,id=6165c9e0-ee4f-4bc6-add1-c234918a73d2,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6165c9e0-ee')
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.446 2 DEBUG nova.virt.libvirt.vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-463522272',display_name='tempest-tempest.common.compute-instance-463522272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-463522272',id=82,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-lw1ulmfy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=588c95ff-a590-46c2-a041-6ee318695ef1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.447 2 DEBUG nova.network.os_vif_util [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.448 2 DEBUG nova.network.os_vif_util [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.448 2 DEBUG os_vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7548de71-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.453 2 INFO os_vif [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb')
Oct 02 12:34:15 compute-0 ovn_controller[148123]: 2025-10-02T12:34:15Z|00347|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct 02 12:34:15 compute-0 sudo[311385]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 podman[311649]: 2025-10-02 12:34:15.541494992 +0000 UTC m=+0.090353623 container remove 5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.549 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b22e6986-121a-4a8f-8c3c-90de8432bcbe]: (4, ('Thu Oct  2 12:34:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b)\n5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b\nThu Oct  2 12:34:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b)\n5378f737072c371ad2b4365d0715d36789be013b770105b7787b01da596dc36b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.551 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ad704f88-7d6a-414e-95e2-c8891031f4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.552 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:15 compute-0 kernel: tap6a187d8a-70: left promiscuous mode
Oct 02 12:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.572 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c9773297-c4fd-4278-a262-ed1ae932822a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.600 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[007c9ba1-6e82-45cf-bab2-c052a254ee16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.602 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f50a341a-537b-4efe-b193-67c9902a63a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.622 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[892e9b8a-816e-440f-8a17-cece526c84ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630649, 'reachable_time': 44114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311682, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.625 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:34:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a187d8a\x2d77c6\x2d4b27\x2dbb13\x2d654f471c1faf.mount: Deactivated successfully.
Oct 02 12:34:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:15.625 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f1d849-2daf-49a9-9efd-c857b3482f34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6616f0b6-93fb-4f92-9190-3268226f13d6 does not exist
Oct 02 12:34:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6f1bafdf-47c6-402d-8dc0-2af801273055 does not exist
Oct 02 12:34:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f1ca344b-2bdf-4a46-bc51-087e371fefd8 does not exist
Oct 02 12:34:15 compute-0 sudo[311683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:15 compute-0 sudo[311683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:15 compute-0 sudo[311683]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.804826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455804857, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2170, "num_deletes": 253, "total_data_size": 3785518, "memory_usage": 3845440, "flush_reason": "Manual Compaction"}
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 02 12:34:15 compute-0 nova_compute[256940]: 2025-10-02 12:34:15.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-0 sudo[311708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:34:15 compute-0 sudo[311708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455861012, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3718074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36688, "largest_seqno": 38857, "table_properties": {"data_size": 3708324, "index_size": 6116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20779, "raw_average_key_size": 20, "raw_value_size": 3688661, "raw_average_value_size": 3670, "num_data_blocks": 266, "num_entries": 1005, "num_filter_entries": 1005, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408249, "oldest_key_time": 1759408249, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 56243 microseconds, and 8311 cpu microseconds.
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:34:15 compute-0 sudo[311708]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.861062) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3718074 bytes OK
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.861086) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.878295) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.878348) EVENT_LOG_v1 {"time_micros": 1759408455878330, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.878375) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3776663, prev total WAL file size 3776663, number of live WAL files 2.
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.880027) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3630KB)], [80(9948KB)]
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455880199, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13905700, "oldest_snapshot_seqno": -1}
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6661 keys, 11949494 bytes, temperature: kUnknown
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455991861, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11949494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11902309, "index_size": 29409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170695, "raw_average_key_size": 25, "raw_value_size": 11780441, "raw_average_value_size": 1768, "num_data_blocks": 1177, "num_entries": 6661, "num_filter_entries": 6661, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:34:15 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.992361) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11949494 bytes
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.007239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.4 rd, 106.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7185, records dropped: 524 output_compression: NoCompression
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.007295) EVENT_LOG_v1 {"time_micros": 1759408456007275, "job": 46, "event": "compaction_finished", "compaction_time_micros": 111777, "compaction_time_cpu_micros": 44820, "output_level": 6, "num_output_files": 1, "total_output_size": 11949494, "num_input_records": 7185, "num_output_records": 6661, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456008259, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456010611, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:15.879797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.010650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.010655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.010657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.010659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:34:16.010661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000053s ======
Oct 02 12:34:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:16.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct 02 12:34:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:16 compute-0 ceph-mon[73668]: pgmap v1731: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Oct 02 12:34:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:16 compute-0 nova_compute[256940]: 2025-10-02 12:34:16.803 2 INFO nova.network.neutron [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:34:16 compute-0 nova_compute[256940]: 2025-10-02 12:34:16.803 2 DEBUG nova.network.neutron [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [{"id": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "address": "fa:16:3e:ff:b1:14", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6165c9e0-ee", "ovs_interfaceid": "6165c9e0-ee4f-4bc6-add1-c234918a73d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:16 compute-0 nova_compute[256940]: 2025-10-02 12:34:16.845 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-588c95ff-a590-46c2-a041-6ee318695ef1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:16 compute-0 nova_compute[256940]: 2025-10-02 12:34:16.868 2 DEBUG oslo_concurrency.lockutils [None req-1e3f9b77-4ed2-484c-b10b-5789b5adf7cb 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-588c95ff-a590-46c2-a041-6ee318695ef1-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:16 compute-0 sudo[311734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:16 compute-0 sudo[311734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:16 compute-0 sudo[311734]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:17 compute-0 sudo[311759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:17 compute-0 sudo[311759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:17 compute-0 sudo[311759]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.469 2 DEBUG nova.compute.manager [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.470 2 DEBUG oslo_concurrency.lockutils [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.470 2 DEBUG oslo_concurrency.lockutils [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.470 2 DEBUG oslo_concurrency.lockutils [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.470 2 DEBUG nova.compute.manager [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] No waiting events found dispatching network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.471 2 WARNING nova.compute.manager [req-0799276e-df80-48df-8b4c-7236a95cdf4c req-f68d4ef0-6626-40f9-ad03-11eb75bfe628 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received unexpected event network-vif-plugged-6165c9e0-ee4f-4bc6-add1-c234918a73d2 for instance with vm_state active and task_state deleting.
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.898 2 INFO nova.virt.libvirt.driver [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Deleting instance files /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1_del
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.899 2 INFO nova.virt.libvirt.driver [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Deletion of /var/lib/nova/instances/588c95ff-a590-46c2-a041-6ee318695ef1_del complete
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.961 2 INFO nova.compute.manager [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Took 3.07 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.962 2 DEBUG oslo.service.loopingcall [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.962 2 DEBUG nova.compute.manager [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:17 compute-0 nova_compute[256940]: 2025-10-02 12:34:17.962 2 DEBUG nova.network.neutron [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:18 compute-0 ceph-mon[73668]: pgmap v1732: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.145 2 DEBUG nova.network.neutron [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.173 2 INFO nova.compute.manager [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Took 1.21 seconds to deallocate network for instance.
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.225 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.226 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.279 2 DEBUG oslo_concurrency.processutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.332 2 DEBUG nova.compute.manager [req-2c69599d-e4f5-4638-8d38-a610fdefcd38 req-865c5dab-e6d2-4b01-bc57-2e872d89792d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Received event network-vif-deleted-6165c9e0-ee4f-4bc6-add1-c234918a73d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct 02 12:34:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284340140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.778 2 DEBUG oslo_concurrency.processutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.786 2 DEBUG nova.compute.provider_tree [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.802 2 DEBUG nova.scheduler.client.report [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4284340140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.847 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.875 2 INFO nova.scheduler.client.report [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Deleted allocations for instance 588c95ff-a590-46c2-a041-6ee318695ef1
Oct 02 12:34:19 compute-0 nova_compute[256940]: 2025-10-02 12:34:19.959 2 DEBUG oslo_concurrency.lockutils [None req-20b40170-da4d-463c-91b3-ba9371e9b3a8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "588c95ff-a590-46c2-a041-6ee318695ef1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:20.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:20 compute-0 nova_compute[256940]: 2025-10-02 12:34:20.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:20 compute-0 ceph-mon[73668]: pgmap v1733: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct 02 12:34:20 compute-0 nova_compute[256940]: 2025-10-02 12:34:20.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 274 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 179 op/s
Oct 02 12:34:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:22.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:22 compute-0 nova_compute[256940]: 2025-10-02 12:34:22.499 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408447.498498, 9a901125-894a-48b8-8b98-3f5265a97052 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:22 compute-0 nova_compute[256940]: 2025-10-02 12:34:22.500 2 INFO nova.compute.manager [-] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] VM Stopped (Lifecycle Event)
Oct 02 12:34:22 compute-0 nova_compute[256940]: 2025-10-02 12:34:22.529 2 DEBUG nova.compute.manager [None req-b989aac9-8487-41b0-b341-b1b398b58057 - - - - - -] [instance: 9a901125-894a-48b8-8b98-3f5265a97052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:22 compute-0 ceph-mon[73668]: pgmap v1734: 305 pgs: 305 active+clean; 274 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 179 op/s
Oct 02 12:34:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 233 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 118 op/s
Oct 02 12:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/186885800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:24.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:24.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:24 compute-0 ceph-mon[73668]: pgmap v1735: 305 pgs: 305 active+clean; 233 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 118 op/s
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 201 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 390 KiB/s wr, 127 op/s
Oct 02 12:34:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.563 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.563 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.584 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.675 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.675 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.682 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.683 2 INFO nova.compute.claims [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.803 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:25 compute-0 nova_compute[256940]: 2025-10-02 12:34:25.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798561716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.296 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.303 2 DEBUG nova.compute.provider_tree [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:26.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.331 2 DEBUG nova.scheduler.client.report [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.393 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.394 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.436 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.436 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.454 2 INFO nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:26.470 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:26.471 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:26.471 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.478 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.574 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.576 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.576 2 INFO nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Creating image(s)
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.604 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.640 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.673 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.677 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.721 2 DEBUG nova.policy [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c0d7f2725ce3440b9e998e6efddc4628', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35e544965644721a29ebea7dd0cc74e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.755 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.756 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.756 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.757 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.788 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:26 compute-0 nova_compute[256940]: 2025-10-02 12:34:26.793 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:27 compute-0 ceph-mon[73668]: pgmap v1736: 305 pgs: 305 active+clean; 201 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 390 KiB/s wr, 127 op/s
Oct 02 12:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3798561716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Oct 02 12:34:27 compute-0 podman[311931]: 2025-10-02 12:34:27.415191491 +0000 UTC m=+0.075210590 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:34:27 compute-0 podman[311930]: 2025-10-02 12:34:27.43783252 +0000 UTC m=+0.102274284 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.619 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.737 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Successfully created port: cd41c84b-4c34-45f1-85a1-9d7d2f482414 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.745 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] resizing rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.893 2 DEBUG nova.objects.instance [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'migration_context' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.919 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.919 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Ensure instance console log exists: /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.920 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.920 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:27 compute-0 nova_compute[256940]: 2025-10-02 12:34:27.921 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:28.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:34:28
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta']
Oct 02 12:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:34:29 compute-0 ceph-mon[73668]: pgmap v1737: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Oct 02 12:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/629050486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.304 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Successfully updated port: cd41c84b-4c34-45f1-85a1-9d7d2f482414 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.321 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.322 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquired lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.322 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.683 2 DEBUG nova.compute.manager [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-changed-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.684 2 DEBUG nova.compute.manager [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Refreshing instance network info cache due to event network-changed-cd41c84b-4c34-45f1-85a1-9d7d2f482414. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.684 2 DEBUG oslo_concurrency.lockutils [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:29 compute-0 nova_compute[256940]: 2025-10-02 12:34:29.918 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.343 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408455.342605, 588c95ff-a590-46c2-a041-6ee318695ef1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.344 2 INFO nova.compute.manager [-] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] VM Stopped (Lifecycle Event)
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.353 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.353 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.400 2 DEBUG nova.compute.manager [None req-aa02b2f6-152c-418a-bc42-6cd0bc75a183 - - - - - -] [instance: 588c95ff-a590-46c2-a041-6ee318695ef1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.407 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:34:30 compute-0 ceph-mon[73668]: pgmap v1738: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.505 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.506 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.515 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.516 2 INFO nova.compute.claims [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:34:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.646 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:30 compute-0 nova_compute[256940]: 2025-10-02 12:34:30.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231160067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.105 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.114 2 DEBUG nova.compute.provider_tree [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.139 2 DEBUG nova.scheduler.client.report [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.168 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.169 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.250 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.251 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.300 2 INFO nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.371 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 190 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.447 2 DEBUG nova.network.neutron [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updating instance_info_cache with network_info: [{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.497 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.498 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.499 2 INFO nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Creating image(s)
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.530 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.557 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.586 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.591 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.618 2 DEBUG nova.policy [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c0d7f2725ce3440b9e998e6efddc4628', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35e544965644721a29ebea7dd0cc74e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.623 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Releasing lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.624 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance network_info: |[{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.625 2 DEBUG oslo_concurrency.lockutils [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.626 2 DEBUG nova.network.neutron [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Refreshing network info cache for port cd41c84b-4c34-45f1-85a1-9d7d2f482414 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.629 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start _get_guest_xml network_info=[{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.633 2 WARNING nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.640 2 DEBUG nova.virt.libvirt.host [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.641 2 DEBUG nova.virt.libvirt.host [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.651 2 DEBUG nova.virt.libvirt.host [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.652 2 DEBUG nova.virt.libvirt.host [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.654 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.654 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.655 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.655 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.655 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.655 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.655 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.656 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.656 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.656 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.656 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.657 2 DEBUG nova.virt.hardware [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.660 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.691 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.692 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.693 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.693 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.724 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-0 nova_compute[256940]: 2025-10-02 12:34:31.729 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4231160067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.243 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.276 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.283 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2658987936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.793 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Successfully created port: 8a3f0797-397c-4ab5-9436-f371ba35d714 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605827056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.869 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.871 2 DEBUG nova.virt.libvirt.vif [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:26Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.871 2 DEBUG nova.network.os_vif_util [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.872 2 DEBUG nova.network.os_vif_util [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.873 2 DEBUG nova.objects.instance [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'pci_devices' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.908 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <uuid>e4c06c33-d7c3-4a4d-b181-45b9422173ca</uuid>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <name>instance-00000055</name>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-305635351</nova:name>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:34:31</nova:creationTime>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:user uuid="c0d7f2725ce3440b9e998e6efddc4628">tempest-ListServerFiltersTestJSON-495234271-project-member</nova:user>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:project uuid="b35e544965644721a29ebea7dd0cc74e">tempest-ListServerFiltersTestJSON-495234271</nova:project>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <nova:port uuid="cd41c84b-4c34-45f1-85a1-9d7d2f482414">
Oct 02 12:34:32 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <system>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="serial">e4c06c33-d7c3-4a4d-b181-45b9422173ca</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="uuid">e4c06c33-d7c3-4a4d-b181-45b9422173ca</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </system>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <os>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </os>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <features>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </features>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk">
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config">
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:a7:26:6c"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <target dev="tapcd41c84b-4c"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/console.log" append="off"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <video>
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </video>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:34:32 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:34:32 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:34:32 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:34:32 compute-0 nova_compute[256940]: </domain>
Oct 02 12:34:32 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.910 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Preparing to wait for external event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.911 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.911 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.911 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.912 2 DEBUG nova.virt.libvirt.vif [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:26Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.913 2 DEBUG nova.network.os_vif_util [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.914 2 DEBUG nova.network.os_vif_util [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.914 2 DEBUG os_vif [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd41c84b-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd41c84b-4c, col_values=(('external_ids', {'iface-id': 'cd41c84b-4c34-45f1-85a1-9d7d2f482414', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:26:6c', 'vm-uuid': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 NetworkManager[44981]: <info>  [1759408472.9248] manager: (tapcd41c84b-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.933 2 INFO os_vif [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c')
Oct 02 12:34:32 compute-0 nova_compute[256940]: 2025-10-02 12:34:32.936 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-0 ceph-mon[73668]: pgmap v1739: 305 pgs: 305 active+clean; 190 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Oct 02 12:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1778796044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1785355022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2658987936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/605827056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1293371938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.171 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] resizing rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.267 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.267 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.267 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No VIF found with MAC fa:16:3e:a7:26:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.268 2 INFO nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Using config drive
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.302 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.549 2 DEBUG nova.network.neutron [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updated VIF entry in instance network info cache for port cd41c84b-4c34-45f1-85a1-9d7d2f482414. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.549 2 DEBUG nova.network.neutron [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updating instance_info_cache with network_info: [{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.638 2 DEBUG oslo_concurrency.lockutils [req-2251fae4-4ba9-46ee-82da-aa1b087ef524 req-f48444e2-aa2a-4044-a0fa-8088484503df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.658 2 DEBUG nova.objects.instance [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'migration_context' on Instance uuid 72600c3a-7fa7-4179-905f-f02e91b0efc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.673 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.674 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Ensure instance console log exists: /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.675 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.675 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:33 compute-0 nova_compute[256940]: 2025-10-02 12:34:33.676 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4020740269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.579 2 INFO nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Creating config drive at /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.584 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv6wi2i1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:34.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.725 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv6wi2i1" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.759 2 DEBUG nova.storage.rbd_utils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.764 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.909 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Successfully updated port: 8a3f0797-397c-4ab5-9436-f371ba35d714 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.943 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.943 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquired lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:34 compute-0 nova_compute[256940]: 2025-10-02 12:34:34.944 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.006 2 DEBUG nova.compute.manager [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-changed-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.007 2 DEBUG nova.compute.manager [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Refreshing instance network info cache due to event network-changed-8a3f0797-397c-4ab5-9436-f371ba35d714. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.007 2 DEBUG oslo_concurrency.lockutils [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.149 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 247 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.2 MiB/s wr, 184 op/s
Oct 02 12:34:35 compute-0 ceph-mon[73668]: pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Oct 02 12:34:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3308431859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.724 2 DEBUG oslo_concurrency.processutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.961s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.725 2 INFO nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Deleting local config drive /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/disk.config because it was imported into RBD.
Oct 02 12:34:35 compute-0 kernel: tapcd41c84b-4c: entered promiscuous mode
Oct 02 12:34:35 compute-0 NetworkManager[44981]: <info>  [1759408475.8239] manager: (tapcd41c84b-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:35 compute-0 ovn_controller[148123]: 2025-10-02T12:34:35Z|00348|binding|INFO|Claiming lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 for this chassis.
Oct 02 12:34:35 compute-0 ovn_controller[148123]: 2025-10-02T12:34:35Z|00349|binding|INFO|cd41c84b-4c34-45f1-85a1-9d7d2f482414: Claiming fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.847 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.851 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a bound to our chassis
Oct 02 12:34:35 compute-0 systemd-udevd[312370]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.858 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:34:35 compute-0 NetworkManager[44981]: <info>  [1759408475.8731] device (tapcd41c84b-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:35 compute-0 NetworkManager[44981]: <info>  [1759408475.8748] device (tapcd41c84b-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:35 compute-0 systemd-machined[210927]: New machine qemu-38-instance-00000055.
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.874 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[359eaac1-2ca0-4a2f-9159-a52d0fc9321c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.876 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4dd1e489-91 in ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.877 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4dd1e489-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.877 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[14db9f4a-4b22-4f26-8350-11d424838a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3ebcbb-5371-4af4-a8ba-ecd4bba0cac8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 ovn_controller[148123]: 2025-10-02T12:34:35Z|00350|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 ovn-installed in OVS
Oct 02 12:34:35 compute-0 ovn_controller[148123]: 2025-10-02T12:34:35Z|00351|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 up in Southbound
Oct 02 12:34:35 compute-0 nova_compute[256940]: 2025-10-02 12:34:35.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:35 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000055.
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.894 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[80fe55eb-c864-4739-bc1a-938b3ee5dc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.908 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c5711c97-25bc-45f9-9164-d58ae7912a7c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.948 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ff54b717-de76-45dd-90bd-760d9f92b754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:35 compute-0 systemd-udevd[312374]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:35 compute-0 NetworkManager[44981]: <info>  [1759408475.9606] manager: (tap4dd1e489-90): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Oct 02 12:34:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:35.961 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[243df3df-7a5b-434e-8380-18aa9c3c1db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.004 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[68ec86ee-0956-453a-9580-0ec1e9a27ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.009 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4335b1-2fc9-4011-8d4d-cdcb4ccca231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 NetworkManager[44981]: <info>  [1759408476.0420] device (tap4dd1e489-90): carrier: link connected
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.054 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9d0b16-5f61-420e-ba5d-48e169ab7bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.082 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22c7f54a-6eba-4fbe-bc9e-4e1a203550c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312404, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.114 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8419015d-ae74-4713-908d-8badfa54291e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:9264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637152, 'tstamp': 637152}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312405, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc5a159-4597-4b1b-8c43-67cc6adce1ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312406, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.186 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff240d7b-f4c4-4b52-a391-6a29ff14d179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.269 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[15b8f4a1-dd89-4b97-b51a-383ee513ba15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.271 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.272 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.273 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-0 nova_compute[256940]: 2025-10-02 12:34:36.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-0 NetworkManager[44981]: <info>  [1759408476.2762] manager: (tap4dd1e489-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 02 12:34:36 compute-0 kernel: tap4dd1e489-90: entered promiscuous mode
Oct 02 12:34:36 compute-0 nova_compute[256940]: 2025-10-02 12:34:36.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.280 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-0 nova_compute[256940]: 2025-10-02 12:34:36.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-0 nova_compute[256940]: 2025-10-02 12:34:36.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-0 ovn_controller[148123]: 2025-10-02T12:34:36Z|00352|binding|INFO|Releasing lport a25f83dc-1cb4-467b-80c7-496d6a4bfac5 from this chassis (sb_readonly=0)
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.283 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.296 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e5dedf-6b7e-4be8-a4db-43ae644801fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.297 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:34:36 compute-0 nova_compute[256940]: 2025-10-02 12:34:36.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:36.298 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'env', 'PROCESS_TAG=haproxy-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4dd1e489-9cc3-4420-8577-3a250b110c9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:34:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:36 compute-0 ceph-mon[73668]: pgmap v1741: 305 pgs: 305 active+clean; 247 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.2 MiB/s wr, 184 op/s
Oct 02 12:34:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3443077683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:36 compute-0 podman[312439]: 2025-10-02 12:34:36.663257494 +0000 UTC m=+0.025159146 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.009 2 DEBUG nova.network.neutron [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Updating instance_info_cache with network_info: [{"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.052 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Releasing lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.052 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Instance network_info: |[{"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.053 2 DEBUG oslo_concurrency.lockutils [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.053 2 DEBUG nova.network.neutron [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Refreshing network info cache for port 8a3f0797-397c-4ab5-9436-f371ba35d714 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.057 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Start _get_guest_xml network_info=[{"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.064 2 WARNING nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.072 2 DEBUG nova.virt.libvirt.host [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.073 2 DEBUG nova.virt.libvirt.host [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.079 2 DEBUG nova.virt.libvirt.host [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.079 2 DEBUG nova.virt.libvirt.host [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.081 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.081 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.081 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.082 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.082 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.082 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.082 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.083 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.083 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.084 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.084 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.084 2 DEBUG nova.virt.hardware [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.087 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:37 compute-0 podman[312439]: 2025-10-02 12:34:37.094167394 +0000 UTC m=+0.456069026 container create acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:34:37 compute-0 sudo[312452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:37 compute-0 sudo[312452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:37 compute-0 sudo[312452]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:37 compute-0 sudo[312478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:37 compute-0 sudo[312478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:37 compute-0 sudo[312478]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:37 compute-0 systemd[1]: Started libpod-conmon-acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3.scope.
Oct 02 12:34:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52990e5012d0a0cd70ed0bbfc7e35fc63f289de0cdc92bcbca448b22ef6fd920/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 259 op/s
Oct 02 12:34:37 compute-0 podman[312439]: 2025-10-02 12:34:37.457993767 +0000 UTC m=+0.819895429 container init acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:34:37 compute-0 podman[312439]: 2025-10-02 12:34:37.464323972 +0000 UTC m=+0.826225604 container start acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:34:37 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [NOTICE]   (312569) : New worker (312572) forked
Oct 02 12:34:37 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [NOTICE]   (312569) : Loading success.
Oct 02 12:34:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481265486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.555 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.588 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.604 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.635 2 DEBUG nova.compute.manager [req-f93fa0e5-da7e-45ff-b459-95e212d5b198 req-9d912bdf-d1f4-4484-8fb8-c25e6270b94f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.635 2 DEBUG oslo_concurrency.lockutils [req-f93fa0e5-da7e-45ff-b459-95e212d5b198 req-9d912bdf-d1f4-4484-8fb8-c25e6270b94f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.636 2 DEBUG oslo_concurrency.lockutils [req-f93fa0e5-da7e-45ff-b459-95e212d5b198 req-9d912bdf-d1f4-4484-8fb8-c25e6270b94f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.636 2 DEBUG oslo_concurrency.lockutils [req-f93fa0e5-da7e-45ff-b459-95e212d5b198 req-9d912bdf-d1f4-4484-8fb8-c25e6270b94f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.636 2 DEBUG nova.compute.manager [req-f93fa0e5-da7e-45ff-b459-95e212d5b198 req-9d912bdf-d1f4-4484-8fb8-c25e6270b94f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Processing event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3481265486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.943 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408477.9428113, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.943 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Started (Lifecycle Event)
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.946 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.949 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.952 2 INFO nova.virt.libvirt.driver [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance spawned successfully.
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.952 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.976 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.983 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.987 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.988 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.989 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.989 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.990 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-0 nova_compute[256940]: 2025-10-02 12:34:37.991 2 DEBUG nova.virt.libvirt.driver [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299077736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.064 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.066 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408477.9436793, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.066 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Paused (Lifecycle Event)
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.067 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.069 2 DEBUG nova.virt.libvirt.vif [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1202112014',display_name='tempest-ListServerFiltersTestJSON-instance-1202112014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1202112014',id=87,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-4hy4cijr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=72600c3a-7fa7-4179-905f-f02e91b0efc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.069 2 DEBUG nova.network.os_vif_util [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.070 2 DEBUG nova.network.os_vif_util [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.072 2 DEBUG nova.objects.instance [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'pci_devices' on Instance uuid 72600c3a-7fa7-4179-905f-f02e91b0efc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.131 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.133 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <uuid>72600c3a-7fa7-4179-905f-f02e91b0efc0</uuid>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <name>instance-00000057</name>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1202112014</nova:name>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:34:37</nova:creationTime>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:user uuid="c0d7f2725ce3440b9e998e6efddc4628">tempest-ListServerFiltersTestJSON-495234271-project-member</nova:user>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:project uuid="b35e544965644721a29ebea7dd0cc74e">tempest-ListServerFiltersTestJSON-495234271</nova:project>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <nova:port uuid="8a3f0797-397c-4ab5-9436-f371ba35d714">
Oct 02 12:34:38 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <system>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="serial">72600c3a-7fa7-4179-905f-f02e91b0efc0</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="uuid">72600c3a-7fa7-4179-905f-f02e91b0efc0</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </system>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <os>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </os>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <features>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </features>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/72600c3a-7fa7-4179-905f-f02e91b0efc0_disk">
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config">
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:34:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:49:0c:74"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <target dev="tap8a3f0797-39"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/console.log" append="off"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <video>
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </video>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:34:38 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:34:38 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:34:38 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:34:38 compute-0 nova_compute[256940]: </domain>
Oct 02 12:34:38 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.139 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Preparing to wait for external event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.139 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.139 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.140 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.140 2 DEBUG nova.virt.libvirt.vif [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1202112014',display_name='tempest-ListServerFiltersTestJSON-instance-1202112014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1202112014',id=87,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-4hy4cijr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=72600c3a-7fa7-4179-905f-f02e91b0efc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.141 2 DEBUG nova.network.os_vif_util [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.142 2 DEBUG nova.network.os_vif_util [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.142 2 DEBUG os_vif [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.147 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408477.948937, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.147 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Resumed (Lifecycle Event)
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a3f0797-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a3f0797-39, col_values=(('external_ids', {'iface-id': '8a3f0797-397c-4ab5-9436-f371ba35d714', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:0c:74', 'vm-uuid': '72600c3a-7fa7-4179-905f-f02e91b0efc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:38 compute-0 NetworkManager[44981]: <info>  [1759408478.1541] manager: (tap8a3f0797-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.163 2 INFO os_vif [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39')
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.176 2 INFO nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Took 11.60 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.177 2 DEBUG nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.229 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.234 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.259 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.333 2 INFO nova.compute.manager [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Took 12.70 seconds to build instance.
Oct 02 12:34:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.361 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.362 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.362 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No VIF found with MAC fa:16:3e:49:0c:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.363 2 INFO nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Using config drive
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.393 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:38 compute-0 nova_compute[256940]: 2025-10-02 12:34:38.403 2 DEBUG oslo_concurrency.lockutils [None req-8d9019f4-a64c-46ff-b209-307c06b4ca1b c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:38 compute-0 ceph-mon[73668]: pgmap v1742: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 259 op/s
Oct 02 12:34:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3299077736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1600688275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.095 2 DEBUG nova.network.neutron [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Updated VIF entry in instance network info cache for port 8a3f0797-397c-4ab5-9436-f371ba35d714. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.095 2 DEBUG nova.network.neutron [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Updating instance_info_cache with network_info: [{"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.123 2 DEBUG oslo_concurrency.lockutils [req-2d723a54-6a42-4826-94ee-880b82d26feb req-16c43563-1097-49b0-bb4a-f97a509856f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-72600c3a-7fa7-4179-905f-f02e91b0efc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.294 2 INFO nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Creating config drive at /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.299 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14vzlv_d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.8 MiB/s wr, 200 op/s
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.453 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14vzlv_d" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.491 2 DEBUG nova.storage.rbd_utils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.496 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.720 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.721 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.721 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.722 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.722 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:39 compute-0 nova_compute[256940]: 2025-10-02 12:34:39.722 2 WARNING nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state None.
Oct 02 12:34:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1102977573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007657645868230039 of space, bias 1.0, pg target 2.297293760469012 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:34:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:34:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.703 2 DEBUG oslo_concurrency.processutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config 72600c3a-7fa7-4179-905f-f02e91b0efc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.704 2 INFO nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Deleting local config drive /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0/disk.config because it was imported into RBD.
Oct 02 12:34:40 compute-0 kernel: tap8a3f0797-39: entered promiscuous mode
Oct 02 12:34:40 compute-0 NetworkManager[44981]: <info>  [1759408480.7589] manager: (tap8a3f0797-39): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct 02 12:34:40 compute-0 ovn_controller[148123]: 2025-10-02T12:34:40Z|00353|binding|INFO|Claiming lport 8a3f0797-397c-4ab5-9436-f371ba35d714 for this chassis.
Oct 02 12:34:40 compute-0 ovn_controller[148123]: 2025-10-02T12:34:40Z|00354|binding|INFO|8a3f0797-397c-4ab5-9436-f371ba35d714: Claiming fa:16:3e:49:0c:74 10.100.0.12
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.772 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:0c:74 10.100.0.12'], port_security=['fa:16:3e:49:0c:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72600c3a-7fa7-4179-905f-f02e91b0efc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8a3f0797-397c-4ab5-9436-f371ba35d714) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.774 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8a3f0797-397c-4ab5-9436-f371ba35d714 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a bound to our chassis
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.776 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:34:40 compute-0 ovn_controller[148123]: 2025-10-02T12:34:40Z|00355|binding|INFO|Setting lport 8a3f0797-397c-4ab5-9436-f371ba35d714 ovn-installed in OVS
Oct 02 12:34:40 compute-0 ovn_controller[148123]: 2025-10-02T12:34:40Z|00356|binding|INFO|Setting lport 8a3f0797-397c-4ab5-9436-f371ba35d714 up in Southbound
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.796 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[671075ad-d8fd-4bdd-888a-d3d176fb3769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 systemd-udevd[312699]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:40 compute-0 NetworkManager[44981]: <info>  [1759408480.8190] device (tap8a3f0797-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:40 compute-0 NetworkManager[44981]: <info>  [1759408480.8199] device (tap8a3f0797-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:40 compute-0 systemd-machined[210927]: New machine qemu-39-instance-00000057.
Oct 02 12:34:40 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000057.
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.842 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c2c154-0aae-45cf-8220-b2ae7992dd9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.845 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[06188ee9-a4f2-4966-ad98-78c41696ab64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.882 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[19937361-8d6d-43f1-bc7a-165b8060d2d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.938 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd927b3-605f-4258-ac1e-851b0fe4f2ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312713, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.959 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[96ecfcf7-21b1-4702-82df-10c21e711e32]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637170, 'tstamp': 637170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312714, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637174, 'tstamp': 637174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312714, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.961 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 nova_compute[256940]: 2025-10-02 12:34:40.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.965 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.965 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.966 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:40.966 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 11 MiB/s wr, 332 op/s
Oct 02 12:34:41 compute-0 ceph-mon[73668]: pgmap v1743: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.8 MiB/s wr, 200 op/s
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.105 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408482.1042821, 72600c3a-7fa7-4179-905f-f02e91b0efc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.108 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] VM Started (Lifecycle Event)
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.155 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.160 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408482.1045132, 72600c3a-7fa7-4179-905f-f02e91b0efc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.161 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] VM Paused (Lifecycle Event)
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.187 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.191 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:42 compute-0 nova_compute[256940]: 2025-10-02 12:34:42.213 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:42.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-0 ceph-mon[73668]: pgmap v1744: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 11 MiB/s wr, 332 op/s
Oct 02 12:34:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1368115747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.1 MiB/s wr, 359 op/s
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.980 2 DEBUG nova.compute.manager [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.981 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.981 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.981 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.982 2 DEBUG nova.compute.manager [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Processing event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.982 2 DEBUG nova.compute.manager [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.982 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.982 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.983 2 DEBUG oslo_concurrency.lockutils [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.983 2 DEBUG nova.compute.manager [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] No waiting events found dispatching network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.983 2 WARNING nova.compute.manager [req-98728a0e-38de-4488-9533-794735518ad2 req-8e013bf6-5d90-4d4f-a04e-95ae21b5950e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received unexpected event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 for instance with vm_state building and task_state spawning.
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.984 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.988 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408483.9882267, 72600c3a-7fa7-4179-905f-f02e91b0efc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.988 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] VM Resumed (Lifecycle Event)
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.990 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.995 2 INFO nova.virt.libvirt.driver [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Instance spawned successfully.
Oct 02 12:34:43 compute-0 nova_compute[256940]: 2025-10-02 12:34:43.995 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.018 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.026 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.032 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.032 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.033 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.033 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.034 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.034 2 DEBUG nova.virt.libvirt.driver [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.079 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.125 2 INFO nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Took 12.63 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.126 2 DEBUG nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.207 2 INFO nova.compute.manager [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Took 13.73 seconds to build instance.
Oct 02 12:34:44 compute-0 nova_compute[256940]: 2025-10-02 12:34:44.268 2 DEBUG oslo_concurrency.lockutils [None req-97d9589f-982b-46c9-824b-a582b013bc4e c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/810285189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:44 compute-0 ceph-mon[73668]: pgmap v1745: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.1 MiB/s wr, 359 op/s
Oct 02 12:34:44 compute-0 podman[312758]: 2025-10-02 12:34:44.444720261 +0000 UTC m=+0.105438256 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:34:44 compute-0 podman[312759]: 2025-10-02 12:34:44.472269529 +0000 UTC m=+0.127982024 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:34:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 376 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.9 MiB/s wr, 415 op/s
Oct 02 12:34:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:45 compute-0 nova_compute[256940]: 2025-10-02 12:34:45.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:34:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:34:46 compute-0 ceph-mon[73668]: pgmap v1746: 305 pgs: 305 active+clean; 376 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.9 MiB/s wr, 415 op/s
Oct 02 12:34:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.6 MiB/s wr, 476 op/s
Oct 02 12:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2709834565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:48 compute-0 nova_compute[256940]: 2025-10-02 12:34:48.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:49 compute-0 ceph-mon[73668]: pgmap v1747: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.6 MiB/s wr, 476 op/s
Oct 02 12:34:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2943096415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1981752576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 905 KiB/s wr, 380 op/s
Oct 02 12:34:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:50 compute-0 nova_compute[256940]: 2025-10-02 12:34:50.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:51 compute-0 nova_compute[256940]: 2025-10-02 12:34:51.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:51 compute-0 nova_compute[256940]: 2025-10-02 12:34:51.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:34:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 358 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 1.3 MiB/s wr, 457 op/s
Oct 02 12:34:51 compute-0 ceph-mon[73668]: pgmap v1748: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 905 KiB/s wr, 380 op/s
Oct 02 12:34:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1478600791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3763843049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:52.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.819 2 DEBUG oslo_concurrency.lockutils [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.820 2 DEBUG oslo_concurrency.lockutils [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.821 2 DEBUG nova.compute.manager [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.826 2 DEBUG nova.compute.manager [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.827 2 DEBUG nova.objects.instance [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'flavor' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:52 compute-0 nova_compute[256940]: 2025-10-02 12:34:52.912 2 DEBUG nova.virt.libvirt.driver [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:34:52 compute-0 ceph-mon[73668]: pgmap v1749: 305 pgs: 305 active+clean; 358 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 1.3 MiB/s wr, 457 op/s
Oct 02 12:34:53 compute-0 nova_compute[256940]: 2025-10-02 12:34:53.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:53 compute-0 nova_compute[256940]: 2025-10-02 12:34:53.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 324 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.0 MiB/s wr, 377 op/s
Oct 02 12:34:54 compute-0 ovn_controller[148123]: 2025-10-02T12:34:54Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:34:54 compute-0 ovn_controller[148123]: 2025-10-02T12:34:54Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.296 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.297 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.297 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.298 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.298 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:54.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:54 compute-0 nova_compute[256940]: 2025-10-02 12:34:54.778 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.009 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.010 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.017 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.018 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.225 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.227 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4228MB free_disk=20.839309692382812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.227 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.228 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 315 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 350 op/s
Oct 02 12:34:55 compute-0 ceph-mon[73668]: pgmap v1750: 305 pgs: 305 active+clean; 324 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.0 MiB/s wr, 377 op/s
Oct 02 12:34:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/143947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.496 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e4c06c33-d7c3-4a4d-b181-45b9422173ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.497 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 72600c3a-7fa7-4179-905f-f02e91b0efc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.497 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.498 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.571 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:55 compute-0 nova_compute[256940]: 2025-10-02 12:34:55.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757785379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 nova_compute[256940]: 2025-10-02 12:34:56.063 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:56 compute-0 nova_compute[256940]: 2025-10-02 12:34:56.070 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:56 compute-0 nova_compute[256940]: 2025-10-02 12:34:56.089 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:56 compute-0 nova_compute[256940]: 2025-10-02 12:34:56.116 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:34:56 compute-0 nova_compute[256940]: 2025-10-02 12:34:56.117 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:34:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:34:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:56.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1320649666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73668]: pgmap v1751: 305 pgs: 305 active+clean; 315 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 350 op/s
Oct 02 12:34:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/43636201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2693978985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3757785379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:57 compute-0 sudo[312854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:57 compute-0 sudo[312854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:57 compute-0 sudo[312854]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:34:57 compute-0 sudo[312879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:57 compute-0 sudo[312879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:57 compute-0 sudo[312879]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:58 compute-0 nova_compute[256940]: 2025-10-02 12:34:58.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:58.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:58 compute-0 podman[312904]: 2025-10-02 12:34:58.433295516 +0000 UTC m=+0.092007617 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:34:58 compute-0 podman[312905]: 2025-10-02 12:34:58.447087045 +0000 UTC m=+0.107750737 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:34:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:34:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:58.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:58 compute-0 nova_compute[256940]: 2025-10-02 12:34:58.953 2 INFO nova.virt.libvirt.driver [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance shutdown successfully after 6 seconds.
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.116 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:34:59 compute-0 ovn_controller[148123]: 2025-10-02T12:34:59Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:0c:74 10.100.0.12
Oct 02 12:34:59 compute-0 ovn_controller[148123]: 2025-10-02T12:34:59Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:0c:74 10.100.0.12
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.238 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.239 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:59 compute-0 kernel: tapcd41c84b-4c (unregistering): left promiscuous mode
Oct 02 12:34:59 compute-0 NetworkManager[44981]: <info>  [1759408499.2794] device (tapcd41c84b-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-0 ovn_controller[148123]: 2025-10-02T12:34:59Z|00357|binding|INFO|Releasing lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 from this chassis (sb_readonly=0)
Oct 02 12:34:59 compute-0 ovn_controller[148123]: 2025-10-02T12:34:59Z|00358|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 down in Southbound
Oct 02 12:34:59 compute-0 ovn_controller[148123]: 2025-10-02T12:34:59Z|00359|binding|INFO|Removing iface tapcd41c84b-4c ovn-installed in OVS
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 12:34:59 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000055.scope: Consumed 15.726s CPU time.
Oct 02 12:34:59 compute-0 systemd-machined[210927]: Machine qemu-38-instance-00000055 terminated.
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.365 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.367 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.368 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.391 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d152f4a-588b-4491-a63a-9e3d2fd0aa9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.406 2 INFO nova.virt.libvirt.driver [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance destroyed successfully.
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.407 2 DEBUG nova.objects.instance [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'numa_topology' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.8 MiB/s wr, 256 op/s
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.432 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[00081162-4544-460f-aff2-fa641d7b5f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 ceph-mon[73668]: pgmap v1752: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.438 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b4959fe7-743f-4226-9739-a73f72624631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.484 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ac4713-4215-4554-a3fa-15a1da4459e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.510 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a260a1cf-0119-4318-b440-7b3a535e0c85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312969, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.529 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6f677ed9-0c9d-4b45-9c57-306c885009e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637170, 'tstamp': 637170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312970, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637174, 'tstamp': 637174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312970, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.531 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.539 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:34:59.540 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.558 2 DEBUG nova.compute.manager [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.639 2 DEBUG oslo_concurrency.lockutils [None req-d7b1b8e0-f3fc-4c5e-8c44-f755d84de8aa c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 6.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.843 2 DEBUG nova.compute.manager [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.844 2 DEBUG oslo_concurrency.lockutils [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.845 2 DEBUG oslo_concurrency.lockutils [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.845 2 DEBUG oslo_concurrency.lockutils [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.846 2 DEBUG nova.compute.manager [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:59 compute-0 nova_compute[256940]: 2025-10-02 12:34:59.847 2 WARNING nova.compute.manager [req-91b0ec1b-7f1f-48b1-96b3-ad3e767f150c req-0b08386f-4ab9-4ecd-bf0e-b55b4234b8a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state stopped and task_state None.
Oct 02 12:35:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:00.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:00.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:00 compute-0 nova_compute[256940]: 2025-10-02 12:35:00.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-0 ceph-mon[73668]: pgmap v1753: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.8 MiB/s wr, 256 op/s
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.115 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'flavor' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.148 2 DEBUG oslo_concurrency.lockutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 345 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.510 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.511 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.538 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.623 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.624 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.635 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.636 2 INFO nova.compute.claims [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:35:01 compute-0 nova_compute[256940]: 2025-10-02 12:35:01.806 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.076 2 DEBUG nova.compute.manager [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.077 2 DEBUG oslo_concurrency.lockutils [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.077 2 DEBUG oslo_concurrency.lockutils [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.077 2 DEBUG oslo_concurrency.lockutils [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.077 2 DEBUG nova.compute.manager [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.077 2 WARNING nova.compute.manager [req-5b95bb36-f547-47c9-9c7e-e3a00040a592 req-a5eccde8-b960-48fd-aedb-ec649a1e8a70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:35:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:02.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318539499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.418 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.427 2 DEBUG nova.compute.provider_tree [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.534 2 DEBUG nova.scheduler.client.report [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.625 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updating instance_info_cache with network_info: [{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:02.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.726 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.726 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.727 2 DEBUG oslo_concurrency.lockutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquired lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.728 2 DEBUG nova.network.neutron [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.729 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'info_cache' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.731 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.732 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.788 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.790 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.966 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.967 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:35:02 compute-0 nova_compute[256940]: 2025-10-02 12:35:02.988 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.027 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.148 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.150 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.151 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Creating image(s)
Oct 02 12:35:03 compute-0 ceph-mon[73668]: pgmap v1754: 305 pgs: 305 active+clean; 345 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Oct 02 12:35:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3510125751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1318539499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.408 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 353 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.603 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.638 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.644 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.687 2 DEBUG nova.policy [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34a9da53e0cc446593d0cea2f498c53e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.717 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.718 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.719 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.719 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.750 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:03 compute-0 nova_compute[256940]: 2025-10-02 12:35:03.754 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3aa671dd-41aa-4333-9883-6df66314117e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:04 compute-0 ceph-mon[73668]: pgmap v1755: 305 pgs: 305 active+clean; 353 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Oct 02 12:35:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:04.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:04 compute-0 nova_compute[256940]: 2025-10-02 12:35:04.869 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Successfully created port: c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:35:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:35:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:35:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:05.299 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:05.300 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 356 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 204 op/s
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.594 2 DEBUG nova.network.neutron [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updating instance_info_cache with network_info: [{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.618 2 DEBUG oslo_concurrency.lockutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Releasing lock "refresh_cache-e4c06c33-d7c3-4a4d-b181-45b9422173ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.645 2 INFO nova.virt.libvirt.driver [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance destroyed successfully.
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.646 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'numa_topology' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.674 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'resources' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.691 2 DEBUG nova.virt.libvirt.vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:59Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.692 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.692 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.693 2 DEBUG os_vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd41c84b-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.702 2 INFO os_vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c')
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.714 2 DEBUG nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start _get_guest_xml network_info=[{"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.719 2 WARNING nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.734 2 DEBUG nova.virt.libvirt.host [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.735 2 DEBUG nova.virt.libvirt.host [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.741 2 DEBUG nova.virt.libvirt.host [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.742 2 DEBUG nova.virt.libvirt.host [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.744 2 DEBUG nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.744 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.745 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.745 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.746 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.746 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.747 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.747 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.748 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.748 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.749 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.749 2 DEBUG nova.virt.hardware [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.750 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'vcpu_model' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.771 2 DEBUG oslo_concurrency.processutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:05 compute-0 nova_compute[256940]: 2025-10-02 12:35:05.805 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3aa671dd-41aa-4333-9883-6df66314117e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 02 12:35:06 compute-0 nova_compute[256940]: 2025-10-02 12:35:06.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:06 compute-0 nova_compute[256940]: 2025-10-02 12:35:06.148 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] resizing rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:35:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:35:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:35:06 compute-0 nova_compute[256940]: 2025-10-02 12:35:06.414 2 DEBUG oslo_concurrency.processutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:06 compute-0 nova_compute[256940]: 2025-10-02 12:35:06.486 2 DEBUG oslo_concurrency.processutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 02 12:35:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 02 12:35:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:06.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751850354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:06 compute-0 nova_compute[256940]: 2025-10-02 12:35:06.988 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Successfully updated port: c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.001 2 DEBUG oslo_concurrency.processutils [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.003 2 DEBUG nova.virt.libvirt.vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:59Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.003 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.004 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.005 2 DEBUG nova.objects.instance [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'pci_devices' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.069 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.070 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquired lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.070 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.076 2 DEBUG nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <uuid>e4c06c33-d7c3-4a4d-b181-45b9422173ca</uuid>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <name>instance-00000055</name>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-305635351</nova:name>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:35:05</nova:creationTime>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:user uuid="c0d7f2725ce3440b9e998e6efddc4628">tempest-ListServerFiltersTestJSON-495234271-project-member</nova:user>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:project uuid="b35e544965644721a29ebea7dd0cc74e">tempest-ListServerFiltersTestJSON-495234271</nova:project>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <nova:port uuid="cd41c84b-4c34-45f1-85a1-9d7d2f482414">
Oct 02 12:35:07 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <system>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="serial">e4c06c33-d7c3-4a4d-b181-45b9422173ca</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="uuid">e4c06c33-d7c3-4a4d-b181-45b9422173ca</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </system>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <os>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </os>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <features>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </features>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk">
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e4c06c33-d7c3-4a4d-b181-45b9422173ca_disk.config">
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:35:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:a7:26:6c"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <target dev="tapcd41c84b-4c"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca/console.log" append="off"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <video>
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </video>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:35:07 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:35:07 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:35:07 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:35:07 compute-0 nova_compute[256940]: </domain>
Oct 02 12:35:07 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.078 2 DEBUG nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.078 2 DEBUG nova.virt.libvirt.driver [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.079 2 DEBUG nova.virt.libvirt.vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:59Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.079 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.080 2 DEBUG nova.network.os_vif_util [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.081 2 DEBUG os_vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd41c84b-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd41c84b-4c, col_values=(('external_ids', {'iface-id': 'cd41c84b-4c34-45f1-85a1-9d7d2f482414', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:26:6c', 'vm-uuid': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 NetworkManager[44981]: <info>  [1759408507.0909] manager: (tapcd41c84b-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.097 2 INFO os_vif [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c')
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.116 2 DEBUG nova.compute.manager [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-changed-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.117 2 DEBUG nova.compute.manager [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Refreshing instance network info cache due to event network-changed-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.117 2 DEBUG oslo_concurrency.lockutils [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:07 compute-0 kernel: tapcd41c84b-4c: entered promiscuous mode
Oct 02 12:35:07 compute-0 NetworkManager[44981]: <info>  [1759408507.2792] manager: (tapcd41c84b-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Oct 02 12:35:07 compute-0 ovn_controller[148123]: 2025-10-02T12:35:07Z|00360|binding|INFO|Claiming lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 for this chassis.
Oct 02 12:35:07 compute-0 ovn_controller[148123]: 2025-10-02T12:35:07Z|00361|binding|INFO|cd41c84b-4c34-45f1-85a1-9d7d2f482414: Claiming fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 ovn_controller[148123]: 2025-10-02T12:35:07Z|00362|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 ovn-installed in OVS
Oct 02 12:35:07 compute-0 ovn_controller[148123]: 2025-10-02T12:35:07Z|00363|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 up in Southbound
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.298 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.299 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a bound to our chassis
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.301 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 systemd-udevd[313224]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:07 compute-0 systemd-machined[210927]: New machine qemu-40-instance-00000055.
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.328 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[36c89403-8225-4a2e-916d-ed38a37d0099]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000055.
Oct 02 12:35:07 compute-0 NetworkManager[44981]: <info>  [1759408507.3487] device (tapcd41c84b-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:07 compute-0 NetworkManager[44981]: <info>  [1759408507.3494] device (tapcd41c84b-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.378 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0f717272-d797-48bd-8dce-9c3c49712b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.387 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7146202b-1c3c-4fb5-bd8d-0a61a2414391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.396 2 DEBUG nova.objects.instance [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'migration_context' on Instance uuid 3aa671dd-41aa-4333-9883-6df66314117e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.424 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[50ac5591-3920-498b-89a7-17e586606268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.440 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7c22151d-28e0-4c70-99bd-6fef6db0302f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 10, 'rx_bytes': 1042, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 10, 'rx_bytes': 1042, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313254, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.462 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d4be1777-c2bf-4365-b40a-3ec4f4e3f3a9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637170, 'tstamp': 637170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313256, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637174, 'tstamp': 637174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313256, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.465 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.470 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.470 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.471 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:07.472 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.472 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.473 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Ensure instance console log exists: /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.473 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.474 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.474 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.644 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:35:07 compute-0 ceph-mon[73668]: pgmap v1756: 305 pgs: 305 active+clean; 356 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 204 op/s
Oct 02 12:35:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1426181501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-0 ceph-mon[73668]: osdmap e246: 3 total, 3 up, 3 in
Oct 02 12:35:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1751850354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.948 2 DEBUG nova.compute.manager [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.949 2 DEBUG oslo_concurrency.lockutils [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.950 2 DEBUG oslo_concurrency.lockutils [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.950 2 DEBUG oslo_concurrency.lockutils [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.950 2 DEBUG nova.compute.manager [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:07 compute-0 nova_compute[256940]: 2025-10-02 12:35:07.951 2 WARNING nova.compute.manager [req-ca3cf51d-9f17-47c3-976d-69c97875a5a2 req-07440e13-1a27-4861-a8bc-16fa35d7136c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:35:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:08.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.773 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Updating instance_info_cache with network_info: [{"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.822 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Releasing lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.823 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Instance network_info: |[{"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.824 2 DEBUG oslo_concurrency.lockutils [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.824 2 DEBUG nova.network.neutron [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Refreshing network info cache for port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.828 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Start _get_guest_xml network_info=[{"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.833 2 WARNING nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.841 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.842 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.848 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.849 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.850 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.851 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.851 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.852 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.852 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.853 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.853 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.854 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.854 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.854 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.855 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.855 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.858 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:08 compute-0 ceph-mon[73668]: pgmap v1758: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.960 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for e4c06c33-d7c3-4a4d-b181-45b9422173ca due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.961 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408508.9599407, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.961 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Resumed (Lifecycle Event)
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.965 2 DEBUG nova.compute.manager [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.968 2 INFO nova.virt.libvirt.driver [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance rebooted successfully.
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.969 2 DEBUG nova.compute.manager [None req-11342e40-4305-4318-8978-8a75142290b2 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.989 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:08 compute-0 nova_compute[256940]: 2025-10-02 12:35:08.996 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.046 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.047 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408508.9616761, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.047 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Started (Lifecycle Event)
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.075 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.079 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947683889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.371 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.402 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.408 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3718680792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.872 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.874 2 DEBUG nova.virt.libvirt.vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-2',id=91,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:03Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=3aa671dd-41aa-4333-9883-6df66314117e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.875 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.875 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.877 2 DEBUG nova.objects.instance [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3aa671dd-41aa-4333-9883-6df66314117e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.907 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <uuid>3aa671dd-41aa-4333-9883-6df66314117e</uuid>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <name>instance-0000005b</name>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:name>tempest-MultipleCreateTestJSON-server-119748352-2</nova:name>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:35:08</nova:creationTime>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:user uuid="34a9da53e0cc446593d0cea2f498c53e">tempest-MultipleCreateTestJSON-1074010337-project-member</nova:user>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:project uuid="ed58e2bfccb04353b29ae652cfed3546">tempest-MultipleCreateTestJSON-1074010337</nova:project>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <nova:port uuid="c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22">
Oct 02 12:35:09 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <system>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="serial">3aa671dd-41aa-4333-9883-6df66314117e</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="uuid">3aa671dd-41aa-4333-9883-6df66314117e</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </system>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <os>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </os>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <features>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </features>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3aa671dd-41aa-4333-9883-6df66314117e_disk">
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3aa671dd-41aa-4333-9883-6df66314117e_disk.config">
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:35:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:4a:71:f2"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <target dev="tapc23d4bbd-ee"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/console.log" append="off"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <video>
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </video>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:35:09 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:35:09 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:35:09 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:35:09 compute-0 nova_compute[256940]: </domain>
Oct 02 12:35:09 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.909 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Preparing to wait for external event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.909 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.909 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.910 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.910 2 DEBUG nova.virt.libvirt.vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-2',id=91,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:03Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=3aa671dd-41aa-4333-9883-6df66314117e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.911 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.911 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.912 2 DEBUG os_vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc23d4bbd-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc23d4bbd-ee, col_values=(('external_ids', {'iface-id': 'c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:71:f2', 'vm-uuid': '3aa671dd-41aa-4333-9883-6df66314117e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-0 NetworkManager[44981]: <info>  [1759408509.9197] manager: (tapc23d4bbd-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-0 nova_compute[256940]: 2025-10-02 12:35:09.925 2 INFO os_vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee')
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.093 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.093 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.093 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No VIF found with MAC fa:16:3e:4a:71:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.094 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Using config drive
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.123 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3932958621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3868317450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/947683889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1976850205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4223741338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3718680792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:10.303 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.711 2 DEBUG nova.network.neutron [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Updated VIF entry in instance network info cache for port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.711 2 DEBUG nova.network.neutron [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Updating instance_info_cache with network_info: [{"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:10.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.764 2 DEBUG oslo_concurrency.lockutils [req-0621c38e-d61a-4fbb-a254-a3c951d745b4 req-feb2957b-369b-4c28-bd53-bdce5d313cc0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3aa671dd-41aa-4333-9883-6df66314117e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.823 2 DEBUG nova.compute.manager [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.823 2 DEBUG oslo_concurrency.lockutils [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.824 2 DEBUG oslo_concurrency.lockutils [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.824 2 DEBUG oslo_concurrency.lockutils [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.824 2 DEBUG nova.compute.manager [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.824 2 WARNING nova.compute.manager [req-ce579147-13a5-4e37-a440-74e581def5b3 req-20e4639b-81ff-43ab-b1a8-fb1213663c7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state None.
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.944 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Creating config drive at /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config
Oct 02 12:35:10 compute-0 nova_compute[256940]: 2025-10-02 12:35:10.949 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqm24nj2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:11 compute-0 nova_compute[256940]: 2025-10-02 12:35:11.091 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqm24nj2n" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:11 compute-0 nova_compute[256940]: 2025-10-02 12:35:11.127 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 3aa671dd-41aa-4333-9883-6df66314117e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:11 compute-0 nova_compute[256940]: 2025-10-02 12:35:11.131 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config 3aa671dd-41aa-4333-9883-6df66314117e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 473 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.2 MiB/s wr, 199 op/s
Oct 02 12:35:11 compute-0 ceph-mon[73668]: pgmap v1759: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:12.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:12 compute-0 nova_compute[256940]: 2025-10-02 12:35:12.631 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config 3aa671dd-41aa-4333-9883-6df66314117e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:12 compute-0 nova_compute[256940]: 2025-10-02 12:35:12.633 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Deleting local config drive /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e/disk.config because it was imported into RBD.
Oct 02 12:35:12 compute-0 NetworkManager[44981]: <info>  [1759408512.6976] manager: (tapc23d4bbd-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct 02 12:35:12 compute-0 kernel: tapc23d4bbd-ee: entered promiscuous mode
Oct 02 12:35:12 compute-0 ovn_controller[148123]: 2025-10-02T12:35:12Z|00364|binding|INFO|Claiming lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for this chassis.
Oct 02 12:35:12 compute-0 ovn_controller[148123]: 2025-10-02T12:35:12Z|00365|binding|INFO|c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22: Claiming fa:16:3e:4a:71:f2 10.100.0.5
Oct 02 12:35:12 compute-0 nova_compute[256940]: 2025-10-02 12:35:12.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:12.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:12 compute-0 systemd-udevd[313437]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:12 compute-0 systemd-machined[210927]: New machine qemu-41-instance-0000005b.
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.748 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:71:f2 10.100.0.5'], port_security=['fa:16:3e:4a:71:f2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aa671dd-41aa-4333-9883-6df66314117e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.749 158104 INFO neutron.agent.ovn.metadata.agent [-] Port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 bound to our chassis
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.751 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:35:12 compute-0 NetworkManager[44981]: <info>  [1759408512.7533] device (tapc23d4bbd-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:12 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-0000005b.
Oct 02 12:35:12 compute-0 NetworkManager[44981]: <info>  [1759408512.7544] device (tapc23d4bbd-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.765 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[070e5df1-d17e-4181-a02c-cd22937ac1c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.766 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap885ece2c-b1 in ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.768 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap885ece2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.768 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[988d4bb5-12a0-4c12-aa0c-68017ab631c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.769 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eb890fab-c897-42f4-a045-75876be6c752]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 nova_compute[256940]: 2025-10-02 12:35:12.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:12 compute-0 ovn_controller[148123]: 2025-10-02T12:35:12Z|00366|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 ovn-installed in OVS
Oct 02 12:35:12 compute-0 ovn_controller[148123]: 2025-10-02T12:35:12Z|00367|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 up in Southbound
Oct 02 12:35:12 compute-0 nova_compute[256940]: 2025-10-02 12:35:12.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.813 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b50279-2853-4431-a31d-b81af91cfb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.837 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[41679a84-3e8e-49e2-8cc6-4d4a98d39c48]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.873 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3b68f1af-94f4-4384-90ef-c0b213b77980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 systemd-udevd[313440]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:12 compute-0 NetworkManager[44981]: <info>  [1759408512.8800] manager: (tap885ece2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[633362b2-9130-49cf-a5dd-6dcdbe17efa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.923 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8eca1d75-b2ad-432c-bcba-6923f32c6e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.927 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a65ab527-94e1-4190-a5b4-6588afed53fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 NetworkManager[44981]: <info>  [1759408512.9569] device (tap885ece2c-b0): carrier: link connected
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.966 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a26986a2-2924-4ba6-9fbc-286e3b667717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:12.986 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[268c210c-57bd-4910-8e87-10be1d31ae1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640844, 'reachable_time': 16999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313471, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.009 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[01375578-9cc2-44ea-b966-666308df6acc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:5893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640844, 'tstamp': 640844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313472, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.038 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a08dcfda-d43c-4fb2-a9f4-3ec4aaad4b73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640844, 'reachable_time': 16999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313473, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.082 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6562dcfd-f313-402e-91aa-ba85f89584fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ceph-mon[73668]: pgmap v1760: 305 pgs: 305 active+clean; 473 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.2 MiB/s wr, 199 op/s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.164 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[86d008c8-4c0b-4a46-8fdc-deeabe33662d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.166 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.166 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.167 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885ece2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:13 compute-0 kernel: tap885ece2c-b0: entered promiscuous mode
Oct 02 12:35:13 compute-0 NetworkManager[44981]: <info>  [1759408513.1699] manager: (tap885ece2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap885ece2c-b0, col_values=(('external_ids', {'iface-id': '24355553-27f6-4ebd-99c0-4f861ce0339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:13 compute-0 ovn_controller[148123]: 2025-10-02T12:35:13Z|00368|binding|INFO|Releasing lport 24355553-27f6-4ebd-99c0-4f861ce0339d from this chassis (sb_readonly=0)
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.190 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.191 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[702a40bf-0d88-4521-89b4-846b33f28cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.192 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:35:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:13.193 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'env', 'PROCESS_TAG=haproxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:35:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.5 MiB/s wr, 197 op/s
Oct 02 12:35:13 compute-0 podman[313547]: 2025-10-02 12:35:13.589232895 +0000 UTC m=+0.032773705 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.804 2 DEBUG nova.compute.manager [req-b61c7e9f-8b0b-4e3e-932b-6edfd0915292 req-f7bb9baf-4f15-4787-9ffa-32aade7a20fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.805 2 DEBUG oslo_concurrency.lockutils [req-b61c7e9f-8b0b-4e3e-932b-6edfd0915292 req-f7bb9baf-4f15-4787-9ffa-32aade7a20fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.806 2 DEBUG oslo_concurrency.lockutils [req-b61c7e9f-8b0b-4e3e-932b-6edfd0915292 req-f7bb9baf-4f15-4787-9ffa-32aade7a20fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.806 2 DEBUG oslo_concurrency.lockutils [req-b61c7e9f-8b0b-4e3e-932b-6edfd0915292 req-f7bb9baf-4f15-4787-9ffa-32aade7a20fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.806 2 DEBUG nova.compute.manager [req-b61c7e9f-8b0b-4e3e-932b-6edfd0915292 req-f7bb9baf-4f15-4787-9ffa-32aade7a20fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Processing event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:35:13 compute-0 podman[313547]: 2025-10-02 12:35:13.904281759 +0000 UTC m=+0.347822549 container create b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.958 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.960 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408513.9580007, 3aa671dd-41aa-4333-9883-6df66314117e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.960 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] VM Started (Lifecycle Event)
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.964 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.969 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Instance spawned successfully.
Oct 02 12:35:13 compute-0 nova_compute[256940]: 2025-10-02 12:35:13.971 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.045 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.050 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.051 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.052 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.052 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.053 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.053 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.058 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:14 compute-0 systemd[1]: Started libpod-conmon-b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f.scope.
Oct 02 12:35:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ca6f1592bef0d63d28fde2281d19be3d3ebd3569ffb997bbbc7abfeda3c7c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:14 compute-0 podman[313547]: 2025-10-02 12:35:14.226328674 +0000 UTC m=+0.669869484 container init b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:14 compute-0 podman[313547]: 2025-10-02 12:35:14.235562214 +0000 UTC m=+0.679103004 container start b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:35:14 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [NOTICE]   (313566) : New worker (313568) forked
Oct 02 12:35:14 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [NOTICE]   (313566) : Loading success.
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.274 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.275 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408513.959713, 3aa671dd-41aa-4333-9883-6df66314117e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.275 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] VM Paused (Lifecycle Event)
Oct 02 12:35:14 compute-0 ceph-mon[73668]: pgmap v1761: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.5 MiB/s wr, 197 op/s
Oct 02 12:35:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:14.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.524 2 INFO nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Took 11.38 seconds to spawn the instance on the hypervisor.
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.525 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.545 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.550 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408513.966499, 3aa671dd-41aa-4333-9883-6df66314117e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.551 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] VM Resumed (Lifecycle Event)
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.657 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.667 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.731 2 INFO nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Took 13.13 seconds to build instance.
Oct 02 12:35:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.919 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:14 compute-0 nova_compute[256940]: 2025-10-02 12:35:14.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:15 compute-0 podman[313578]: 2025-10-02 12:35:15.426230286 +0000 UTC m=+0.064576433 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 270 op/s
Oct 02 12:35:15 compute-0 podman[313579]: 2025-10-02 12:35:15.481898976 +0000 UTC m=+0.112874491 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2851248602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:15 compute-0 nova_compute[256940]: 2025-10-02 12:35:15.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:16 compute-0 sudo[313621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:16 compute-0 sudo[313621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-0 sudo[313621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-0 sudo[313646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:16 compute-0 sudo[313646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-0 sudo[313646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-0 sudo[313671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:16 compute-0 sudo[313671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-0 sudo[313671]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:16.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:16 compute-0 sudo[313696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:35:16 compute-0 sudo[313696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:16 compute-0 sudo[313696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.273 2 DEBUG nova.compute.manager [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.276 2 DEBUG oslo_concurrency.lockutils [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.276 2 DEBUG oslo_concurrency.lockutils [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.276 2 DEBUG oslo_concurrency.lockutils [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.276 2 DEBUG nova.compute.manager [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:17 compute-0 nova_compute[256940]: 2025-10-02 12:35:17.277 2 WARNING nova.compute.manager [req-58620eff-f376-4a12-b686-92ae6d824989 req-29d9aa0e-de68-46a1-bdb6-afbcf873d65d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received unexpected event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with vm_state active and task_state None.
Oct 02 12:35:17 compute-0 ceph-mon[73668]: pgmap v1762: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 270 op/s
Oct 02 12:35:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/265947530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 402 op/s
Oct 02 12:35:17 compute-0 sudo[313753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:17 compute-0 sudo[313753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:17 compute-0 sudo[313753]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 sudo[313778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:17 compute-0 sudo[313778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:17 compute-0 sudo[313778]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ee21aaf3-e05e-4df5-8405-5c8bdc240b9d does not exist
Oct 02 12:35:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ffef7379-d696-485b-a574-e6823f487864 does not exist
Oct 02 12:35:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev acb30d88-efe9-4bf7-baf1-781a62007333 does not exist
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:35:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:35:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:17 compute-0 sudo[313803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:17 compute-0 sudo[313803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:17 compute-0 sudo[313803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 sudo[313828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:17 compute-0 sudo[313828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:17 compute-0 sudo[313828]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 sudo[313853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:17 compute-0 sudo[313853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:17 compute-0 sudo[313853]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-0 sudo[313878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:35:17 compute-0 sudo[313878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:18.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:18 compute-0 podman[313943]: 2025-10-02 12:35:18.309050237 +0000 UTC m=+0.024176361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:18 compute-0 podman[313943]: 2025-10-02 12:35:18.423440745 +0000 UTC m=+0.138566869 container create b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:35:18 compute-0 ceph-mon[73668]: pgmap v1763: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 402 op/s
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:35:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:18 compute-0 systemd[1]: Started libpod-conmon-b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541.scope.
Oct 02 12:35:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:35:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:35:18 compute-0 podman[313943]: 2025-10-02 12:35:18.763280374 +0000 UTC m=+0.478406588 container init b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:35:18 compute-0 podman[313943]: 2025-10-02 12:35:18.773049748 +0000 UTC m=+0.488175882 container start b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:18 compute-0 serene_shirley[313960]: 167 167
Oct 02 12:35:18 compute-0 systemd[1]: libpod-b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541.scope: Deactivated successfully.
Oct 02 12:35:19 compute-0 podman[313943]: 2025-10-02 12:35:19.012834811 +0000 UTC m=+0.727960935 container attach b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:19 compute-0 podman[313943]: 2025-10-02 12:35:19.013884399 +0000 UTC m=+0.729010533 container died b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.131 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.134 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.134 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.135 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.135 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.136 2 INFO nova.compute.manager [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Terminating instance
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.137 2 DEBUG nova.compute.manager [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:19 compute-0 kernel: tapc23d4bbd-ee (unregistering): left promiscuous mode
Oct 02 12:35:19 compute-0 NetworkManager[44981]: <info>  [1759408519.2663] device (tapc23d4bbd-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00369|binding|INFO|Releasing lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 from this chassis (sb_readonly=0)
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00370|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 down in Southbound
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00371|binding|INFO|Removing iface tapc23d4bbd-ee ovn-installed in OVS
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.291 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:71:f2 10.100.0.5'], port_security=['fa:16:3e:4a:71:f2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aa671dd-41aa-4333-9883-6df66314117e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.293 158104 INFO neutron.agent.ovn.metadata.agent [-] Port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.295 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.297 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe9fc0c-d069-4b95-97fe-821406d96d51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.297 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace which is not needed anymore
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 02 12:35:19 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005b.scope: Consumed 6.240s CPU time.
Oct 02 12:35:19 compute-0 systemd-machined[210927]: Machine qemu-41-instance-0000005b terminated.
Oct 02 12:35:19 compute-0 kernel: tapc23d4bbd-ee: entered promiscuous mode
Oct 02 12:35:19 compute-0 NetworkManager[44981]: <info>  [1759408519.3617] manager: (tapc23d4bbd-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct 02 12:35:19 compute-0 kernel: tapc23d4bbd-ee (unregistering): left promiscuous mode
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00372|binding|INFO|Claiming lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for this chassis.
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00373|binding|INFO|c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22: Claiming fa:16:3e:4a:71:f2 10.100.0.5
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.372 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:71:f2 10.100.0.5'], port_security=['fa:16:3e:4a:71:f2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aa671dd-41aa-4333-9883-6df66314117e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc882d27cbcdddb6b9d51a0cab685d8aca1efc7392bf7b783da2af102efdb71c-merged.mount: Deactivated successfully.
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.394 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Instance destroyed successfully.
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.395 2 DEBUG nova.objects.instance [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'resources' on Instance uuid 3aa671dd-41aa-4333-9883-6df66314117e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00374|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 ovn-installed in OVS
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00375|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 up in Southbound
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00376|binding|INFO|Releasing lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 from this chassis (sb_readonly=1)
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00377|if_status|INFO|Dropped 2 log messages in last 267 seconds (most recently, 267 seconds ago) due to excessive rate
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00378|if_status|INFO|Not setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 down as sb is readonly
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00379|binding|INFO|Removing iface tapc23d4bbd-ee ovn-installed in OVS
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00380|binding|INFO|Releasing lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 from this chassis (sb_readonly=0)
Oct 02 12:35:19 compute-0 ovn_controller[148123]: 2025-10-02T12:35:19Z|00381|binding|INFO|Setting lport c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 down in Southbound
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.418 2 DEBUG nova.virt.libvirt.vif [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-2',id=91,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:35:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:14Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=3aa671dd-41aa-4333-9883-6df66314117e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.419 2 DEBUG nova.network.os_vif_util [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "address": "fa:16:3e:4a:71:f2", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc23d4bbd-ee", "ovs_interfaceid": "c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.420 2 DEBUG nova.network.os_vif_util [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.420 2 DEBUG os_vif [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc23d4bbd-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:19.428 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:71:f2 10.100.0.5'], port_security=['fa:16:3e:4a:71:f2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aa671dd-41aa-4333-9883-6df66314117e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.429 2 INFO os_vif [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:71:f2,bridge_name='br-int',has_traffic_filtering=True,id=c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc23d4bbd-ee')
Oct 02 12:35:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.8 MiB/s wr, 330 op/s
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.540 2 DEBUG nova.compute.manager [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.546 2 DEBUG oslo_concurrency.lockutils [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.546 2 DEBUG oslo_concurrency.lockutils [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.547 2 DEBUG oslo_concurrency.lockutils [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.547 2 DEBUG nova.compute.manager [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:19 compute-0 nova_compute[256940]: 2025-10-02 12:35:19.547 2 DEBUG nova.compute.manager [req-86204951-f436-4f27-a71e-61788755f9b9 req-8cc170a5-6e6f-4956-b897-bca4d052d4a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:19 compute-0 podman[313943]: 2025-10-02 12:35:19.851093958 +0000 UTC m=+1.566220082 container remove b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:35:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 02 12:35:19 compute-0 systemd[1]: libpod-conmon-b6e6b0ae0fbe3d136bb703271cdf296c01b52fcc4aec593a7eade73208c6c541.scope: Deactivated successfully.
Oct 02 12:35:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [NOTICE]   (313566) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [NOTICE]   (313566) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [WARNING]  (313566) : Exiting Master process...
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [WARNING]  (313566) : Exiting Master process...
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [ALERT]    (313566) : Current worker (313568) exited with code 143 (Terminated)
Oct 02 12:35:20 compute-0 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[313562]: [WARNING]  (313566) : All workers exited. Exiting... (0)
Oct 02 12:35:20 compute-0 systemd[1]: libpod-b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f.scope: Deactivated successfully.
Oct 02 12:35:20 compute-0 podman[314032]: 2025-10-02 12:35:20.065133471 +0000 UTC m=+0.076272727 container died b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:35:20 compute-0 podman[314042]: 2025-10-02 12:35:20.056135566 +0000 UTC m=+0.049499190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.220 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.222 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.222 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.223 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.223 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.224 2 INFO nova.compute.manager [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Terminating instance
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.225 2 DEBUG nova.compute.manager [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ca6f1592bef0d63d28fde2281d19be3d3ebd3569ffb997bbbc7abfeda3c7c6-merged.mount: Deactivated successfully.
Oct 02 12:35:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:20 compute-0 podman[314042]: 2025-10-02 12:35:20.52136569 +0000 UTC m=+0.514729284 container create 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:35:20 compute-0 podman[314032]: 2025-10-02 12:35:20.544772169 +0000 UTC m=+0.555911425 container cleanup b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:35:20 compute-0 systemd[1]: libpod-conmon-b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f.scope: Deactivated successfully.
Oct 02 12:35:20 compute-0 kernel: tap8a3f0797-39 (unregistering): left promiscuous mode
Oct 02 12:35:20 compute-0 NetworkManager[44981]: <info>  [1759408520.6572] device (tap8a3f0797-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 ovn_controller[148123]: 2025-10-02T12:35:20Z|00382|binding|INFO|Releasing lport 8a3f0797-397c-4ab5-9436-f371ba35d714 from this chassis (sb_readonly=0)
Oct 02 12:35:20 compute-0 ovn_controller[148123]: 2025-10-02T12:35:20Z|00383|binding|INFO|Setting lport 8a3f0797-397c-4ab5-9436-f371ba35d714 down in Southbound
Oct 02 12:35:20 compute-0 ovn_controller[148123]: 2025-10-02T12:35:20Z|00384|binding|INFO|Removing iface tap8a3f0797-39 ovn-installed in OVS
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:20.679 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:0c:74 10.100.0.12'], port_security=['fa:16:3e:49:0c:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72600c3a-7fa7-4179-905f-f02e91b0efc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8a3f0797-397c-4ab5-9436-f371ba35d714) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:20 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 02 12:35:20 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000057.scope: Consumed 14.712s CPU time.
Oct 02 12:35:20 compute-0 systemd-machined[210927]: Machine qemu-39-instance-00000057 terminated.
Oct 02 12:35:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:20 compute-0 systemd[1]: Started libpod-conmon-2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6.scope.
Oct 02 12:35:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.879 2 INFO nova.virt.libvirt.driver [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Instance destroyed successfully.
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.880 2 DEBUG nova.objects.instance [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'resources' on Instance uuid 72600c3a-7fa7-4179-905f-f02e91b0efc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:20 compute-0 podman[314042]: 2025-10-02 12:35:20.883053257 +0000 UTC m=+0.876416881 container init 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:20 compute-0 podman[314042]: 2025-10-02 12:35:20.895143322 +0000 UTC m=+0.888506926 container start 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.903 2 DEBUG nova.virt.libvirt.vif [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1202112014',display_name='tempest-ListServerFiltersTestJSON-instance-1202112014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1202112014',id=87,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-4hy4cijr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:44Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=72600c3a-7fa7-4179-905f-f02e91b0efc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.903 2 DEBUG nova.network.os_vif_util [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "8a3f0797-397c-4ab5-9436-f371ba35d714", "address": "fa:16:3e:49:0c:74", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a3f0797-39", "ovs_interfaceid": "8a3f0797-397c-4ab5-9436-f371ba35d714", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.904 2 DEBUG nova.network.os_vif_util [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.905 2 DEBUG os_vif [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a3f0797-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.912 2 INFO os_vif [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:0c:74,bridge_name='br-int',has_traffic_filtering=True,id=8a3f0797-397c-4ab5-9436-f371ba35d714,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a3f0797-39')
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.917764) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520917828, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 859, "num_deletes": 250, "total_data_size": 1171062, "memory_usage": 1189336, "flush_reason": "Manual Compaction"}
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 02 12:35:20 compute-0 podman[314042]: 2025-10-02 12:35:20.93654319 +0000 UTC m=+0.929906884 container attach 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:35:20 compute-0 nova_compute[256940]: 2025-10-02 12:35:20.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520975574, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 743955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38858, "largest_seqno": 39716, "table_properties": {"data_size": 740342, "index_size": 1329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10119, "raw_average_key_size": 21, "raw_value_size": 732439, "raw_average_value_size": 1525, "num_data_blocks": 59, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408456, "oldest_key_time": 1759408456, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 57883 microseconds, and 4072 cpu microseconds.
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.975648) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 743955 bytes OK
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.975680) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.993888) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.993961) EVENT_LOG_v1 {"time_micros": 1759408520993943, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.994001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:35:20 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1166851, prev total WAL file size 1166851, number of live WAL files 2.
Oct 02 12:35:20 compute-0 podman[314082]: 2025-10-02 12:35:20.998317138 +0000 UTC m=+0.426978168 container remove b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.994839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(726KB)], [83(11MB)]
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520994871, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12693449, "oldest_snapshot_seqno": -1}
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.013 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4da23d5d-9843-4708-b6a9-6148ebd23e72]: (4, ('Thu Oct  2 12:35:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f)\nb0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f\nThu Oct  2 12:35:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (b0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f)\nb0e7b087a1bfad347cb568ed4dd698a0d5c8412630a9791e4366c39b438d1d3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.019 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a95300-7811-45e0-98b4-4e1bb81213f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.020 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 kernel: tap885ece2c-b0: left promiscuous mode
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.060 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bec814e2-8718-4449-8d70-83fc655fab72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.090 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4c70af20-fe86-4d1d-9fe9-ded775ec5054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.092 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d1420cbf-6464-4e3a-9283-1599fbc82a40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.111 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[92e6e849-38e9-40e4-90ac-5bce4f500c26]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640835, 'reachable_time': 39217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314134, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d885ece2c\x2db1ca\x2d4d5a\x2d9ddf\x2d20d1baf155c7.mount: Deactivated successfully.
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.115 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.115 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ceedf401-ab79-4544-b1ee-614514db36f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.117 158104 INFO neutron.agent.ovn.metadata.agent [-] Port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.118 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.120 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[debfabb8-9ce5-422a-950a-e8a05fff9f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.120 158104 INFO neutron.agent.ovn.metadata.agent [-] Port c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.121 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.122 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9002135a-0da6-49a1-96e5-951917e9e10f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.122 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8a3f0797-397c-4ab5-9436-f371ba35d714 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.123 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09eb7af4-2207-4acd-ac94-ab277be94624]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.174 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[94ccfe66-f502-483b-9991-d47372168db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.178 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4db47144-44ca-4846-bff4-8202a16f72ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.211 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cc3502-61c6-42af-822c-cd25505013a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d0583a0b-647d-4880-96a0-17ce762163b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 12, 'rx_bytes': 1546, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 12, 'rx_bytes': 1546, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637152, 'reachable_time': 25794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314143, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.260 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd91e18-aae8-40b0-82d8-e03fc39768cf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637170, 'tstamp': 637170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314144, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4dd1e489-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637174, 'tstamp': 637174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314144, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.262 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:21.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 493 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 1.4 MiB/s wr, 328 op/s
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6654 keys, 9260618 bytes, temperature: kUnknown
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521462521, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9260618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217475, "index_size": 25431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 170950, "raw_average_key_size": 25, "raw_value_size": 9099659, "raw_average_value_size": 1367, "num_data_blocks": 1012, "num_entries": 6654, "num_filter_entries": 6654, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.462958) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9260618 bytes
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.643532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.1 rd, 19.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.5) write-amplify(12.4) OK, records in: 7141, records dropped: 487 output_compression: NoCompression
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.643590) EVENT_LOG_v1 {"time_micros": 1759408521643570, "job": 48, "event": "compaction_finished", "compaction_time_micros": 467723, "compaction_time_cpu_micros": 27824, "output_level": 6, "num_output_files": 1, "total_output_size": 9260618, "num_input_records": 7141, "num_output_records": 6654, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521643980, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521646698, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:20.994768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.646747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.646754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.646755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.646757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:35:21.646758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.650 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.650 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.651 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.651 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.652 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.652 2 WARNING nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received unexpected event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with vm_state active and task_state deleting.
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.652 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.653 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.653 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.654 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.654 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.655 2 WARNING nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received unexpected event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with vm_state active and task_state deleting.
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.655 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.655 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.656 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.656 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.657 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.657 2 WARNING nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received unexpected event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with vm_state active and task_state deleting.
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.658 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.658 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.659 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.659 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.659 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.660 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-unplugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.660 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.661 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3aa671dd-41aa-4333-9883-6df66314117e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.661 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.661 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.662 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] No waiting events found dispatching network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.662 2 WARNING nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received unexpected event network-vif-plugged-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 for instance with vm_state active and task_state deleting.
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.663 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-unplugged-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.663 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.663 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.664 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.664 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] No waiting events found dispatching network-vif-unplugged-8a3f0797-397c-4ab5-9436-f371ba35d714 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.664 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-unplugged-8a3f0797-397c-4ab5-9436-f371ba35d714 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.665 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.665 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.666 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.666 2 DEBUG oslo_concurrency.lockutils [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.666 2 DEBUG nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] No waiting events found dispatching network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:21 compute-0 nova_compute[256940]: 2025-10-02 12:35:21.667 2 WARNING nova.compute.manager [req-689e7ea1-7d3e-425f-921a-bb0bac9ae9fd req-8cf3b528-7c55-49af-9290-3b880a75a9f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received unexpected event network-vif-plugged-8a3f0797-397c-4ab5-9436-f371ba35d714 for instance with vm_state active and task_state deleting.
Oct 02 12:35:21 compute-0 ceph-mon[73668]: pgmap v1764: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.8 MiB/s wr, 330 op/s
Oct 02 12:35:21 compute-0 ceph-mon[73668]: osdmap e247: 3 total, 3 up, 3 in
Oct 02 12:35:21 compute-0 friendly_pare[314101]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:35:21 compute-0 friendly_pare[314101]: --> relative data size: 1.0
Oct 02 12:35:21 compute-0 friendly_pare[314101]: --> All data devices are unavailable
Oct 02 12:35:21 compute-0 systemd[1]: libpod-2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6.scope: Deactivated successfully.
Oct 02 12:35:21 compute-0 podman[314042]: 2025-10-02 12:35:21.895750824 +0000 UTC m=+1.889114418 container died 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-70cc0ebd7b1498ea133ccd59be07cf4cc456c1b20ab491a4cd8cb4f089baacfb-merged.mount: Deactivated successfully.
Oct 02 12:35:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:22 compute-0 podman[314042]: 2025-10-02 12:35:22.432463089 +0000 UTC m=+2.425826683 container remove 2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:35:22 compute-0 systemd[1]: libpod-conmon-2baf562900378356a673e7be39c79f37516e36e9bb04a7984543f074cd45d4f6.scope: Deactivated successfully.
Oct 02 12:35:22 compute-0 sudo[313878]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:22 compute-0 sudo[314167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:22 compute-0 sudo[314167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:22 compute-0 sudo[314167]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:22 compute-0 sudo[314193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:22 compute-0 sudo[314193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:22 compute-0 sudo[314193]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:22 compute-0 sudo[314218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:22 compute-0 sudo[314218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:22 compute-0 sudo[314218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:22 compute-0 sudo[314243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:35:22 compute-0 sudo[314243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:22 compute-0 ceph-mon[73668]: pgmap v1766: 305 pgs: 305 active+clean; 493 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 1.4 MiB/s wr, 328 op/s
Oct 02 12:35:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/112017634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3810058863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.14379323 +0000 UTC m=+0.025378101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.413133943 +0000 UTC m=+0.294718784 container create 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:35:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 478 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 65 KiB/s wr, 329 op/s
Oct 02 12:35:23 compute-0 systemd[1]: Started libpod-conmon-0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085.scope.
Oct 02 12:35:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.634401235 +0000 UTC m=+0.515986096 container init 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.643627145 +0000 UTC m=+0.525211996 container start 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:23 compute-0 gracious_babbage[314325]: 167 167
Oct 02 12:35:23 compute-0 systemd[1]: libpod-0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085.scope: Deactivated successfully.
Oct 02 12:35:23 compute-0 conmon[314325]: conmon 0cff026acd3e9ab59d70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085.scope/container/memory.events
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.732639903 +0000 UTC m=+0.614224764 container attach 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:35:23 compute-0 podman[314308]: 2025-10-02 12:35:23.733241008 +0000 UTC m=+0.614825859 container died 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:35:23 compute-0 ovn_controller[148123]: 2025-10-02T12:35:23Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:35:23 compute-0 ovn_controller[148123]: 2025-10-02T12:35:23Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ab131e72ba6821edea40386512a612b05396eaa423fc42a1c5eae4017f58c77-merged.mount: Deactivated successfully.
Oct 02 12:35:24 compute-0 podman[314308]: 2025-10-02 12:35:24.272773186 +0000 UTC m=+1.154358037 container remove 0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:35:24 compute-0 systemd[1]: libpod-conmon-0cff026acd3e9ab59d70c46ebd8b2b0296c4effcc95d538b3c43b35dfeabe085.scope: Deactivated successfully.
Oct 02 12:35:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:24.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:24 compute-0 podman[314349]: 2025-10-02 12:35:24.459031536 +0000 UTC m=+0.022855076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:24 compute-0 podman[314349]: 2025-10-02 12:35:24.590592861 +0000 UTC m=+0.154416371 container create d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:24 compute-0 systemd[1]: Started libpod-conmon-d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f.scope.
Oct 02 12:35:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a665e82ede961762bb307528b417adad6d835ad7f3a0cf9ae70d3c1b4d5518aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a665e82ede961762bb307528b417adad6d835ad7f3a0cf9ae70d3c1b4d5518aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a665e82ede961762bb307528b417adad6d835ad7f3a0cf9ae70d3c1b4d5518aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a665e82ede961762bb307528b417adad6d835ad7f3a0cf9ae70d3c1b4d5518aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:25 compute-0 podman[314349]: 2025-10-02 12:35:25.138793295 +0000 UTC m=+0.702616895 container init d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:35:25 compute-0 podman[314349]: 2025-10-02 12:35:25.148598751 +0000 UTC m=+0.712422261 container start d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:25 compute-0 podman[314349]: 2025-10-02 12:35:25.352447877 +0000 UTC m=+0.916271407 container attach d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:35:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 396 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 51 KiB/s wr, 297 op/s
Oct 02 12:35:25 compute-0 ceph-mon[73668]: pgmap v1767: 305 pgs: 305 active+clean; 478 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 65 KiB/s wr, 329 op/s
Oct 02 12:35:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:25 compute-0 nova_compute[256940]: 2025-10-02 12:35:25.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-0 nova_compute[256940]: 2025-10-02 12:35:25.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:26 compute-0 amazing_yalow[314366]: {
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:     "1": [
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:         {
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "devices": [
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "/dev/loop3"
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             ],
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "lv_name": "ceph_lv0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "lv_size": "7511998464",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "name": "ceph_lv0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "tags": {
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.cluster_name": "ceph",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.crush_device_class": "",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.encrypted": "0",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.osd_id": "1",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.type": "block",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:                 "ceph.vdo": "0"
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             },
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "type": "block",
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:             "vg_name": "ceph_vg0"
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:         }
Oct 02 12:35:26 compute-0 amazing_yalow[314366]:     ]
Oct 02 12:35:26 compute-0 amazing_yalow[314366]: }
Oct 02 12:35:26 compute-0 systemd[1]: libpod-d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f.scope: Deactivated successfully.
Oct 02 12:35:26 compute-0 podman[314349]: 2025-10-02 12:35:26.0499859 +0000 UTC m=+1.613809420 container died d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a665e82ede961762bb307528b417adad6d835ad7f3a0cf9ae70d3c1b4d5518aa-merged.mount: Deactivated successfully.
Oct 02 12:35:26 compute-0 podman[314349]: 2025-10-02 12:35:26.354891978 +0000 UTC m=+1.918715488 container remove d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_yalow, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:35:26 compute-0 systemd[1]: libpod-conmon-d97217326fc97e1283889949a9ce5bb3b791f2789df4deb6e12f522f04e0895f.scope: Deactivated successfully.
Oct 02 12:35:26 compute-0 sudo[314243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:26.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:26.471 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:26.472 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:26.473 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:26 compute-0 sudo[314390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:26 compute-0 sudo[314390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:26 compute-0 sudo[314390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:26 compute-0 sudo[314415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:26 compute-0 sudo[314415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:26 compute-0 sudo[314415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:26 compute-0 sudo[314441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:26 compute-0 sudo[314441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:26 compute-0 sudo[314441]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:26 compute-0 sudo[314466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:35:26 compute-0 sudo[314466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:26 compute-0 ceph-mon[73668]: pgmap v1768: 305 pgs: 305 active+clean; 396 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 51 KiB/s wr, 297 op/s
Oct 02 12:35:26 compute-0 nova_compute[256940]: 2025-10-02 12:35:26.972 2 INFO nova.virt.libvirt.driver [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Deleting instance files /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e_del
Oct 02 12:35:26 compute-0 nova_compute[256940]: 2025-10-02 12:35:26.973 2 INFO nova.virt.libvirt.driver [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Deletion of /var/lib/nova/instances/3aa671dd-41aa-4333-9883-6df66314117e_del complete
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.01322213 +0000 UTC m=+0.045123876 container create 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.041 2 INFO nova.compute.manager [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Took 7.90 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.041 2 DEBUG oslo.service.loopingcall [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.042 2 DEBUG nova.compute.manager [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.042 2 DEBUG nova.network.neutron [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:27 compute-0 systemd[1]: Started libpod-conmon-72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6.scope.
Oct 02 12:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:26.996776831 +0000 UTC m=+0.028678607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.105568234 +0000 UTC m=+0.137470010 container init 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.116128779 +0000 UTC m=+0.148030555 container start 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.121688074 +0000 UTC m=+0.153589830 container attach 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:35:27 compute-0 epic_nobel[314546]: 167 167
Oct 02 12:35:27 compute-0 systemd[1]: libpod-72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6.scope: Deactivated successfully.
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.124791525 +0000 UTC m=+0.156693311 container died 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ff646149eafeec23e0749da0bc461d12cdf7b1b0b324e154af1a9684cdf993e-merged.mount: Deactivated successfully.
Oct 02 12:35:27 compute-0 podman[314530]: 2025-10-02 12:35:27.197391145 +0000 UTC m=+0.229292901 container remove 72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:27 compute-0 systemd[1]: libpod-conmon-72598cce9986ff3b95607721180fdd00dae6c36d671f448f2dacabbe237342b6.scope: Deactivated successfully.
Oct 02 12:35:27 compute-0 podman[314568]: 2025-10-02 12:35:27.397585808 +0000 UTC m=+0.055341612 container create d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:35:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:27 compute-0 systemd[1]: Started libpod-conmon-d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351.scope.
Oct 02 12:35:27 compute-0 podman[314568]: 2025-10-02 12:35:27.371295333 +0000 UTC m=+0.029051167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0c42eacb24527a81e8b198148d7aad27e53f706eb1079fdbb04fa62d646689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0c42eacb24527a81e8b198148d7aad27e53f706eb1079fdbb04fa62d646689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0c42eacb24527a81e8b198148d7aad27e53f706eb1079fdbb04fa62d646689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0c42eacb24527a81e8b198148d7aad27e53f706eb1079fdbb04fa62d646689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:27 compute-0 podman[314568]: 2025-10-02 12:35:27.505076896 +0000 UTC m=+0.162832720 container init d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:35:27 compute-0 podman[314568]: 2025-10-02 12:35:27.514662516 +0000 UTC m=+0.172418320 container start d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:35:27 compute-0 podman[314568]: 2025-10-02 12:35:27.522057859 +0000 UTC m=+0.179813683 container attach d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.595 2 INFO nova.virt.libvirt.driver [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Deleting instance files /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0_del
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.596 2 INFO nova.virt.libvirt.driver [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Deletion of /var/lib/nova/instances/72600c3a-7fa7-4179-905f-f02e91b0efc0_del complete
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.647 2 INFO nova.compute.manager [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Took 7.42 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.648 2 DEBUG oslo.service.loopingcall [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.648 2 DEBUG nova.compute.manager [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.649 2 DEBUG nova.network.neutron [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.782 2 DEBUG nova.network.neutron [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.805 2 INFO nova.compute.manager [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Took 0.76 seconds to deallocate network for instance.
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.914 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:27 compute-0 nova_compute[256940]: 2025-10-02 12:35:27.915 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3724517341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.064 2 DEBUG oslo_concurrency.processutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:28 compute-0 agitated_hoover[314584]: {
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:         "osd_id": 1,
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:         "type": "bluestore"
Oct 02 12:35:28 compute-0 agitated_hoover[314584]:     }
Oct 02 12:35:28 compute-0 agitated_hoover[314584]: }
Oct 02 12:35:28 compute-0 systemd[1]: libpod-d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351.scope: Deactivated successfully.
Oct 02 12:35:28 compute-0 conmon[314584]: conmon d098b0caf6a5738f6f13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351.scope/container/memory.events
Oct 02 12:35:28 compute-0 podman[314568]: 2025-10-02 12:35:28.410326057 +0000 UTC m=+1.068081861 container died d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:35:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f0c42eacb24527a81e8b198148d7aad27e53f706eb1079fdbb04fa62d646689-merged.mount: Deactivated successfully.
Oct 02 12:35:28 compute-0 podman[314568]: 2025-10-02 12:35:28.508538994 +0000 UTC m=+1.166294798 container remove d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:35:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882280986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:28 compute-0 systemd[1]: libpod-conmon-d098b0caf6a5738f6f136fbb70122fde025112540955b05df41226d31fc9b351.scope: Deactivated successfully.
Oct 02 12:35:28 compute-0 sudo[314466]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.556 2 DEBUG oslo_concurrency.processutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:28 compute-0 podman[314638]: 2025-10-02 12:35:28.563412993 +0000 UTC m=+0.071204025 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.569 2 DEBUG nova.compute.provider_tree [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:28 compute-0 podman[314639]: 2025-10-02 12:35:28.574149712 +0000 UTC m=+0.084225144 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:35:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:35:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.593 2 DEBUG nova.scheduler.client.report [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 92154f82-df67-4e8e-9d06-b6498722a938 does not exist
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 834daa81-337d-4c14-a4f1-c28718fc028c does not exist
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d772ce3b-9c76-46da-bcdb-09b370b6f704 does not exist
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.630 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:28 compute-0 sudo[314679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:28 compute-0 sudo[314679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:28 compute-0 sudo[314679]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.712 2 INFO nova.scheduler.client.report [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Deleted allocations for instance 3aa671dd-41aa-4333-9883-6df66314117e
Oct 02 12:35:28 compute-0 sudo[314704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:35:28 compute-0 sudo[314704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:28 compute-0 sudo[314704]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:35:28
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', 'volumes', '.rgw.root', 'backups']
Oct 02 12:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:35:28 compute-0 nova_compute[256940]: 2025-10-02 12:35:28.847 2 DEBUG oslo_concurrency.lockutils [None req-1f77ca33-f3f9-4d25-8bf3-00686be3d660 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "3aa671dd-41aa-4333-9883-6df66314117e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.029 2 DEBUG nova.network.neutron [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.049 2 INFO nova.compute.manager [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Took 1.40 seconds to deallocate network for instance.
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.099 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.100 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.129 2 DEBUG nova.compute.manager [req-8f5bc3fa-c80d-4b8b-905c-6a820eaeddf6 req-c434b5ce-404f-4c1d-ba73-5759cc2106cc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Received event network-vif-deleted-8a3f0797-397c-4ab5-9436-f371ba35d714 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.232 2 DEBUG oslo_concurrency.processutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:29 compute-0 ceph-mon[73668]: pgmap v1769: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/882280986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947376582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.702 2 DEBUG oslo_concurrency.processutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.714 2 DEBUG nova.compute.provider_tree [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.746 2 DEBUG nova.scheduler.client.report [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.776 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:29 compute-0 nova_compute[256940]: 2025-10-02 12:35:29.834 2 INFO nova.scheduler.client.report [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Deleted allocations for instance 72600c3a-7fa7-4179-905f-f02e91b0efc0
Oct 02 12:35:30 compute-0 nova_compute[256940]: 2025-10-02 12:35:30.079 2 DEBUG oslo_concurrency.lockutils [None req-6e5b6ef9-c8ae-4638-b2ed-6d0666ef5842 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "72600c3a-7fa7-4179-905f-f02e91b0efc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:30 compute-0 nova_compute[256940]: 2025-10-02 12:35:30.139 2 DEBUG nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Received event network-vif-deleted-c23d4bbd-ee18-4c79-a6cc-875ec5e0ae22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:30 compute-0 ceph-mon[73668]: pgmap v1770: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/947376582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:30.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 02 12:35:30 compute-0 nova_compute[256940]: 2025-10-02 12:35:30.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:30 compute-0 nova_compute[256940]: 2025-10-02 12:35:30.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 02 12:35:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 02 12:35:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 268 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 67 KiB/s wr, 262 op/s
Oct 02 12:35:32 compute-0 ceph-mon[73668]: osdmap e248: 3 total, 3 up, 3 in
Oct 02 12:35:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:32.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:33 compute-0 ceph-mon[73668]: pgmap v1772: 305 pgs: 305 active+clean; 268 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 67 KiB/s wr, 262 op/s
Oct 02 12:35:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 248 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 51 KiB/s wr, 301 op/s
Oct 02 12:35:34 compute-0 nova_compute[256940]: 2025-10-02 12:35:34.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408519.3890097, 3aa671dd-41aa-4333-9883-6df66314117e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:34 compute-0 nova_compute[256940]: 2025-10-02 12:35:34.390 2 INFO nova.compute.manager [-] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] VM Stopped (Lifecycle Event)
Oct 02 12:35:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:34 compute-0 nova_compute[256940]: 2025-10-02 12:35:34.420 2 DEBUG nova.compute.manager [None req-abbf1bf3-4c76-4e85-93a3-286418e82f86 - - - - - -] [instance: 3aa671dd-41aa-4333-9883-6df66314117e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:34 compute-0 ceph-mon[73668]: pgmap v1773: 305 pgs: 305 active+clean; 248 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 51 KiB/s wr, 301 op/s
Oct 02 12:35:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:35:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:34.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:35:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 222 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 48 KiB/s wr, 247 op/s
Oct 02 12:35:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:35 compute-0 nova_compute[256940]: 2025-10-02 12:35:35.877 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408520.8764124, 72600c3a-7fa7-4179-905f-f02e91b0efc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:35 compute-0 nova_compute[256940]: 2025-10-02 12:35:35.878 2 INFO nova.compute.manager [-] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] VM Stopped (Lifecycle Event)
Oct 02 12:35:35 compute-0 nova_compute[256940]: 2025-10-02 12:35:35.925 2 DEBUG nova.compute.manager [None req-e827df3e-f3fe-455d-bdbb-3425b8bed262 - - - - - -] [instance: 72600c3a-7fa7-4179-905f-f02e91b0efc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:35 compute-0 nova_compute[256940]: 2025-10-02 12:35:35.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:36 compute-0 ovn_controller[148123]: 2025-10-02T12:35:36Z|00385|binding|INFO|Releasing lport a25f83dc-1cb4-467b-80c7-496d6a4bfac5 from this chassis (sb_readonly=0)
Oct 02 12:35:36 compute-0 nova_compute[256940]: 2025-10-02 12:35:36.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:36.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:36.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:36 compute-0 ceph-mon[73668]: pgmap v1774: 305 pgs: 305 active+clean; 222 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 48 KiB/s wr, 247 op/s
Oct 02 12:35:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:37 compute-0 sudo[314755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:37 compute-0 sudo[314755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:37 compute-0 sudo[314755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:37 compute-0 sudo[314780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:37 compute-0 sudo[314780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:37 compute-0 sudo[314780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:38.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:35:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 26K writes, 106K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 7990 syncs, 3.26 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6792 writes, 24K keys, 6792 commit groups, 1.0 writes per commit group, ingest: 25.49 MB, 0.04 MB/s
                                           Interval WAL: 6791 writes, 2546 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:35:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.467 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.468 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.468 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.469 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.469 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.471 2 INFO nova.compute.manager [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Terminating instance
Oct 02 12:35:39 compute-0 nova_compute[256940]: 2025-10-02 12:35:39.473 2 DEBUG nova.compute.manager [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:39 compute-0 ceph-mon[73668]: pgmap v1775: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1625734000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:40 compute-0 kernel: tapcd41c84b-4c (unregistering): left promiscuous mode
Oct 02 12:35:40 compute-0 NetworkManager[44981]: <info>  [1759408540.1895] device (tapcd41c84b-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00386|binding|INFO|Releasing lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 from this chassis (sb_readonly=0)
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00387|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 down in Southbound
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00388|binding|INFO|Removing iface tapcd41c84b-4c ovn-installed in OVS
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.258 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.262 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.264 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dd1e489-9cc3-4420-8577-3a250b110c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.267 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83f0452f-eb9a-4314-ba2f-3f9d2b45a4e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.268 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a namespace which is not needed anymore
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021757732528382653 of space, bias 1.0, pg target 0.6527319758514796 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:35:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:35:40 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 12:35:40 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000055.scope: Consumed 15.464s CPU time.
Oct 02 12:35:40 compute-0 systemd-machined[210927]: Machine qemu-40-instance-00000055 terminated.
Oct 02 12:35:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:40 compute-0 kernel: tapcd41c84b-4c: entered promiscuous mode
Oct 02 12:35:40 compute-0 NetworkManager[44981]: <info>  [1759408540.5042] manager: (tapcd41c84b-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00389|binding|INFO|Claiming lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 for this chassis.
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00390|binding|INFO|cd41c84b-4c34-45f1-85a1-9d7d2f482414: Claiming fa:16:3e:a7:26:6c 10.100.0.13
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.519 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:40 compute-0 kernel: tapcd41c84b-4c (unregistering): left promiscuous mode
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.559 2 INFO nova.virt.libvirt.driver [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Instance destroyed successfully.
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.559 2 DEBUG nova.objects.instance [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'resources' on Instance uuid e4c06c33-d7c3-4a4d-b181-45b9422173ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00391|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 ovn-installed in OVS
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00392|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 up in Southbound
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00393|binding|INFO|Releasing lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 from this chassis (sb_readonly=1)
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00394|if_status|INFO|Dropped 2 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00395|if_status|INFO|Not setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 down as sb is readonly
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00396|binding|INFO|Removing iface tapcd41c84b-4c ovn-installed in OVS
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00397|binding|INFO|Releasing lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 from this chassis (sb_readonly=0)
Oct 02 12:35:40 compute-0 ovn_controller[148123]: 2025-10-02T12:35:40Z|00398|binding|INFO|Setting lport cd41c84b-4c34-45f1-85a1-9d7d2f482414 down in Southbound
Oct 02 12:35:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:40.574 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:6c 10.100.0.13'], port_security=['fa:16:3e:a7:26:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4c06c33-d7c3-4a4d-b181-45b9422173ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=cd41c84b-4c34-45f1-85a1-9d7d2f482414) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.589 2 DEBUG nova.virt.libvirt.vif [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-305635351',display_name='tempest-ListServerFiltersTestJSON-instance-305635351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-305635351',id=85,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-nczxypp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:09Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=e4c06c33-d7c3-4a4d-b181-45b9422173ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.590 2 DEBUG nova.network.os_vif_util [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "address": "fa:16:3e:a7:26:6c", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd41c84b-4c", "ovs_interfaceid": "cd41c84b-4c34-45f1-85a1-9d7d2f482414", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.590 2 DEBUG nova.network.os_vif_util [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.591 2 DEBUG os_vif [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd41c84b-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.599 2 INFO os_vif [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:6c,bridge_name='br-int',has_traffic_filtering=True,id=cd41c84b-4c34-45f1-85a1-9d7d2f482414,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd41c84b-4c')
Oct 02 12:35:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:40.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:40 compute-0 nova_compute[256940]: 2025-10-02 12:35:40.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:40 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [NOTICE]   (312569) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:40 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [NOTICE]   (312569) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:40 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [WARNING]  (312569) : Exiting Master process...
Oct 02 12:35:40 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [ALERT]    (312569) : Current worker (312572) exited with code 143 (Terminated)
Oct 02 12:35:40 compute-0 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[312524]: [WARNING]  (312569) : All workers exited. Exiting... (0)
Oct 02 12:35:40 compute-0 systemd[1]: libpod-acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3.scope: Deactivated successfully.
Oct 02 12:35:40 compute-0 conmon[312524]: conmon acd11f839361ec82cf9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3.scope/container/memory.events
Oct 02 12:35:40 compute-0 podman[314829]: 2025-10-02 12:35:40.9909456 +0000 UTC m=+0.619603864 container died acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:35:41 compute-0 ceph-mon[73668]: pgmap v1776: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/9764884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.387 2 DEBUG nova.compute.manager [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.388 2 DEBUG oslo_concurrency.lockutils [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.388 2 DEBUG oslo_concurrency.lockutils [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.389 2 DEBUG oslo_concurrency.lockutils [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.389 2 DEBUG nova.compute.manager [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:41 compute-0 nova_compute[256940]: 2025-10-02 12:35:41.389 2 DEBUG nova.compute.manager [req-68bfaa3f-4fb5-47ee-9d19-2bb75d5608ff req-0a69ce4f-cab7-4d6f-a0ae-1004d6426bc3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 172 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 667 KiB/s wr, 105 op/s
Oct 02 12:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-52990e5012d0a0cd70ed0bbfc7e35fc63f289de0cdc92bcbca448b22ef6fd920-merged.mount: Deactivated successfully.
Oct 02 12:35:41 compute-0 podman[314829]: 2025-10-02 12:35:41.830584453 +0000 UTC m=+1.459242747 container cleanup acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:35:41 compute-0 systemd[1]: libpod-conmon-acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3.scope: Deactivated successfully.
Oct 02 12:35:42 compute-0 podman[314883]: 2025-10-02 12:35:42.119371642 +0000 UTC m=+0.247509575 container remove acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.126 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9ff174-f8ed-44d1-bd39-871f012c6228]: (4, ('Thu Oct  2 12:35:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a (acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3)\nacd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3\nThu Oct  2 12:35:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a (acd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3)\nacd11f839361ec82cf9b18895c4ab0a4034c95b1806c4d487d58aad3801d46a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.128 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f9c6f5-afb8-4235-899e-a9cc1bd0e15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.129 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:42 compute-0 nova_compute[256940]: 2025-10-02 12:35:42.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-0 kernel: tap4dd1e489-90: left promiscuous mode
Oct 02 12:35:42 compute-0 nova_compute[256940]: 2025-10-02 12:35:42.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-0 nova_compute[256940]: 2025-10-02 12:35:42.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf48e6a-59c2-4fac-9e19-fd32fd593676]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.207 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d44332a1-91ae-4c41-9853-ad91d73f036d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.209 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aecc8ffc-2597-4d4a-956f-cc8abdbaa4b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.227 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0d5d70-52a1-4731-b356-3c3c8c0749ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637142, 'reachable_time': 20437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314900, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.230 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.230 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[76619a92-fa94-48e7-84ac-97420213d919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.230 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis
Oct 02 12:35:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d4dd1e489\x2d9cc3\x2d4420\x2d8577\x2d3a250b110c9a.mount: Deactivated successfully.
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.232 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dd1e489-9cc3-4420-8577-3a250b110c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.233 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[897a48c8-c439-4c52-8ea4-c922f6a6e4a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.233 158104 INFO neutron.agent.ovn.metadata.agent [-] Port cd41c84b-4c34-45f1-85a1-9d7d2f482414 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.234 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dd1e489-9cc3-4420-8577-3a250b110c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:42.235 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ed72f7da-c124-4b3d-b073-75659e764d0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:42 compute-0 ceph-mon[73668]: pgmap v1777: 305 pgs: 305 active+clean; 172 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 667 KiB/s wr, 105 op/s
Oct 02 12:35:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 189 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.500 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.500 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.501 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.501 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.501 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.501 2 WARNING nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state deleting.
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.502 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.502 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.502 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.502 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.502 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.503 2 WARNING nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state deleting.
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.503 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.503 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.503 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.504 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.504 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.504 2 WARNING nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state deleting.
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.504 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.504 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.505 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.505 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.505 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.505 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-unplugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.505 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.506 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.506 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.506 2 DEBUG oslo_concurrency.lockutils [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.506 2 DEBUG nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] No waiting events found dispatching network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:43 compute-0 nova_compute[256940]: 2025-10-02 12:35:43.506 2 WARNING nova.compute.manager [req-9397a03f-c328-4d10-9e71-fdf67c80f6e8 req-99d5052d-5123-4d91-9d7d-d298389d81df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received unexpected event network-vif-plugged-cd41c84b-4c34-45f1-85a1-9d7d2f482414 for instance with vm_state active and task_state deleting.
Oct 02 12:35:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:44.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:45 compute-0 ceph-mon[73668]: pgmap v1778: 305 pgs: 305 active+clean; 189 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Oct 02 12:35:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 176 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Oct 02 12:35:45 compute-0 nova_compute[256940]: 2025-10-02 12:35:45.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.228 2 INFO nova.virt.libvirt.driver [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Deleting instance files /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca_del
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.229 2 INFO nova.virt.libvirt.driver [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Deletion of /var/lib/nova/instances/e4c06c33-d7c3-4a4d-b181-45b9422173ca_del complete
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.307 2 INFO nova.compute.manager [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Took 6.83 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.308 2 DEBUG oslo.service.loopingcall [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.308 2 DEBUG nova.compute.manager [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.308 2 DEBUG nova.network.neutron [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:46 compute-0 nova_compute[256940]: 2025-10-02 12:35:46.382 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:46 compute-0 podman[314904]: 2025-10-02 12:35:46.405992636 +0000 UTC m=+0.063046393 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:35:46 compute-0 ceph-mon[73668]: pgmap v1779: 305 pgs: 305 active+clean; 176 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Oct 02 12:35:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:46 compute-0 podman[314905]: 2025-10-02 12:35:46.462074686 +0000 UTC m=+0.118672931 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:35:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/169662621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4057833807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:35:47 compute-0 nova_compute[256940]: 2025-10-02 12:35:47.877 2 DEBUG nova.network.neutron [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:47 compute-0 nova_compute[256940]: 2025-10-02 12:35:47.921 2 INFO nova.compute.manager [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Took 1.61 seconds to deallocate network for instance.
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:47.999 2 DEBUG nova.compute.manager [req-be130535-29df-4d04-bbfe-e729ed429834 req-8793716d-1c73-484c-b07a-0e9e9931a6ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Received event network-vif-deleted-cd41c84b-4c34-45f1-85a1-9d7d2f482414 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.001 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.002 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.063 2 DEBUG oslo_concurrency.processutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888421837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.570 2 DEBUG oslo_concurrency.processutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.577 2 DEBUG nova.compute.provider_tree [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.605 2 DEBUG nova.scheduler.client.report [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.632 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.678 2 INFO nova.scheduler.client.report [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Deleted allocations for instance e4c06c33-d7c3-4a4d-b181-45b9422173ca
Oct 02 12:35:48 compute-0 nova_compute[256940]: 2025-10-02 12:35:48.769 2 DEBUG oslo_concurrency.lockutils [None req-b847b615-cc85-4e91-9946-dfe1c6a6652f c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "e4c06c33-d7c3-4a4d-b181-45b9422173ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:48.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:48 compute-0 ceph-mon[73668]: pgmap v1780: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:35:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3582682749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/888421837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Oct 02 12:35:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2847610097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:50.137 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:50 compute-0 nova_compute[256940]: 2025-10-02 12:35:50.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:50.138 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:35:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:35:50.139 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:50 compute-0 nova_compute[256940]: 2025-10-02 12:35:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:50.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:51 compute-0 nova_compute[256940]: 2025-10-02 12:35:51.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:51 compute-0 ceph-mon[73668]: pgmap v1781: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Oct 02 12:35:51 compute-0 nova_compute[256940]: 2025-10-02 12:35:51.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:51 compute-0 nova_compute[256940]: 2025-10-02 12:35:51.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:35:51 compute-0 nova_compute[256940]: 2025-10-02 12:35:51.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Oct 02 12:35:52 compute-0 nova_compute[256940]: 2025-10-02 12:35:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:52 compute-0 ceph-mon[73668]: pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Oct 02 12:35:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:53 compute-0 nova_compute[256940]: 2025-10-02 12:35:53.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 126 op/s
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.249 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.249 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.249 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:35:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:35:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276537174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:54 compute-0 ceph-mon[73668]: pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 126 op/s
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.705 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.847 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.848 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4570MB free_disk=20.967193603515625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.848 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:54 compute-0 nova_compute[256940]: 2025-10-02 12:35:54.849 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.116 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.117 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.226 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.257 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.257 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.271 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.317 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.355 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.557 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408540.5567675, e4c06c33-d7c3-4a4d-b181-45b9422173ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.558 2 INFO nova.compute.manager [-] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] VM Stopped (Lifecycle Event)
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.602 2 DEBUG nova.compute.manager [None req-06ade8ef-eb12-467a-8447-9130577a33b0 - - - - - -] [instance: e4c06c33-d7c3-4a4d-b181-45b9422173ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632050585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3276537174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2667916916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3979436471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.847 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.853 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.891 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.963 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:35:55 compute-0 nova_compute[256940]: 2025-10-02 12:35:55.964 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:56 compute-0 nova_compute[256940]: 2025-10-02 12:35:56.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:56.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:56.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:57 compute-0 ceph-mon[73668]: pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct 02 12:35:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2632050585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/177129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 157 op/s
Oct 02 12:35:57 compute-0 sudo[315023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:57 compute-0 sudo[315023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:57 compute-0 sudo[315023]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:57 compute-0 sudo[315048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:57 compute-0 sudo[315048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:57 compute-0 sudo[315048]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:57 compute-0 nova_compute[256940]: 2025-10-02 12:35:57.965 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:57 compute-0 nova_compute[256940]: 2025-10-02 12:35:57.966 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:35:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:58.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.673 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.674 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.707 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:35:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:35:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:58.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.796 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.797 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.802 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.802 2 INFO nova.compute.claims [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:35:58 compute-0 nova_compute[256940]: 2025-10-02 12:35:58.954 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.246 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.246 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:59 compute-0 ceph-mon[73668]: pgmap v1785: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 157 op/s
Oct 02 12:35:59 compute-0 podman[315094]: 2025-10-02 12:35:59.396367849 +0000 UTC m=+0.061848781 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:59 compute-0 podman[315095]: 2025-10-02 12:35:59.405938938 +0000 UTC m=+0.065459605 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct 02 12:35:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172926211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.527 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.535 2 DEBUG nova.compute.provider_tree [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.557 2 DEBUG nova.scheduler.client.report [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.595 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.596 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.655 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.656 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.696 2 INFO nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.729 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.890 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.891 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.892 2 INFO nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Creating image(s)
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.924 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.962 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:59 compute-0 nova_compute[256940]: 2025-10-02 12:35:59.997 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.003 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.075 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.076 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.077 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.077 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.110 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.115 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.209 2 DEBUG nova.policy [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a6ed6c81c3f041efba3eeba9fc5a4060', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67164307b6c54c85818a4f6aa13779f8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:36:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1172926211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:00.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:00 compute-0 nova_compute[256940]: 2025-10-02 12:36:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:01 compute-0 nova_compute[256940]: 2025-10-02 12:36:01.043 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Successfully created port: dfcf2008-94a2-47ee-bb17-1651231ca8f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:36:01 compute-0 nova_compute[256940]: 2025-10-02 12:36:01.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:01 compute-0 nova_compute[256940]: 2025-10-02 12:36:01.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 88 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 116 op/s
Oct 02 12:36:01 compute-0 ceph-mon[73668]: pgmap v1786: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct 02 12:36:01 compute-0 nova_compute[256940]: 2025-10-02 12:36:01.767 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:01 compute-0 nova_compute[256940]: 2025-10-02 12:36:01.841 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] resizing rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.086 2 DEBUG nova.objects.instance [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lazy-loading 'migration_context' on Instance uuid e1ea4895-562b-4c2e-9305-e9cb44590b8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.114 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.114 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Ensure instance console log exists: /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.114 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.115 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.115 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.387 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Successfully updated port: dfcf2008-94a2-47ee-bb17-1651231ca8f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.406 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.407 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquired lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.407 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:36:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:02.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.532 2 DEBUG nova.compute.manager [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Received event network-changed-dfcf2008-94a2-47ee-bb17-1651231ca8f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.532 2 DEBUG nova.compute.manager [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Refreshing instance network info cache due to event network-changed-dfcf2008-94a2-47ee-bb17-1651231ca8f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.532 2 DEBUG oslo_concurrency.lockutils [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:02 compute-0 nova_compute[256940]: 2025-10-02 12:36:02.696 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:36:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:02 compute-0 ceph-mon[73668]: pgmap v1787: 305 pgs: 305 active+clean; 88 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 116 op/s
Oct 02 12:36:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 104 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 103 op/s
Oct 02 12:36:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:04.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:04 compute-0 ceph-mon[73668]: pgmap v1788: 305 pgs: 305 active+clean; 104 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 103 op/s
Oct 02 12:36:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:36:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:36:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.237 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.255 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:36:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 118 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.834 2 DEBUG nova.network.neutron [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Updating instance_info_cache with network_info: [{"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.870 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Releasing lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.871 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Instance network_info: |[{"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.871 2 DEBUG oslo_concurrency.lockutils [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.871 2 DEBUG nova.network.neutron [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Refreshing network info cache for port dfcf2008-94a2-47ee-bb17-1651231ca8f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.874 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Start _get_guest_xml network_info=[{"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.880 2 WARNING nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.885 2 DEBUG nova.virt.libvirt.host [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.885 2 DEBUG nova.virt.libvirt.host [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.893 2 DEBUG nova.virt.libvirt.host [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.894 2 DEBUG nova.virt.libvirt.host [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.895 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.895 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.896 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.896 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.897 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.897 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.897 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.897 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.898 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.898 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.898 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.898 2 DEBUG nova.virt.hardware [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:36:05 compute-0 nova_compute[256940]: 2025-10-02 12:36:05.902 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421537231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.414 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.443 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.449 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:06.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:06.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301724489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.928 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.930 2 DEBUG nova.virt.libvirt.vif [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-38842122',display_name='tempest-ServerAddressesTestJSON-server-38842122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-38842122',id=94,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67164307b6c54c85818a4f6aa13779f8',ramdisk_id='',reservation_id='r-jgyde1yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1201151267',owner_user_name='tempest-ServerAddressesTestJSON-1
201151267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:59Z,user_data=None,user_id='a6ed6c81c3f041efba3eeba9fc5a4060',uuid=e1ea4895-562b-4c2e-9305-e9cb44590b8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.930 2 DEBUG nova.network.os_vif_util [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converting VIF {"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.931 2 DEBUG nova.network.os_vif_util [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.932 2 DEBUG nova.objects.instance [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1ea4895-562b-4c2e-9305-e9cb44590b8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.972 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <uuid>e1ea4895-562b-4c2e-9305-e9cb44590b8a</uuid>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <name>instance-0000005e</name>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerAddressesTestJSON-server-38842122</nova:name>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:36:05</nova:creationTime>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:user uuid="a6ed6c81c3f041efba3eeba9fc5a4060">tempest-ServerAddressesTestJSON-1201151267-project-member</nova:user>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:project uuid="67164307b6c54c85818a4f6aa13779f8">tempest-ServerAddressesTestJSON-1201151267</nova:project>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <nova:port uuid="dfcf2008-94a2-47ee-bb17-1651231ca8f6">
Oct 02 12:36:06 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <system>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="serial">e1ea4895-562b-4c2e-9305-e9cb44590b8a</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="uuid">e1ea4895-562b-4c2e-9305-e9cb44590b8a</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </system>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <os>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </os>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <features>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </features>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk">
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </source>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config">
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </source>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:36:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:5d:33:92"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <target dev="tapdfcf2008-94"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/console.log" append="off"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <video>
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </video>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:36:06 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:36:06 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:36:06 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:36:06 compute-0 nova_compute[256940]: </domain>
Oct 02 12:36:06 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.974 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Preparing to wait for external event network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.975 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.975 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.975 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.976 2 DEBUG nova.virt.libvirt.vif [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-38842122',display_name='tempest-ServerAddressesTestJSON-server-38842122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-38842122',id=94,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67164307b6c54c85818a4f6aa13779f8',ramdisk_id='',reservation_id='r-jgyde1yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1201151267',owner_user_name='tempest-ServerAddressesTestJSON-1201151267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:59Z,user_data=None,user_id='a6ed6c81c3f041efba3eeba9fc5a4060',uuid=e1ea4895-562b-4c2e-9305-e9cb44590b8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.976 2 DEBUG nova.network.os_vif_util [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converting VIF {"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.977 2 DEBUG nova.network.os_vif_util [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.978 2 DEBUG os_vif [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfcf2008-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdfcf2008-94, col_values=(('external_ids', {'iface-id': 'dfcf2008-94a2-47ee-bb17-1651231ca8f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:33:92', 'vm-uuid': 'e1ea4895-562b-4c2e-9305-e9cb44590b8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:06 compute-0 NetworkManager[44981]: <info>  [1759408566.9875] manager: (tapdfcf2008-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:06 compute-0 nova_compute[256940]: 2025-10-02 12:36:06.996 2 INFO os_vif [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94')
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.305 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.305 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.306 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] No VIF found with MAC fa:16:3e:5d:33:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.306 2 INFO nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Using config drive
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.332 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:07 compute-0 ceph-mon[73668]: pgmap v1789: 305 pgs: 305 active+clean; 118 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Oct 02 12:36:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1421537231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2301724489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Oct 02 12:36:07 compute-0 nova_compute[256940]: 2025-10-02 12:36:07.995 2 INFO nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Creating config drive at /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.001 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90wel8j4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.155 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90wel8j4" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.186 2 DEBUG nova.storage.rbd_utils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] rbd image e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.190 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.293 2 DEBUG nova.network.neutron [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Updated VIF entry in instance network info cache for port dfcf2008-94a2-47ee-bb17-1651231ca8f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.295 2 DEBUG nova.network.neutron [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Updating instance_info_cache with network_info: [{"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:08 compute-0 nova_compute[256940]: 2025-10-02 12:36:08.317 2 DEBUG oslo_concurrency.lockutils [req-e5abd424-dfc3-4cda-b3b6-29eb41fbb40b req-dff80c7a-3e38-4e67-a245-421559734843 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e1ea4895-562b-4c2e-9305-e9cb44590b8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:08.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 02 12:36:09 compute-0 ceph-mon[73668]: pgmap v1790: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.262 2 DEBUG oslo_concurrency.processutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config e1ea4895-562b-4c2e-9305-e9cb44590b8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.263 2 INFO nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Deleting local config drive /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a/disk.config because it was imported into RBD.
Oct 02 12:36:10 compute-0 kernel: tapdfcf2008-94: entered promiscuous mode
Oct 02 12:36:10 compute-0 ovn_controller[148123]: 2025-10-02T12:36:10Z|00399|binding|INFO|Claiming lport dfcf2008-94a2-47ee-bb17-1651231ca8f6 for this chassis.
Oct 02 12:36:10 compute-0 ovn_controller[148123]: 2025-10-02T12:36:10Z|00400|binding|INFO|dfcf2008-94a2-47ee-bb17-1651231ca8f6: Claiming fa:16:3e:5d:33:92 10.100.0.14
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.3389] manager: (tapdfcf2008-94): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.352 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:33:92 10.100.0.14'], port_security=['fa:16:3e:5d:33:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e1ea4895-562b-4c2e-9305-e9cb44590b8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67164307b6c54c85818a4f6aa13779f8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ef9b075-53d8-4bbe-9aca-6d8bd0086ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d7a2fb8-01fe-47af-8e7a-991cb199e121, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=dfcf2008-94a2-47ee-bb17-1651231ca8f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.353 158104 INFO neutron.agent.ovn.metadata.agent [-] Port dfcf2008-94a2-47ee-bb17-1651231ca8f6 in datapath 6445fee4-215a-43c8-ac0b-1a65a7eb79a0 bound to our chassis
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.355 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6445fee4-215a-43c8-ac0b-1a65a7eb79a0
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.368 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[35c0a41d-2936-4dd5-914a-ef064dfa67c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.370 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6445fee4-21 in ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.372 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6445fee4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.372 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cc6367-9709-4d6e-93cb-5be4bb56405b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 systemd-udevd[315443]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.374 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[134ed21f-b831-4879-90cb-a91e98644f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 systemd-machined[210927]: New machine qemu-42-instance-0000005e.
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.3912] device (tapdfcf2008-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.3921] device (tapdfcf2008-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.393 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff8fc0b-ce62-4217-9a8c-378999eef026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-0000005e.
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.424 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2addb389-7018-4acb-934d-4de03eeea1ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 ovn_controller[148123]: 2025-10-02T12:36:10Z|00401|binding|INFO|Setting lport dfcf2008-94a2-47ee-bb17-1651231ca8f6 ovn-installed in OVS
Oct 02 12:36:10 compute-0 ovn_controller[148123]: 2025-10-02T12:36:10Z|00402|binding|INFO|Setting lport dfcf2008-94a2-47ee-bb17-1651231ca8f6 up in Southbound
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.463 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[68045af6-7084-455a-9384-7290a0786889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.468 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[acedcc2c-0e2d-4507-a734-cdc2023f0590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 systemd-udevd[315446]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.4708] manager: (tap6445fee4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Oct 02 12:36:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:10.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.514 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f2c48f-7418-478b-a74f-128f96fc5145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.519 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[67f14f4b-0cbd-429e-b65c-f239538a761c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.5533] device (tap6445fee4-20): carrier: link connected
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.564 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8df7a0a9-c8d1-461d-8df0-e6ba99a6561c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.586 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09de04ab-f5b7-46c2-9d83-af53a9bf3af1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6445fee4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:58:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646603, 'reachable_time': 32203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315476, 'error': None, 'target': 'ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.615 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[58bc1aea-c7ea-4ea6-9a0e-d32157c073e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:583f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646603, 'tstamp': 646603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315477, 'error': None, 'target': 'ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.640 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90616453-650a-4c6b-8de0-e0636def167e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6445fee4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:58:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646603, 'reachable_time': 32203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315485, 'error': None, 'target': 'ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.687 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d853091-4829-42cf-a6d0-314757b4125f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.761 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[223a8ecc-fe13-4db4-b3b9-da09bdb17ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.763 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6445fee4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.763 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.763 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6445fee4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:10 compute-0 kernel: tap6445fee4-20: entered promiscuous mode
Oct 02 12:36:10 compute-0 NetworkManager[44981]: <info>  [1759408570.7664] manager: (tap6445fee4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.768 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6445fee4-20, col_values=(('external_ids', {'iface-id': 'fa7b7474-6027-45bf-9607-38650ab7a910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:10 compute-0 ovn_controller[148123]: 2025-10-02T12:36:10Z|00403|binding|INFO|Releasing lport fa7b7474-6027-45bf-9607-38650ab7a910 from this chassis (sb_readonly=0)
Oct 02 12:36:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:10 compute-0 nova_compute[256940]: 2025-10-02 12:36:10.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.788 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6445fee4-215a-43c8-ac0b-1a65a7eb79a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6445fee4-215a-43c8-ac0b-1a65a7eb79a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.789 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd8851f-1024-42cb-b124-68e8a01f76f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.790 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-6445fee4-215a-43c8-ac0b-1a65a7eb79a0
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/6445fee4-215a-43c8-ac0b-1a65a7eb79a0.pid.haproxy
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 6445fee4-215a-43c8-ac0b-1a65a7eb79a0
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:36:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:10.791 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'env', 'PROCESS_TAG=haproxy-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6445fee4-215a-43c8-ac0b-1a65a7eb79a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:36:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:11 compute-0 ceph-mon[73668]: pgmap v1791: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 02 12:36:11 compute-0 podman[315552]: 2025-10-02 12:36:11.150132258 +0000 UTC m=+0.024493890 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.441 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408571.4168217, e1ea4895-562b-4c2e-9305-e9cb44590b8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.442 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] VM Started (Lifecycle Event)
Oct 02 12:36:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.476 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.483 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408571.4169962, e1ea4895-562b-4c2e-9305-e9cb44590b8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.484 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] VM Paused (Lifecycle Event)
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.514 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.518 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.545 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:11 compute-0 nova_compute[256940]: 2025-10-02 12:36:11.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.191 2 DEBUG nova.compute.manager [req-7b77fbd7-3f3e-42e2-8698-9acc9cdda868 req-e6938a5f-fe71-4532-b666-830e9f25e7c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Received event network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.194 2 DEBUG oslo_concurrency.lockutils [req-7b77fbd7-3f3e-42e2-8698-9acc9cdda868 req-e6938a5f-fe71-4532-b666-830e9f25e7c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.194 2 DEBUG oslo_concurrency.lockutils [req-7b77fbd7-3f3e-42e2-8698-9acc9cdda868 req-e6938a5f-fe71-4532-b666-830e9f25e7c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.195 2 DEBUG oslo_concurrency.lockutils [req-7b77fbd7-3f3e-42e2-8698-9acc9cdda868 req-e6938a5f-fe71-4532-b666-830e9f25e7c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.195 2 DEBUG nova.compute.manager [req-7b77fbd7-3f3e-42e2-8698-9acc9cdda868 req-e6938a5f-fe71-4532-b666-830e9f25e7c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Processing event network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.196 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.201 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408572.201461, e1ea4895-562b-4c2e-9305-e9cb44590b8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.202 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] VM Resumed (Lifecycle Event)
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.206 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.211 2 INFO nova.virt.libvirt.driver [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Instance spawned successfully.
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.212 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.242 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.248 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.265 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.266 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.267 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.267 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.268 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.268 2 DEBUG nova.virt.libvirt.driver [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.273 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:12 compute-0 podman[315552]: 2025-10-02 12:36:12.301243956 +0000 UTC m=+1.175605568 container create 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.324 2 INFO nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Took 12.43 seconds to spawn the instance on the hypervisor.
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.325 2 DEBUG nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.398 2 INFO nova.compute.manager [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Took 13.62 seconds to build instance.
Oct 02 12:36:12 compute-0 nova_compute[256940]: 2025-10-02 12:36:12.417 2 DEBUG oslo_concurrency.lockutils [None req-4ac55d8f-eed4-431d-8915-e3eb8c33893d a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:12 compute-0 systemd[1]: Started libpod-conmon-2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db.scope.
Oct 02 12:36:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d78ca7c7a3aee72898082db4014f3f3540b60192c62e7f2f40006e0a32dad329/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:12.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:12 compute-0 ceph-mon[73668]: pgmap v1792: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:36:12 compute-0 podman[315552]: 2025-10-02 12:36:12.863230987 +0000 UTC m=+1.737592619 container init 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:36:12 compute-0 podman[315552]: 2025-10-02 12:36:12.871306198 +0000 UTC m=+1.745667820 container start 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:36:12 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [NOTICE]   (315574) : New worker (315576) forked
Oct 02 12:36:12 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [NOTICE]   (315574) : Loading success.
Oct 02 12:36:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.329 2 DEBUG nova.compute.manager [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Received event network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.330 2 DEBUG oslo_concurrency.lockutils [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.330 2 DEBUG oslo_concurrency.lockutils [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.330 2 DEBUG oslo_concurrency.lockutils [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.331 2 DEBUG nova.compute.manager [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] No waiting events found dispatching network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:14 compute-0 nova_compute[256940]: 2025-10-02 12:36:14.331 2 WARNING nova.compute.manager [req-b7230bf0-fd04-4836-baf9-b0abbaabed9f req-52cc504a-358e-4c87-adf0-70d7ed11cb4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Received unexpected event network-vif-plugged-dfcf2008-94a2-47ee-bb17-1651231ca8f6 for instance with vm_state active and task_state None.
Oct 02 12:36:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:15 compute-0 ceph-mon[73668]: pgmap v1793: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:36:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Oct 02 12:36:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.295 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.296 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.297 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.297 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.298 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.300 2 INFO nova.compute.manager [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Terminating instance
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.302 2 DEBUG nova.compute.manager [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:36:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:16 compute-0 kernel: tapdfcf2008-94 (unregistering): left promiscuous mode
Oct 02 12:36:16 compute-0 NetworkManager[44981]: <info>  [1759408576.5413] device (tapdfcf2008-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:36:16 compute-0 ceph-mon[73668]: pgmap v1794: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Oct 02 12:36:16 compute-0 ovn_controller[148123]: 2025-10-02T12:36:16Z|00404|binding|INFO|Releasing lport dfcf2008-94a2-47ee-bb17-1651231ca8f6 from this chassis (sb_readonly=0)
Oct 02 12:36:16 compute-0 ovn_controller[148123]: 2025-10-02T12:36:16Z|00405|binding|INFO|Setting lport dfcf2008-94a2-47ee-bb17-1651231ca8f6 down in Southbound
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_controller[148123]: 2025-10-02T12:36:16Z|00406|binding|INFO|Removing iface tapdfcf2008-94 ovn-installed in OVS
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:16.577 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:33:92 10.100.0.14'], port_security=['fa:16:3e:5d:33:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e1ea4895-562b-4c2e-9305-e9cb44590b8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67164307b6c54c85818a4f6aa13779f8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ef9b075-53d8-4bbe-9aca-6d8bd0086ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d7a2fb8-01fe-47af-8e7a-991cb199e121, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=dfcf2008-94a2-47ee-bb17-1651231ca8f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:16.578 158104 INFO neutron.agent.ovn.metadata.agent [-] Port dfcf2008-94a2-47ee-bb17-1651231ca8f6 in datapath 6445fee4-215a-43c8-ac0b-1a65a7eb79a0 unbound from our chassis
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:16.580 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6445fee4-215a-43c8-ac0b-1a65a7eb79a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:16.581 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a94a67b5-a622-46cf-8511-d2175578cf89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:16.583 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0 namespace which is not needed anymore
Oct 02 12:36:16 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 02 12:36:16 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Consumed 4.919s CPU time.
Oct 02 12:36:16 compute-0 systemd-machined[210927]: Machine qemu-42-instance-0000005e terminated.
Oct 02 12:36:16 compute-0 podman[315586]: 2025-10-02 12:36:16.630245373 +0000 UTC m=+0.060971022 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:36:16 compute-0 podman[315590]: 2025-10-02 12:36:16.67304194 +0000 UTC m=+0.100294019 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.738 2 INFO nova.virt.libvirt.driver [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Instance destroyed successfully.
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.739 2 DEBUG nova.objects.instance [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lazy-loading 'resources' on Instance uuid e1ea4895-562b-4c2e-9305-e9cb44590b8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.763 2 DEBUG nova.virt.libvirt.vif [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-38842122',display_name='tempest-ServerAddressesTestJSON-server-38842122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-38842122',id=94,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67164307b6c54c85818a4f6aa13779f8',ramdisk_id='',reservation_id='r-jgyde1yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1201151267',owner_user_name='tempest-ServerAddressesTestJSON-1201151267-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:12Z,user_data=None,user_id='a6ed6c81c3f041efba3eeba9fc5a4060',uuid=e1ea4895-562b-4c2e-9305-e9cb44590b8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.763 2 DEBUG nova.network.os_vif_util [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converting VIF {"id": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "address": "fa:16:3e:5d:33:92", "network": {"id": "6445fee4-215a-43c8-ac0b-1a65a7eb79a0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1019618447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67164307b6c54c85818a4f6aa13779f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfcf2008-94", "ovs_interfaceid": "dfcf2008-94a2-47ee-bb17-1651231ca8f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.764 2 DEBUG nova.network.os_vif_util [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.764 2 DEBUG os_vif [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfcf2008-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:16.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:16 compute-0 nova_compute[256940]: 2025-10-02 12:36:16.821 2 INFO os_vif [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:33:92,bridge_name='br-int',has_traffic_filtering=True,id=dfcf2008-94a2-47ee-bb17-1651231ca8f6,network=Network(6445fee4-215a-43c8-ac0b-1a65a7eb79a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfcf2008-94')
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [NOTICE]   (315574) : haproxy version is 2.8.14-c23fe91
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [NOTICE]   (315574) : path to executable is /usr/sbin/haproxy
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [WARNING]  (315574) : Exiting Master process...
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [WARNING]  (315574) : Exiting Master process...
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [ALERT]    (315574) : Current worker (315576) exited with code 143 (Terminated)
Oct 02 12:36:16 compute-0 neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0[315568]: [WARNING]  (315574) : All workers exited. Exiting... (0)
Oct 02 12:36:16 compute-0 systemd[1]: libpod-2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db.scope: Deactivated successfully.
Oct 02 12:36:16 compute-0 podman[315656]: 2025-10-02 12:36:16.87564825 +0000 UTC m=+0.197820726 container died 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d78ca7c7a3aee72898082db4014f3f3540b60192c62e7f2f40006e0a32dad329-merged.mount: Deactivated successfully.
Oct 02 12:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db-userdata-shm.mount: Deactivated successfully.
Oct 02 12:36:16 compute-0 podman[315656]: 2025-10-02 12:36:16.960364931 +0000 UTC m=+0.282537407 container cleanup 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:36:16 compute-0 systemd[1]: libpod-conmon-2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db.scope: Deactivated successfully.
Oct 02 12:36:17 compute-0 podman[315714]: 2025-10-02 12:36:17.038327146 +0000 UTC m=+0.049850912 container remove 2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.047 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[77d37673-1222-4b01-832b-187d5966e04d]: (4, ('Thu Oct  2 12:36:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0 (2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db)\n2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db\nThu Oct  2 12:36:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0 (2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db)\n2677ce49868405f3df2d6a38e50755d87d80a5960472fac78fe5f7aae19c28db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.049 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ab8258-b283-48b2-acf8-02f23d16180e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.050 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6445fee4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:17 compute-0 nova_compute[256940]: 2025-10-02 12:36:17.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:17 compute-0 kernel: tap6445fee4-20: left promiscuous mode
Oct 02 12:36:17 compute-0 nova_compute[256940]: 2025-10-02 12:36:17.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.092 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[689caa7e-532e-4a1c-99f9-c2068fc7250b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.130 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e5601e-55f1-4223-9f1f-31b2ed7a977e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.132 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[285d74f8-fa5b-4531-9ba9-f54169a63114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.153 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ed380-d57c-48b1-813a-31b30fa01b37]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646594, 'reachable_time': 23290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315729, 'error': None, 'target': 'ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d6445fee4\x2d215a\x2d43c8\x2dac0b\x2d1a65a7eb79a0.mount: Deactivated successfully.
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.157 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6445fee4-215a-43c8-ac0b-1a65a7eb79a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:36:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:17.157 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e99c8dae-3796-4815-a157-fc6ab1b02992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 142 op/s
Oct 02 12:36:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3738831801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:17 compute-0 sudo[315731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:17 compute-0 sudo[315731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:17 compute-0 sudo[315731]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:18 compute-0 sudo[315756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:18 compute-0 sudo[315756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:18 compute-0 sudo[315756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:18.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:18 compute-0 ceph-mon[73668]: pgmap v1795: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 142 op/s
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.119 2 INFO nova.virt.libvirt.driver [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Deleting instance files /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a_del
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.120 2 INFO nova.virt.libvirt.driver [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Deletion of /var/lib/nova/instances/e1ea4895-562b-4c2e-9305-e9cb44590b8a_del complete
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.225 2 INFO nova.compute.manager [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Took 2.92 seconds to destroy the instance on the hypervisor.
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.225 2 DEBUG oslo.service.loopingcall [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.226 2 DEBUG nova.compute.manager [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:36:19 compute-0 nova_compute[256940]: 2025-10-02 12:36:19.226 2 DEBUG nova.network.neutron [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:36:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 45 KiB/s wr, 76 op/s
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.252 2 DEBUG nova.network.neutron [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.293 2 INFO nova.compute.manager [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Took 1.07 seconds to deallocate network for instance.
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.385 2 DEBUG nova.compute.manager [req-9badff37-7f23-4ed3-bf48-37101f4e2458 req-88d8f78c-25d7-4ae2-89d2-04bd7f698087 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Received event network-vif-deleted-dfcf2008-94a2-47ee-bb17-1651231ca8f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.397 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.398 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.485 2 DEBUG oslo_concurrency.processutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685352000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.948 2 DEBUG oslo_concurrency.processutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:20 compute-0 nova_compute[256940]: 2025-10-02 12:36:20.955 2 DEBUG nova.compute.provider_tree [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.002 2 DEBUG nova.scheduler.client.report [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.064 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:21 compute-0 ceph-mon[73668]: pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 45 KiB/s wr, 76 op/s
Oct 02 12:36:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/995817035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1685352000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.147 2 INFO nova.scheduler.client.report [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Deleted allocations for instance e1ea4895-562b-4c2e-9305-e9cb44590b8a
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.246 2 DEBUG oslo_concurrency.lockutils [None req-71b0e840-cf8a-4aae-8f1d-3b87b10bb63e a6ed6c81c3f041efba3eeba9fc5a4060 67164307b6c54c85818a4f6aa13779f8 - - default default] Lock "e1ea4895-562b-4c2e-9305-e9cb44590b8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 137 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 100 op/s
Oct 02 12:36:21 compute-0 nova_compute[256940]: 2025-10-02 12:36:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:22.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:22.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:23 compute-0 ceph-mon[73668]: pgmap v1797: 305 pgs: 305 active+clean; 137 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 100 op/s
Oct 02 12:36:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 129 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 320 KiB/s wr, 107 op/s
Oct 02 12:36:24 compute-0 nova_compute[256940]: 2025-10-02 12:36:24.377 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:24.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:24 compute-0 ceph-mon[73668]: pgmap v1798: 305 pgs: 305 active+clean; 129 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 320 KiB/s wr, 107 op/s
Oct 02 12:36:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 159 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:36:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:26 compute-0 nova_compute[256940]: 2025-10-02 12:36:26.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:26.472 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:26.472 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:26.472 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:26 compute-0 nova_compute[256940]: 2025-10-02 12:36:26.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:27 compute-0 nova_compute[256940]: 2025-10-02 12:36:27.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:27 compute-0 nova_compute[256940]: 2025-10-02 12:36:27.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:36:27 compute-0 ceph-mon[73668]: pgmap v1799: 305 pgs: 305 active+clean; 159 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:36:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:36:27 compute-0 nova_compute[256940]: 2025-10-02 12:36:27.902 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:27 compute-0 nova_compute[256940]: 2025-10-02 12:36:27.902 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.021 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:28 compute-0 ceph-mon[73668]: pgmap v1800: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.421 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.422 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.429 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.429 2 INFO nova.compute.claims [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:28 compute-0 nova_compute[256940]: 2025-10-02 12:36:28.730 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:36:28
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 02 12:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:36:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:29 compute-0 sudo[315829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:29 compute-0 sudo[315829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-0 sudo[315829]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:36:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622962585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:29 compute-0 sudo[315854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:29 compute-0 sudo[315854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-0 sudo[315854]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.199 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.206 2 DEBUG nova.compute.provider_tree [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:29 compute-0 sudo[315881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:29 compute-0 sudo[315881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-0 sudo[315881]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.294 2 DEBUG nova.scheduler.client.report [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:29 compute-0 sudo[315906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:36:29 compute-0 sudo[315906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3622962585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2996427117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.608 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.609 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.732 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.733 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:36:29 compute-0 sudo[315906]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.840 2 INFO nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:36:29 compute-0 nova_compute[256940]: 2025-10-02 12:36:29.881 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev faed038c-c1a7-47aa-9898-9161a17ffff5 does not exist
Oct 02 12:36:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bd974db6-2931-4757-8d40-5b8f96daf3e3 does not exist
Oct 02 12:36:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 63685009-b16f-408f-b5c2-20a705b59583 does not exist
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:36:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.147 2 DEBUG nova.policy [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2f79bec3812947f09a3b04d1f31f08fa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '32951c3cc7a049e8b3ecc438ec73ccdd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:36:30 compute-0 sudo[315962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:30 compute-0 sudo[315962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:30 compute-0 sudo[315962]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:30 compute-0 sudo[315994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:30 compute-0 sudo[315994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:30 compute-0 podman[315986]: 2025-10-02 12:36:30.232625807 +0000 UTC m=+0.061698882 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 12:36:30 compute-0 sudo[315994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:30 compute-0 podman[315987]: 2025-10-02 12:36:30.270318151 +0000 UTC m=+0.087400053 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd)
Oct 02 12:36:30 compute-0 sudo[316049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:30 compute-0 sudo[316049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:30 compute-0 sudo[316049]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.305 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.307 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.308 2 INFO nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Creating image(s)
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.343 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:30 compute-0 sudo[316081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:36:30 compute-0 sudo[316081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.379 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.405 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.409 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.473 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.475 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.476 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.476 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.509 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.513 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3d8f25cb-a0ea-445c-8254-925defd1187a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:30 compute-0 ceph-mon[73668]: pgmap v1801: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/795182459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:36:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.752132848 +0000 UTC m=+0.082621067 container create ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 02 12:36:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.692792049 +0000 UTC m=+0.023280278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:30 compute-0 systemd[1]: Started libpod-conmon-ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0.scope.
Oct 02 12:36:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:30.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.903990303 +0000 UTC m=+0.234478552 container init ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.914852406 +0000 UTC m=+0.245340635 container start ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:30 compute-0 musing_bhabha[316254]: 167 167
Oct 02 12:36:30 compute-0 systemd[1]: libpod-ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0.scope: Deactivated successfully.
Oct 02 12:36:30 compute-0 conmon[316254]: conmon ebc802f4c8a7e854361a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0.scope/container/memory.events
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.946345068 +0000 UTC m=+0.276833287 container attach ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 12:36:30 compute-0 podman[316226]: 2025-10-02 12:36:30.947905279 +0000 UTC m=+0.278393498 container died ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:36:30 compute-0 nova_compute[256940]: 2025-10-02 12:36:30.975 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3d8f25cb-a0ea-445c-8254-925defd1187a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e3e649e4f5ee15be54df8acdea084e8bb1893de2edf37cd0f712e9125c4bbe9-merged.mount: Deactivated successfully.
Oct 02 12:36:31 compute-0 podman[316226]: 2025-10-02 12:36:31.027209569 +0000 UTC m=+0.357697788 container remove ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:31 compute-0 systemd[1]: libpod-conmon-ebc802f4c8a7e854361aa9bc5ea9e2174a6cc33cf066220791f2caa090a84ff0.scope: Deactivated successfully.
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.083 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] resizing rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.238 2 DEBUG nova.objects.instance [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lazy-loading 'migration_context' on Instance uuid 3d8f25cb-a0ea-445c-8254-925defd1187a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:31 compute-0 podman[316334]: 2025-10-02 12:36:31.287568586 +0000 UTC m=+0.106366898 container create 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.303 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.304 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Ensure instance console log exists: /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.305 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.305 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.306 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:31 compute-0 podman[316334]: 2025-10-02 12:36:31.231756029 +0000 UTC m=+0.050554431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:31 compute-0 systemd[1]: Started libpod-conmon-3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad.scope.
Oct 02 12:36:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:31 compute-0 podman[316334]: 2025-10-02 12:36:31.409411367 +0000 UTC m=+0.228209679 container init 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:31 compute-0 podman[316334]: 2025-10-02 12:36:31.420709212 +0000 UTC m=+0.239507514 container start 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:31 compute-0 podman[316334]: 2025-10-02 12:36:31.430958519 +0000 UTC m=+0.249756841 container attach 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:36:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.736 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408576.7354734, e1ea4895-562b-4c2e-9305-e9cb44590b8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.738 2 INFO nova.compute.manager [-] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] VM Stopped (Lifecycle Event)
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:31 compute-0 nova_compute[256940]: 2025-10-02 12:36:31.850 2 DEBUG nova.compute.manager [None req-da8070c2-d2b4-44c8-aab0-2e38ebb59c4a - - - - - -] [instance: e1ea4895-562b-4c2e-9305-e9cb44590b8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:32.289 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:32.292 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:36:32 compute-0 nova_compute[256940]: 2025-10-02 12:36:32.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:32 compute-0 dazzling_tharp[316368]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:36:32 compute-0 dazzling_tharp[316368]: --> relative data size: 1.0
Oct 02 12:36:32 compute-0 dazzling_tharp[316368]: --> All data devices are unavailable
Oct 02 12:36:32 compute-0 systemd[1]: libpod-3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad.scope: Deactivated successfully.
Oct 02 12:36:32 compute-0 podman[316334]: 2025-10-02 12:36:32.365201408 +0000 UTC m=+1.183999730 container died 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59e8e48ad8263c9bf2db19965b50d9150747b0ff943db6c6de559141288c071-merged.mount: Deactivated successfully.
Oct 02 12:36:32 compute-0 podman[316334]: 2025-10-02 12:36:32.479674756 +0000 UTC m=+1.298473068 container remove 3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:32 compute-0 systemd[1]: libpod-conmon-3a880a6f81593174d9348fc91a41164aff3e5bfcb972ba4bb91ff7862a2d88ad.scope: Deactivated successfully.
Oct 02 12:36:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:32 compute-0 sudo[316081]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:32 compute-0 sudo[316396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:32 compute-0 sudo[316396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:32 compute-0 sudo[316396]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:32 compute-0 nova_compute[256940]: 2025-10-02 12:36:32.691 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Successfully created port: 52c44aa2-66b1-4b0a-9772-db0485c82d75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:36:32 compute-0 sudo[316422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:32 compute-0 sudo[316422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:32 compute-0 sudo[316422]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:32 compute-0 sudo[316447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:32 compute-0 sudo[316447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:32 compute-0 sudo[316447]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 02 12:36:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:32.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:32 compute-0 sudo[316472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:36:32 compute-0 sudo[316472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:32 compute-0 ceph-mon[73668]: pgmap v1802: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 02 12:36:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.274122695 +0000 UTC m=+0.027887059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.399817215 +0000 UTC m=+0.153581559 container create 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:36:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 192 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 3.2 MiB/s wr, 25 op/s
Oct 02 12:36:33 compute-0 systemd[1]: Started libpod-conmon-11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a.scope.
Oct 02 12:36:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.800345291 +0000 UTC m=+0.554109665 container init 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.813394741 +0000 UTC m=+0.567159125 container start 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:33 compute-0 unruffled_proskuriakova[316554]: 167 167
Oct 02 12:36:33 compute-0 systemd[1]: libpod-11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a.scope: Deactivated successfully.
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.885523384 +0000 UTC m=+0.639287758 container attach 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:36:33 compute-0 podman[316537]: 2025-10-02 12:36:33.887387783 +0000 UTC m=+0.641152157 container died 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca9051e36a441f10e1c6378ac83504f3c65bf66e60e0769c709cca04f31ca96-merged.mount: Deactivated successfully.
Oct 02 12:36:34 compute-0 podman[316537]: 2025-10-02 12:36:34.198638688 +0000 UTC m=+0.952403042 container remove 11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_proskuriakova, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:34 compute-0 ceph-mon[73668]: osdmap e249: 3 total, 3 up, 3 in
Oct 02 12:36:34 compute-0 systemd[1]: libpod-conmon-11a0f2cd45a6728082e04547233b3901adad050a103e1d241684fecfed76114a.scope: Deactivated successfully.
Oct 02 12:36:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:34.294 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:34 compute-0 podman[316580]: 2025-10-02 12:36:34.455639757 +0000 UTC m=+0.065635234 container create 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:36:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:34 compute-0 podman[316580]: 2025-10-02 12:36:34.427693498 +0000 UTC m=+0.037688965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:34 compute-0 systemd[1]: Started libpod-conmon-9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737.scope.
Oct 02 12:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95d60c9070d97b93016da0a1ccfc1ae8446b8e1c4b47bca4a7d6a52acf3c9ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95d60c9070d97b93016da0a1ccfc1ae8446b8e1c4b47bca4a7d6a52acf3c9ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95d60c9070d97b93016da0a1ccfc1ae8446b8e1c4b47bca4a7d6a52acf3c9ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95d60c9070d97b93016da0a1ccfc1ae8446b8e1c4b47bca4a7d6a52acf3c9ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:34 compute-0 podman[316580]: 2025-10-02 12:36:34.665803483 +0000 UTC m=+0.275798950 container init 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:36:34 compute-0 podman[316580]: 2025-10-02 12:36:34.679738817 +0000 UTC m=+0.289734294 container start 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:36:34 compute-0 podman[316580]: 2025-10-02 12:36:34.724493996 +0000 UTC m=+0.334489443 container attach 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:35 compute-0 ceph-mon[73668]: pgmap v1804: 305 pgs: 305 active+clean; 192 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 3.2 MiB/s wr, 25 op/s
Oct 02 12:36:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4108097679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 210 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:36:35 compute-0 musing_nobel[316598]: {
Oct 02 12:36:35 compute-0 musing_nobel[316598]:     "1": [
Oct 02 12:36:35 compute-0 musing_nobel[316598]:         {
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "devices": [
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "/dev/loop3"
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             ],
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "lv_name": "ceph_lv0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "lv_size": "7511998464",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "name": "ceph_lv0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "tags": {
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.cluster_name": "ceph",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.crush_device_class": "",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.encrypted": "0",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.osd_id": "1",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.type": "block",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:                 "ceph.vdo": "0"
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             },
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "type": "block",
Oct 02 12:36:35 compute-0 musing_nobel[316598]:             "vg_name": "ceph_vg0"
Oct 02 12:36:35 compute-0 musing_nobel[316598]:         }
Oct 02 12:36:35 compute-0 musing_nobel[316598]:     ]
Oct 02 12:36:35 compute-0 musing_nobel[316598]: }
Oct 02 12:36:35 compute-0 systemd[1]: libpod-9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737.scope: Deactivated successfully.
Oct 02 12:36:35 compute-0 podman[316580]: 2025-10-02 12:36:35.553497076 +0000 UTC m=+1.163492563 container died 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:36:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e95d60c9070d97b93016da0a1ccfc1ae8446b8e1c4b47bca4a7d6a52acf3c9ac-merged.mount: Deactivated successfully.
Oct 02 12:36:36 compute-0 podman[316580]: 2025-10-02 12:36:36.049547776 +0000 UTC m=+1.659543263 container remove 9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:36:36 compute-0 systemd[1]: libpod-conmon-9c81ac0124bf0afe51138189916a79ff5931cfec21499e6b22159e46210a8737.scope: Deactivated successfully.
Oct 02 12:36:36 compute-0 sudo[316472]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[316621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:36 compute-0 sudo[316621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[316621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 nova_compute[256940]: 2025-10-02 12:36:36.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:36 compute-0 sudo[316646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:36 compute-0 sudo[316646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[316646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[316671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:36 compute-0 sudo[316671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 sudo[316671]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:36 compute-0 sudo[316696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:36:36 compute-0 sudo[316696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3385465174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:36 compute-0 ceph-mon[73668]: pgmap v1805: 305 pgs: 305 active+clean; 210 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:36:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:36 compute-0 nova_compute[256940]: 2025-10-02 12:36:36.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:36 compute-0 podman[316762]: 2025-10-02 12:36:36.878769863 +0000 UTC m=+0.039716478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.046451359 +0000 UTC m=+0.207397994 container create f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:36:37 compute-0 systemd[1]: Started libpod-conmon-f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a.scope.
Oct 02 12:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.37291901 +0000 UTC m=+0.533865685 container init f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.388826745 +0000 UTC m=+0.549773380 container start f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.393525428 +0000 UTC m=+0.554472103 container attach f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:37 compute-0 pedantic_meitner[316779]: 167 167
Oct 02 12:36:37 compute-0 systemd[1]: libpod-f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a.scope: Deactivated successfully.
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.398463716 +0000 UTC m=+0.559410341 container died f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8475e70effe49a7c97d30eb42bf6e96a59da79f42a0f215ead719b78fb98acce-merged.mount: Deactivated successfully.
Oct 02 12:36:37 compute-0 podman[316762]: 2025-10-02 12:36:37.452549948 +0000 UTC m=+0.613496583 container remove f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meitner, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:36:37 compute-0 systemd[1]: libpod-conmon-f9c409c1268c2023986e5af67eb3a30b83aed32562541cf5163cc17c2fb42f9a.scope: Deactivated successfully.
Oct 02 12:36:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:37 compute-0 podman[316801]: 2025-10-02 12:36:37.706769025 +0000 UTC m=+0.070271235 container create 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:36:37 compute-0 systemd[1]: Started libpod-conmon-331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e.scope.
Oct 02 12:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:37 compute-0 podman[316801]: 2025-10-02 12:36:37.686894616 +0000 UTC m=+0.050396816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb890b14d8e059871f939915e9f780f2835b7859f47fcc9ad2d308e8b0cc73e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb890b14d8e059871f939915e9f780f2835b7859f47fcc9ad2d308e8b0cc73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb890b14d8e059871f939915e9f780f2835b7859f47fcc9ad2d308e8b0cc73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb890b14d8e059871f939915e9f780f2835b7859f47fcc9ad2d308e8b0cc73e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:37 compute-0 podman[316801]: 2025-10-02 12:36:37.793027427 +0000 UTC m=+0.156529707 container init 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:36:37 compute-0 podman[316801]: 2025-10-02 12:36:37.805196605 +0000 UTC m=+0.168698785 container start 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:36:37 compute-0 podman[316801]: 2025-10-02 12:36:37.808339227 +0000 UTC m=+0.171841497 container attach 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:36:38 compute-0 sudo[316822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:38 compute-0 sudo[316822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:38 compute-0 sudo[316822]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:38 compute-0 sudo[316847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:38 compute-0 sudo[316847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:38 compute-0 sudo[316847]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:38 compute-0 ceph-mon[73668]: pgmap v1806: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]: {
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:         "osd_id": 1,
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:         "type": "bluestore"
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]:     }
Oct 02 12:36:38 compute-0 elegant_ramanujan[316817]: }
Oct 02 12:36:38 compute-0 systemd[1]: libpod-331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e.scope: Deactivated successfully.
Oct 02 12:36:38 compute-0 podman[316801]: 2025-10-02 12:36:38.68526896 +0000 UTC m=+1.048771150 container died 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cb890b14d8e059871f939915e9f780f2835b7859f47fcc9ad2d308e8b0cc73e-merged.mount: Deactivated successfully.
Oct 02 12:36:38 compute-0 podman[316801]: 2025-10-02 12:36:38.746788316 +0000 UTC m=+1.110290506 container remove 331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:36:38 compute-0 systemd[1]: libpod-conmon-331107b932b2736f16d66a32e92734808c3421f01b800f7ff8220ab22e7db81e.scope: Deactivated successfully.
Oct 02 12:36:38 compute-0 sudo[316696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:36:38 compute-0 nova_compute[256940]: 2025-10-02 12:36:38.813 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Successfully updated port: 52c44aa2-66b1-4b0a-9772-db0485c82d75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:36:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:36:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:38.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ce99ca2a-ced5-4847-ac42-7a9804dc3283 does not exist
Oct 02 12:36:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 16b9a811-0e14-4f64-ae0a-b76b5f99c02f does not exist
Oct 02 12:36:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a2077382-2882-402e-b145-522198188e8b does not exist
Oct 02 12:36:38 compute-0 nova_compute[256940]: 2025-10-02 12:36:38.859 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:38 compute-0 nova_compute[256940]: 2025-10-02 12:36:38.859 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquired lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:38 compute-0 nova_compute[256940]: 2025-10-02 12:36:38.860 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:36:38 compute-0 sudo[316903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:38 compute-0 sudo[316903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:38 compute-0 sudo[316903]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:38 compute-0 sudo[316928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:36:38 compute-0 sudo[316928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:38 compute-0 sudo[316928]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:39 compute-0 nova_compute[256940]: 2025-10-02 12:36:39.191 2 DEBUG nova.compute.manager [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-changed-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:39 compute-0 nova_compute[256940]: 2025-10-02 12:36:39.192 2 DEBUG nova.compute.manager [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Refreshing instance network info cache due to event network-changed-52c44aa2-66b1-4b0a-9772-db0485c82d75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:39 compute-0 nova_compute[256940]: 2025-10-02 12:36:39.192 2 DEBUG oslo_concurrency.lockutils [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:39 compute-0 nova_compute[256940]: 2025-10-02 12:36:39.454 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:36:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0041597963777219435 of space, bias 1.0, pg target 1.247938913316583 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:36:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:36:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:40.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:40 compute-0 ceph-mon[73668]: pgmap v1807: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.862 2 DEBUG nova.network.neutron [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updating instance_info_cache with network_info: [{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.924 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Releasing lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.925 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Instance network_info: |[{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.926 2 DEBUG oslo_concurrency.lockutils [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.926 2 DEBUG nova.network.neutron [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Refreshing network info cache for port 52c44aa2-66b1-4b0a-9772-db0485c82d75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.929 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Start _get_guest_xml network_info=[{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.935 2 WARNING nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.942 2 DEBUG nova.virt.libvirt.host [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.943 2 DEBUG nova.virt.libvirt.host [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.948 2 DEBUG nova.virt.libvirt.host [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.948 2 DEBUG nova.virt.libvirt.host [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.949 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.949 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.949 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.950 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.951 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.951 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.951 2 DEBUG nova.virt.hardware [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:36:41 compute-0 nova_compute[256940]: 2025-10-02 12:36:41.953 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682488130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:42 compute-0 nova_compute[256940]: 2025-10-02 12:36:42.464 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:42 compute-0 nova_compute[256940]: 2025-10-02 12:36:42.501 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:42 compute-0 nova_compute[256940]: 2025-10-02 12:36:42.506 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:42.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912308509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.049 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.052 2 DEBUG nova.virt.libvirt.vif [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1579090700',display_name='tempest-ServersTestManualDisk-server-1579090700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1579090700',id=96,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcyTDdHWCjcrlrzjUMQFOw+PKZZUofeBU2XYQVOssn4YojM8Umy+faXGoO16vbsgqX4Nd2s/2SgTZarLK50U4dnjZWKpFSW2Q5vMylx0D0vYnx/Qs2hg5MObd8Cr/2/FQ==',key_name='tempest-keypair-1735883953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='32951c3cc7a049e8b3ecc438ec73ccdd',ramdisk_id='',reservation_id='r-fgpn7puw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1257845924',owner_user_name='tempest-ServersTestManualDisk-1257845924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f79bec3812947f09a3b04d1f31f08fa',uuid=3d8f25cb-a0ea-445c-8254-925defd1187a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.052 2 DEBUG nova.network.os_vif_util [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converting VIF {"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.053 2 DEBUG nova.network.os_vif_util [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.055 2 DEBUG nova.objects.instance [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d8f25cb-a0ea-445c-8254-925defd1187a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:43 compute-0 ceph-mon[73668]: pgmap v1808: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Oct 02 12:36:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/682488130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.078 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <uuid>3d8f25cb-a0ea-445c-8254-925defd1187a</uuid>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <name>instance-00000060</name>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersTestManualDisk-server-1579090700</nova:name>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:36:41</nova:creationTime>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:user uuid="2f79bec3812947f09a3b04d1f31f08fa">tempest-ServersTestManualDisk-1257845924-project-member</nova:user>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:project uuid="32951c3cc7a049e8b3ecc438ec73ccdd">tempest-ServersTestManualDisk-1257845924</nova:project>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <nova:port uuid="52c44aa2-66b1-4b0a-9772-db0485c82d75">
Oct 02 12:36:43 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <system>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="serial">3d8f25cb-a0ea-445c-8254-925defd1187a</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="uuid">3d8f25cb-a0ea-445c-8254-925defd1187a</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </system>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <os>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </os>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <features>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </features>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3d8f25cb-a0ea-445c-8254-925defd1187a_disk">
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config">
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:36:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:20:86:e3"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <target dev="tap52c44aa2-66"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/console.log" append="off"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <video>
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </video>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:36:43 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:36:43 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:36:43 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:36:43 compute-0 nova_compute[256940]: </domain>
Oct 02 12:36:43 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.080 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Preparing to wait for external event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.080 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.081 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.081 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.082 2 DEBUG nova.virt.libvirt.vif [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1579090700',display_name='tempest-ServersTestManualDisk-server-1579090700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1579090700',id=96,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcyTDdHWCjcrlrzjUMQFOw+PKZZUofeBU2XYQVOssn4YojM8Umy+faXGoO16vbsgqX4Nd2s/2SgTZarLK50U4dnjZWKpFSW2Q5vMylx0D0vYnx/Qs2hg5MObd8Cr/2/FQ==',key_name='tempest-keypair-1735883953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='32951c3cc7a049e8b3ecc438ec73ccdd',ramdisk_id='',reservation_id='r-fgpn7puw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1257845924',owner_user_name='tempest-ServersTestManualDisk-1257845924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f79bec3812947f09a3b04d1f31f08fa',uuid=3d8f25cb-a0ea-445c-8254-925defd1187a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.082 2 DEBUG nova.network.os_vif_util [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converting VIF {"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.083 2 DEBUG nova.network.os_vif_util [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.084 2 DEBUG os_vif [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52c44aa2-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52c44aa2-66, col_values=(('external_ids', {'iface-id': '52c44aa2-66b1-4b0a-9772-db0485c82d75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:86:e3', 'vm-uuid': '3d8f25cb-a0ea-445c-8254-925defd1187a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:36:43 compute-0 NetworkManager[44981]: <info>  [1759408603.0963] manager: (tap52c44aa2-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.107 2 INFO os_vif [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66')
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.334 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.336 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.337 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] No VIF found with MAC fa:16:3e:20:86:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.338 2 INFO nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Using config drive
Oct 02 12:36:43 compute-0 nova_compute[256940]: 2025-10-02 12:36:43.379 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 205 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 754 KiB/s wr, 228 op/s
Oct 02 12:36:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1912308509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2064274681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.182 2 INFO nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Creating config drive at /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.189 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsdzr9qpg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.342 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsdzr9qpg" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.387 2 DEBUG nova.storage.rbd_utils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] rbd image 3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.395 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config 3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.610 2 DEBUG oslo_concurrency.processutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config 3d8f25cb-a0ea-445c-8254-925defd1187a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.611 2 INFO nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Deleting local config drive /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a/disk.config because it was imported into RBD.
Oct 02 12:36:44 compute-0 NetworkManager[44981]: <info>  [1759408604.6900] manager: (tap52c44aa2-66): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Oct 02 12:36:44 compute-0 kernel: tap52c44aa2-66: entered promiscuous mode
Oct 02 12:36:44 compute-0 ovn_controller[148123]: 2025-10-02T12:36:44Z|00407|binding|INFO|Claiming lport 52c44aa2-66b1-4b0a-9772-db0485c82d75 for this chassis.
Oct 02 12:36:44 compute-0 ovn_controller[148123]: 2025-10-02T12:36:44Z|00408|binding|INFO|52c44aa2-66b1-4b0a-9772-db0485c82d75: Claiming fa:16:3e:20:86:e3 10.100.0.4
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.715 2 DEBUG nova.network.neutron [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updated VIF entry in instance network info cache for port 52c44aa2-66b1-4b0a-9772-db0485c82d75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.716 2 DEBUG nova.network.neutron [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updating instance_info_cache with network_info: [{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:44 compute-0 systemd-udevd[317092]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:44 compute-0 systemd-machined[210927]: New machine qemu-43-instance-00000060.
Oct 02 12:36:44 compute-0 NetworkManager[44981]: <info>  [1759408604.7648] device (tap52c44aa2-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:36:44 compute-0 NetworkManager[44981]: <info>  [1759408604.7659] device (tap52c44aa2-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:36:44 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000060.
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:44 compute-0 nova_compute[256940]: 2025-10-02 12:36:44.814 2 DEBUG oslo_concurrency.lockutils [req-b01e304c-5c8e-471f-83e8-807dc8a574cb req-9305ca6d-e80d-4f20-b29d-17ec64eef466 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:44.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.108 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:86:e3 10.100.0.4'], port_security=['fa:16:3e:20:86:e3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3d8f25cb-a0ea-445c-8254-925defd1187a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32951c3cc7a049e8b3ecc438ec73ccdd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d775c01-501d-4e1f-b108-95d014a8bb6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=120e66d2-5e50-4d01-98c8-69b208054b0a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=52c44aa2-66b1-4b0a-9772-db0485c82d75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.110 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 52c44aa2-66b1-4b0a-9772-db0485c82d75 in datapath 983208d8-4fdb-4b06-b8f5-fa1d075931d1 bound to our chassis
Oct 02 12:36:45 compute-0 ovn_controller[148123]: 2025-10-02T12:36:45Z|00409|binding|INFO|Setting lport 52c44aa2-66b1-4b0a-9772-db0485c82d75 ovn-installed in OVS
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.112 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 983208d8-4fdb-4b06-b8f5-fa1d075931d1
Oct 02 12:36:45 compute-0 ovn_controller[148123]: 2025-10-02T12:36:45Z|00410|binding|INFO|Setting lport 52c44aa2-66b1-4b0a-9772-db0485c82d75 up in Southbound
Oct 02 12:36:45 compute-0 nova_compute[256940]: 2025-10-02 12:36:45.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.127 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[456f66c1-9d86-42a7-af98-0fbf51e47465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.128 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap983208d8-41 in ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.129 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap983208d8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.129 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48004ce1-1bc2-46c8-a550-d19f5e300b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.131 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cbffbf62-890e-4692-8948-ebf029c1e2a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.145 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6b26ef-9bc2-41c9-97d2-a598123a08d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.173 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7168bc-b0a9-413e-845d-6215df5fd46c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ceph-mon[73668]: pgmap v1809: 305 pgs: 305 active+clean; 205 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 754 KiB/s wr, 228 op/s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.211 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff6a73-1e00-4682-b5d6-cba06a2760c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.219 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[309b9466-5073-4bee-bb3c-ae686098930e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 NetworkManager[44981]: <info>  [1759408605.2204] manager: (tap983208d8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/187)
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.262 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bc497721-480e-44fa-91e0-e1a926043102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.266 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c896c4e7-0974-49c8-a3f4-bd6936dbf4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 NetworkManager[44981]: <info>  [1759408605.2922] device (tap983208d8-40): carrier: link connected
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.299 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[03de4843-9b74-4988-83d6-9824e5962d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.314 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13a9c36b-099e-4e0a-a597-4bedab595820]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap983208d8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:a7:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650077, 'reachable_time': 34936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317167, 'error': None, 'target': 'ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.327 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[500f1072-5842-489e-9e51-89c61d9ddb94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:a742'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650077, 'tstamp': 650077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317168, 'error': None, 'target': 'ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.346 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22ed45dc-721f-443b-97a0-b368fb8b217d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap983208d8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:a7:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650077, 'reachable_time': 34936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317169, 'error': None, 'target': 'ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.386 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12852cca-5725-44ea-8f71-d7a8b285f4b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.457 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[16ceb0fe-2ff1-45f9-8e56-16d7d606842a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.459 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap983208d8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.459 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.460 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap983208d8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:45 compute-0 kernel: tap983208d8-40: entered promiscuous mode
Oct 02 12:36:45 compute-0 NetworkManager[44981]: <info>  [1759408605.4626] manager: (tap983208d8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.465 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap983208d8-40, col_values=(('external_ids', {'iface-id': '8b62626e-2555-41ca-af6b-d6d928ab73e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:45 compute-0 ovn_controller[148123]: 2025-10-02T12:36:45Z|00411|binding|INFO|Releasing lport 8b62626e-2555-41ca-af6b-d6d928ab73e6 from this chassis (sb_readonly=1)
Oct 02 12:36:45 compute-0 nova_compute[256940]: 2025-10-02 12:36:45.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:45 compute-0 nova_compute[256940]: 2025-10-02 12:36:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.482 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/983208d8-4fdb-4b06-b8f5-fa1d075931d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/983208d8-4fdb-4b06-b8f5-fa1d075931d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.483 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[963596e2-fa94-41cb-90d2-981593742f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.484 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-983208d8-4fdb-4b06-b8f5-fa1d075931d1
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/983208d8-4fdb-4b06-b8f5-fa1d075931d1.pid.haproxy
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 983208d8-4fdb-4b06-b8f5-fa1d075931d1
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:36:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:36:45.485 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'env', 'PROCESS_TAG=haproxy-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/983208d8-4fdb-4b06-b8f5-fa1d075931d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:36:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 175 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 661 KiB/s wr, 209 op/s
Oct 02 12:36:45 compute-0 nova_compute[256940]: 2025-10-02 12:36:45.584 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408605.583714, 3d8f25cb-a0ea-445c-8254-925defd1187a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:45 compute-0 nova_compute[256940]: 2025-10-02 12:36:45.585 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] VM Started (Lifecycle Event)
Oct 02 12:36:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:45 compute-0 podman[317201]: 2025-10-02 12:36:45.918819408 +0000 UTC m=+0.090163534 container create e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:36:45 compute-0 podman[317201]: 2025-10-02 12:36:45.850513525 +0000 UTC m=+0.021857631 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:36:45 compute-0 systemd[1]: Started libpod-conmon-e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78.scope.
Oct 02 12:36:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b963bc54c097049aaca2ce83a10beab70034566734f11e0a0e1ad75dfe3c0ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:46 compute-0 podman[317201]: 2025-10-02 12:36:46.035802472 +0000 UTC m=+0.207146618 container init e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:36:46 compute-0 podman[317201]: 2025-10-02 12:36:46.042789535 +0000 UTC m=+0.214133671 container start e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:36:46 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [NOTICE]   (317220) : New worker (317222) forked
Oct 02 12:36:46 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [NOTICE]   (317220) : Loading success.
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.652 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.659 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408605.583845, 3d8f25cb-a0ea-445c-8254-925defd1187a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.660 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] VM Paused (Lifecycle Event)
Oct 02 12:36:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.848 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.854 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:46 compute-0 nova_compute[256940]: 2025-10-02 12:36:46.883 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.173 2 DEBUG nova.compute.manager [req-04303552-e91b-42a9-95e4-4f404b627135 req-fb056547-82a4-4144-b9b4-6d39f1a82d4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.174 2 DEBUG oslo_concurrency.lockutils [req-04303552-e91b-42a9-95e4-4f404b627135 req-fb056547-82a4-4144-b9b4-6d39f1a82d4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.174 2 DEBUG oslo_concurrency.lockutils [req-04303552-e91b-42a9-95e4-4f404b627135 req-fb056547-82a4-4144-b9b4-6d39f1a82d4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.175 2 DEBUG oslo_concurrency.lockutils [req-04303552-e91b-42a9-95e4-4f404b627135 req-fb056547-82a4-4144-b9b4-6d39f1a82d4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.175 2 DEBUG nova.compute.manager [req-04303552-e91b-42a9-95e4-4f404b627135 req-fb056547-82a4-4144-b9b4-6d39f1a82d4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Processing event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.176 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.182 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408607.1822476, 3d8f25cb-a0ea-445c-8254-925defd1187a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.183 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] VM Resumed (Lifecycle Event)
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.186 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.189 2 INFO nova.virt.libvirt.driver [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Instance spawned successfully.
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.190 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.217 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.222 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.240 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.241 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.242 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.242 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.243 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.243 2 DEBUG nova.virt.libvirt.driver [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:36:47 compute-0 ceph-mon[73668]: pgmap v1810: 305 pgs: 305 active+clean; 175 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 661 KiB/s wr, 209 op/s
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.266 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.317 2 INFO nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Took 17.01 seconds to spawn the instance on the hypervisor.
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.318 2 DEBUG nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:47 compute-0 podman[317232]: 2025-10-02 12:36:47.431537428 +0000 UTC m=+0.090376701 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.438 2 INFO nova.compute.manager [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Took 19.05 seconds to build instance.
Oct 02 12:36:47 compute-0 nova_compute[256940]: 2025-10-02 12:36:47.468 2 DEBUG oslo_concurrency.lockutils [None req-de525515-b3b0-4506-92ed-ba1ad55e963d 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:47 compute-0 podman[317233]: 2025-10-02 12:36:47.486579464 +0000 UTC m=+0.144865572 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 143 op/s
Oct 02 12:36:48 compute-0 nova_compute[256940]: 2025-10-02 12:36:48.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1057747748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:48.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.279 2 DEBUG nova.compute.manager [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.279 2 DEBUG oslo_concurrency.lockutils [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.280 2 DEBUG oslo_concurrency.lockutils [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.281 2 DEBUG oslo_concurrency.lockutils [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.282 2 DEBUG nova.compute.manager [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] No waiting events found dispatching network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:49 compute-0 nova_compute[256940]: 2025-10-02 12:36:49.282 2 WARNING nova.compute.manager [req-69787664-7f41-455c-84b6-8794bc659826 req-ca7c1c6a-6450-4f61-85dc-8f42c77d5d40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received unexpected event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 for instance with vm_state active and task_state None.
Oct 02 12:36:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct 02 12:36:49 compute-0 ceph-mon[73668]: pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 143 op/s
Oct 02 12:36:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:50.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:50 compute-0 ceph-mon[73668]: pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct 02 12:36:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4257555595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:51 compute-0 nova_compute[256940]: 2025-10-02 12:36:51.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 16 KiB/s wr, 130 op/s
Oct 02 12:36:51 compute-0 NetworkManager[44981]: <info>  [1759408611.8504] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 02 12:36:51 compute-0 NetworkManager[44981]: <info>  [1759408611.8517] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 02 12:36:51 compute-0 nova_compute[256940]: 2025-10-02 12:36:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:51 compute-0 nova_compute[256940]: 2025-10-02 12:36:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:51 compute-0 ovn_controller[148123]: 2025-10-02T12:36:51Z|00412|binding|INFO|Releasing lport 8b62626e-2555-41ca-af6b-d6d928ab73e6 from this chassis (sb_readonly=0)
Oct 02 12:36:51 compute-0 nova_compute[256940]: 2025-10-02 12:36:51.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3035015947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:36:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:36:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:53 compute-0 nova_compute[256940]: 2025-10-02 12:36:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:53 compute-0 nova_compute[256940]: 2025-10-02 12:36:53.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:53 compute-0 ceph-mon[73668]: pgmap v1813: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 16 KiB/s wr, 130 op/s
Oct 02 12:36:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2931010887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 109 op/s
Oct 02 12:36:53 compute-0 nova_compute[256940]: 2025-10-02 12:36:53.508 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:53 compute-0 nova_compute[256940]: 2025-10-02 12:36:53.509 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:53 compute-0 nova_compute[256940]: 2025-10-02 12:36:53.509 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.344 2 DEBUG nova.compute.manager [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-changed-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.345 2 DEBUG nova.compute.manager [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Refreshing instance network info cache due to event network-changed-52c44aa2-66b1-4b0a-9772-db0485c82d75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.346 2 DEBUG oslo_concurrency.lockutils [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.346 2 DEBUG oslo_concurrency.lockutils [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:54 compute-0 nova_compute[256940]: 2025-10-02 12:36:54.347 2 DEBUG nova.network.neutron [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Refreshing network info cache for port 52c44aa2-66b1-4b0a-9772-db0485c82d75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:54.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3774149522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:54.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Oct 02 12:36:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:56 compute-0 ceph-mon[73668]: pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 109 op/s
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.258 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.258 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580588440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.748 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:56 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 12:36:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:36:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.912 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:56 compute-0 nova_compute[256940]: 2025-10-02 12:36:56.913 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:36:57 compute-0 dnf[317305]: Metadata cache refreshed recently.
Oct 02 12:36:57 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 12:36:57 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.073 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.075 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=20.921703338623047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.075 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.076 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:57 compute-0 ceph-mon[73668]: pgmap v1815: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Oct 02 12:36:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1580588440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 02 12:36:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 82 op/s
Oct 02 12:36:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.800 2 DEBUG nova.network.neutron [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updated VIF entry in instance network info cache for port 52c44aa2-66b1-4b0a-9772-db0485c82d75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:57 compute-0 nova_compute[256940]: 2025-10-02 12:36:57.802 2 DEBUG nova.network.neutron [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updating instance_info_cache with network_info: [{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:57 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 02 12:36:58 compute-0 nova_compute[256940]: 2025-10-02 12:36:58.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:58 compute-0 sudo[317307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:58 compute-0 sudo[317307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:58 compute-0 sudo[317307]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:58 compute-0 sudo[317332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:58 compute-0 sudo[317332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:58 compute-0 sudo[317332]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:36:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:36:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:36:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:36:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:59 compute-0 ceph-mon[73668]: pgmap v1816: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 82 op/s
Oct 02 12:36:59 compute-0 ceph-mon[73668]: osdmap e250: 3 total, 3 up, 3 in
Oct 02 12:36:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 89 op/s
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.573 2 DEBUG oslo_concurrency.lockutils [req-4e78a93f-f578-44f9-9a84-acbb61ad1da7 req-dc9262ab-aac9-4105-b4d5-d4ba9a0cd907 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.603 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 3d8f25cb-a0ea-445c-8254-925defd1187a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.603 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.603 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:59 compute-0 nova_compute[256940]: 2025-10-02 12:36:59.866 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862867357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:00 compute-0 nova_compute[256940]: 2025-10-02 12:37:00.319 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:00 compute-0 nova_compute[256940]: 2025-10-02 12:37:00.332 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:00 compute-0 podman[317380]: 2025-10-02 12:37:00.41904026 +0000 UTC m=+0.080541504 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:37:00 compute-0 podman[317381]: 2025-10-02 12:37:00.419232955 +0000 UTC m=+0.071204660 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3305874569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:00 compute-0 nova_compute[256940]: 2025-10-02 12:37:00.719 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:00 compute-0 nova_compute[256940]: 2025-10-02 12:37:00.805 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:37:00 compute-0 nova_compute[256940]: 2025-10-02 12:37:00.805 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:01 compute-0 nova_compute[256940]: 2025-10-02 12:37:01.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 175 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 987 KiB/s wr, 81 op/s
Oct 02 12:37:01 compute-0 nova_compute[256940]: 2025-10-02 12:37:01.806 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:01 compute-0 nova_compute[256940]: 2025-10-02 12:37:01.807 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:37:01 compute-0 nova_compute[256940]: 2025-10-02 12:37:01.807 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:37:01 compute-0 ceph-mon[73668]: pgmap v1818: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 89 op/s
Oct 02 12:37:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3862867357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1562042604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:02 compute-0 nova_compute[256940]: 2025-10-02 12:37:02.107 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:02 compute-0 nova_compute[256940]: 2025-10-02 12:37:02.107 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:02 compute-0 nova_compute[256940]: 2025-10-02 12:37:02.108 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:37:02 compute-0 nova_compute[256940]: 2025-10-02 12:37:02.108 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3d8f25cb-a0ea-445c-8254-925defd1187a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:02.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:02 compute-0 ceph-mon[73668]: pgmap v1819: 305 pgs: 305 active+clean; 175 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 987 KiB/s wr, 81 op/s
Oct 02 12:37:03 compute-0 nova_compute[256940]: 2025-10-02 12:37:03.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 183 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.521 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updating instance_info_cache with network_info: [{"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.540 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-3d8f25cb-a0ea-445c-8254-925defd1187a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.540 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.541 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.541 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.541 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.542 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:04 compute-0 ovn_controller[148123]: 2025-10-02T12:37:04Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:86:e3 10.100.0.4
Oct 02 12:37:04 compute-0 ovn_controller[148123]: 2025-10-02T12:37:04Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:86:e3 10.100.0.4
Oct 02 12:37:04 compute-0 nova_compute[256940]: 2025-10-02 12:37:04.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:04.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:05 compute-0 ceph-mon[73668]: pgmap v1820: 305 pgs: 305 active+clean; 183 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct 02 12:37:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 189 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 103 op/s
Oct 02 12:37:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 02 12:37:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 02 12:37:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 02 12:37:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:37:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:37:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4173785915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:06 compute-0 ceph-mon[73668]: osdmap e251: 3 total, 3 up, 3 in
Oct 02 12:37:06 compute-0 nova_compute[256940]: 2025-10-02 12:37:06.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:06.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:07 compute-0 ceph-mon[73668]: pgmap v1821: 305 pgs: 305 active+clean; 189 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 103 op/s
Oct 02 12:37:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Oct 02 12:37:08 compute-0 nova_compute[256940]: 2025-10-02 12:37:08.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:08 compute-0 nova_compute[256940]: 2025-10-02 12:37:08.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:37:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:37:09 compute-0 ceph-mon[73668]: pgmap v1823: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Oct 02 12:37:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Oct 02 12:37:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2542507121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:10.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:11 compute-0 ceph-mon[73668]: pgmap v1824: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Oct 02 12:37:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1031192021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-0 nova_compute[256940]: 2025-10-02 12:37:11.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 199 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 02 12:37:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/850267506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:13 compute-0 nova_compute[256940]: 2025-10-02 12:37:13.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-0 ceph-mon[73668]: pgmap v1825: 305 pgs: 305 active+clean; 199 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 02 12:37:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 167 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:37:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1533087876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.306 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.306 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.328 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.434 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.434 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.443 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.443 2 INFO nova.compute.claims [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:37:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:14.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:14 compute-0 nova_compute[256940]: 2025-10-02 12:37:14.600 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:14.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103544154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.075 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.081 2 DEBUG nova.compute.provider_tree [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.131 2 DEBUG nova.scheduler.client.report [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.170 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.171 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:15 compute-0 ceph-mon[73668]: pgmap v1826: 305 pgs: 305 active+clean; 167 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:37:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2103544154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.259 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.259 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.287 2 INFO nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.330 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:37:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.490 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.492 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.492 2 INFO nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Creating image(s)
Oct 02 12:37:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 182 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.522 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.554 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.589 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.593 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.623 2 DEBUG nova.policy [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.659 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.660 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.660 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.660 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.689 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:15 compute-0 nova_compute[256940]: 2025-10-02 12:37:15.693 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.949857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635949910, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1365, "num_deletes": 261, "total_data_size": 2091959, "memory_usage": 2121336, "flush_reason": "Manual Compaction"}
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635965441, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2055198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39717, "largest_seqno": 41081, "table_properties": {"data_size": 2048960, "index_size": 3377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14033, "raw_average_key_size": 20, "raw_value_size": 2036037, "raw_average_value_size": 2904, "num_data_blocks": 149, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408521, "oldest_key_time": 1759408521, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15647 microseconds, and 6183 cpu microseconds.
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.965498) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2055198 bytes OK
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.965524) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.967036) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.967139) EVENT_LOG_v1 {"time_micros": 1759408635967071, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.967171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2085947, prev total WAL file size 2085947, number of live WAL files 2.
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.968461) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2007KB)], [86(9043KB)]
Oct 02 12:37:15 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635968537, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11315816, "oldest_snapshot_seqno": -1}
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.026 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6816 keys, 11169514 bytes, temperature: kUnknown
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636071600, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11169514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11123105, "index_size": 28249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175406, "raw_average_key_size": 25, "raw_value_size": 11000429, "raw_average_value_size": 1613, "num_data_blocks": 1128, "num_entries": 6816, "num_filter_entries": 6816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.071873) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11169514 bytes
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.100459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.7 rd, 108.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 7355, records dropped: 539 output_compression: NoCompression
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.100510) EVENT_LOG_v1 {"time_micros": 1759408636100490, "job": 50, "event": "compaction_finished", "compaction_time_micros": 103156, "compaction_time_cpu_micros": 30796, "output_level": 6, "num_output_files": 1, "total_output_size": 11169514, "num_input_records": 7355, "num_output_records": 6816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636101532, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636104035, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:15.968356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.104149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.104155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.104159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.104161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:16.104166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.113 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.426 2 DEBUG nova.objects.instance [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.453 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.454 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Ensure instance console log exists: /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.455 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.455 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.455 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:16.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:16.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:16 compute-0 nova_compute[256940]: 2025-10-02 12:37:16.947 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Successfully created port: f6d0b513-9da0-4deb-a98c-8812c9ddd074 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:17 compute-0 ceph-mon[73668]: pgmap v1827: 305 pgs: 305 active+clean; 182 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 12:37:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 278 op/s
Oct 02 12:37:18 compute-0 nova_compute[256940]: 2025-10-02 12:37:18.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/650801237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4088164253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:18 compute-0 podman[317617]: 2025-10-02 12:37:18.383195376 +0000 UTC m=+0.051561507 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:37:18 compute-0 podman[317618]: 2025-10-02 12:37:18.422191984 +0000 UTC m=+0.083541682 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:37:18 compute-0 sudo[317663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:18 compute-0 sudo[317663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:18 compute-0 sudo[317663]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:18 compute-0 sudo[317688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:18 compute-0 sudo[317688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:18 compute-0 sudo[317688]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:18.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.253 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Successfully updated port: f6d0b513-9da0-4deb-a98c-8812c9ddd074 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.281 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.281 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.281 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:37:19 compute-0 ceph-mon[73668]: pgmap v1828: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 278 op/s
Oct 02 12:37:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 4.0 MiB/s wr, 248 op/s
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.570 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.876 2 DEBUG nova.compute.manager [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-changed-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.876 2 DEBUG nova.compute.manager [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Refreshing instance network info cache due to event network-changed-f6d0b513-9da0-4deb-a98c-8812c9ddd074. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:19 compute-0 nova_compute[256940]: 2025-10-02 12:37:19.876 2 DEBUG oslo_concurrency.lockutils [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:20 compute-0 ceph-mon[73668]: pgmap v1829: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 4.0 MiB/s wr, 248 op/s
Oct 02 12:37:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:20.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:21 compute-0 nova_compute[256940]: 2025-10-02 12:37:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 381 op/s
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.483 2 DEBUG nova.network.neutron [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.544 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.544 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance network_info: |[{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.545 2 DEBUG oslo_concurrency.lockutils [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.546 2 DEBUG nova.network.neutron [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Refreshing network info cache for port f6d0b513-9da0-4deb-a98c-8812c9ddd074 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.549 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start _get_guest_xml network_info=[{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.554 2 WARNING nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.564 2 DEBUG nova.virt.libvirt.host [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.565 2 DEBUG nova.virt.libvirt.host [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.572 2 DEBUG nova.virt.libvirt.host [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.573 2 DEBUG nova.virt.libvirt.host [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.574 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.574 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.575 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.575 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.575 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.576 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.576 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.576 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.576 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.576 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.577 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.577 2 DEBUG nova.virt.hardware [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:37:22 compute-0 nova_compute[256940]: 2025-10-02 12:37:22.580 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:22 compute-0 ceph-mon[73668]: pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 381 op/s
Oct 02 12:37:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2159854273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206218469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.023 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.059 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.065 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 213 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 336 op/s
Oct 02 12:37:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/895846916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.572 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.575 2 DEBUG nova.virt.libvirt.vif [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.576 2 DEBUG nova.network.os_vif_util [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.578 2 DEBUG nova.network.os_vif_util [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.580 2 DEBUG nova.objects.instance [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.853 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <uuid>a5e40085-6c27-4b57-96db-89ecc7ac2e48</uuid>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <name>instance-00000063</name>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestJSON-server-1852157817</nova:name>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:37:22</nova:creationTime>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <nova:port uuid="f6d0b513-9da0-4deb-a98c-8812c9ddd074">
Oct 02 12:37:23 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <system>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="serial">a5e40085-6c27-4b57-96db-89ecc7ac2e48</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="uuid">a5e40085-6c27-4b57-96db-89ecc7ac2e48</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </system>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <os>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </os>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <features>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </features>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk">
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </source>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config">
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </source>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:37:23 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:49:53:e6"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <target dev="tapf6d0b513-9d"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/console.log" append="off"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <video>
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </video>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:37:23 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:37:23 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:37:23 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:37:23 compute-0 nova_compute[256940]: </domain>
Oct 02 12:37:23 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.855 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Preparing to wait for external event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.855 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.856 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.856 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.857 2 DEBUG nova.virt.libvirt.vif [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.857 2 DEBUG nova.network.os_vif_util [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.858 2 DEBUG nova.network.os_vif_util [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.859 2 DEBUG os_vif [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d0b513-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6d0b513-9d, col_values=(('external_ids', {'iface-id': 'f6d0b513-9da0-4deb-a98c-8812c9ddd074', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:53:e6', 'vm-uuid': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 NetworkManager[44981]: <info>  [1759408643.8689] manager: (tapf6d0b513-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.878 2 INFO os_vif [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d')
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.988 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.989 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.989 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:49:53:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:37:23 compute-0 nova_compute[256940]: 2025-10-02 12:37:23.990 2 INFO nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Using config drive
Oct 02 12:37:24 compute-0 nova_compute[256940]: 2025-10-02 12:37:24.020 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1206218469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/895846916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:24 compute-0 nova_compute[256940]: 2025-10-02 12:37:24.997 2 INFO nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Creating config drive at /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.004 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_33rykk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:25 compute-0 ceph-mon[73668]: pgmap v1831: 305 pgs: 305 active+clean; 213 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 336 op/s
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.142 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_33rykk" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.177 2 DEBUG nova.storage.rbd_utils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.182 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.379 2 DEBUG oslo_concurrency.processutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.381 2 INFO nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Deleting local config drive /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/disk.config because it was imported into RBD.
Oct 02 12:37:25 compute-0 kernel: tapf6d0b513-9d: entered promiscuous mode
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.4469] manager: (tapf6d0b513-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Oct 02 12:37:25 compute-0 ovn_controller[148123]: 2025-10-02T12:37:25Z|00413|binding|INFO|Claiming lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 for this chassis.
Oct 02 12:37:25 compute-0 ovn_controller[148123]: 2025-10-02T12:37:25Z|00414|binding|INFO|f6d0b513-9da0-4deb-a98c-8812c9ddd074: Claiming fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.468 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.470 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.474 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:37:25 compute-0 ovn_controller[148123]: 2025-10-02T12:37:25Z|00415|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 ovn-installed in OVS
Oct 02 12:37:25 compute-0 ovn_controller[148123]: 2025-10-02T12:37:25Z|00416|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 up in Southbound
Oct 02 12:37:25 compute-0 systemd-udevd[317849]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.492 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d18efc-9c1b-4be3-9426-ba9cbccdf62e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.493 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.499 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.499 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0b0341-9760-46f5-88f6-7becc3715e77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.502 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbb42c2-f3d6-4fb2-bb7e-59dec62ee46a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.5049] device (tapf6d0b513-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.5058] device (tapf6d0b513-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:37:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 213 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.515 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e45314ea-69c6-485c-99ad-4b739bb2e43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 systemd-machined[210927]: New machine qemu-44-instance-00000063.
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.533 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[08e62f4d-77aa-4044-82c0-042dcaa236ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000063.
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.570 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f752ff60-3f79-4318-976b-7b0c2466cb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.577 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa596dc-a4f0-489d-9b42-c963c4c78d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.5789] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Oct 02 12:37:25 compute-0 systemd-udevd[317854]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.618 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5e3419-9367-4de8-8510-8c1439f8a2c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.620 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[208d25f7-b760-42cc-ad9d-f374cf271f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.6548] device (tapf011efa4-00): carrier: link connected
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.662 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c76198a2-73d4-4ecd-add6-1fde89a3c414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.680 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[999a9fde-fef3-40ba-b58b-1bc76413f897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654113, 'reachable_time': 38945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317885, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.704 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fea384b3-8d2b-42f1-a3fe-130eb89f521f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654113, 'tstamp': 654113}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317886, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.726 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46b218ba-38af-42ec-83bf-d84b63d95b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654113, 'reachable_time': 38945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317887, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.768 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fcffbb91-01e5-4fe6-80a0-a0f8fe486c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.847 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb8b716-63f9-4eeb-bd15-d6d9cb302ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.849 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.849 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.850 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:25 compute-0 NetworkManager[44981]: <info>  [1759408645.8526] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Oct 02 12:37:25 compute-0 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.854 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:25 compute-0 ovn_controller[148123]: 2025-10-02T12:37:25Z|00417|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:37:25 compute-0 nova_compute[256940]: 2025-10-02 12:37:25.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.872 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.873 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fca929ad-731e-4ca7-9c54-38d211837581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.874 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:37:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:25.876 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:37:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:26 compute-0 podman[317956]: 2025-10-02 12:37:26.271638971 +0000 UTC m=+0.028430773 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:26.472 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:26.473 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:26.474 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:26.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:26 compute-0 podman[317956]: 2025-10-02 12:37:26.68226757 +0000 UTC m=+0.439059342 container create a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.741 2 DEBUG nova.network.neutron [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updated VIF entry in instance network info cache for port f6d0b513-9da0-4deb-a98c-8812c9ddd074. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.742 2 DEBUG nova.network.neutron [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.768 2 DEBUG oslo_concurrency.lockutils [req-388c7d68-625a-47a5-bef3-cb5cf4a74498 req-2bbcfcf3-07e9-4782-a4c5-bb1b2547f07b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.790 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408646.7891743, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.790 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Started (Lifecycle Event)
Oct 02 12:37:26 compute-0 systemd[1]: Started libpod-conmon-a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7.scope.
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.830 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.835 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408646.7897358, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.835 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Paused (Lifecycle Event)
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.860 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.864 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88a9b739d398e08c4e193a3b5b1b95407bf33c1466761d9edda349504e3bb562/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:26 compute-0 nova_compute[256940]: 2025-10-02 12:37:26.914 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:37:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:37:26 compute-0 podman[317956]: 2025-10-02 12:37:26.920220852 +0000 UTC m=+0.677012644 container init a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:37:26 compute-0 podman[317956]: 2025-10-02 12:37:26.926761793 +0000 UTC m=+0.683553565 container start a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:37:26 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [NOTICE]   (317980) : New worker (317982) forked
Oct 02 12:37:26 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [NOTICE]   (317980) : Loading success.
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.172 2 DEBUG nova.compute.manager [req-953fe1bd-fda4-4bfa-8fcd-2a3ab30e78d3 req-92234b8e-583b-4c44-93e0-04006de98504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.172 2 DEBUG oslo_concurrency.lockutils [req-953fe1bd-fda4-4bfa-8fcd-2a3ab30e78d3 req-92234b8e-583b-4c44-93e0-04006de98504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.173 2 DEBUG oslo_concurrency.lockutils [req-953fe1bd-fda4-4bfa-8fcd-2a3ab30e78d3 req-92234b8e-583b-4c44-93e0-04006de98504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.173 2 DEBUG oslo_concurrency.lockutils [req-953fe1bd-fda4-4bfa-8fcd-2a3ab30e78d3 req-92234b8e-583b-4c44-93e0-04006de98504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.173 2 DEBUG nova.compute.manager [req-953fe1bd-fda4-4bfa-8fcd-2a3ab30e78d3 req-92234b8e-583b-4c44-93e0-04006de98504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Processing event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.174 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:37:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.178 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408647.1785128, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.179 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Resumed (Lifecycle Event)
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.181 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.184 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance spawned successfully.
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.184 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.206 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.211 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 02 12:37:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.223 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.223 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.224 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.224 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.225 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.225 2 DEBUG nova.virt.libvirt.driver [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.256 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.292 2 INFO nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Took 11.80 seconds to spawn the instance on the hypervisor.
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.292 2 DEBUG nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:27 compute-0 ceph-mon[73668]: pgmap v1832: 305 pgs: 305 active+clean; 213 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.489 2 INFO nova.compute.manager [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Took 13.09 seconds to build instance.
Oct 02 12:37:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:27 compute-0 nova_compute[256940]: 2025-10-02 12:37:27.581 2 DEBUG oslo_concurrency.lockutils [None req-4f6faf22-7bb3-4d2f-ae0c-f7a8facea87c 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.358 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.358 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.358 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.359 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.359 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.360 2 INFO nova.compute.manager [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Terminating instance
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.361 2 DEBUG nova.compute.manager [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:37:28 compute-0 ceph-mon[73668]: osdmap e252: 3 total, 3 up, 3 in
Oct 02 12:37:28 compute-0 ceph-mon[73668]: pgmap v1834: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:28 compute-0 kernel: tap52c44aa2-66 (unregistering): left promiscuous mode
Oct 02 12:37:28 compute-0 NetworkManager[44981]: <info>  [1759408648.5636] device (tap52c44aa2-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 ovn_controller[148123]: 2025-10-02T12:37:28Z|00418|binding|INFO|Releasing lport 52c44aa2-66b1-4b0a-9772-db0485c82d75 from this chassis (sb_readonly=0)
Oct 02 12:37:28 compute-0 ovn_controller[148123]: 2025-10-02T12:37:28Z|00419|binding|INFO|Setting lport 52c44aa2-66b1-4b0a-9772-db0485c82d75 down in Southbound
Oct 02 12:37:28 compute-0 ovn_controller[148123]: 2025-10-02T12:37:28Z|00420|binding|INFO|Removing iface tap52c44aa2-66 ovn-installed in OVS
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.584 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:86:e3 10.100.0.4'], port_security=['fa:16:3e:20:86:e3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3d8f25cb-a0ea-445c-8254-925defd1187a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32951c3cc7a049e8b3ecc438ec73ccdd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d775c01-501d-4e1f-b108-95d014a8bb6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=120e66d2-5e50-4d01-98c8-69b208054b0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=52c44aa2-66b1-4b0a-9772-db0485c82d75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.585 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 52c44aa2-66b1-4b0a-9772-db0485c82d75 in datapath 983208d8-4fdb-4b06-b8f5-fa1d075931d1 unbound from our chassis
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.586 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 983208d8-4fdb-4b06-b8f5-fa1d075931d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.588 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[47135921-a9f7-418a-a6aa-ad52787abc94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.589 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1 namespace which is not needed anymore
Oct 02 12:37:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 02 12:37:28 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000060.scope: Consumed 14.462s CPU time.
Oct 02 12:37:28 compute-0 systemd-machined[210927]: Machine qemu-43-instance-00000060 terminated.
Oct 02 12:37:28 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [NOTICE]   (317220) : haproxy version is 2.8.14-c23fe91
Oct 02 12:37:28 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [NOTICE]   (317220) : path to executable is /usr/sbin/haproxy
Oct 02 12:37:28 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [WARNING]  (317220) : Exiting Master process...
Oct 02 12:37:28 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [ALERT]    (317220) : Current worker (317222) exited with code 143 (Terminated)
Oct 02 12:37:28 compute-0 neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1[317216]: [WARNING]  (317220) : All workers exited. Exiting... (0)
Oct 02 12:37:28 compute-0 systemd[1]: libpod-e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78.scope: Deactivated successfully.
Oct 02 12:37:28 compute-0 podman[318015]: 2025-10-02 12:37:28.742183493 +0000 UTC m=+0.049294288 container died e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78-userdata-shm.mount: Deactivated successfully.
Oct 02 12:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b963bc54c097049aaca2ce83a10beab70034566734f11e0a0e1ad75dfe3c0ec-merged.mount: Deactivated successfully.
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:37:28
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images']
Oct 02 12:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.806 2 INFO nova.virt.libvirt.driver [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Instance destroyed successfully.
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.807 2 DEBUG nova.objects.instance [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lazy-loading 'resources' on Instance uuid 3d8f25cb-a0ea-445c-8254-925defd1187a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:28 compute-0 podman[318015]: 2025-10-02 12:37:28.807830097 +0000 UTC m=+0.114940892 container cleanup e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:37:28 compute-0 systemd[1]: libpod-conmon-e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78.scope: Deactivated successfully.
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 podman[318054]: 2025-10-02 12:37:28.892056495 +0000 UTC m=+0.048982639 container remove e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.899 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d0009c57-27a6-4966-ae3e-855cb71d0423]: (4, ('Thu Oct  2 12:37:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1 (e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78)\ne58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78\nThu Oct  2 12:37:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1 (e58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78)\ne58b287e852d8b7fcef132b5c49d2d4180efbd96bd43dd9ff11251eaf06b4b78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.901 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bcd210-e7b7-455c-b463-8f9c4d08db98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.901 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap983208d8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 kernel: tap983208d8-40: left promiscuous mode
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.917 2 DEBUG nova.virt.libvirt.vif [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:36:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1579090700',display_name='tempest-ServersTestManualDisk-server-1579090700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1579090700',id=96,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcyTDdHWCjcrlrzjUMQFOw+PKZZUofeBU2XYQVOssn4YojM8Umy+faXGoO16vbsgqX4Nd2s/2SgTZarLK50U4dnjZWKpFSW2Q5vMylx0D0vYnx/Qs2hg5MObd8Cr/2/FQ==',key_name='tempest-keypair-1735883953',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='32951c3cc7a049e8b3ecc438ec73ccdd',ramdisk_id='',reservation_id='r-fgpn7puw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1257845924',owner_user_name='tempest-ServersTestManualDisk-1257845924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f79bec3812947f09a3b04d1f31f08fa',uuid=3d8f25cb-a0ea-445c-8254-925defd1187a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:37:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.918 2 DEBUG nova.network.os_vif_util [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converting VIF {"id": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "address": "fa:16:3e:20:86:e3", "network": {"id": "983208d8-4fdb-4b06-b8f5-fa1d075931d1", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-907471263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "32951c3cc7a049e8b3ecc438ec73ccdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c44aa2-66", "ovs_interfaceid": "52c44aa2-66b1-4b0a-9772-db0485c82d75", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.919 2 DEBUG nova.network.os_vif_util [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.919 2 DEBUG os_vif [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52c44aa2-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.933 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d536fdfe-a1d5-498c-8bb9-91a009373c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 nova_compute[256940]: 2025-10-02 12:37:28.934 2 INFO os_vif [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:86:e3,bridge_name='br-int',has_traffic_filtering=True,id=52c44aa2-66b1-4b0a-9772-db0485c82d75,network=Network(983208d8-4fdb-4b06-b8f5-fa1d075931d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c44aa2-66')
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.963 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dfdc589a-3550-4ad0-8453-14a1364b36a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.964 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[088a19d5-54a2-4b0c-8f8f-5c1f89818f41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.979 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4d83f9f0-e7af-428f-80f7-fda4f5e0af83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650069, 'reachable_time': 17794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318086, 'error': None, 'target': 'ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d983208d8\x2d4fdb\x2d4b06\x2db8f5\x2dfa1d075931d1.mount: Deactivated successfully.
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.984 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-983208d8-4fdb-4b06-b8f5-fa1d075931d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:37:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:28.984 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca917ce-2e68-4100-8c3e-e3904cd35f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.317 2 DEBUG nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.318 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.319 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.319 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.320 2 DEBUG nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.320 2 WARNING nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state active and task_state None.
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.321 2 DEBUG nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-unplugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.321 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.322 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.322 2 DEBUG oslo_concurrency.lockutils [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.323 2 DEBUG nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] No waiting events found dispatching network-vif-unplugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.323 2 DEBUG nova.compute.manager [req-ce7422d7-0014-4c48-b325-5fcd87b471b2 req-57644ab8-9637-4595-a39d-d0c56992facc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-unplugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:37:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.764 2 DEBUG nova.compute.manager [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-changed-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.765 2 DEBUG nova.compute.manager [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Refreshing instance network info cache due to event network-changed-f6d0b513-9da0-4deb-a98c-8812c9ddd074. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.765 2 DEBUG oslo_concurrency.lockutils [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.765 2 DEBUG oslo_concurrency.lockutils [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:29 compute-0 nova_compute[256940]: 2025-10-02 12:37:29.765 2 DEBUG nova.network.neutron [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Refreshing network info cache for port f6d0b513-9da0-4deb-a98c-8812c9ddd074 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:37:29 compute-0 ovn_controller[148123]: 2025-10-02T12:37:29Z|00421|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:37:30 compute-0 nova_compute[256940]: 2025-10-02 12:37:30.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:30.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 02 12:37:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 02 12:37:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 02 12:37:30 compute-0 ceph-mon[73668]: pgmap v1835: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:31 compute-0 podman[318092]: 2025-10-02 12:37:31.400811445 +0000 UTC m=+0.063424417 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:37:31 compute-0 podman[318091]: 2025-10-02 12:37:31.414288097 +0000 UTC m=+0.082037103 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.443 2 DEBUG nova.compute.manager [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.444 2 DEBUG oslo_concurrency.lockutils [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.444 2 DEBUG oslo_concurrency.lockutils [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.444 2 DEBUG oslo_concurrency.lockutils [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.445 2 DEBUG nova.compute.manager [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] No waiting events found dispatching network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.445 2 WARNING nova.compute.manager [req-6bc87f49-aa36-48c8-a97a-b0adbaebc4de req-78cac203-fd9c-4168-9966-380fdf9a6d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received unexpected event network-vif-plugged-52c44aa2-66b1-4b0a-9772-db0485c82d75 for instance with vm_state active and task_state deleting.
Oct 02 12:37:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 230 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 656 KiB/s wr, 236 op/s
Oct 02 12:37:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 02 12:37:31 compute-0 ceph-mon[73668]: osdmap e253: 3 total, 3 up, 3 in
Oct 02 12:37:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 02 12:37:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.861 2 DEBUG nova.network.neutron [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updated VIF entry in instance network info cache for port f6d0b513-9da0-4deb-a98c-8812c9ddd074. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.862 2 DEBUG nova.network.neutron [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.885 2 INFO nova.virt.libvirt.driver [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Deleting instance files /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a_del
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.886 2 INFO nova.virt.libvirt.driver [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Deletion of /var/lib/nova/instances/3d8f25cb-a0ea-445c-8254-925defd1187a_del complete
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.898 2 DEBUG oslo_concurrency.lockutils [req-e36c245d-b372-4790-a2ef-8cedd072f158 req-cc461d19-1e78-4fd7-9775-af2c52539681 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.943 2 INFO nova.compute.manager [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Took 3.58 seconds to destroy the instance on the hypervisor.
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.944 2 DEBUG oslo.service.loopingcall [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.944 2 DEBUG nova.compute.manager [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:37:31 compute-0 nova_compute[256940]: 2025-10-02 12:37:31.945 2 DEBUG nova.network.neutron [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:37:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.707 2 DEBUG nova.network.neutron [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.728 2 INFO nova.compute.manager [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Took 0.78 seconds to deallocate network for instance.
Oct 02 12:37:32 compute-0 ceph-mon[73668]: pgmap v1837: 305 pgs: 305 active+clean; 230 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 656 KiB/s wr, 236 op/s
Oct 02 12:37:32 compute-0 ceph-mon[73668]: osdmap e254: 3 total, 3 up, 3 in
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.794 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.795 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.901 2 DEBUG oslo_concurrency.processutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:32.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:32 compute-0 nova_compute[256940]: 2025-10-02 12:37:32.942 2 DEBUG nova.compute.manager [req-3e2dcd8c-c5f3-46b7-9b3f-7084dc8e1301 req-3830f6b9-b9d6-4cc9-b27d-42ed30ec2d33 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Received event network-vif-deleted-52c44aa2-66b1-4b0a-9772-db0485c82d75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577101976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.375 2 DEBUG oslo_concurrency.processutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.384 2 DEBUG nova.compute.provider_tree [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.412 2 DEBUG nova.scheduler.client.report [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.443 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.476 2 INFO nova.scheduler.client.report [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Deleted allocations for instance 3d8f25cb-a0ea-445c-8254-925defd1187a
Oct 02 12:37:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 223 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.554 2 DEBUG oslo_concurrency.lockutils [None req-78a78314-d49c-4174-9e8a-072f7cae2fed 2f79bec3812947f09a3b04d1f31f08fa 32951c3cc7a049e8b3ecc438ec73ccdd - - default default] Lock "3d8f25cb-a0ea-445c-8254-925defd1187a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/577101976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:33 compute-0 nova_compute[256940]: 2025-10-02 12:37:33.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:35 compute-0 ceph-mon[73668]: pgmap v1839: 305 pgs: 305 active+clean; 223 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Oct 02 12:37:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 207 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:37:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.064572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656064625, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 467, "num_deletes": 251, "total_data_size": 423161, "memory_usage": 432960, "flush_reason": "Manual Compaction"}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656095444, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 418588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41082, "largest_seqno": 41548, "table_properties": {"data_size": 415917, "index_size": 707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 410545, "raw_average_value_size": 1200, "num_data_blocks": 31, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408636, "oldest_key_time": 1759408636, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 30963 microseconds, and 2316 cpu microseconds.
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.095501) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 418588 bytes OK
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.095559) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.097234) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.097248) EVENT_LOG_v1 {"time_micros": 1759408656097243, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.097271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 420397, prev total WAL file size 420397, number of live WAL files 2.
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.097881) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(408KB)], [89(10MB)]
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656097965, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11588102, "oldest_snapshot_seqno": -1}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6644 keys, 9628030 bytes, temperature: kUnknown
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656181319, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9628030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584314, "index_size": 26004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172559, "raw_average_key_size": 25, "raw_value_size": 9466144, "raw_average_value_size": 1424, "num_data_blocks": 1026, "num_entries": 6644, "num_filter_entries": 6644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.181607) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9628030 bytes
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.193876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.9 rd, 115.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(50.7) write-amplify(23.0) OK, records in: 7158, records dropped: 514 output_compression: NoCompression
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.193903) EVENT_LOG_v1 {"time_micros": 1759408656193892, "job": 52, "event": "compaction_finished", "compaction_time_micros": 83443, "compaction_time_cpu_micros": 29246, "output_level": 6, "num_output_files": 1, "total_output_size": 9628030, "num_input_records": 7158, "num_output_records": 6644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656194175, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656196929, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.097750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.197016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.197026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.197030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.197035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:37:36.197039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-0 nova_compute[256940]: 2025-10-02 12:37:36.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:37 compute-0 ceph-mon[73668]: pgmap v1840: 305 pgs: 305 active+clean; 207 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:37:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 324 op/s
Oct 02 12:37:37 compute-0 nova_compute[256940]: 2025-10-02 12:37:37.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:38.068 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:38.069 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:37:38 compute-0 nova_compute[256940]: 2025-10-02 12:37:38.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:38 compute-0 ceph-mon[73668]: pgmap v1841: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 324 op/s
Oct 02 12:37:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:38.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:38 compute-0 sudo[318159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:38 compute-0 sudo[318159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:38 compute-0 sudo[318159]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:38 compute-0 sudo[318184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:38 compute-0 sudo[318184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:38 compute-0 sudo[318184]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:38 compute-0 nova_compute[256940]: 2025-10-02 12:37:38.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:39 compute-0 ovn_controller[148123]: 2025-10-02T12:37:39Z|00422|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:37:39 compute-0 nova_compute[256940]: 2025-10-02 12:37:39.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:39 compute-0 sudo[318209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:39 compute-0 sudo[318209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-0 sudo[318209]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-0 sudo[318234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:39 compute-0 sudo[318234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-0 sudo[318234]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-0 sudo[318259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:39 compute-0 sudo[318259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-0 sudo[318259]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-0 sudo[318284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:37:39 compute-0 sudo[318284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:37:39 compute-0 sudo[318284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:37:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:37:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:37:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:37:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 sudo[318329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:40 compute-0 sudo[318329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-0 sudo[318329]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-0 sudo[318354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:40 compute-0 sudo[318354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-0 sudo[318354]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-0 sudo[318379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:40 compute-0 sudo[318379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-0 sudo[318379]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031561525800297787 of space, bias 1.0, pg target 0.9468457740089337 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:37:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 12:37:40 compute-0 sudo[318404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:37:40 compute-0 sudo[318404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:40.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:40 compute-0 sudo[318404]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-0 ceph-mon[73668]: pgmap v1842: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:37:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 02 12:37:41 compute-0 nova_compute[256940]: 2025-10-02 12:37:41.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7a83e60a-0d13-467b-924f-8b3cd2dc7a41 does not exist
Oct 02 12:37:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 25f9011b-b108-48d6-83d5-f22ec0e07d2e does not exist
Oct 02 12:37:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4d8e8ba9-7c09-4afc-95ab-f12591e5c719 does not exist
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:37:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:37:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:41 compute-0 sudo[318461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:41 compute-0 sudo[318461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:41 compute-0 sudo[318461]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:41 compute-0 sudo[318486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:41 compute-0 sudo[318486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:41 compute-0 sudo[318486]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:41 compute-0 sudo[318511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:41 compute-0 sudo[318511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:41 compute-0 sudo[318511]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:41 compute-0 nova_compute[256940]: 2025-10-02 12:37:41.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:41 compute-0 sudo[318536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:37:41 compute-0 sudo[318536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 229 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.4 MiB/s wr, 215 op/s
Oct 02 12:37:41 compute-0 podman[318601]: 2025-10-02 12:37:41.859362921 +0000 UTC m=+0.071752915 container create b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:37:41 compute-0 podman[318601]: 2025-10-02 12:37:41.815029213 +0000 UTC m=+0.027419227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:42 compute-0 systemd[1]: Started libpod-conmon-b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe.scope.
Oct 02 12:37:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:37:42.070 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:42 compute-0 ceph-mon[73668]: osdmap e255: 3 total, 3 up, 3 in
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:37:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:42 compute-0 podman[318601]: 2025-10-02 12:37:42.209776958 +0000 UTC m=+0.422166982 container init b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:42 compute-0 podman[318601]: 2025-10-02 12:37:42.219405489 +0000 UTC m=+0.431795503 container start b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:37:42 compute-0 eloquent_heyrovsky[318618]: 167 167
Oct 02 12:37:42 compute-0 systemd[1]: libpod-b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe.scope: Deactivated successfully.
Oct 02 12:37:42 compute-0 podman[318601]: 2025-10-02 12:37:42.269705292 +0000 UTC m=+0.482095336 container attach b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:37:42 compute-0 podman[318601]: 2025-10-02 12:37:42.27037556 +0000 UTC m=+0.482765604 container died b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:37:42 compute-0 ovn_controller[148123]: 2025-10-02T12:37:42Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:37:42 compute-0 ovn_controller[148123]: 2025-10-02T12:37:42Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-773bbd5342dddf415972ba3fc5a6157409a5e804eae636945db173a13d179a77-merged.mount: Deactivated successfully.
Oct 02 12:37:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:42 compute-0 podman[318601]: 2025-10-02 12:37:42.822331268 +0000 UTC m=+1.034721292 container remove b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:37:42 compute-0 systemd[1]: libpod-conmon-b57cd52f6ac15c80ed9a149cf1d9fad62eb02633d56e6f1bff13e1cd32aa4ebe.scope: Deactivated successfully.
Oct 02 12:37:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:42.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:43 compute-0 podman[318643]: 2025-10-02 12:37:43.090561521 +0000 UTC m=+0.094925689 container create 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:37:43 compute-0 podman[318643]: 2025-10-02 12:37:43.031052687 +0000 UTC m=+0.035416845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:43 compute-0 ceph-mon[73668]: pgmap v1844: 305 pgs: 305 active+clean; 229 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.4 MiB/s wr, 215 op/s
Oct 02 12:37:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2454314841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/673595144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3144403304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:43 compute-0 systemd[1]: Started libpod-conmon-225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030.scope.
Oct 02 12:37:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:43 compute-0 podman[318643]: 2025-10-02 12:37:43.487321658 +0000 UTC m=+0.491685896 container init 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:37:43 compute-0 podman[318643]: 2025-10-02 12:37:43.497323949 +0000 UTC m=+0.501688097 container start 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:37:43 compute-0 podman[318643]: 2025-10-02 12:37:43.515790001 +0000 UTC m=+0.520154189 container attach 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:37:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 239 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 5.0 MiB/s wr, 160 op/s
Oct 02 12:37:43 compute-0 nova_compute[256940]: 2025-10-02 12:37:43.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408648.8047082, 3d8f25cb-a0ea-445c-8254-925defd1187a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:43 compute-0 nova_compute[256940]: 2025-10-02 12:37:43.808 2 INFO nova.compute.manager [-] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] VM Stopped (Lifecycle Event)
Oct 02 12:37:43 compute-0 nova_compute[256940]: 2025-10-02 12:37:43.836 2 DEBUG nova.compute.manager [None req-16553df1-ad10-4994-bf72-f5c838330a5b - - - - - -] [instance: 3d8f25cb-a0ea-445c-8254-925defd1187a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:43 compute-0 nova_compute[256940]: 2025-10-02 12:37:43.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:44 compute-0 eloquent_morse[318659]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:37:44 compute-0 eloquent_morse[318659]: --> relative data size: 1.0
Oct 02 12:37:44 compute-0 eloquent_morse[318659]: --> All data devices are unavailable
Oct 02 12:37:44 compute-0 systemd[1]: libpod-225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030.scope: Deactivated successfully.
Oct 02 12:37:44 compute-0 podman[318643]: 2025-10-02 12:37:44.37727031 +0000 UTC m=+1.381634438 container died 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac471f9dbd9f46e2dd73b1ac0f6d549153b49d5f996c57a1855b75cd3a6c813-merged.mount: Deactivated successfully.
Oct 02 12:37:44 compute-0 ceph-mon[73668]: pgmap v1845: 305 pgs: 305 active+clean; 239 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 5.0 MiB/s wr, 160 op/s
Oct 02 12:37:44 compute-0 podman[318643]: 2025-10-02 12:37:44.554834285 +0000 UTC m=+1.559198423 container remove 225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:37:44 compute-0 systemd[1]: libpod-conmon-225122f07f65deb0e2f91d6f453b5c600f7f5e36fc44350225713379d78ff030.scope: Deactivated successfully.
Oct 02 12:37:44 compute-0 sudo[318536]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:44 compute-0 sudo[318689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:44 compute-0 sudo[318689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[318689]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[318714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:44 compute-0 sudo[318714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[318714]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[318739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:44 compute-0 sudo[318739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 sudo[318739]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:44 compute-0 sudo[318764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:37:44 compute-0 sudo[318764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.423800688 +0000 UTC m=+0.103823921 container create e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.348077222 +0000 UTC m=+0.028100475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:45 compute-0 systemd[1]: Started libpod-conmon-e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6.scope.
Oct 02 12:37:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 244 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 02 12:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.597979355 +0000 UTC m=+0.278002628 container init e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.618813409 +0000 UTC m=+0.298836682 container start e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:37:45 compute-0 priceless_lewin[318848]: 167 167
Oct 02 12:37:45 compute-0 systemd[1]: libpod-e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6.scope: Deactivated successfully.
Oct 02 12:37:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1181985338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.642559049 +0000 UTC m=+0.322582312 container attach e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.643967276 +0000 UTC m=+0.323990549 container died e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f039fb5c804f4b32cdce49eef4acc6f3edb3a5b0bd034e2bb7a33091d65976-merged.mount: Deactivated successfully.
Oct 02 12:37:45 compute-0 podman[318831]: 2025-10-02 12:37:45.800586584 +0000 UTC m=+0.480609817 container remove e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:37:45 compute-0 systemd[1]: libpod-conmon-e573510fa8d29d51a86656b3677732b8b113a0b46d9a1d7f990a920b3abf0da6.scope: Deactivated successfully.
Oct 02 12:37:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:46 compute-0 podman[318872]: 2025-10-02 12:37:46.001738085 +0000 UTC m=+0.063665463 container create f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:37:46 compute-0 systemd[1]: Started libpod-conmon-f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1.scope.
Oct 02 12:37:46 compute-0 podman[318872]: 2025-10-02 12:37:45.965177311 +0000 UTC m=+0.027104699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615769c15318fe4096c675fafca019036fb709421e9f6d20c608257e879cbf98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615769c15318fe4096c675fafca019036fb709421e9f6d20c608257e879cbf98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615769c15318fe4096c675fafca019036fb709421e9f6d20c608257e879cbf98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615769c15318fe4096c675fafca019036fb709421e9f6d20c608257e879cbf98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:46 compute-0 podman[318872]: 2025-10-02 12:37:46.125893486 +0000 UTC m=+0.187820864 container init f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:37:46 compute-0 podman[318872]: 2025-10-02 12:37:46.133446264 +0000 UTC m=+0.195373672 container start f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:37:46 compute-0 podman[318872]: 2025-10-02 12:37:46.145615721 +0000 UTC m=+0.207543109 container attach f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:37:46 compute-0 nova_compute[256940]: 2025-10-02 12:37:46.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:46 compute-0 nova_compute[256940]: 2025-10-02 12:37:46.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]: {
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:     "1": [
Oct 02 12:37:46 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:         {
Oct 02 12:37:46 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "devices": [
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "/dev/loop3"
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             ],
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "lv_name": "ceph_lv0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "lv_size": "7511998464",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "name": "ceph_lv0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "tags": {
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.cluster_name": "ceph",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.crush_device_class": "",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.encrypted": "0",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.osd_id": "1",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.type": "block",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:                 "ceph.vdo": "0"
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             },
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "type": "block",
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:             "vg_name": "ceph_vg0"
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:         }
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]:     ]
Oct 02 12:37:46 compute-0 peaceful_meninsky[318889]: }
Oct 02 12:37:47 compute-0 systemd[1]: libpod-f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1.scope: Deactivated successfully.
Oct 02 12:37:47 compute-0 podman[318900]: 2025-10-02 12:37:47.068746589 +0000 UTC m=+0.036731559 container died f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:37:47 compute-0 ceph-mon[73668]: pgmap v1846: 305 pgs: 305 active+clean; 244 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 02 12:37:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-615769c15318fe4096c675fafca019036fb709421e9f6d20c608257e879cbf98-merged.mount: Deactivated successfully.
Oct 02 12:37:47 compute-0 podman[318900]: 2025-10-02 12:37:47.932148258 +0000 UTC m=+0.900133208 container remove f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:37:47 compute-0 systemd[1]: libpod-conmon-f316ab365449e839c5eec84dd98490080f619a0ae77d68da108a000a27411eb1.scope: Deactivated successfully.
Oct 02 12:37:47 compute-0 sudo[318764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:48 compute-0 sudo[318916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:48 compute-0 sudo[318916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:48 compute-0 sudo[318916]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:48 compute-0 sudo[318941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:48 compute-0 sudo[318941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:48 compute-0 sudo[318941]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:48 compute-0 sudo[318966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:48 compute-0 sudo[318966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:48 compute-0 sudo[318966]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:48 compute-0 sudo[318991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:37:48 compute-0 sudo[318991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:48.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.612946049 +0000 UTC m=+0.025662861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.717890129 +0000 UTC m=+0.130606931 container create 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:37:48 compute-0 systemd[1]: Started libpod-conmon-16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942.scope.
Oct 02 12:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.836592448 +0000 UTC m=+0.249309260 container init 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.846298251 +0000 UTC m=+0.259015043 container start 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:37:48 compute-0 vigilant_fermi[319096]: 167 167
Oct 02 12:37:48 compute-0 systemd[1]: libpod-16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942.scope: Deactivated successfully.
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.907924 +0000 UTC m=+0.320640792 container attach 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:48 compute-0 podman[319058]: 2025-10-02 12:37:48.909171902 +0000 UTC m=+0.321888694 container died 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 12:37:48 compute-0 nova_compute[256940]: 2025-10-02 12:37:48.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:48.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:48 compute-0 podman[319073]: 2025-10-02 12:37:48.987202749 +0000 UTC m=+0.219022578 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 12:37:48 compute-0 podman[319072]: 2025-10-02 12:37:48.999267334 +0000 UTC m=+0.236061793 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 12:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-988b5eaa52ba42916adb6768fd70b6fd471646cffbc4347f360df151711006ae-merged.mount: Deactivated successfully.
Oct 02 12:37:49 compute-0 podman[319058]: 2025-10-02 12:37:49.218269991 +0000 UTC m=+0.630986783 container remove 16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:37:49 compute-0 systemd[1]: libpod-conmon-16e31b87b98f9a4bfdbf914f52f45f04b7a853d8acc3e83a718027b7d5be2942.scope: Deactivated successfully.
Oct 02 12:37:49 compute-0 ceph-mon[73668]: pgmap v1847: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:49 compute-0 podman[319143]: 2025-10-02 12:37:49.412991734 +0000 UTC m=+0.031628646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:37:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:49 compute-0 podman[319143]: 2025-10-02 12:37:49.527706039 +0000 UTC m=+0.146342901 container create 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:37:49 compute-0 systemd[1]: Started libpod-conmon-7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c.scope.
Oct 02 12:37:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c583b5d04654c0dd87890e6fb0f56165d144ac00432c32136d854a41f383e20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c583b5d04654c0dd87890e6fb0f56165d144ac00432c32136d854a41f383e20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c583b5d04654c0dd87890e6fb0f56165d144ac00432c32136d854a41f383e20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c583b5d04654c0dd87890e6fb0f56165d144ac00432c32136d854a41f383e20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:49 compute-0 nova_compute[256940]: 2025-10-02 12:37:49.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:49 compute-0 podman[319143]: 2025-10-02 12:37:49.686361361 +0000 UTC m=+0.304998253 container init 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:37:49 compute-0 podman[319143]: 2025-10-02 12:37:49.694024811 +0000 UTC m=+0.312661673 container start 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:37:49 compute-0 podman[319143]: 2025-10-02 12:37:49.731080478 +0000 UTC m=+0.349717360 container attach 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:37:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/811997453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3644381549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:50 compute-0 infallible_napier[319160]: {
Oct 02 12:37:50 compute-0 infallible_napier[319160]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:37:50 compute-0 infallible_napier[319160]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:37:50 compute-0 infallible_napier[319160]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:37:50 compute-0 infallible_napier[319160]:         "osd_id": 1,
Oct 02 12:37:50 compute-0 infallible_napier[319160]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:37:50 compute-0 infallible_napier[319160]:         "type": "bluestore"
Oct 02 12:37:50 compute-0 infallible_napier[319160]:     }
Oct 02 12:37:50 compute-0 infallible_napier[319160]: }
Oct 02 12:37:50 compute-0 systemd[1]: libpod-7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c.scope: Deactivated successfully.
Oct 02 12:37:50 compute-0 podman[319143]: 2025-10-02 12:37:50.541454103 +0000 UTC m=+1.160090965 container died 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c583b5d04654c0dd87890e6fb0f56165d144ac00432c32136d854a41f383e20-merged.mount: Deactivated successfully.
Oct 02 12:37:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:37:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:50.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:37:50 compute-0 podman[319143]: 2025-10-02 12:37:50.689741744 +0000 UTC m=+1.308378626 container remove 7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:37:50 compute-0 systemd[1]: libpod-conmon-7741c50b8e8f923a25ddc9244429e6f0f6e5b14a1100197ef67e3c706486da4c.scope: Deactivated successfully.
Oct 02 12:37:50 compute-0 sudo[318991]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:37:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:37:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cfedeb6d-cf02-4dcc-808a-e0f8f58fa64e does not exist
Oct 02 12:37:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 63a21b91-565f-4fe1-b57e-5049cb691705 does not exist
Oct 02 12:37:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 690a889f-87df-4c81-bb9f-97490a617241 does not exist
Oct 02 12:37:50 compute-0 sudo[319194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:50 compute-0 sudo[319194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:50 compute-0 sudo[319194]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:37:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:50.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:37:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:51 compute-0 sudo[319219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:37:51 compute-0 sudo[319219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:51 compute-0 sudo[319219]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:51 compute-0 ceph-mon[73668]: pgmap v1848: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:51 compute-0 nova_compute[256940]: 2025-10-02 12:37:51.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 293 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Oct 02 12:37:52 compute-0 ceph-mon[73668]: pgmap v1849: 305 pgs: 305 active+clean; 293 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Oct 02 12:37:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1024383535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:52.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3572908026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 293 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 178 op/s
Oct 02 12:37:53 compute-0 nova_compute[256940]: 2025-10-02 12:37:53.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:53 compute-0 nova_compute[256940]: 2025-10-02 12:37:53.975 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:53 compute-0 nova_compute[256940]: 2025-10-02 12:37:53.976 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.016 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.097 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.098 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.105 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.105 2 INFO nova.compute.claims [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.441 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:54 compute-0 ceph-mon[73668]: pgmap v1850: 305 pgs: 305 active+clean; 293 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 178 op/s
Oct 02 12:37:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:54.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.702 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.702 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.769 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239222305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.885 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.890 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.896 2 DEBUG nova.compute.provider_tree [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.920 2 DEBUG nova.scheduler.client.report [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.947 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.948 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.950 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.958 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:54 compute-0 nova_compute[256940]: 2025-10-02 12:37:54.958 2 INFO nova.compute.claims [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.025 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.026 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.056 2 INFO nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.085 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.132 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.255 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.257 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.257 2 INFO nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Creating image(s)
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.281 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.308 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.335 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.339 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.412 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.413 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.414 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.414 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.441 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.446 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 293 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.9 MiB/s wr, 208 op/s
Oct 02 12:37:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1993557897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3239222305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1142469038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475252102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.584 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.590 2 DEBUG nova.compute.provider_tree [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.624 2 DEBUG nova.scheduler.client.report [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.652 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.653 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.699 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.700 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.720 2 INFO nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.729 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.770 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.814 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] resizing rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.929 2 DEBUG nova.policy [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e01aec1bfdd145338d35a6ab232794a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ac7abbc0acb45d68edcf67f909a8c90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.934 2 DEBUG nova.policy [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdbe447f49374937a828d6281949a2a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.936 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.937 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.938 2 INFO nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Creating image(s)
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.964 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:55 compute-0 nova_compute[256940]: 2025-10-02 12:37:55.996 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.023 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.026 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.060 2 DEBUG nova.objects.instance [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.081 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.082 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Ensure instance console log exists: /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.082 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.083 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.083 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.091 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.092 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.092 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.092 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.114 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.118 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 92e58b0c-6934-4931-8ae8-c1a631797f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.542 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 92e58b0c-6934-4931-8ae8-c1a631797f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:56 compute-0 ceph-mon[73668]: pgmap v1851: 305 pgs: 305 active+clean; 293 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.9 MiB/s wr, 208 op/s
Oct 02 12:37:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1475252102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:56.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.663 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] resizing rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.892 2 DEBUG nova.objects.instance [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lazy-loading 'migration_context' on Instance uuid 92e58b0c-6934-4931-8ae8-c1a631797f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.923 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.924 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Ensure instance console log exists: /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.925 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.925 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:56 compute-0 nova_compute[256940]: 2025-10-02 12:37:56.925 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.240 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.607 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Successfully created port: 422859c2-b5cd-467a-85cf-ff82d92d7d87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825430397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.699 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.777 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.778 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.916 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.917 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=20.856021881103516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.917 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:57 compute-0 nova_compute[256940]: 2025-10-02 12:37:57.917 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.006 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance a5e40085-6c27-4b57-96db-89ecc7ac2e48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.006 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 1546ac1d-4a04-4c5e-ae02-b005461c7731 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 92e58b0c-6934-4931-8ae8-c1a631797f57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.034 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Successfully created port: 85de21e0-dbc3-4791-baf5-1f9b78e35b29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.088 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:37:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338803148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.575 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.582 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.600 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.623 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.623 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:58 compute-0 ceph-mon[73668]: pgmap v1852: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Oct 02 12:37:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/825430397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1338803148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:58 compute-0 sudo[319669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:58 compute-0 sudo[319669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:58 compute-0 sudo[319669]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:58 compute-0 sudo[319694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:58 compute-0 sudo[319694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:58 compute-0 sudo[319694]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:58 compute-0 nova_compute[256940]: 2025-10-02 12:37:58.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:37:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:58.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.469 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Successfully updated port: 422859c2-b5cd-467a-85cf-ff82d92d7d87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.485 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.486 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.486 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:37:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 225 op/s
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.568 2 DEBUG nova.compute.manager [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-changed-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.568 2 DEBUG nova.compute.manager [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Refreshing instance network info cache due to event network-changed-422859c2-b5cd-467a-85cf-ff82d92d7d87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.568 2 DEBUG oslo_concurrency.lockutils [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.701 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.943 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Successfully updated port: 85de21e0-dbc3-4791-baf5-1f9b78e35b29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.977 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.978 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquired lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:59 compute-0 nova_compute[256940]: 2025-10-02 12:37:59.978 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:00 compute-0 nova_compute[256940]: 2025-10-02 12:38:00.127 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:38:00 compute-0 nova_compute[256940]: 2025-10-02 12:38:00.624 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:00.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:00 compute-0 ceph-mon[73668]: pgmap v1853: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 225 op/s
Oct 02 12:38:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.295 2 DEBUG nova.network.neutron [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.311 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.311 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance network_info: |[{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.312 2 DEBUG oslo_concurrency.lockutils [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.312 2 DEBUG nova.network.neutron [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Refreshing network info cache for port 422859c2-b5cd-467a-85cf-ff82d92d7d87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.315 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start _get_guest_xml network_info=[{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.319 2 WARNING nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.323 2 DEBUG nova.virt.libvirt.host [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.324 2 DEBUG nova.virt.libvirt.host [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.329 2 DEBUG nova.virt.libvirt.host [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.330 2 DEBUG nova.virt.libvirt.host [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.331 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.331 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.332 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.332 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.332 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.332 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.332 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.333 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.333 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.333 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.333 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.333 2 DEBUG nova.virt.hardware [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.336 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 349 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 334 op/s
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.650 2 DEBUG nova.compute.manager [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-changed-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.650 2 DEBUG nova.compute.manager [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Refreshing instance network info cache due to event network-changed-85de21e0-dbc3-4791-baf5-1f9b78e35b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.651 2 DEBUG oslo_concurrency.lockutils [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508779376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.780 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.829 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.835 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.912 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.913 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.913 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.913 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.934 2 DEBUG oslo_concurrency.lockutils [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.935 2 DEBUG oslo_concurrency.lockutils [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.935 2 DEBUG nova.compute.manager [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.944 2 DEBUG nova.compute.manager [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.945 2 DEBUG nova.objects.instance [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:01 compute-0 nova_compute[256940]: 2025-10-02 12:38:01.998 2 DEBUG nova.virt.libvirt.driver [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.015 2 DEBUG nova.network.neutron [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updating instance_info_cache with network_info: [{"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.078 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Releasing lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.079 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Instance network_info: |[{"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.079 2 DEBUG oslo_concurrency.lockutils [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.080 2 DEBUG nova.network.neutron [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Refreshing network info cache for port 85de21e0-dbc3-4791-baf5-1f9b78e35b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.083 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Start _get_guest_xml network_info=[{"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.087 2 WARNING nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.092 2 DEBUG nova.virt.libvirt.host [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.093 2 DEBUG nova.virt.libvirt.host [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.095 2 DEBUG nova.virt.libvirt.host [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.096 2 DEBUG nova.virt.libvirt.host [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.097 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.097 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.097 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.098 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.098 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.098 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.098 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.099 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.099 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.099 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.099 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.100 2 DEBUG nova.virt.hardware [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.103 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351175108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.326 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.329 2 DEBUG nova.virt.libvirt.vif [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185783620',display_name='tempest-ServerStableDeviceRescueTest-server-185783620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185783620',id=101,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-300aqtzl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:55Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=1546ac1d-4a04-4c5e-ae02-b005461c7731,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.331 2 DEBUG nova.network.os_vif_util [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.332 2 DEBUG nova.network.os_vif_util [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.334 2 DEBUG nova.objects.instance [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.355 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <uuid>1546ac1d-4a04-4c5e-ae02-b005461c7731</uuid>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <name>instance-00000065</name>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-185783620</nova:name>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:38:01</nova:creationTime>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <nova:port uuid="422859c2-b5cd-467a-85cf-ff82d92d7d87">
Oct 02 12:38:02 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <system>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="serial">1546ac1d-4a04-4c5e-ae02-b005461c7731</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="uuid">1546ac1d-4a04-4c5e-ae02-b005461c7731</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </system>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <os>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </os>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <features>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </features>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk">
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config">
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:02 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:51:2f:ac"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <target dev="tap422859c2-b5"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/console.log" append="off"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <video>
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </video>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:38:02 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:38:02 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:38:02 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:38:02 compute-0 nova_compute[256940]: </domain>
Oct 02 12:38:02 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.356 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Preparing to wait for external event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.356 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.357 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.357 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.358 2 DEBUG nova.virt.libvirt.vif [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185783620',display_name='tempest-ServerStableDeviceRescueTest-server-185783620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185783620',id=101,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-300aqtzl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name=
'tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:55Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=1546ac1d-4a04-4c5e-ae02-b005461c7731,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.358 2 DEBUG nova.network.os_vif_util [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.359 2 DEBUG nova.network.os_vif_util [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.360 2 DEBUG os_vif [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap422859c2-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap422859c2-b5, col_values=(('external_ids', {'iface-id': '422859c2-b5cd-467a-85cf-ff82d92d7d87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:2f:ac', 'vm-uuid': '1546ac1d-4a04-4c5e-ae02-b005461c7731'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 NetworkManager[44981]: <info>  [1759408682.3710] manager: (tap422859c2-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.380 2 INFO os_vif [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5')
Oct 02 12:38:02 compute-0 podman[319803]: 2025-10-02 12:38:02.408947368 +0000 UTC m=+0.072681778 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:38:02 compute-0 podman[319801]: 2025-10-02 12:38:02.425781267 +0000 UTC m=+0.093815950 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.430 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.431 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.431 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:51:2f:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.431 2 INFO nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Using config drive
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.458 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009324824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.556 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.583 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:02 compute-0 nova_compute[256940]: 2025-10-02 12:38:02.589 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:02 compute-0 ceph-mon[73668]: pgmap v1854: 305 pgs: 305 active+clean; 349 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 334 op/s
Oct 02 12:38:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3508779376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1351175108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3009324824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395417240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.009 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.011 2 DEBUG nova.virt.libvirt.vif [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=102,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN9PUURmVgE9ctR2+bF/OaxYPDf/H6sxDJscvgXkAhinHH2AlmBFTWMVyJqQ1jYV6wxN0gV9xeFZ/1/bUEy+QwyLwsdqYLnM8xQrEZ9oQLRE3bDjarqFwNJf4PbZdutnYw==',key_name='tempest-keypair-868284423',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ac7abbc0acb45d68edcf67f909a8c90',ramdisk_id='',reservation_id='r-r3f5m00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1084750913',owner_user_name='tempest-ServersV294TestFqdnHostnames-1084750913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e01aec1bfdd145338d35a6ab232794a4',uuid=92e58b0c-6934-4931-8ae8-c1a631797f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.011 2 DEBUG nova.network.os_vif_util [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converting VIF {"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.012 2 DEBUG nova.network.os_vif_util [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.013 2 DEBUG nova.objects.instance [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92e58b0c-6934-4931-8ae8-c1a631797f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.036 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <uuid>92e58b0c-6934-4931-8ae8-c1a631797f57</uuid>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <name>instance-00000066</name>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:name>guest-instance-1</nova:name>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:38:02</nova:creationTime>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:user uuid="e01aec1bfdd145338d35a6ab232794a4">tempest-ServersV294TestFqdnHostnames-1084750913-project-member</nova:user>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:project uuid="7ac7abbc0acb45d68edcf67f909a8c90">tempest-ServersV294TestFqdnHostnames-1084750913</nova:project>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <nova:port uuid="85de21e0-dbc3-4791-baf5-1f9b78e35b29">
Oct 02 12:38:03 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <system>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="serial">92e58b0c-6934-4931-8ae8-c1a631797f57</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="uuid">92e58b0c-6934-4931-8ae8-c1a631797f57</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </system>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <os>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </os>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <features>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </features>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/92e58b0c-6934-4931-8ae8-c1a631797f57_disk">
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config">
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:f8:1e:42"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <target dev="tap85de21e0-db"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/console.log" append="off"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <video>
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </video>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:38:03 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:38:03 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:38:03 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:38:03 compute-0 nova_compute[256940]: </domain>
Oct 02 12:38:03 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.036 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Preparing to wait for external event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.037 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.037 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.037 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.038 2 DEBUG nova.virt.libvirt.vif [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=102,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN9PUURmVgE9ctR2+bF/OaxYPDf/H6sxDJscvgXkAhinHH2AlmBFTWMVyJqQ1jYV6wxN0gV9xeFZ/1/bUEy+QwyLwsdqYLnM8xQrEZ9oQLRE3bDjarqFwNJf4PbZdutnYw==',key_name='tempest-keypair-868284423',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ac7abbc0acb45d68edcf67f909a8c90',ramdisk_id='',reservation_id='r-r3f5m00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1084750913',owner_user_name='tempest-ServersV294TestFqdnHostnames-1084750913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e01aec1bfdd145338d35a6ab232794a4',uuid=92e58b0c-6934-4931-8ae8-c1a631797f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.038 2 DEBUG nova.network.os_vif_util [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converting VIF {"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.039 2 DEBUG nova.network.os_vif_util [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.039 2 DEBUG os_vif [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85de21e0-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85de21e0-db, col_values=(('external_ids', {'iface-id': '85de21e0-dbc3-4791-baf5-1f9b78e35b29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:1e:42', 'vm-uuid': '92e58b0c-6934-4931-8ae8-c1a631797f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.0467] manager: (tap85de21e0-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.053 2 INFO os_vif [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db')
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.112 2 INFO nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Creating config drive at /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.118 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_zb0wa2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.156 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.157 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.157 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] No VIF found with MAC fa:16:3e:f8:1e:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.158 2 INFO nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Using config drive
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.186 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.253 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_zb0wa2" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.281 2 DEBUG nova.storage.rbd_utils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.284 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.484 2 DEBUG oslo_concurrency.processutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.485 2 INFO nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Deleting local config drive /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config because it was imported into RBD.
Oct 02 12:38:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 339 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:38:03 compute-0 kernel: tap422859c2-b5: entered promiscuous mode
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.5375] manager: (tap422859c2-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 ovn_controller[148123]: 2025-10-02T12:38:03Z|00423|binding|INFO|Claiming lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 for this chassis.
Oct 02 12:38:03 compute-0 ovn_controller[148123]: 2025-10-02T12:38:03Z|00424|binding|INFO|422859c2-b5cd-467a-85cf-ff82d92d7d87: Claiming fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.553 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.554 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.556 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:03 compute-0 ovn_controller[148123]: 2025-10-02T12:38:03Z|00425|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 ovn-installed in OVS
Oct 02 12:38:03 compute-0 ovn_controller[148123]: 2025-10-02T12:38:03Z|00426|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 up in Southbound
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.572 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[27ec5add-3aec-4c8d-86f2-6ed43e536e47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.573 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap494beff4-71 in ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.575 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap494beff4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.575 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[511c807a-f576-4134-8edd-159be5843085]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.576 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[76b1d417-f482-412d-91fd-3a03aa465fed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 systemd-machined[210927]: New machine qemu-45-instance-00000065.
Oct 02 12:38:03 compute-0 systemd-udevd[319980]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:03 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000065.
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.592 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a3dc3c-2e4c-4036-ab15-dd6d371b1841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.5961] device (tap422859c2-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.5968] device (tap422859c2-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.609 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e1904365-22ed-45ea-9472-b39cd2c12470]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.637 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9061a2f1-f1dc-4a47-8ba2-1b84bab46974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a00fd399-799b-4fcd-8dd8-441bee3ad701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.6435] manager: (tap494beff4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.675 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2318ada9-16a0-4912-82d3-136751812c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.678 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc5eab2-1806-4165-a82d-dd9db9bb5e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.7010] device (tap494beff4-70): carrier: link connected
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.709 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cac02be1-a7f2-43a2-9aad-c38e89fdceb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd77f79e-e8ff-495f-89b5-73b795b5aaf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657918, 'reachable_time': 19526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320015, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.747 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6680a575-24d0-465a-b3bc-3a6e1dbd3600]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4a01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657918, 'tstamp': 657918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320016, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.767 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ff24e2-28db-414b-aec3-951658129789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657918, 'reachable_time': 19526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320017, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.802 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd8d1fd-63cb-4aef-9656-67e68ccb9969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2909892755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2395417240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.885 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[15d6ad4b-c6c9-4855-b0a9-17ccb0d6e76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.888 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.889 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.890 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 kernel: tap494beff4-70: entered promiscuous mode
Oct 02 12:38:03 compute-0 NetworkManager[44981]: <info>  [1759408683.9256] manager: (tap494beff4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.931 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:03 compute-0 ovn_controller[148123]: 2025-10-02T12:38:03Z|00427|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 nova_compute[256940]: 2025-10-02 12:38:03.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.957 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.958 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c25318e-42cf-4b5d-a11f-622f01433d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.959 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:38:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:03.960 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'env', 'PROCESS_TAG=haproxy-494beff4-7fba-4749-8998-3432c91ac5d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/494beff4-7fba-4749-8998-3432c91ac5d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:38:04 compute-0 kernel: tapf6d0b513-9d (unregistering): left promiscuous mode
Oct 02 12:38:04 compute-0 NetworkManager[44981]: <info>  [1759408684.3121] device (tapf6d0b513-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.313 2 INFO nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Creating config drive at /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.320 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3_mn12r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00428|binding|INFO|Releasing lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 from this chassis (sb_readonly=0)
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00429|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 down in Southbound
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00430|binding|INFO|Removing iface tapf6d0b513-9d ovn-installed in OVS
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.335 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 02 12:38:04 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000063.scope: Consumed 15.398s CPU time.
Oct 02 12:38:04 compute-0 systemd-machined[210927]: Machine qemu-44-instance-00000063 terminated.
Oct 02 12:38:04 compute-0 podman[320097]: 2025-10-02 12:38:04.324319357 +0000 UTC m=+0.031785260 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:38:04 compute-0 podman[320097]: 2025-10-02 12:38:04.438492188 +0000 UTC m=+0.145958091 container create 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.462 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3_mn12r" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:04 compute-0 systemd[1]: Started libpod-conmon-82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153.scope.
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.507 2 DEBUG nova.storage.rbd_utils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] rbd image 92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.516 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config 92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb20a4e42920d5751a52b460d643dd07bed1e493a2100524c74f3c288f3b3ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:04 compute-0 podman[320097]: 2025-10-02 12:38:04.552277458 +0000 UTC m=+0.259743361 container init 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:04 compute-0 kernel: tapf6d0b513-9d: entered promiscuous mode
Oct 02 12:38:04 compute-0 kernel: tapf6d0b513-9d (unregistering): left promiscuous mode
Oct 02 12:38:04 compute-0 NetworkManager[44981]: <info>  [1759408684.5613] manager: (tapf6d0b513-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct 02 12:38:04 compute-0 podman[320097]: 2025-10-02 12:38:04.563170383 +0000 UTC m=+0.270636266 container start 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00431|binding|INFO|Claiming lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 for this chassis.
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00432|binding|INFO|f6d0b513-9da0-4deb-a98c-8812c9ddd074: Claiming fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.582 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00433|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 ovn-installed in OVS
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00434|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 up in Southbound
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00435|binding|INFO|Releasing lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 from this chassis (sb_readonly=1)
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00436|if_status|INFO|Dropped 2 log messages in last 144 seconds (most recently, 144 seconds ago) due to excessive rate
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00437|if_status|INFO|Not setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 down as sb is readonly
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00438|binding|INFO|Removing iface tapf6d0b513-9d ovn-installed in OVS
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00439|binding|INFO|Releasing lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 from this chassis (sb_readonly=0)
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00440|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 down in Southbound
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [NOTICE]   (320147) : New worker (320167) forked
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [NOTICE]   (320147) : Loading success.
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.609 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.626 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.628 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408684.6282117, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.629 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Started (Lifecycle Event)
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.639 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.641 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1e920ad0-c36a-4143-873f-f775aaf8d361]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.643 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:38:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:04.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.647 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.647 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.651 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.652 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.653 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.653 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.659 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408684.6284094, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.659 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Paused (Lifecycle Event)
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.677 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.680 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.696 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.718 2 DEBUG oslo_concurrency.processutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config 92e58b0c-6934-4931-8ae8-c1a631797f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.719 2 INFO nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Deleting local config drive /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57/disk.config because it was imported into RBD.
Oct 02 12:38:04 compute-0 NetworkManager[44981]: <info>  [1759408684.7917] manager: (tap85de21e0-db): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Oct 02 12:38:04 compute-0 kernel: tap85de21e0-db: entered promiscuous mode
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00441|binding|INFO|Claiming lport 85de21e0-dbc3-4791-baf5-1f9b78e35b29 for this chassis.
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 systemd-udevd[320005]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00442|binding|INFO|85de21e0-dbc3-4791-baf5-1f9b78e35b29: Claiming fa:16:3e:f8:1e:42 10.100.0.3
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [NOTICE]   (317980) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [NOTICE]   (317980) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [WARNING]  (317980) : Exiting Master process...
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [WARNING]  (317980) : Exiting Master process...
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [ALERT]    (317980) : Current worker (317982) exited with code 143 (Terminated)
Oct 02 12:38:04 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[317976]: [WARNING]  (317980) : All workers exited. Exiting... (0)
Oct 02 12:38:04 compute-0 systemd[1]: libpod-a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7.scope: Deactivated successfully.
Oct 02 12:38:04 compute-0 NetworkManager[44981]: <info>  [1759408684.8169] device (tap85de21e0-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:04 compute-0 podman[320197]: 2025-10-02 12:38:04.814939595 +0000 UTC m=+0.057467671 container died a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:38:04 compute-0 NetworkManager[44981]: <info>  [1759408684.8178] device (tap85de21e0-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00443|binding|INFO|Setting lport 85de21e0-dbc3-4791-baf5-1f9b78e35b29 ovn-installed in OVS
Oct 02 12:38:04 compute-0 nova_compute[256940]: 2025-10-02 12:38:04.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-0 ceph-mon[73668]: pgmap v1855: 305 pgs: 305 active+clean; 339 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-88a9b739d398e08c4e193a3b5b1b95407bf33c1466761d9edda349504e3bb562-merged.mount: Deactivated successfully.
Oct 02 12:38:04 compute-0 podman[320197]: 2025-10-02 12:38:04.919418592 +0000 UTC m=+0.161946658 container cleanup a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:04 compute-0 systemd[1]: libpod-conmon-a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7.scope: Deactivated successfully.
Oct 02 12:38:04 compute-0 systemd-machined[210927]: New machine qemu-46-instance-00000066.
Oct 02 12:38:04 compute-0 ovn_controller[148123]: 2025-10-02T12:38:04Z|00444|binding|INFO|Setting lport 85de21e0-dbc3-4791-baf5-1f9b78e35b29 up in Southbound
Oct 02 12:38:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:04.942 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:1e:42 10.100.0.3'], port_security=['fa:16:3e:f8:1e:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '92e58b0c-6934-4931-8ae8-c1a631797f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d34b628-3897-482a-882b-4fa0a641cd85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ac7abbc0acb45d68edcf67f909a8c90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b5869b3-04e1-4eb4-a836-48f27b60d345', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=972694ee-45c6-477b-b2a4-191954fc6d7c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=85de21e0-dbc3-4791-baf5-1f9b78e35b29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:04.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:04 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000066.
Oct 02 12:38:05 compute-0 podman[320244]: 2025-10-02 12:38:05.007344598 +0000 UTC m=+0.058078957 container remove a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e55c634c-4840-4771-90b1-f2073d64e588]: (4, ('Thu Oct  2 12:38:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7)\na0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7\nThu Oct  2 12:38:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (a0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7)\na0e03bc0a3a1cfd7f47bfe3bb73e7d36f895df984a962f81cacdfd8719f6d7a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.017 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b284ee7d-8969-43bc-8ca3-565efaa21e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.018 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.020 2 INFO nova.virt.libvirt.driver [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance shutdown successfully after 3 seconds.
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.027 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance destroyed successfully.
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.027 2 DEBUG nova.objects.instance [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.044 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c35fe218-abc6-4b86-9e74-978d245b1477]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.046 2 DEBUG nova.compute.manager [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.070 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[698e31a6-2cda-4bdd-9fd0-d099e06ebcb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.072 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[489466ee-b692-447b-b621-d18d66bdee1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.087 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1fc0bf-bca8-46b0-8ddf-2ce4b6349b61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654104, 'reachable_time': 43605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320268, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.089 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.090 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[8d862eb4-c870-418b-b0b6-b4c8342790e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.094 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.095 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.096 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[be6f1db0-1032-4b1c-bf90-848b082488d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.097 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.099 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.099 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b3f1be-29c0-41ea-b5b2-3b6c0009d2d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.100 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 85de21e0-dbc3-4791-baf5-1f9b78e35b29 in datapath 7d34b628-3897-482a-882b-4fa0a641cd85 unbound from our chassis
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.102 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7d34b628-3897-482a-882b-4fa0a641cd85
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.118 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[94d99114-329d-496b-b7bb-0bb3bfbee38b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.118 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7d34b628-31 in ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.120 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7d34b628-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.123 2 DEBUG oslo_concurrency.lockutils [None req-e198b6e4-995d-47b5-8c9d-9fc468377130 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.126 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[290ba156-3bc3-4f86-934d-2f0a7527576c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.127 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c01a9d84-94eb-425f-a014-42139392d2e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.143 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[17c9b4ee-8c56-42c7-ae6d-fcc5b3f03d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.169 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f6de79b4-5cc8-44a5-9c18-873bde90e9f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.190 2 DEBUG nova.network.neutron [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updated VIF entry in instance network info cache for port 422859c2-b5cd-467a-85cf-ff82d92d7d87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.190 2 DEBUG nova.network.neutron [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.204 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[faa04557-034a-448d-938a-ce7abccf517a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.205 2 DEBUG oslo_concurrency.lockutils [req-684d43c6-5a3c-455f-8b68-9d6378b5bcc6 req-46baff5d-2283-4049-b519-971c53fedd74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:05 compute-0 NetworkManager[44981]: <info>  [1759408685.2159] manager: (tap7d34b628-30): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.217 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[157a049c-c2ac-4e85-9e30-2cc66c365428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.255 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1466c5-65c7-44b8-a967-26932cab98ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.258 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e78d8c7f-3315-4fe7-ae25-5f20440014ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 NetworkManager[44981]: <info>  [1759408685.2850] device (tap7d34b628-30): carrier: link connected
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.292 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2816102b-1a6e-4d0c-84c3-1ce5b97d960a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.313 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed65fd7-8059-4a9a-8ce4-1ddfa81be6c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d34b628-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:68:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658076, 'reachable_time': 22749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320297, 'error': None, 'target': 'ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.332 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2b0234-31d6-4b64-882b-c300936f8fa3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:6890'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658076, 'tstamp': 658076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320301, 'error': None, 'target': 'ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.351 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1a15d4-d52b-4bcb-9147-7a7981ce497e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d34b628-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:68:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658076, 'reachable_time': 22749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320309, 'error': None, 'target': 'ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.389 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[970fa6f7-2121-45c5-b3a7-51242038a84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.465 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8484c4-0dd4-4a07-8728-90b10dd2d802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.466 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d34b628-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.467 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.467 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d34b628-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 NetworkManager[44981]: <info>  [1759408685.4698] manager: (tap7d34b628-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 02 12:38:05 compute-0 kernel: tap7d34b628-30: entered promiscuous mode
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.478 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7d34b628-30, col_values=(('external_ids', {'iface-id': '0f6cfe52-3f15-4e90-b21e-85f85326e533'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 ovn_controller[148123]: 2025-10-02T12:38:05Z|00445|binding|INFO|Releasing lport 0f6cfe52-3f15-4e90-b21e-85f85326e533 from this chassis (sb_readonly=0)
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.503 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7d34b628-3897-482a-882b-4fa0a641cd85.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7d34b628-3897-482a-882b-4fa0a641cd85.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.504 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4de8a03e-f28e-4966-a20b-31edcb0adcc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.505 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-7d34b628-3897-482a-882b-4fa0a641cd85
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/7d34b628-3897-482a-882b-4fa0a641cd85.pid.haproxy
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 7d34b628-3897-482a-882b-4fa0a641cd85
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:38:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:05.506 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85', 'env', 'PROCESS_TAG=haproxy-7d34b628-3897-482a-882b-4fa0a641cd85', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7d34b628-3897-482a-882b-4fa0a641cd85.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:38:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 259 op/s
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.876 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408685.8757803, 92e58b0c-6934-4931-8ae8-c1a631797f57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.876 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] VM Started (Lifecycle Event)
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.894 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.898 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408685.8779979, 92e58b0c-6934-4931-8ae8-c1a631797f57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.898 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] VM Paused (Lifecycle Event)
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.916 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.921 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:05 compute-0 nova_compute[256940]: 2025-10-02 12:38:05.941 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:05 compute-0 podman[320363]: 2025-10-02 12:38:05.879967567 +0000 UTC m=+0.027557600 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:38:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.068 2 DEBUG nova.network.neutron [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updated VIF entry in instance network info cache for port 85de21e0-dbc3-4791-baf5-1f9b78e35b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.069 2 DEBUG nova.network.neutron [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updating instance_info_cache with network_info: [{"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:06 compute-0 podman[320363]: 2025-10-02 12:38:06.071753404 +0000 UTC m=+0.219343417 container create bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.087 2 DEBUG oslo_concurrency.lockutils [req-f72c88de-9893-4892-8a3d-ab6f1945bb1b req-c90d05c3-dcec-4791-b139-e56cc93a3d9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:06 compute-0 systemd[1]: Started libpod-conmon-bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f.scope.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.139 2 DEBUG nova.compute.manager [req-b8c08551-759a-42c2-999d-d2d68bd32a14 req-0098e5c8-670a-4ab5-aef5-5b22d2e252e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.140 2 DEBUG oslo_concurrency.lockutils [req-b8c08551-759a-42c2-999d-d2d68bd32a14 req-0098e5c8-670a-4ab5-aef5-5b22d2e252e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.140 2 DEBUG oslo_concurrency.lockutils [req-b8c08551-759a-42c2-999d-d2d68bd32a14 req-0098e5c8-670a-4ab5-aef5-5b22d2e252e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.140 2 DEBUG oslo_concurrency.lockutils [req-b8c08551-759a-42c2-999d-d2d68bd32a14 req-0098e5c8-670a-4ab5-aef5-5b22d2e252e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.140 2 DEBUG nova.compute.manager [req-b8c08551-759a-42c2-999d-d2d68bd32a14 req-0098e5c8-670a-4ab5-aef5-5b22d2e252e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Processing event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.141 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a9ac30e30246d87289c44a39040cf24d58371e78fc32a2cc0b6f9bae034001/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.147 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408686.1475673, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.148 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Resumed (Lifecycle Event)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.153 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.157 2 INFO nova.virt.libvirt.driver [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance spawned successfully.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.159 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:38:06 compute-0 podman[320363]: 2025-10-02 12:38:06.163685044 +0000 UTC m=+0.311275087 container init bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.164 2 DEBUG nova.compute.manager [req-40c3cf0e-18f6-4f70-ad8d-d60fdbf80080 req-b690dce6-f7f7-4c7c-a79d-4f4e99eb4aa6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.164 2 DEBUG oslo_concurrency.lockutils [req-40c3cf0e-18f6-4f70-ad8d-d60fdbf80080 req-b690dce6-f7f7-4c7c-a79d-4f4e99eb4aa6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.165 2 DEBUG oslo_concurrency.lockutils [req-40c3cf0e-18f6-4f70-ad8d-d60fdbf80080 req-b690dce6-f7f7-4c7c-a79d-4f4e99eb4aa6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.165 2 DEBUG oslo_concurrency.lockutils [req-40c3cf0e-18f6-4f70-ad8d-d60fdbf80080 req-b690dce6-f7f7-4c7c-a79d-4f4e99eb4aa6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.165 2 DEBUG nova.compute.manager [req-40c3cf0e-18f6-4f70-ad8d-d60fdbf80080 req-b690dce6-f7f7-4c7c-a79d-4f4e99eb4aa6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Processing event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.166 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.170 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:38:06 compute-0 podman[320363]: 2025-10-02 12:38:06.171259111 +0000 UTC m=+0.318849124 container start bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.175 2 INFO nova.virt.libvirt.driver [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Instance spawned successfully.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.175 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.179 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.182 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.192 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.192 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.193 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.193 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.194 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.194 2 DEBUG nova.virt.libvirt.driver [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [NOTICE]   (320382) : New worker (320384) forked
Oct 02 12:38:06 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [NOTICE]   (320382) : Loading success.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.204 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.204 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408686.1690822, 92e58b0c-6934-4931-8ae8-c1a631797f57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.205 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] VM Resumed (Lifecycle Event)
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.210 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.210 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.211 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.211 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.212 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.212 2 DEBUG nova.virt.libvirt.driver [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.242 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.246 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.251 2 DEBUG nova.compute.manager [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.252 2 DEBUG oslo_concurrency.lockutils [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.252 2 DEBUG oslo_concurrency.lockutils [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.252 2 DEBUG oslo_concurrency.lockutils [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.253 2 DEBUG nova.compute.manager [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.253 2 WARNING nova.compute.manager [req-f55ef1dd-2c0d-4feb-ae2e-82bd4a08e495 req-c3c2dfca-7ac8-4770-9f08-6fdcfa131b0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state None.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.275 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.288 2 INFO nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Took 10.35 seconds to spawn the instance on the hypervisor.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.289 2 DEBUG nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.293 2 INFO nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Took 11.04 seconds to spawn the instance on the hypervisor.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.293 2 DEBUG nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.369 2 INFO nova.compute.manager [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Took 11.50 seconds to build instance.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.373 2 INFO nova.compute.manager [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Took 12.31 seconds to build instance.
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.390 2 DEBUG oslo_concurrency.lockutils [None req-f049d829-ce91-4d98-8c33-1341a34e02c6 e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:06 compute-0 nova_compute[256940]: 2025-10-02 12:38:06.392 2 DEBUG oslo_concurrency.lockutils [None req-f1f0fc48-1130-4bfd-8c16-b8301a95435b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:06.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:38:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:38:07 compute-0 ceph-mon[73668]: pgmap v1856: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 259 op/s
Oct 02 12:38:07 compute-0 ovn_controller[148123]: 2025-10-02T12:38:07Z|00446|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:07 compute-0 ovn_controller[148123]: 2025-10-02T12:38:07Z|00447|binding|INFO|Releasing lport 0f6cfe52-3f15-4e90-b21e-85f85326e533 from this chassis (sb_readonly=0)
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Oct 02 12:38:07 compute-0 ovn_controller[148123]: 2025-10-02T12:38:07Z|00448|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:07 compute-0 ovn_controller[148123]: 2025-10-02T12:38:07Z|00449|binding|INFO|Releasing lport 0f6cfe52-3f15-4e90-b21e-85f85326e533 from this chassis (sb_readonly=0)
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.934 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.960 2 DEBUG oslo_concurrency.lockutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.961 2 DEBUG oslo_concurrency.lockutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.961 2 DEBUG nova.network.neutron [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:07 compute-0 nova_compute[256940]: 2025-10-02 12:38:07.961 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'info_cache' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.229 2 DEBUG nova.compute.manager [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.229 2 DEBUG oslo_concurrency.lockutils [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.230 2 DEBUG oslo_concurrency.lockutils [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.230 2 DEBUG oslo_concurrency.lockutils [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.230 2 DEBUG nova.compute.manager [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.230 2 WARNING nova.compute.manager [req-c182fed0-24cf-4849-a404-566e5a80e418 req-a6e3319a-feb6-4729-bdd9-d75aa2c66ccb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state None.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.328 2 DEBUG nova.compute.manager [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.329 2 DEBUG oslo_concurrency.lockutils [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.331 2 DEBUG oslo_concurrency.lockutils [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.332 2 DEBUG oslo_concurrency.lockutils [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.333 2 DEBUG nova.compute.manager [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] No waiting events found dispatching network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.334 2 WARNING nova.compute.manager [req-15d73843-ca53-4c9f-b2d8-b2a9c88786cd req-8bbb8215-d0a5-4235-b675-2cbe95b600fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received unexpected event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 for instance with vm_state active and task_state None.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.378 2 DEBUG nova.compute.manager [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.408 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.409 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.410 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.410 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.411 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.412 2 WARNING nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.413 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.414 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.415 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.416 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.416 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.417 2 WARNING nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.418 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.419 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.420 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.421 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.422 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.423 2 WARNING nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.423 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.424 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.425 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.426 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.426 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.428 2 WARNING nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.429 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.430 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.431 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.432 2 DEBUG oslo_concurrency.lockutils [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.433 2 DEBUG nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.434 2 WARNING nova.compute.manager [req-fe095d42-4b4e-4eb5-a40d-75b45d179333 req-44412cd6-40b9-45ec-af85-164ef79fe9c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.443 2 INFO nova.compute.manager [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] instance snapshotting
Oct 02 12:38:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:08.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:08 compute-0 nova_compute[256940]: 2025-10-02 12:38:08.684 2 INFO nova.virt.libvirt.driver [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Beginning live snapshot process
Oct 02 12:38:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:09 compute-0 ceph-mon[73668]: pgmap v1857: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Oct 02 12:38:09 compute-0 nova_compute[256940]: 2025-10-02 12:38:09.199 2 DEBUG nova.virt.libvirt.imagebackend [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:38:09 compute-0 NetworkManager[44981]: <info>  [1759408689.4326] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct 02 12:38:09 compute-0 NetworkManager[44981]: <info>  [1759408689.4337] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 02 12:38:09 compute-0 nova_compute[256940]: 2025-10-02 12:38:09.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:09 compute-0 nova_compute[256940]: 2025-10-02 12:38:09.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 185 op/s
Oct 02 12:38:09 compute-0 ovn_controller[148123]: 2025-10-02T12:38:09Z|00450|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:09 compute-0 ovn_controller[148123]: 2025-10-02T12:38:09Z|00451|binding|INFO|Releasing lport 0f6cfe52-3f15-4e90-b21e-85f85326e533 from this chassis (sb_readonly=0)
Oct 02 12:38:09 compute-0 nova_compute[256940]: 2025-10-02 12:38:09.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:09 compute-0 nova_compute[256940]: 2025-10-02 12:38:09.617 2 DEBUG nova.storage.rbd_utils [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(327cb8281f144f6399a719e2862fd271) on rbd image(1546ac1d-4a04-4c5e-ae02-b005461c7731_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:38:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 02 12:38:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 02 12:38:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.316 2 DEBUG nova.storage.rbd_utils [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk@327cb8281f144f6399a719e2862fd271 to images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.370 2 DEBUG nova.network.neutron [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.394 2 DEBUG oslo_concurrency.lockutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.421 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance destroyed successfully.
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.421 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.428 2 DEBUG nova.compute.manager [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-changed-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.429 2 DEBUG nova.compute.manager [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Refreshing instance network info cache due to event network-changed-85de21e0-dbc3-4791-baf5-1f9b78e35b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.429 2 DEBUG oslo_concurrency.lockutils [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.429 2 DEBUG oslo_concurrency.lockutils [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.430 2 DEBUG nova.network.neutron [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Refreshing network info cache for port 85de21e0-dbc3-4791-baf5-1f9b78e35b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.434 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.449 2 DEBUG nova.virt.libvirt.vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.449 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.450 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.451 2 DEBUG os_vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d0b513-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.465 2 DEBUG nova.storage.rbd_utils [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] flattening images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.514 2 INFO os_vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d')
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.524 2 DEBUG nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start _get_guest_xml network_info=[{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.544 2 WARNING nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.552 2 DEBUG nova.virt.libvirt.host [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.553 2 DEBUG nova.virt.libvirt.host [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.573 2 DEBUG nova.virt.libvirt.host [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.577 2 DEBUG nova.virt.libvirt.host [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.579 2 DEBUG nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.579 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.580 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.580 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.581 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.581 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.582 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.582 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.583 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.583 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.583 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.584 2 DEBUG nova.virt.hardware [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.584 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.601 2 DEBUG oslo_concurrency.processutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:10 compute-0 nova_compute[256940]: 2025-10-02 12:38:10.802 2 DEBUG nova.storage.rbd_utils [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] removing snapshot(327cb8281f144f6399a719e2862fd271) on rbd image(1546ac1d-4a04-4c5e-ae02-b005461c7731_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:38:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494670598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.084 2 DEBUG oslo_concurrency.processutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.125 2 DEBUG oslo_concurrency.processutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 353 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 818 KiB/s wr, 226 op/s
Oct 02 12:38:11 compute-0 ceph-mon[73668]: pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 185 op/s
Oct 02 12:38:11 compute-0 ceph-mon[73668]: osdmap e256: 3 total, 3 up, 3 in
Oct 02 12:38:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1494670598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763526026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.588 2 DEBUG oslo_concurrency.processutils [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.590 2 DEBUG nova.virt.libvirt.vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.591 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.592 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.593 2 DEBUG nova.objects.instance [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.616 2 DEBUG nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <uuid>a5e40085-6c27-4b57-96db-89ecc7ac2e48</uuid>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <name>instance-00000063</name>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestJSON-server-1852157817</nova:name>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:38:10</nova:creationTime>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <nova:port uuid="f6d0b513-9da0-4deb-a98c-8812c9ddd074">
Oct 02 12:38:11 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <system>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="serial">a5e40085-6c27-4b57-96db-89ecc7ac2e48</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="uuid">a5e40085-6c27-4b57-96db-89ecc7ac2e48</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </system>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <os>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </os>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <features>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </features>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk">
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a5e40085-6c27-4b57-96db-89ecc7ac2e48_disk.config">
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:11 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:49:53:e6"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <target dev="tapf6d0b513-9d"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48/console.log" append="off"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <video>
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </video>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:38:11 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:38:11 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:38:11 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:38:11 compute-0 nova_compute[256940]: </domain>
Oct 02 12:38:11 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.624 2 DEBUG nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.624 2 DEBUG nova.virt.libvirt.driver [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.625 2 DEBUG nova.virt.libvirt.vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.626 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.627 2 DEBUG nova.network.os_vif_util [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.627 2 DEBUG os_vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d0b513-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6d0b513-9d, col_values=(('external_ids', {'iface-id': 'f6d0b513-9da0-4deb-a98c-8812c9ddd074', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:53:e6', 'vm-uuid': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.6359] manager: (tapf6d0b513-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.641 2 INFO os_vif [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d')
Oct 02 12:38:11 compute-0 kernel: tapf6d0b513-9d: entered promiscuous mode
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.7343] manager: (tapf6d0b513-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 ovn_controller[148123]: 2025-10-02T12:38:11Z|00452|binding|INFO|Claiming lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 for this chassis.
Oct 02 12:38:11 compute-0 ovn_controller[148123]: 2025-10-02T12:38:11Z|00453|binding|INFO|f6d0b513-9da0-4deb-a98c-8812c9ddd074: Claiming fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.747 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '7', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.749 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.750 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:11 compute-0 systemd-udevd[320597]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:11 compute-0 ovn_controller[148123]: 2025-10-02T12:38:11Z|00454|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 ovn-installed in OVS
Oct 02 12:38:11 compute-0 ovn_controller[148123]: 2025-10-02T12:38:11Z|00455|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 up in Southbound
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.767 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e364e2ab-0d6c-45bf-ac31-7b9b067e0ddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.768 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:38:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.774 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.774 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c47a288-d28e-4049-bc58-a42b4fc3d203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.7749] device (tapf6d0b513-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.7762] device (tapf6d0b513-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.776 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9dd42f-4b27-4361-8938-6ea02c3b54c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 systemd-machined[210927]: New machine qemu-47-instance-00000063.
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.796 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3daeae-c14e-4129-9812-81a29ea909ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000063.
Oct 02 12:38:11 compute-0 nova_compute[256940]: 2025-10-02 12:38:11.816 2 DEBUG nova.storage.rbd_utils [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(snap) on rbd image(44da8c0b-0cdf-4949-9ff7-57e81c0d3d70) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.823 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0133b7-5231-48f7-ad06-16a0044b0257]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.866 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3c592112-c819-4c9d-b327-0b8f306d40fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.8821] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Oct 02 12:38:11 compute-0 systemd-udevd[320602]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.883 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83b7b27a-5c82-4391-8557-59f74db486c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.936 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbdf6dc-143a-4e01-ad27-61c8fd39051f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.940 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab083cc-e786-43a9-b87a-8942e479c191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:11 compute-0 NetworkManager[44981]: <info>  [1759408691.9722] device (tapf011efa4-00): carrier: link connected
Oct 02 12:38:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:11.979 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[91fe076c-7ae4-404b-98f5-c72aef09f477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.004 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc17bd54-25d2-4f8d-8a3e-8ab21bf6f2bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658745, 'reachable_time': 21669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320651, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.023 2 DEBUG nova.network.neutron [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updated VIF entry in instance network info cache for port 85de21e0-dbc3-4791-baf5-1f9b78e35b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.024 2 DEBUG nova.network.neutron [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updating instance_info_cache with network_info: [{"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.028 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ad7306-7eaf-4b48-9315-0949ba9a0aff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658745, 'tstamp': 658745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320652, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.048 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ab12d2ac-4aa6-4251-a5e9-090b23120d8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658745, 'reachable_time': 21669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320653, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.052 2 DEBUG oslo_concurrency.lockutils [req-aab93a0b-3e92-4e8f-836b-dd212fc82790 req-1df55069-d511-4b3d-ad85-04a8399f0924 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92e58b0c-6934-4931-8ae8-c1a631797f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.082 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[046e70e8-40f6-4a74-8151-e716f2f33c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fa2508-7c16-4e83-b388-c5e4d5b03a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.144 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.144 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.144 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 NetworkManager[44981]: <info>  [1759408692.1472] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Oct 02 12:38:12 compute-0 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.155 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 ovn_controller[148123]: 2025-10-02T12:38:12Z|00456|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.173 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.174 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6bca64-252e-4ceb-9f55-9043129ae7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.175 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:38:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:12.175 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.200 2 DEBUG nova.compute.manager [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.201 2 DEBUG oslo_concurrency.lockutils [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.201 2 DEBUG oslo_concurrency.lockutils [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.202 2 DEBUG oslo_concurrency.lockutils [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.203 2 DEBUG nova.compute.manager [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.203 2 WARNING nova.compute.manager [req-f4e27f90-9f5a-46a8-8350-2f9c6ab6176b req-0a27d491-052a-4b74-988a-60868851d1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:38:12 compute-0 ceph-mon[73668]: pgmap v1860: 305 pgs: 305 active+clean; 353 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 818 KiB/s wr, 226 op/s
Oct 02 12:38:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3763526026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:12 compute-0 ceph-mon[73668]: osdmap e257: 3 total, 3 up, 3 in
Oct 02 12:38:12 compute-0 podman[320726]: 2025-10-02 12:38:12.635326823 +0000 UTC m=+0.051330401 container create b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:38:12 compute-0 nova_compute[256940]: 2025-10-02 12:38:12.649 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:12.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:12 compute-0 systemd[1]: Started libpod-conmon-b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8.scope.
Oct 02 12:38:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:12 compute-0 podman[320726]: 2025-10-02 12:38:12.611057419 +0000 UTC m=+0.027061017 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f292e8181ff0ad574f905543dbd5c63f444c933d11d3ab2175d9fb77463cf165/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:12 compute-0 podman[320726]: 2025-10-02 12:38:12.725480686 +0000 UTC m=+0.141484264 container init b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:12 compute-0 podman[320726]: 2025-10-02 12:38:12.73367363 +0000 UTC m=+0.149677198 container start b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:38:12 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [NOTICE]   (320746) : New worker (320748) forked
Oct 02 12:38:12 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [NOTICE]   (320746) : Loading success.
Oct 02 12:38:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 02 12:38:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 02 12:38:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 02 12:38:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.042 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for a5e40085-6c27-4b57-96db-89ecc7ac2e48 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.043 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408693.0416424, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.043 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Resumed (Lifecycle Event)
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.046 2 DEBUG nova.compute.manager [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.049 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance rebooted successfully.
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.050 2 DEBUG nova.compute.manager [None req-504b8aa0-b4c6-4525-b548-89f4bec1fe3a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.066 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.071 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.105 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.106 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408693.04617, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.106 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Started (Lifecycle Event)
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.133 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:13 compute-0 nova_compute[256940]: 2025-10-02 12:38:13.137 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Oct 02 12:38:14 compute-0 ceph-mon[73668]: osdmap e258: 3 total, 3 up, 3 in
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.300 2 DEBUG nova.compute.manager [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.301 2 DEBUG oslo_concurrency.lockutils [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.301 2 DEBUG oslo_concurrency.lockutils [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.301 2 DEBUG oslo_concurrency.lockutils [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.302 2 DEBUG nova.compute.manager [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:14 compute-0 nova_compute[256940]: 2025-10-02 12:38:14.302 2 WARNING nova.compute.manager [req-3627a5c6-94cd-46ff-8f67-c59f3c3b9b23 req-a12d28d2-dffe-40e1-ad89-1a7d0c591180 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state active and task_state None.
Oct 02 12:38:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:15 compute-0 ceph-mon[73668]: pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Oct 02 12:38:15 compute-0 nova_compute[256940]: 2025-10-02 12:38:15.081 2 INFO nova.virt.libvirt.driver [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Snapshot image upload complete
Oct 02 12:38:15 compute-0 nova_compute[256940]: 2025-10-02 12:38:15.082 2 INFO nova.compute.manager [None req-6b4ee333-e970-47b4-9d17-05bfcdfa8e4c fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Took 6.64 seconds to snapshot the instance on the hypervisor.
Oct 02 12:38:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 387 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 3.6 MiB/s wr, 410 op/s
Oct 02 12:38:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:16 compute-0 nova_compute[256940]: 2025-10-02 12:38:16.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:16 compute-0 nova_compute[256940]: 2025-10-02 12:38:16.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.124 2 DEBUG nova.objects.instance [None req-7d9ad64f-bf85-47ed-97ed-19c1d0daed1f 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.151 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408697.1509242, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.151 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Paused (Lifecycle Event)
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.170 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.175 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.192 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.232 2 INFO nova.compute.manager [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Rescuing
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.233 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.233 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.234 2 DEBUG nova.network.neutron [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:17 compute-0 ceph-mon[73668]: pgmap v1864: 305 pgs: 305 active+clean; 387 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 3.6 MiB/s wr, 410 op/s
Oct 02 12:38:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.9 MiB/s wr, 374 op/s
Oct 02 12:38:17 compute-0 kernel: tapf6d0b513-9d (unregistering): left promiscuous mode
Oct 02 12:38:17 compute-0 NetworkManager[44981]: <info>  [1759408697.6134] device (tapf6d0b513-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:17 compute-0 ovn_controller[148123]: 2025-10-02T12:38:17Z|00457|binding|INFO|Releasing lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 from this chassis (sb_readonly=0)
Oct 02 12:38:17 compute-0 ovn_controller[148123]: 2025-10-02T12:38:17Z|00458|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 down in Southbound
Oct 02 12:38:17 compute-0 ovn_controller[148123]: 2025-10-02T12:38:17Z|00459|binding|INFO|Removing iface tapf6d0b513-9d ovn-installed in OVS
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.658 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.660 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.661 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.662 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3c527614-9433-4ac4-9754-4a7f27121be3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.663 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 02 12:38:17 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000063.scope: Consumed 5.224s CPU time.
Oct 02 12:38:17 compute-0 systemd-machined[210927]: Machine qemu-47-instance-00000063 terminated.
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.796 2 DEBUG nova.compute.manager [None req-7d9ad64f-bf85-47ed-97ed-19c1d0daed1f 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:17 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [NOTICE]   (320746) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:17 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [NOTICE]   (320746) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:17 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [WARNING]  (320746) : Exiting Master process...
Oct 02 12:38:17 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [ALERT]    (320746) : Current worker (320748) exited with code 143 (Terminated)
Oct 02 12:38:17 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[320742]: [WARNING]  (320746) : All workers exited. Exiting... (0)
Oct 02 12:38:17 compute-0 systemd[1]: libpod-b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8.scope: Deactivated successfully.
Oct 02 12:38:17 compute-0 conmon[320742]: conmon b0d388546812b3f98c55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8.scope/container/memory.events
Oct 02 12:38:17 compute-0 podman[320786]: 2025-10-02 12:38:17.81058954 +0000 UTC m=+0.053370014 container died b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f292e8181ff0ad574f905543dbd5c63f444c933d11d3ab2175d9fb77463cf165-merged.mount: Deactivated successfully.
Oct 02 12:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:17 compute-0 podman[320786]: 2025-10-02 12:38:17.855522373 +0000 UTC m=+0.098302847 container cleanup b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:38:17 compute-0 systemd[1]: libpod-conmon-b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8.scope: Deactivated successfully.
Oct 02 12:38:17 compute-0 podman[320826]: 2025-10-02 12:38:17.919796641 +0000 UTC m=+0.043034335 container remove b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.925 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d326938-c4fb-40ad-bacd-c5e4e6a3ad65]: (4, ('Thu Oct  2 12:38:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8)\nb0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8\nThu Oct  2 12:38:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (b0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8)\nb0d388546812b3f98c55b03a7ff22fab9548d0e77b6ac31a3d9da0b6add8e7e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.927 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[639d0ff9-6a4f-4794-9838-f87093e996e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.928 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 nova_compute[256940]: 2025-10-02 12:38:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.953 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6b385a-95a6-46da-8014-b08f202227d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.975 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b66823ec-d32f-42ea-a10a-37484b2638c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.977 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[15d08383-7177-4f50-87fa-56c98a89c9eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:17.999 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[79c395ba-b90b-499f-8e7a-56e04b9c8459]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658734, 'reachable_time': 32861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320845, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:18 compute-0 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:38:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:18.004 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:18.004 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d878d6-eef7-4749-a6de-6cc0246c3cd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.171 2 DEBUG nova.compute.manager [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.171 2 DEBUG oslo_concurrency.lockutils [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.171 2 DEBUG oslo_concurrency.lockutils [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.172 2 DEBUG oslo_concurrency.lockutils [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.172 2 DEBUG nova.compute.manager [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.172 2 WARNING nova.compute.manager [req-8a531228-c39c-4b2f-8dbc-6a280f22d946 req-6cd7bc9a-3446-4b85-907b-b4830c2e1e91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state suspended and task_state None.
Oct 02 12:38:18 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.558 2 DEBUG nova.network.neutron [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.577 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:18 compute-0 nova_compute[256940]: 2025-10-02 12:38:18.836 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:38:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:18.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:19 compute-0 sudo[320847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:19 compute-0 sudo[320847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:19 compute-0 sudo[320847]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:19 compute-0 podman[320872]: 2025-10-02 12:38:19.093818409 +0000 UTC m=+0.057422171 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 12:38:19 compute-0 sudo[320884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:19 compute-0 sudo[320884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:19 compute-0 sudo[320884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:19 compute-0 podman[320871]: 2025-10-02 12:38:19.129817498 +0000 UTC m=+0.098285646 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 12:38:19 compute-0 ceph-mon[73668]: pgmap v1865: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.9 MiB/s wr, 374 op/s
Oct 02 12:38:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/666265248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Oct 02 12:38:19 compute-0 nova_compute[256940]: 2025-10-02 12:38:19.747 2 INFO nova.compute.manager [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Resuming
Oct 02 12:38:19 compute-0 nova_compute[256940]: 2025-10-02 12:38:19.749 2 DEBUG nova.objects.instance [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:19 compute-0 nova_compute[256940]: 2025-10-02 12:38:19.792 2 DEBUG oslo_concurrency.lockutils [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:19 compute-0 nova_compute[256940]: 2025-10-02 12:38:19.793 2 DEBUG oslo_concurrency.lockutils [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:19 compute-0 nova_compute[256940]: 2025-10-02 12:38:19.793 2 DEBUG nova.network.neutron [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.257 2 DEBUG nova.compute.manager [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.258 2 DEBUG oslo_concurrency.lockutils [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.258 2 DEBUG oslo_concurrency.lockutils [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.258 2 DEBUG oslo_concurrency.lockutils [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.258 2 DEBUG nova.compute.manager [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.259 2 WARNING nova.compute.manager [req-446531d7-ab7e-4fa7-ac91-caedafe6bd5b req-a196d1f9-1ecf-4078-b7eb-f287d5f68820 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state suspended and task_state resuming.
Oct 02 12:38:20 compute-0 ceph-mon[73668]: pgmap v1866: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Oct 02 12:38:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:38:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:38:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:20.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:20 compute-0 nova_compute[256940]: 2025-10-02 12:38:20.981 2 DEBUG nova.network.neutron [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [{"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.012 2 DEBUG oslo_concurrency.lockutils [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a5e40085-6c27-4b57-96db-89ecc7ac2e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.016 2 DEBUG nova.virt.libvirt.vif [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.017 2 DEBUG nova.network.os_vif_util [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.017 2 DEBUG nova.network.os_vif_util [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.018 2 DEBUG os_vif [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d0b513-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6d0b513-9d, col_values=(('external_ids', {'iface-id': 'f6d0b513-9da0-4deb-a98c-8812c9ddd074', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:53:e6', 'vm-uuid': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.022 2 INFO os_vif [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d')
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.045 2 DEBUG nova.objects.instance [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-0 kernel: tapf6d0b513-9d: entered promiscuous mode
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.1221] manager: (tapf6d0b513-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct 02 12:38:21 compute-0 ovn_controller[148123]: 2025-10-02T12:38:21Z|00460|binding|INFO|Claiming lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 for this chassis.
Oct 02 12:38:21 compute-0 ovn_controller[148123]: 2025-10-02T12:38:21Z|00461|binding|INFO|f6d0b513-9da0-4deb-a98c-8812c9ddd074: Claiming fa:16:3e:49:53:e6 10.100.0.10
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.132 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.133 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.135 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:21 compute-0 ovn_controller[148123]: 2025-10-02T12:38:21Z|00462|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 ovn-installed in OVS
Oct 02 12:38:21 compute-0 ovn_controller[148123]: 2025-10-02T12:38:21Z|00463|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 up in Southbound
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.145 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b6399eed-b1c6-48b0-a42d-6b1513de3cc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.147 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.149 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.149 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a850b3c4-8878-4b81-b661-79069e04c944]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.149 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[721e9d6f-1d7c-45f7-a05d-f9e72a119c15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.160 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9911c645-c676-4fa4-8c53-610738587a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 systemd-machined[210927]: New machine qemu-48-instance-00000063.
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.175 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[19110077-bc43-4fbb-9946-70ed43b4c67a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-00000063.
Oct 02 12:38:21 compute-0 systemd-udevd[320962]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.200 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad2acd0-5d57-4bda-8d2d-15483639ddf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.205 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7481f1ed-52ab-4a1e-876a-21b1ad4d101b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.2064] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Oct 02 12:38:21 compute-0 systemd-udevd[320964]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.2122] device (tapf6d0b513-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.2128] device (tapf6d0b513-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.239 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1848d8-c6c2-450e-a835-ef36fb0e3ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.242 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e68c8ebf-fd67-4d44-bc2f-a408ef18e333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.2713] device (tapf011efa4-00): carrier: link connected
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.275 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[755f27cf-a9a4-4e62-b07e-a5212ed44a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.294 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc20fad-362f-4d30-a6de-e5cfdb0525b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659675, 'reachable_time': 35200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320990, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.311 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4b808137-58c4-45fc-a33b-2ac63a4ebc4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659675, 'tstamp': 659675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320991, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.333 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf44e74-48ce-4a0c-9f93-3ca081341a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659675, 'reachable_time': 35200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320992, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.379 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82fdf640-3acc-4d45-a5e5-a929ec82ccf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.449 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f71343-f8b7-4f84-8505-d9977eba551f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.452 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.453 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.453 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 NetworkManager[44981]: <info>  [1759408701.4558] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 02 12:38:21 compute-0 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.458 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_controller[148123]: 2025-10-02T12:38:21Z|00464|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.463 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.464 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcee9d9-653f-4385-8687-748f059eb73b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.465 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:38:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:21.466 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 410 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 206 op/s
Oct 02 12:38:21 compute-0 nova_compute[256940]: 2025-10-02 12:38:21.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:21 compute-0 podman[321065]: 2025-10-02 12:38:21.877600868 +0000 UTC m=+0.070096561 container create 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:38:21 compute-0 systemd[1]: Started libpod-conmon-995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c.scope.
Oct 02 12:38:21 compute-0 podman[321065]: 2025-10-02 12:38:21.833181478 +0000 UTC m=+0.025677201 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:38:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84a5ba15028e5fa2c4a26dce5e9a40827b833a69485d0f0cb4b24c888e1349c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:22 compute-0 podman[321065]: 2025-10-02 12:38:22.01865114 +0000 UTC m=+0.211146853 container init 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:38:22 compute-0 ovn_controller[148123]: 2025-10-02T12:38:22Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:1e:42 10.100.0.3
Oct 02 12:38:22 compute-0 ovn_controller[148123]: 2025-10-02T12:38:22Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:1e:42 10.100.0.3
Oct 02 12:38:22 compute-0 podman[321065]: 2025-10-02 12:38:22.02556843 +0000 UTC m=+0.218064123 container start 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:38:22 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [NOTICE]   (321087) : New worker (321089) forked
Oct 02 12:38:22 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [NOTICE]   (321087) : Loading success.
Oct 02 12:38:22 compute-0 ceph-mon[73668]: osdmap e259: 3 total, 3 up, 3 in
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.353 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for a5e40085-6c27-4b57-96db-89ecc7ac2e48 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.355 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408702.3533337, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.355 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Started (Lifecycle Event)
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.377 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.387 2 DEBUG nova.compute.manager [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.388 2 DEBUG nova.objects.instance [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.391 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.406 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance running successfully.
Oct 02 12:38:22 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.410 2 DEBUG nova.virt.libvirt.guest [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.411 2 DEBUG nova.compute.manager [None req-3285bd76-9946-4657-8dbe-61ddd575ece5 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.420 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.421 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408702.3599691, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.421 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Resumed (Lifecycle Event)
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.445 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:22 compute-0 nova_compute[256940]: 2025-10-02 12:38:22.451 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:22.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.051 2 DEBUG nova.compute.manager [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.051 2 DEBUG oslo_concurrency.lockutils [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.052 2 DEBUG oslo_concurrency.lockutils [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.052 2 DEBUG oslo_concurrency.lockutils [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.052 2 DEBUG nova.compute.manager [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:23 compute-0 nova_compute[256940]: 2025-10-02 12:38:23.053 2 WARNING nova.compute.manager [req-9acf56d2-29fa-4b64-9e9c-27ae4cc98482 req-92a3d366-ba4a-43cd-8bad-b9c6c6a0c33e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state active and task_state None.
Oct 02 12:38:23 compute-0 ceph-mon[73668]: pgmap v1868: 305 pgs: 305 active+clean; 410 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 206 op/s
Oct 02 12:38:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2278695610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:23 compute-0 ovn_controller[148123]: 2025-10-02T12:38:23Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:23 compute-0 ovn_controller[148123]: 2025-10-02T12:38:23Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 449 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 02 12:38:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1185421420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:24.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:38:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.161 2 DEBUG nova.compute.manager [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.161 2 DEBUG oslo_concurrency.lockutils [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.162 2 DEBUG oslo_concurrency.lockutils [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.162 2 DEBUG oslo_concurrency.lockutils [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.162 2 DEBUG nova.compute.manager [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:25 compute-0 nova_compute[256940]: 2025-10-02 12:38:25.162 2 WARNING nova.compute.manager [req-339880a0-6a9d-4554-a1a1-869971c5fc55 req-1d0650d6-d104-44f1-958a-9c5bc2e1eb1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state active and task_state None.
Oct 02 12:38:25 compute-0 ceph-mon[73668]: pgmap v1869: 305 pgs: 305 active+clean; 449 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 02 12:38:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 471 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 223 op/s
Oct 02 12:38:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.473 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.474 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.474 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:26 compute-0 ceph-mon[73668]: pgmap v1870: 305 pgs: 305 active+clean; 471 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 223 op/s
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.891 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.892 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.892 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.892 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.892 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.893 2 INFO nova.compute.manager [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Terminating instance
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.894 2 DEBUG nova.compute.manager [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:38:26 compute-0 kernel: tapf6d0b513-9d (unregistering): left promiscuous mode
Oct 02 12:38:26 compute-0 NetworkManager[44981]: <info>  [1759408706.9312] device (tapf6d0b513-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:26 compute-0 ovn_controller[148123]: 2025-10-02T12:38:26Z|00465|binding|INFO|Releasing lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 from this chassis (sb_readonly=0)
Oct 02 12:38:26 compute-0 ovn_controller[148123]: 2025-10-02T12:38:26Z|00466|binding|INFO|Setting lport f6d0b513-9da0-4deb-a98c-8812c9ddd074 down in Southbound
Oct 02 12:38:26 compute-0 ovn_controller[148123]: 2025-10-02T12:38:26Z|00467|binding|INFO|Removing iface tapf6d0b513-9d ovn-installed in OVS
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.949 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:e6 10.100.0.10'], port_security=['fa:16:3e:49:53:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5e40085-6c27-4b57-96db-89ecc7ac2e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '10', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6d0b513-9da0-4deb-a98c-8812c9ddd074) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.951 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6d0b513-9da0-4deb-a98c-8812c9ddd074 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.952 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.953 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f17c23-89b0-44f2-8ca3-5ffeb00f9106]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:26.954 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:38:26 compute-0 nova_compute[256940]: 2025-10-02 12:38:26.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:26.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:27 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 02 12:38:27 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000063.scope: Consumed 5.416s CPU time.
Oct 02 12:38:27 compute-0 systemd-machined[210927]: Machine qemu-48-instance-00000063 terminated.
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [NOTICE]   (321087) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [NOTICE]   (321087) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [WARNING]  (321087) : Exiting Master process...
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [WARNING]  (321087) : Exiting Master process...
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [ALERT]    (321087) : Current worker (321089) exited with code 143 (Terminated)
Oct 02 12:38:27 compute-0 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[321083]: [WARNING]  (321087) : All workers exited. Exiting... (0)
Oct 02 12:38:27 compute-0 systemd[1]: libpod-995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c.scope: Deactivated successfully.
Oct 02 12:38:27 compute-0 podman[321124]: 2025-10-02 12:38:27.096970667 +0000 UTC m=+0.043890726 container died 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d84a5ba15028e5fa2c4a26dce5e9a40827b833a69485d0f0cb4b24c888e1349c-merged.mount: Deactivated successfully.
Oct 02 12:38:27 compute-0 podman[321124]: 2025-10-02 12:38:27.141700025 +0000 UTC m=+0.088620094 container cleanup 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.141 2 INFO nova.virt.libvirt.driver [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Instance destroyed successfully.
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.143 2 DEBUG nova.objects.instance [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a5e40085-6c27-4b57-96db-89ecc7ac2e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:27 compute-0 systemd[1]: libpod-conmon-995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c.scope: Deactivated successfully.
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.160 2 DEBUG nova.virt.libvirt.vif [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1852157817',display_name='tempest-ServerActionsTestJSON-server-1852157817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1852157817',id=99,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-e22q2mba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a5e40085-6c27-4b57-96db-89ecc7ac2e48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.162 2 DEBUG nova.network.os_vif_util [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "address": "fa:16:3e:49:53:e6", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d0b513-9d", "ovs_interfaceid": "f6d0b513-9da0-4deb-a98c-8812c9ddd074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.163 2 DEBUG nova.network.os_vif_util [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.163 2 DEBUG os_vif [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d0b513-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.173 2 INFO os_vif [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:e6,bridge_name='br-int',has_traffic_filtering=True,id=f6d0b513-9da0-4deb-a98c-8812c9ddd074,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d0b513-9d')
Oct 02 12:38:27 compute-0 podman[321160]: 2025-10-02 12:38:27.22961288 +0000 UTC m=+0.055229103 container remove 995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7e89044f-dba4-4eda-b9f1-8b2c92f02afb]: (4, ('Thu Oct  2 12:38:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c)\n995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c\nThu Oct  2 12:38:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c)\n995192ae1897e47c89a882d146d608bf44457c9a5eec652c9dfa69b8dcbd395c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.241 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[caa0dfc5-3e55-4fd9-addb-b8fe9b02246e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.242 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:27 compute-0 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.250 2 DEBUG nova.compute.manager [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.251 2 DEBUG oslo_concurrency.lockutils [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.251 2 DEBUG oslo_concurrency.lockutils [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.252 2 DEBUG oslo_concurrency.lockutils [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.252 2 DEBUG nova.compute.manager [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.252 2 DEBUG nova.compute.manager [req-eb5b00ed-fc57-484a-b7f2-878d62a85747 req-108d4ade-4826-4a0d-a3f2-a713e49d0a6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-unplugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:38:27 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.264 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd725a23-9731-4131-8d26-1e991127f552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.288 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3079d305-33e3-4f85-9e8f-e1a58db74bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.289 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9770e3-03ab-4752-acda-b56b060bb0a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.305 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[785dd155-f9ed-4ff4-bb03-5511cb5a0c25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659668, 'reachable_time': 28359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321193, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.307 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:27.307 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[7f58f888-72aa-4ce3-9dea-7cfc2df30b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:27.999 2 INFO nova.virt.libvirt.driver [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Deleting instance files /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48_del
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.002 2 INFO nova.virt.libvirt.driver [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Deletion of /var/lib/nova/instances/a5e40085-6c27-4b57-96db-89ecc7ac2e48_del complete
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.077 2 INFO nova.compute.manager [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Took 1.18 seconds to destroy the instance on the hypervisor.
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.078 2 DEBUG oslo.service.loopingcall [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.079 2 DEBUG nova.compute.manager [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.079 2 DEBUG nova.network.neutron [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:28 compute-0 ceph-mon[73668]: pgmap v1871: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:28.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:38:28
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'vms', 'volumes', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images']
Oct 02 12:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:38:28 compute-0 nova_compute[256940]: 2025-10-02 12:38:28.892 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:38:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:28.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.220 2 DEBUG nova.network.neutron [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.256 2 INFO nova.compute.manager [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Took 1.18 seconds to deallocate network for instance.
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.317 2 DEBUG nova.compute.manager [req-a888f349-78da-44c6-ade7-fd3aaaddfcf1 req-eea581bb-c5c6-4fa7-9c66-527b43fc960f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-deleted-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.329 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.330 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.346 2 DEBUG nova.compute.manager [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.347 2 DEBUG oslo_concurrency.lockutils [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.348 2 DEBUG oslo_concurrency.lockutils [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.348 2 DEBUG oslo_concurrency.lockutils [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.349 2 DEBUG nova.compute.manager [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] No waiting events found dispatching network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.349 2 WARNING nova.compute.manager [req-0a82efc7-a027-4015-986d-dd9ef53e20ea req-e9a79897-32d7-42ab-9d5b-0ff70fea76b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Received unexpected event network-vif-plugged-f6d0b513-9da0-4deb-a98c-8812c9ddd074 for instance with vm_state deleted and task_state None.
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.421 2 DEBUG oslo_concurrency.processutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566839075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.911 2 DEBUG oslo_concurrency.processutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.918 2 DEBUG nova.compute.provider_tree [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.935 2 DEBUG nova.scheduler.client.report [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:29 compute-0 nova_compute[256940]: 2025-10-02 12:38:29.978 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:30 compute-0 nova_compute[256940]: 2025-10-02 12:38:30.039 2 INFO nova.scheduler.client.report [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance a5e40085-6c27-4b57-96db-89ecc7ac2e48
Oct 02 12:38:30 compute-0 nova_compute[256940]: 2025-10-02 12:38:30.138 2 DEBUG oslo_concurrency.lockutils [None req-5f302ac9-4462-4bdf-bf25-b9da79578555 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a5e40085-6c27-4b57-96db-89ecc7ac2e48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:30 compute-0 ceph-mon[73668]: pgmap v1872: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2566839075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:31 compute-0 nova_compute[256940]: 2025-10-02 12:38:31.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 437 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Oct 02 12:38:31 compute-0 nova_compute[256940]: 2025-10-02 12:38:31.910 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance shutdown successfully after 13 seconds.
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 kernel: tap422859c2-b5 (unregistering): left promiscuous mode
Oct 02 12:38:32 compute-0 NetworkManager[44981]: <info>  [1759408712.3017] device (tap422859c2-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 ovn_controller[148123]: 2025-10-02T12:38:32Z|00468|binding|INFO|Releasing lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 from this chassis (sb_readonly=0)
Oct 02 12:38:32 compute-0 ovn_controller[148123]: 2025-10-02T12:38:32Z|00469|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 down in Southbound
Oct 02 12:38:32 compute-0 ovn_controller[148123]: 2025-10-02T12:38:32Z|00470|binding|INFO|Removing iface tap422859c2-b5 ovn-installed in OVS
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.322 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.324 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.326 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 494beff4-7fba-4749-8998-3432c91ac5d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.327 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f08a80-5658-4669-b7db-7475d703df13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.329 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace which is not needed anymore
Oct 02 12:38:32 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 02 12:38:32 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000065.scope: Consumed 15.232s CPU time.
Oct 02 12:38:32 compute-0 systemd-machined[210927]: Machine qemu-45-instance-00000065 terminated.
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [NOTICE]   (320147) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [NOTICE]   (320147) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [WARNING]  (320147) : Exiting Master process...
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [WARNING]  (320147) : Exiting Master process...
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [ALERT]    (320147) : Current worker (320167) exited with code 143 (Terminated)
Oct 02 12:38:32 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[320135]: [WARNING]  (320147) : All workers exited. Exiting... (0)
Oct 02 12:38:32 compute-0 systemd[1]: libpod-82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153.scope: Deactivated successfully.
Oct 02 12:38:32 compute-0 podman[321242]: 2025-10-02 12:38:32.463673392 +0000 UTC m=+0.045685244 container died 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cb20a4e42920d5751a52b460d643dd07bed1e493a2100524c74f3c288f3b3ba-merged.mount: Deactivated successfully.
Oct 02 12:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:32 compute-0 podman[321242]: 2025-10-02 12:38:32.500837892 +0000 UTC m=+0.082849744 container cleanup 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:38:32 compute-0 systemd[1]: libpod-conmon-82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153.scope: Deactivated successfully.
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.550 2 INFO nova.virt.libvirt.driver [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance destroyed successfully.
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.551 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:32 compute-0 podman[321289]: 2025-10-02 12:38:32.579873375 +0000 UTC m=+0.058553549 container remove 82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:38:32 compute-0 podman[321258]: 2025-10-02 12:38:32.581837657 +0000 UTC m=+0.097028754 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:38:32 compute-0 podman[321266]: 2025-10-02 12:38:32.583779747 +0000 UTC m=+0.096955482 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.580 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Attempting a stable device rescue
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.587 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a5567f-fb32-4969-9b33-203e93862e71]: (4, ('Thu Oct  2 12:38:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153)\n82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153\nThu Oct  2 12:38:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153)\n82b1fdbf0b735accab893021732621487af695fa7b3176fd40c799ba6ec09153\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.592 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[42175685-f963-4664-9e06-5b282fb70497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.595 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 kernel: tap494beff4-70: left promiscuous mode
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.620 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23f6aace-067f-45f8-bd8b-c22aa8b6d9c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.666 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d5966b66-f807-428d-866f-ef828ad27211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.668 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[38b5ad2d-53bd-4695-b6a9-e9a2693f153c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.682 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e504e0-9a3e-45ff-91dc-322fc9424794]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657911, 'reachable_time': 33535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321339, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.685 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d494beff4\x2d7fba\x2d4749\x2d8998\x2d3432c91ac5d2.mount: Deactivated successfully.
Oct 02 12:38:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:32.685 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ed58b274-2187-4727-b9a5-2f9d9e5e3b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:32 compute-0 ceph-mon[73668]: pgmap v1873: 305 pgs: 305 active+clean; 437 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.981 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:38:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.987 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:38:32 compute-0 nova_compute[256940]: 2025-10-02 12:38:32.988 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Creating image(s)
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.019 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.024 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.080 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.103 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.106 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "a085b69cdd6b3c50e0ed58989ae9b7e3c4c71d3e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.106 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "a085b69cdd6b3c50e0ed58989ae9b7e3c4c71d3e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.269 2 DEBUG nova.compute.manager [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.269 2 DEBUG oslo_concurrency.lockutils [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.270 2 DEBUG oslo_concurrency.lockutils [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.271 2 DEBUG oslo_concurrency.lockutils [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.271 2 DEBUG nova.compute.manager [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.271 2 WARNING nova.compute.manager [req-cc8ec1ec-3cef-48d5-a774-3a9423660325 req-98d21945-01a2-48a6-8e43-7eb458757feb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state rescuing.
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.471 2 DEBUG nova.virt.libvirt.imagebackend [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:38:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.552 2 DEBUG nova.virt.libvirt.imagebackend [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.553 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning images/44da8c0b-0cdf-4949-9ff7-57e81c0d3d70@snap to None/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.749 2 DEBUG oslo_concurrency.lockutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "a085b69cdd6b3c50e0ed58989ae9b7e3c4c71d3e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.800 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.834 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.837 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start _get_guest_xml network_info=[{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:51:2f:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '44da8c0b-0cdf-4949-9ff7-57e81c0d3d70', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.837 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.861 2 WARNING nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.866 2 DEBUG nova.virt.libvirt.host [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.867 2 DEBUG nova.virt.libvirt.host [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.872 2 DEBUG nova.virt.libvirt.host [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.873 2 DEBUG nova.virt.libvirt.host [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.874 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.875 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.875 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.876 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.876 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.876 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.876 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.877 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.877 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.877 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.878 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.878 2 DEBUG nova.virt.hardware [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.878 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:33 compute-0 nova_compute[256940]: 2025-10-02 12:38:33.894 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755282525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:34 compute-0 nova_compute[256940]: 2025-10-02 12:38:34.346 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:34 compute-0 nova_compute[256940]: 2025-10-02 12:38:34.400 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557882276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:34 compute-0 nova_compute[256940]: 2025-10-02 12:38:34.868 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:34 compute-0 nova_compute[256940]: 2025-10-02 12:38:34.869 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:34 compute-0 ceph-mon[73668]: pgmap v1874: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Oct 02 12:38:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1755282525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2557882276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815357803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.306 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.310 2 DEBUG nova.virt.libvirt.vif [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185783620',display_name='tempest-ServerStableDeviceRescueTest-server-185783620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185783620',id=101,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:38:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-300aqtzl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:15Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=1546ac1d-4a04-4c5e-ae02-b005461c7731,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:51:2f:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.311 2 DEBUG nova.network.os_vif_util [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:51:2f:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.312 2 DEBUG nova.network.os_vif_util [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.314 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.333 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <uuid>1546ac1d-4a04-4c5e-ae02-b005461c7731</uuid>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <name>instance-00000065</name>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-185783620</nova:name>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:38:33</nova:creationTime>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <nova:port uuid="422859c2-b5cd-467a-85cf-ff82d92d7d87">
Oct 02 12:38:35 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <system>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="serial">1546ac1d-4a04-4c5e-ae02-b005461c7731</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="uuid">1546ac1d-4a04-4c5e-ae02-b005461c7731</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </system>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <os>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </os>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <features>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </features>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.rescue">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:38:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <target dev="sdb" bus="usb"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <boot order="1"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:51:2f:ac"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <target dev="tap422859c2-b5"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/console.log" append="off"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <video>
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </video>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:38:35 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:38:35 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:38:35 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:38:35 compute-0 nova_compute[256940]: </domain>
Oct 02 12:38:35 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.346 2 INFO nova.virt.libvirt.driver [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance destroyed successfully.
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.399 2 DEBUG nova.compute.manager [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.400 2 DEBUG oslo_concurrency.lockutils [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.401 2 DEBUG oslo_concurrency.lockutils [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.401 2 DEBUG oslo_concurrency.lockutils [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.401 2 DEBUG nova.compute.manager [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.402 2 WARNING nova.compute.manager [req-d5872229-32dd-4b54-b133-6fa63060941b req-d48f16bf-eff1-4177-ad2e-e0a1cf94444d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state rescuing.
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.415 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.415 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.416 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.416 2 DEBUG nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:51:2f:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.417 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Using config drive
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.455 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.478 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:35 compute-0 nova_compute[256940]: 2025-10-02 12:38:35.520 2 DEBUG nova.objects.instance [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'keypairs' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Oct 02 12:38:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/815357803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.275 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Creating config drive at /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.281 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj16220a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.419 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj16220a" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.445 2 DEBUG nova.storage.rbd_utils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.449 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.651 2 DEBUG oslo_concurrency.processutils [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue 1546ac1d-4a04-4c5e-ae02-b005461c7731_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.653 2 INFO nova.virt.libvirt.driver [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Deleting local config drive /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731/disk.config.rescue because it was imported into RBD.
Oct 02 12:38:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:36 compute-0 kernel: tap422859c2-b5: entered promiscuous mode
Oct 02 12:38:36 compute-0 NetworkManager[44981]: <info>  [1759408716.7143] manager: (tap422859c2-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Oct 02 12:38:36 compute-0 systemd-udevd[321636]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:36 compute-0 ovn_controller[148123]: 2025-10-02T12:38:36Z|00471|binding|INFO|Claiming lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 for this chassis.
Oct 02 12:38:36 compute-0 ovn_controller[148123]: 2025-10-02T12:38:36Z|00472|binding|INFO|422859c2-b5cd-467a-85cf-ff82d92d7d87: Claiming fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.759 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.762 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.764 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:36 compute-0 NetworkManager[44981]: <info>  [1759408716.7736] device (tap422859c2-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:36 compute-0 NetworkManager[44981]: <info>  [1759408716.7745] device (tap422859c2-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:36 compute-0 ovn_controller[148123]: 2025-10-02T12:38:36Z|00473|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 ovn-installed in OVS
Oct 02 12:38:36 compute-0 ovn_controller[148123]: 2025-10-02T12:38:36Z|00474|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 up in Southbound
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.777 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3d94cf-916d-41ec-8d24-803584bcc354]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.778 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap494beff4-71 in ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:38:36 compute-0 nova_compute[256940]: 2025-10-02 12:38:36.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.781 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap494beff4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.782 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[683fd778-67bb-4e06-8d3e-2c52a5801cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.782 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f40fc780-db59-4f2d-b4e7-cf053b2bee00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 systemd-machined[210927]: New machine qemu-49-instance-00000065.
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.799 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfabc51-412b-45a4-a274-5fcdcac74675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-00000065.
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.820 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dc89f50b-fd5c-40a5-aa03-0a066304fdc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.855 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1a56e8-8703-46ed-817a-9790a73b4eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 NetworkManager[44981]: <info>  [1759408716.8646] manager: (tap494beff4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/214)
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.864 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37fefd72-6dfa-40e3-b400-4211ca1bf0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.902 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bfac1a89-b22d-497b-b76c-70cafa2ba0cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.905 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1b10807e-b958-4438-85d3-2ac7da405ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 NetworkManager[44981]: <info>  [1759408716.9297] device (tap494beff4-70): carrier: link connected
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.935 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a34acc8f-c94d-4fdd-ac68-da417e0f7a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.958 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[798f1e65-d5f4-48b8-8b66-72ea6f4c09ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 20256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321672, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.974 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[245f606b-c273-4d12-9832-40901189724f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4a01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661241, 'tstamp': 661241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321673, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:36 compute-0 ceph-mon[73668]: pgmap v1875: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Oct 02 12:38:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:36.999 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[545589bb-6dec-4945-b750-abf3d5f23bbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 20256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321674, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.038 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a48b8899-8a34-4af5-bb9d-af80e0d1319e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.101 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e08ef1-fc63-45f4-bfe1-1451026c87b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.103 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.103 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.103 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 NetworkManager[44981]: <info>  [1759408717.1057] manager: (tap494beff4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Oct 02 12:38:37 compute-0 kernel: tap494beff4-70: entered promiscuous mode
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.111 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 ovn_controller[148123]: 2025-10-02T12:38:37Z|00475|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.150 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.151 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0a952a14-bc79-4af8-8ed0-06f939f235ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.152 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:38:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:37.154 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'env', 'PROCESS_TAG=haproxy-494beff4-7fba-4749-8998-3432c91ac5d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/494beff4-7fba-4749-8998-3432c91ac5d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 653 KiB/s wr, 166 op/s
Oct 02 12:38:37 compute-0 podman[321707]: 2025-10-02 12:38:37.58949673 +0000 UTC m=+0.058469898 container create 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:38:37 compute-0 systemd[1]: Started libpod-conmon-77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43.scope.
Oct 02 12:38:37 compute-0 podman[321707]: 2025-10-02 12:38:37.561504479 +0000 UTC m=+0.030477677 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:38:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b79d83966034ee8b76bccc2345b269744517dfdfd16133f2b6e0e75d9aabd75/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:37 compute-0 podman[321707]: 2025-10-02 12:38:37.71786008 +0000 UTC m=+0.186833288 container init 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:38:37 compute-0 podman[321707]: 2025-10-02 12:38:37.725411028 +0000 UTC m=+0.194384236 container start 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:37 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [NOTICE]   (321780) : New worker (321784) forked
Oct 02 12:38:37 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [NOTICE]   (321780) : Loading success.
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.769 2 DEBUG nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.770 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.770 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.771 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.771 2 DEBUG nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.772 2 WARNING nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state rescuing.
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.772 2 DEBUG nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.773 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.773 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.774 2 DEBUG oslo_concurrency.lockutils [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.774 2 DEBUG nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:37 compute-0 nova_compute[256940]: 2025-10-02 12:38:37.774 2 WARNING nova.compute.manager [req-98d41713-2a1c-4686-9525-3c073e22ece0 req-8b396745-a214-4ec2-bce3-c22761c8e962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state rescuing.
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.267 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 1546ac1d-4a04-4c5e-ae02-b005461c7731 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.268 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408718.2666745, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.268 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Resumed (Lifecycle Event)
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.276 2 DEBUG nova.compute.manager [None req-5b26a8c8-3c68-48c2-a513-9c5d0f12008d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.307 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.310 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.346 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.347 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408718.2708755, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.347 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Started (Lifecycle Event)
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.369 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:38 compute-0 nova_compute[256940]: 2025-10-02 12:38:38.372 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:39 compute-0 ceph-mon[73668]: pgmap v1876: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 653 KiB/s wr, 166 op/s
Oct 02 12:38:39 compute-0 sudo[321798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:39 compute-0 sudo[321798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:39 compute-0 sudo[321798]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:39 compute-0 sudo[321823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:39 compute-0 sudo[321823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:39 compute-0 sudo[321823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 92 KiB/s wr, 132 op/s
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007511515912897822 of space, bias 1.0, pg target 2.253454773869347 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:38:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:38:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:40.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:41 compute-0 ceph-mon[73668]: pgmap v1877: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 92 KiB/s wr, 132 op/s
Oct 02 12:38:41 compute-0 nova_compute[256940]: 2025-10-02 12:38:41.155 2 INFO nova.compute.manager [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Unrescuing
Oct 02 12:38:41 compute-0 nova_compute[256940]: 2025-10-02 12:38:41.156 2 DEBUG oslo_concurrency.lockutils [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:41 compute-0 nova_compute[256940]: 2025-10-02 12:38:41.156 2 DEBUG oslo_concurrency.lockutils [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:41 compute-0 nova_compute[256940]: 2025-10-02 12:38:41.157 2 DEBUG nova.network.neutron [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:41 compute-0 nova_compute[256940]: 2025-10-02 12:38:41.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 450 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Oct 02 12:38:42 compute-0 nova_compute[256940]: 2025-10-02 12:38:42.138 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408707.1367154, a5e40085-6c27-4b57-96db-89ecc7ac2e48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:42 compute-0 nova_compute[256940]: 2025-10-02 12:38:42.139 2 INFO nova.compute.manager [-] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] VM Stopped (Lifecycle Event)
Oct 02 12:38:42 compute-0 nova_compute[256940]: 2025-10-02 12:38:42.160 2 DEBUG nova.compute.manager [None req-11dce884-f377-45cb-af98-fa1241712292 - - - - - -] [instance: a5e40085-6c27-4b57-96db-89ecc7ac2e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:42 compute-0 nova_compute[256940]: 2025-10-02 12:38:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:42.236 158296 DEBUG eventlet.wsgi.server [-] (158296) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:42.239 158296 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: Accept: */*
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: Connection: close
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: Content-Type: text/plain
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: Host: 169.254.169.254
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: User-Agent: curl/7.84.0
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: X-Forwarded-For: 10.100.0.3
Oct 02 12:38:42 compute-0 ovn_metadata_agent[158078]: X-Ovn-Network-Id: 7d34b628-3897-482a-882b-4fa0a641cd85 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 12:38:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:38:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:38:42 compute-0 ovn_controller[148123]: 2025-10-02T12:38:42Z|00476|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:42 compute-0 ovn_controller[148123]: 2025-10-02T12:38:42Z|00477|binding|INFO|Releasing lport 0f6cfe52-3f15-4e90-b21e-85f85326e533 from this chassis (sb_readonly=0)
Oct 02 12:38:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.024 158296 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.024 158296 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1671 time: 0.7858469
Oct 02 12:38:43 compute-0 haproxy-metadata-proxy-7d34b628-3897-482a-882b-4fa0a641cd85[320384]: 10.100.0.3:45876 [02/Oct/2025:12:38:42.235] listener listener/metadata 0/0/0/789/789 200 1655 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 02 12:38:43 compute-0 ceph-mon[73668]: pgmap v1878: 305 pgs: 305 active+clean; 450 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.280 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.281 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.282 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.395 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.396 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.396 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.397 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.397 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.400 2 INFO nova.compute.manager [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Terminating instance
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.402 2 DEBUG nova.compute.manager [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.452 2 DEBUG nova.network.neutron [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:43 compute-0 kernel: tap85de21e0-db (unregistering): left promiscuous mode
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.469 2 DEBUG oslo_concurrency.lockutils [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.470 2 DEBUG nova.objects.instance [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.4713] device (tap85de21e0-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00478|binding|INFO|Releasing lport 85de21e0-dbc3-4791-baf5-1f9b78e35b29 from this chassis (sb_readonly=0)
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00479|binding|INFO|Setting lport 85de21e0-dbc3-4791-baf5-1f9b78e35b29 down in Southbound
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00480|binding|INFO|Removing iface tap85de21e0-db ovn-installed in OVS
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.502 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:1e:42 10.100.0.3'], port_security=['fa:16:3e:f8:1e:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '92e58b0c-6934-4931-8ae8-c1a631797f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d34b628-3897-482a-882b-4fa0a641cd85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ac7abbc0acb45d68edcf67f909a8c90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b5869b3-04e1-4eb4-a836-48f27b60d345', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=972694ee-45c6-477b-b2a4-191954fc6d7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=85de21e0-dbc3-4791-baf5-1f9b78e35b29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.503 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 85de21e0-dbc3-4791-baf5-1f9b78e35b29 in datapath 7d34b628-3897-482a-882b-4fa0a641cd85 unbound from our chassis
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.505 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d34b628-3897-482a-882b-4fa0a641cd85, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.507 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[95d5743d-b771-4ec7-8825-70ee5083ac9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.509 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85 namespace which is not needed anymore
Oct 02 12:38:43 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 02 12:38:43 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000066.scope: Consumed 14.957s CPU time.
Oct 02 12:38:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:38:43 compute-0 systemd-machined[210927]: Machine qemu-46-instance-00000066 terminated.
Oct 02 12:38:43 compute-0 kernel: tap422859c2-b5 (unregistering): left promiscuous mode
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.5716] device (tap422859c2-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00481|binding|INFO|Releasing lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 from this chassis (sb_readonly=0)
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00482|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 down in Southbound
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00483|binding|INFO|Removing iface tap422859c2-b5 ovn-installed in OVS
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.603 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.6249] manager: (tap85de21e0-db): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Oct 02 12:38:43 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 02 12:38:43 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000065.scope: Consumed 6.604s CPU time.
Oct 02 12:38:43 compute-0 systemd-machined[210927]: Machine qemu-49-instance-00000065 terminated.
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.649 2 INFO nova.virt.libvirt.driver [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Instance destroyed successfully.
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.650 2 DEBUG nova.objects.instance [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lazy-loading 'resources' on Instance uuid 92e58b0c-6934-4931-8ae8-c1a631797f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.667 2 DEBUG nova.virt.libvirt.vif [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=102,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN9PUURmVgE9ctR2+bF/OaxYPDf/H6sxDJscvgXkAhinHH2AlmBFTWMVyJqQ1jYV6wxN0gV9xeFZ/1/bUEy+QwyLwsdqYLnM8xQrEZ9oQLRE3bDjarqFwNJf4PbZdutnYw==',key_name='tempest-keypair-868284423',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:38:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ac7abbc0acb45d68edcf67f909a8c90',ramdisk_id='',reservation_id='r-r3f5m00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1084750913',owner_user_name='tempest-ServersV294TestFqdnHostnames-1084750913-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e01aec1bfdd145338d35a6ab232794a4',uuid=92e58b0c-6934-4931-8ae8-c1a631797f57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.667 2 DEBUG nova.network.os_vif_util [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converting VIF {"id": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "address": "fa:16:3e:f8:1e:42", "network": {"id": "7d34b628-3897-482a-882b-4fa0a641cd85", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1539302209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ac7abbc0acb45d68edcf67f909a8c90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85de21e0-db", "ovs_interfaceid": "85de21e0-dbc3-4791-baf5-1f9b78e35b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.668 2 DEBUG nova.network.os_vif_util [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.669 2 DEBUG os_vif [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.671 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85de21e0-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.680 2 INFO os_vif [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:1e:42,bridge_name='br-int',has_traffic_filtering=True,id=85de21e0-dbc3-4791-baf5-1f9b78e35b29,network=Network(7d34b628-3897-482a-882b-4fa0a641cd85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85de21e0-db')
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [NOTICE]   (320382) : haproxy version is 2.8.14-c23fe91
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [NOTICE]   (320382) : path to executable is /usr/sbin/haproxy
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [WARNING]  (320382) : Exiting Master process...
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [WARNING]  (320382) : Exiting Master process...
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [ALERT]    (320382) : Current worker (320384) exited with code 143 (Terminated)
Oct 02 12:38:43 compute-0 neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85[320378]: [WARNING]  (320382) : All workers exited. Exiting... (0)
Oct 02 12:38:43 compute-0 systemd[1]: libpod-bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f.scope: Deactivated successfully.
Oct 02 12:38:43 compute-0 podman[321877]: 2025-10-02 12:38:43.704617422 +0000 UTC m=+0.065226574 container died bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.741 2 INFO nova.virt.libvirt.driver [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance destroyed successfully.
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.741 2 DEBUG nova.objects.instance [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a9ac30e30246d87289c44a39040cf24d58371e78fc32a2cc0b6f9bae034001-merged.mount: Deactivated successfully.
Oct 02 12:38:43 compute-0 podman[321877]: 2025-10-02 12:38:43.768485769 +0000 UTC m=+0.129094931 container cleanup bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:43 compute-0 systemd[1]: libpod-conmon-bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f.scope: Deactivated successfully.
Oct 02 12:38:43 compute-0 kernel: tap422859c2-b5: entered promiscuous mode
Oct 02 12:38:43 compute-0 systemd-udevd[321857]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.8485] manager: (tap422859c2-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00484|binding|INFO|Claiming lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 for this chassis.
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00485|binding|INFO|422859c2-b5cd-467a-85cf-ff82d92d7d87: Claiming fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 podman[321949]: 2025-10-02 12:38:43.854544965 +0000 UTC m=+0.061352082 container remove bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.8631] device (tap422859c2-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:38:43 compute-0 NetworkManager[44981]: <info>  [1759408723.8650] device (tap422859c2-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.865 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.866 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fafbe2c2-8d7a-41b3-a01f-910132384ee0]: (4, ('Thu Oct  2 12:38:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85 (bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f)\nbbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f\nThu Oct  2 12:38:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85 (bbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f)\nbbc85214262bdeade2657dd08da9660b07fe8503578e4c4d47459da4b84a486f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.868 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a2b45-c0e3-4609-9c7e-5a0f1f27ec97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.869 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d34b628-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00486|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 ovn-installed in OVS
Oct 02 12:38:43 compute-0 ovn_controller[148123]: 2025-10-02T12:38:43Z|00487|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 up in Southbound
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 kernel: tap7d34b628-30: left promiscuous mode
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.892 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbdf49f-61bd-4105-a32e-d77949164ecb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 systemd-machined[210927]: New machine qemu-50-instance-00000065.
Oct 02 12:38:43 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-00000065.
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.920 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[301706f1-1e0c-476d-85ae-c92a807a2618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.921 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc37c6e-835d-40bb-b4d0-e5d8d7ec41b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.940 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc83e11e-8a66-4263-ad86-99428f5960d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658068, 'reachable_time': 43311, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321979, 'error': None, 'target': 'ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7d34b628\x2d3897\x2d482a\x2d882b\x2d4fa0a641cd85.mount: Deactivated successfully.
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.946 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7d34b628-3897-482a-882b-4fa0a641cd85 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.946 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[699d6953-08e4-431b-a67d-51480ed6edb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.947 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.948 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.965 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c81c1b-c58c-43f8-917d-e60c30d9a16a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.990 2 DEBUG nova.compute.manager [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-unplugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.990 2 DEBUG oslo_concurrency.lockutils [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.991 2 DEBUG oslo_concurrency.lockutils [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.991 2 DEBUG oslo_concurrency.lockutils [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.991 2 DEBUG nova.compute.manager [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] No waiting events found dispatching network-vif-unplugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:43 compute-0 nova_compute[256940]: 2025-10-02 12:38:43.992 2 DEBUG nova.compute.manager [req-c267133a-c976-49f4-b5a5-e299bfbd08bd req-bd94f829-d4e1-4b15-befc-9180a5043419 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-unplugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:43.999 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f1073767-ee8e-4947-940b-fccce81d1501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.005 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8111bfcb-546f-4f1c-8540-57c0505e3b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.041 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fb498bc3-6e23-44a5-870b-c364cb1e8d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.062 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e53d3d-4a2a-404c-88cf-4332f24bd9c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 20256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321993, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.084 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c522a861-a45d-4ebf-a20c-5512e1a0125c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321994, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321994, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.088 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.092 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.093 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.093 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.094 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.097 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.099 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.118 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a36310e6-fab2-40ea-8847-50888950009b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.140 2 DEBUG nova.compute.manager [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.141 2 DEBUG oslo_concurrency.lockutils [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.141 2 DEBUG oslo_concurrency.lockutils [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.141 2 DEBUG oslo_concurrency.lockutils [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.142 2 DEBUG nova.compute.manager [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.142 2 WARNING nova.compute.manager [req-e59a5ea0-188b-48e4-be2b-c2a2b3ff7c63 req-1e9776d3-e043-4063-a70b-9887f77b2bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.143 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[af17c3ff-00c9-4ca1-9ef8-bfe6868e82e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.151 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4dd3fa-3345-4702-8933-f09fd46ccfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.182 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c9e59c-ba64-4c89-82e8-b69421ca18e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.208 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7881857-44d5-4613-b832-e9e56655cc62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 20256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322037, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.227 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[49c6b50f-1525-4b25-a391-2ffdb7dd2165]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322042, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322042, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.228 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.231 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.231 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.232 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:38:44.232 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.321 2 INFO nova.virt.libvirt.driver [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Deleting instance files /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57_del
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.322 2 INFO nova.virt.libvirt.driver [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Deletion of /var/lib/nova/instances/92e58b0c-6934-4931-8ae8-c1a631797f57_del complete
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.409 2 INFO nova.compute.manager [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.410 2 DEBUG oslo.service.loopingcall [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.411 2 DEBUG nova.compute.manager [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.411 2 DEBUG nova.network.neutron [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:38:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.708 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 1546ac1d-4a04-4c5e-ae02-b005461c7731 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.709 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408724.708423, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.709 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Resumed (Lifecycle Event)
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.739 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.743 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.759 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.760 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408724.709063, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.760 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Started (Lifecycle Event)
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.777 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.783 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:44 compute-0 nova_compute[256940]: 2025-10-02 12:38:44.813 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:38:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:45 compute-0 ceph-mon[73668]: pgmap v1879: 305 pgs: 305 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:38:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3017170306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1256335225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 404 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.5 MiB/s wr, 131 op/s
Oct 02 12:38:45 compute-0 nova_compute[256940]: 2025-10-02 12:38:45.985 2 DEBUG nova.network.neutron [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.124 2 INFO nova.compute.manager [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Took 1.71 seconds to deallocate network for instance.
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.130 2 DEBUG nova.compute.manager [None req-194358c1-4d9c-472b-8ecb-2f0f4e436fd5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.343 2 DEBUG nova.compute.manager [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.344 2 DEBUG oslo_concurrency.lockutils [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.344 2 DEBUG oslo_concurrency.lockutils [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.345 2 DEBUG oslo_concurrency.lockutils [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.345 2 DEBUG nova.compute.manager [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] No waiting events found dispatching network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.346 2 WARNING nova.compute.manager [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received unexpected event network-vif-plugged-85de21e0-dbc3-4791-baf5-1f9b78e35b29 for instance with vm_state active and task_state deleting.
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.346 2 DEBUG nova.compute.manager [req-3a7b8969-4c6b-426c-a850-b98f76be9e4e req-cd2e8782-9ee0-4979-a136-5dc0d48c1caa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Received event network-vif-deleted-85de21e0-dbc3-4791-baf5-1f9b78e35b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.384 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.385 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.467 2 DEBUG oslo_concurrency.processutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.508 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.509 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.510 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.510 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.511 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.511 2 WARNING nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state None.
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.511 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.512 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.512 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.513 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.513 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.513 2 WARNING nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state None.
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.514 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.514 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.515 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.515 2 DEBUG oslo_concurrency.lockutils [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.515 2 DEBUG nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.516 2 WARNING nova.compute.manager [req-4b6f1787-549c-4203-8caa-2655e77dc1af req-f5f9ef2a-9dfc-4eeb-a0e0-43f3b4959b3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state None.
Oct 02 12:38:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193470462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.915 2 DEBUG oslo_concurrency.processutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.920 2 DEBUG nova.compute.provider_tree [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.950 2 DEBUG nova.scheduler.client.report [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:46 compute-0 nova_compute[256940]: 2025-10-02 12:38:46.995 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:47 compute-0 nova_compute[256940]: 2025-10-02 12:38:47.059 2 INFO nova.scheduler.client.report [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Deleted allocations for instance 92e58b0c-6934-4931-8ae8-c1a631797f57
Oct 02 12:38:47 compute-0 nova_compute[256940]: 2025-10-02 12:38:47.126 2 DEBUG oslo_concurrency.lockutils [None req-2f8f628a-511c-45f2-9d48-e95e6a6fa00e e01aec1bfdd145338d35a6ab232794a4 7ac7abbc0acb45d68edcf67f909a8c90 - - default default] Lock "92e58b0c-6934-4931-8ae8-c1a631797f57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:47 compute-0 ceph-mon[73668]: pgmap v1880: 305 pgs: 305 active+clean; 404 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.5 MiB/s wr, 131 op/s
Oct 02 12:38:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4193470462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct 02 12:38:48 compute-0 nova_compute[256940]: 2025-10-02 12:38:48.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:48.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:49 compute-0 ceph-mon[73668]: pgmap v1881: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct 02 12:38:49 compute-0 podman[322089]: 2025-10-02 12:38:49.438506092 +0000 UTC m=+0.089046196 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:38:49 compute-0 podman[322088]: 2025-10-02 12:38:49.438712887 +0000 UTC m=+0.089770674 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:38:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct 02 12:38:50 compute-0 ceph-mon[73668]: pgmap v1882: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct 02 12:38:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:51 compute-0 nova_compute[256940]: 2025-10-02 12:38:51.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:51 compute-0 sudo[322132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:51 compute-0 sudo[322132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-0 sudo[322132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-0 sudo[322157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:38:51 compute-0 sudo[322157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-0 sudo[322157]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 309 op/s
Oct 02 12:38:51 compute-0 sudo[322182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:51 compute-0 sudo[322182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-0 sudo[322182]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-0 sudo[322207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:38:51 compute-0 sudo[322207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:38:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:52 compute-0 sudo[322207]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:38:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:38:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:38:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 26da4a36-70fd-42f2-bc63-b2a657c2ce5e does not exist
Oct 02 12:38:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 69a21b11-c9c1-4513-8c7e-68c034b0576d does not exist
Oct 02 12:38:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bd07ec26-27ea-400f-adc3-41954971ec24 does not exist
Oct 02 12:38:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:38:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:38:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:38:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:53 compute-0 sudo[322263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:53 compute-0 sudo[322263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:53 compute-0 sudo[322263]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:53 compute-0 sudo[322288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:38:53 compute-0 sudo[322288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:53 compute-0 ceph-mon[73668]: pgmap v1883: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 309 op/s
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:38:53 compute-0 sudo[322288]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:53 compute-0 sudo[322313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:53 compute-0 sudo[322313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:53 compute-0 sudo[322313]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:53 compute-0 sudo[322338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:38:53 compute-0 sudo[322338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Oct 02 12:38:53 compute-0 nova_compute[256940]: 2025-10-02 12:38:53.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.6878656 +0000 UTC m=+0.083509851 container create 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.636665093 +0000 UTC m=+0.032309364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:53 compute-0 systemd[1]: Started libpod-conmon-78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127.scope.
Oct 02 12:38:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.855817714 +0000 UTC m=+0.251461995 container init 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.865874026 +0000 UTC m=+0.261518277 container start 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:38:53 compute-0 reverent_mendeleev[322418]: 167 167
Oct 02 12:38:53 compute-0 systemd[1]: libpod-78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127.scope: Deactivated successfully.
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.874543593 +0000 UTC m=+0.270187864 container attach 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.875021905 +0000 UTC m=+0.270666156 container died 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-447fc5498e3bc6862bc64226a5899b96e22506830f10a45eb9d9051ee00c3144-merged.mount: Deactivated successfully.
Oct 02 12:38:53 compute-0 podman[322402]: 2025-10-02 12:38:53.926696224 +0000 UTC m=+0.322340475 container remove 78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:38:53 compute-0 systemd[1]: libpod-conmon-78cd3b34b0538cbf00f87509655c41a458d0e697401c43b6cf0ed8cf79698127.scope: Deactivated successfully.
Oct 02 12:38:54 compute-0 podman[322442]: 2025-10-02 12:38:54.110912063 +0000 UTC m=+0.053898568 container create f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:38:54 compute-0 podman[322442]: 2025-10-02 12:38:54.086925217 +0000 UTC m=+0.029911732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:54 compute-0 systemd[1]: Started libpod-conmon-f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c.scope.
Oct 02 12:38:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:54 compute-0 podman[322442]: 2025-10-02 12:38:54.266199617 +0000 UTC m=+0.209186142 container init f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:38:54 compute-0 podman[322442]: 2025-10-02 12:38:54.276085655 +0000 UTC m=+0.219072160 container start f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:38:54 compute-0 podman[322442]: 2025-10-02 12:38:54.295156233 +0000 UTC m=+0.238142748 container attach f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:38:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:38:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:38:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3361022258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:55 compute-0 dreamy_thompson[322458]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:38:55 compute-0 dreamy_thompson[322458]: --> relative data size: 1.0
Oct 02 12:38:55 compute-0 dreamy_thompson[322458]: --> All data devices are unavailable
Oct 02 12:38:55 compute-0 systemd[1]: libpod-f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c.scope: Deactivated successfully.
Oct 02 12:38:55 compute-0 podman[322442]: 2025-10-02 12:38:55.194422128 +0000 UTC m=+1.137408633 container died f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:55 compute-0 nova_compute[256940]: 2025-10-02 12:38:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b594e6d8ab707dd46d3e31a5a5de1943b8cf78ab30397ed1b18069193ba5f3-merged.mount: Deactivated successfully.
Oct 02 12:38:55 compute-0 podman[322442]: 2025-10-02 12:38:55.257762801 +0000 UTC m=+1.200749306 container remove f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:38:55 compute-0 systemd[1]: libpod-conmon-f689e3ec6b1b71a1e2daf312b3a0f177b1825ece85d47b79a2fb0476b518213c.scope: Deactivated successfully.
Oct 02 12:38:55 compute-0 sudo[322338]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:55 compute-0 sudo[322487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:55 compute-0 sudo[322487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:55 compute-0 sudo[322487]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:55 compute-0 sudo[322512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:38:55 compute-0 sudo[322512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:55 compute-0 sudo[322512]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:55 compute-0 sudo[322537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:55 compute-0 sudo[322537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:55 compute-0 sudo[322537]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 333 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct 02 12:38:55 compute-0 sudo[322562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:38:55 compute-0 sudo[322562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:55 compute-0 ceph-mon[73668]: pgmap v1884: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Oct 02 12:38:55 compute-0 nova_compute[256940]: 2025-10-02 12:38:55.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.902196574 +0000 UTC m=+0.036256967 container create 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:38:55 compute-0 systemd[1]: Started libpod-conmon-141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd.scope.
Oct 02 12:38:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.977524331 +0000 UTC m=+0.111584734 container init 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.886811703 +0000 UTC m=+0.020872106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.986151056 +0000 UTC m=+0.120211459 container start 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:38:55 compute-0 peaceful_wing[322642]: 167 167
Oct 02 12:38:55 compute-0 systemd[1]: libpod-141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd.scope: Deactivated successfully.
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.994586916 +0000 UTC m=+0.128647399 container attach 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:38:55 compute-0 podman[322626]: 2025-10-02 12:38:55.995725076 +0000 UTC m=+0.129785469 container died 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:38:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f71f6c0bafa61eb1e829a69cd6ec6976346cc072e0b7ec92ebff999deee8b3a8-merged.mount: Deactivated successfully.
Oct 02 12:38:56 compute-0 podman[322626]: 2025-10-02 12:38:56.039638002 +0000 UTC m=+0.173698395 container remove 141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:38:56 compute-0 systemd[1]: libpod-conmon-141d8bb71f881175729949725c5e99dd4daf13fb06dd00a02929bf3ffe0faadd.scope: Deactivated successfully.
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:38:56 compute-0 podman[322665]: 2025-10-02 12:38:56.252323994 +0000 UTC m=+0.046628128 container create f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:38:56 compute-0 systemd[1]: Started libpod-conmon-f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72.scope.
Oct 02 12:38:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bddfb6430b76262d97c18cddda94223c67cd486b592baf0b971ddc17352532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:56 compute-0 podman[322665]: 2025-10-02 12:38:56.231574323 +0000 UTC m=+0.025878477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bddfb6430b76262d97c18cddda94223c67cd486b592baf0b971ddc17352532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bddfb6430b76262d97c18cddda94223c67cd486b592baf0b971ddc17352532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bddfb6430b76262d97c18cddda94223c67cd486b592baf0b971ddc17352532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:56 compute-0 podman[322665]: 2025-10-02 12:38:56.334875029 +0000 UTC m=+0.129179193 container init f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:38:56 compute-0 podman[322665]: 2025-10-02 12:38:56.344532781 +0000 UTC m=+0.138836915 container start f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:38:56 compute-0 podman[322665]: 2025-10-02 12:38:56.349577433 +0000 UTC m=+0.143881587 container attach f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 ovn_controller[148123]: 2025-10-02T12:38:56Z|00488|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 ovn_controller[148123]: 2025-10-02T12:38:56Z|00489|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:38:56 compute-0 nova_compute[256940]: 2025-10-02 12:38:56.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:56.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:56 compute-0 ceph-mon[73668]: pgmap v1885: 305 pgs: 305 active+clean; 333 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct 02 12:38:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1343751637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1239224870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:57 compute-0 blissful_beaver[322681]: {
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:     "1": [
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:         {
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "devices": [
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "/dev/loop3"
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             ],
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "lv_name": "ceph_lv0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "lv_size": "7511998464",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "name": "ceph_lv0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "tags": {
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.cluster_name": "ceph",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.crush_device_class": "",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.encrypted": "0",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.osd_id": "1",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.type": "block",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:                 "ceph.vdo": "0"
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             },
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "type": "block",
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:             "vg_name": "ceph_vg0"
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:         }
Oct 02 12:38:57 compute-0 blissful_beaver[322681]:     ]
Oct 02 12:38:57 compute-0 blissful_beaver[322681]: }
Oct 02 12:38:57 compute-0 systemd[1]: libpod-f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72.scope: Deactivated successfully.
Oct 02 12:38:57 compute-0 conmon[322681]: conmon f43255f268ece0d27833 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72.scope/container/memory.events
Oct 02 12:38:57 compute-0 podman[322665]: 2025-10-02 12:38:57.172570016 +0000 UTC m=+0.966874150 container died f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-60bddfb6430b76262d97c18cddda94223c67cd486b592baf0b971ddc17352532-merged.mount: Deactivated successfully.
Oct 02 12:38:57 compute-0 podman[322665]: 2025-10-02 12:38:57.237896132 +0000 UTC m=+1.032200266 container remove f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:38:57 compute-0 systemd[1]: libpod-conmon-f43255f268ece0d278330eedd813cb910e478d3d772196bf4585385bb47b5b72.scope: Deactivated successfully.
Oct 02 12:38:57 compute-0 sudo[322562]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:57 compute-0 sudo[322705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:57 compute-0 sudo[322705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:57 compute-0 sudo[322705]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:57 compute-0 sudo[322730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:38:57 compute-0 sudo[322730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:57 compute-0 sudo[322730]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:57 compute-0 sudo[322755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:57 compute-0 sudo[322755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:57 compute-0 sudo[322755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 239 op/s
Oct 02 12:38:57 compute-0 sudo[322780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:38:57 compute-0 sudo[322780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:57 compute-0 podman[322845]: 2025-10-02 12:38:57.930700607 +0000 UTC m=+0.045135969 container create 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 12:38:57 compute-0 systemd[1]: Started libpod-conmon-92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3.scope.
Oct 02 12:38:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:57.911418024 +0000 UTC m=+0.025853406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:58.029676521 +0000 UTC m=+0.144111913 container init 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:58.037087534 +0000 UTC m=+0.151522896 container start 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:38:58 compute-0 charming_moser[322861]: 167 167
Oct 02 12:38:58 compute-0 systemd[1]: libpod-92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3.scope: Deactivated successfully.
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:58.044243621 +0000 UTC m=+0.158679003 container attach 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:58.04457406 +0000 UTC m=+0.159009422 container died 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 12:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-84d25fe13e0cfcf0dc90793f573effdabcd3ca09b1bc477c8799ceb8c988187d-merged.mount: Deactivated successfully.
Oct 02 12:38:58 compute-0 ovn_controller[148123]: 2025-10-02T12:38:58Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:2f:ac 10.100.0.13
Oct 02 12:38:58 compute-0 podman[322845]: 2025-10-02 12:38:58.160088035 +0000 UTC m=+0.274523397 container remove 92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:38:58 compute-0 systemd[1]: libpod-conmon-92fdfdcafa30392be0648eb7e7aa34fd9d86fa1fcbf46eec2a2c9a522869c3a3.scope: Deactivated successfully.
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.270 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.270 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.270 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.271 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.271 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:58 compute-0 podman[322885]: 2025-10-02 12:38:58.355002313 +0000 UTC m=+0.063605771 container create 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:38:58 compute-0 podman[322885]: 2025-10-02 12:38:58.316048206 +0000 UTC m=+0.024651684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:38:58 compute-0 systemd[1]: Started libpod-conmon-52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea.scope.
Oct 02 12:38:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d58358284faf3626c9b9e39b0219052bdf9d5ef8d4c134ceea21633866cf14fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d58358284faf3626c9b9e39b0219052bdf9d5ef8d4c134ceea21633866cf14fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d58358284faf3626c9b9e39b0219052bdf9d5ef8d4c134ceea21633866cf14fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d58358284faf3626c9b9e39b0219052bdf9d5ef8d4c134ceea21633866cf14fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:38:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/668701634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:58 compute-0 podman[322885]: 2025-10-02 12:38:58.624254222 +0000 UTC m=+0.332857700 container init 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:38:58 compute-0 podman[322885]: 2025-10-02 12:38:58.634539111 +0000 UTC m=+0.343142559 container start 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.647 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408723.6460772, 92e58b0c-6934-4931-8ae8-c1a631797f57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.648 2 INFO nova.compute.manager [-] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] VM Stopped (Lifecycle Event)
Oct 02 12:38:58 compute-0 podman[322885]: 2025-10-02 12:38:58.664720168 +0000 UTC m=+0.373323646 container attach 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.712 2 DEBUG nova.compute.manager [None req-b0aaa309-7226-4cd3-bd2d-453f19ba0820 - - - - - -] [instance: 92e58b0c-6934-4931-8ae8-c1a631797f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883040753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:58.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:58 compute-0 nova_compute[256940]: 2025-10-02 12:38:58.749 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:38:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:38:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.277 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.279 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:59 compute-0 sudo[322931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:59 compute-0 sudo[322931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:59 compute-0 sudo[322931]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:59 compute-0 sudo[322957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:59 compute-0 sudo[322957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:59 compute-0 sudo[322957]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.482 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.483 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4276MB free_disk=20.897083282470703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.483 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.484 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]: {
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:         "osd_id": 1,
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:         "type": "bluestore"
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]:     }
Oct 02 12:38:59 compute-0 hardcore_kowalevski[322922]: }
Oct 02 12:38:59 compute-0 systemd[1]: libpod-52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea.scope: Deactivated successfully.
Oct 02 12:38:59 compute-0 podman[322885]: 2025-10-02 12:38:59.528167809 +0000 UTC m=+1.236771257 container died 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:38:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 14 KiB/s wr, 127 op/s
Oct 02 12:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d58358284faf3626c9b9e39b0219052bdf9d5ef8d4c134ceea21633866cf14fd-merged.mount: Deactivated successfully.
Oct 02 12:38:59 compute-0 podman[322885]: 2025-10-02 12:38:59.592243641 +0000 UTC m=+1.300847089 container remove 52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:38:59 compute-0 systemd[1]: libpod-conmon-52c721b2bba0bfb992a22ab41d2340e09f3a98ffede54fe86cb0a92dab2008ea.scope: Deactivated successfully.
Oct 02 12:38:59 compute-0 sudo[322780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.705 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 1546ac1d-4a04-4c5e-ae02-b005461c7731 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.706 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.706 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:38:59 compute-0 nova_compute[256940]: 2025-10-02 12:38:59.810 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573262845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:00 compute-0 nova_compute[256940]: 2025-10-02 12:39:00.273 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:00 compute-0 nova_compute[256940]: 2025-10-02 12:39:00.280 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:00 compute-0 nova_compute[256940]: 2025-10-02 12:39:00.317 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:00 compute-0 nova_compute[256940]: 2025-10-02 12:39:00.377 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:39:00 compute-0 nova_compute[256940]: 2025-10-02 12:39:00.378 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:00.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:00 compute-0 ceph-mon[73668]: pgmap v1886: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 239 op/s
Oct 02 12:39:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/714823305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/883040753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:01.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:01 compute-0 nova_compute[256940]: 2025-10-02 12:39:01.379 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:01 compute-0 nova_compute[256940]: 2025-10-02 12:39:01.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 300 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 169 KiB/s wr, 169 op/s
Oct 02 12:39:02 compute-0 nova_compute[256940]: 2025-10-02 12:39:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:02 compute-0 nova_compute[256940]: 2025-10-02 12:39:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:39:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:03 compute-0 ceph-mon[73668]: pgmap v1887: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 14 KiB/s wr, 127 op/s
Oct 02 12:39:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2573262845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:03.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:39:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e92fe609-2a42-47cf-9fcd-a7e81c30b3db does not exist
Oct 02 12:39:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8630d365-4609-47e8-bec2-5ef90a0259e2 does not exist
Oct 02 12:39:03 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7f10b613-36a1-435f-a73c-eb885bb9a161 does not exist
Oct 02 12:39:03 compute-0 podman[323036]: 2025-10-02 12:39:03.407015354 +0000 UTC m=+0.070388809 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:39:03 compute-0 podman[323035]: 2025-10-02 12:39:03.40879537 +0000 UTC m=+0.072499703 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:39:03 compute-0 sudo[323075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:03 compute-0 sudo[323075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:03 compute-0 sudo[323075]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:03 compute-0 sudo[323101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:39:03 compute-0 sudo[323101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:03 compute-0 sudo[323101]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 317 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 94 op/s
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.689 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.691 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.691 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.692 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:03 compute-0 nova_compute[256940]: 2025-10-02 12:39:03.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:04.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:04 compute-0 ceph-mon[73668]: pgmap v1888: 305 pgs: 305 active+clean; 300 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 169 KiB/s wr, 169 op/s
Oct 02 12:39:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:05.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:39:05 compute-0 ceph-mon[73668]: pgmap v1889: 305 pgs: 305 active+clean; 317 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 94 op/s
Oct 02 12:39:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:06 compute-0 nova_compute[256940]: 2025-10-02 12:39:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:06 compute-0 nova_compute[256940]: 2025-10-02 12:39:06.729 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:06.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:06 compute-0 nova_compute[256940]: 2025-10-02 12:39:06.753 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:06 compute-0 nova_compute[256940]: 2025-10-02 12:39:06.754 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:39:06 compute-0 nova_compute[256940]: 2025-10-02 12:39:06.754 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:07 compute-0 ceph-mon[73668]: pgmap v1890: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:39:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:39:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2854324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4233012682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:08 compute-0 nova_compute[256940]: 2025-10-02 12:39:08.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:08.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 12:39:09 compute-0 ceph-mon[73668]: pgmap v1891: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:39:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3862223501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:10 compute-0 ceph-mon[73668]: pgmap v1892: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 12:39:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:10.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:11 compute-0 nova_compute[256940]: 2025-10-02 12:39:11.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 342 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Oct 02 12:39:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:12.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:13 compute-0 ceph-mon[73668]: pgmap v1893: 305 pgs: 305 active+clean; 342 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Oct 02 12:39:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 1.7 MiB/s wr, 45 op/s
Oct 02 12:39:13 compute-0 nova_compute[256940]: 2025-10-02 12:39:13.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:15 compute-0 ceph-mon[73668]: pgmap v1894: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 1.7 MiB/s wr, 45 op/s
Oct 02 12:39:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 828 KiB/s wr, 64 op/s
Oct 02 12:39:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1379615158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:16 compute-0 nova_compute[256940]: 2025-10-02 12:39:16.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:16.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:17 compute-0 ceph-mon[73668]: pgmap v1895: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 828 KiB/s wr, 64 op/s
Oct 02 12:39:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 91 op/s
Oct 02 12:39:18 compute-0 ceph-mon[73668]: pgmap v1896: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 91 op/s
Oct 02 12:39:18 compute-0 nova_compute[256940]: 2025-10-02 12:39:18.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:19 compute-0 sudo[323135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:19 compute-0 sudo[323135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:19 compute-0 sudo[323135]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:19 compute-0 podman[323159]: 2025-10-02 12:39:19.535289154 +0000 UTC m=+0.052931693 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:39:19 compute-0 sudo[323172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:19 compute-0 sudo[323172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:19 compute-0 sudo[323172]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Oct 02 12:39:19 compute-0 podman[323160]: 2025-10-02 12:39:19.573975744 +0000 UTC m=+0.089565399 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:39:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 02 12:39:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 02 12:39:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 02 12:39:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:20.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:20 compute-0 ceph-mon[73668]: pgmap v1897: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Oct 02 12:39:20 compute-0 ceph-mon[73668]: osdmap e260: 3 total, 3 up, 3 in
Oct 02 12:39:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:21 compute-0 nova_compute[256940]: 2025-10-02 12:39:21.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 381 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 02 12:39:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:22.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 02 12:39:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 02 12:39:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 02 12:39:23 compute-0 ceph-mon[73668]: pgmap v1899: 305 pgs: 305 active+clean; 381 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 02 12:39:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 408 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 129 op/s
Oct 02 12:39:23 compute-0 nova_compute[256940]: 2025-10-02 12:39:23.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 02 12:39:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/280546518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:24 compute-0 ceph-mon[73668]: osdmap e261: 3 total, 3 up, 3 in
Oct 02 12:39:24 compute-0 ceph-mon[73668]: pgmap v1901: 305 pgs: 305 active+clean; 408 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 129 op/s
Oct 02 12:39:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1127676684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:24.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 02 12:39:24 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 02 12:39:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 419 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.9 MiB/s wr, 159 op/s
Oct 02 12:39:26 compute-0 ceph-mon[73668]: osdmap e262: 3 total, 3 up, 3 in
Oct 02 12:39:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:26 compute-0 nova_compute[256940]: 2025-10-02 12:39:26.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:39:26.473 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:39:26.474 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:39:26.474 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:26.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:27 compute-0 ceph-mon[73668]: pgmap v1903: 305 pgs: 305 active+clean; 419 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.9 MiB/s wr, 159 op/s
Oct 02 12:39:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.8 MiB/s wr, 193 op/s
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:28 compute-0 nova_compute[256940]: 2025-10-02 12:39:28.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:28 compute-0 ceph-mon[73668]: pgmap v1904: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.8 MiB/s wr, 193 op/s
Oct 02 12:39:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:28.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:39:28
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.control']
Oct 02 12:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:39:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:39:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:39:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 166 op/s
Oct 02 12:39:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:30.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 02 12:39:31 compute-0 ceph-mon[73668]: pgmap v1905: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 166 op/s
Oct 02 12:39:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 02 12:39:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 02 12:39:31 compute-0 nova_compute[256940]: 2025-10-02 12:39:31.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 461 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct 02 12:39:32 compute-0 ceph-mon[73668]: osdmap e263: 3 total, 3 up, 3 in
Oct 02 12:39:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 463 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Oct 02 12:39:33 compute-0 ceph-mon[73668]: pgmap v1907: 305 pgs: 305 active+clean; 461 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct 02 12:39:33 compute-0 nova_compute[256940]: 2025-10-02 12:39:33.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:34 compute-0 podman[323233]: 2025-10-02 12:39:34.384860595 +0000 UTC m=+0.060544522 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:39:34 compute-0 podman[323234]: 2025-10-02 12:39:34.389582078 +0000 UTC m=+0.061419644 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:39:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:34.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.032 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.032 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.154 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:39:35 compute-0 ceph-mon[73668]: pgmap v1908: 305 pgs: 305 active+clean; 463 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.382 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.383 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.390 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.391 2 INFO nova.compute.claims [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:39:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 467 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 02 12:39:35 compute-0 nova_compute[256940]: 2025-10-02 12:39:35.878 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441741427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.384 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.391 2 DEBUG nova.compute.provider_tree [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.432 2 DEBUG nova.scheduler.client.report [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.516 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.517 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.573 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.573 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:39:36 compute-0 ceph-mon[73668]: pgmap v1909: 305 pgs: 305 active+clean; 467 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 02 12:39:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1441741427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.609 2 INFO nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.632 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.763 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.764 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.765 2 INFO nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Creating image(s)
Oct 02 12:39:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.789 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.817 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.845 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.849 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.884 2 DEBUG nova.policy [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ef7a5dbc3524ee8a7efcd0d3ae36787', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a82ed194b379425aa5e1f31b993eee81', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.926 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.926 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.927 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.927 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.952 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:36 compute-0 nova_compute[256940]: 2025-10-02 12:39:36.956 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:37.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.683 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.765 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] resizing rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.882 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully created port: 8dd716ff-bca3-4034-8507-003cbbc312c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.889 2 DEBUG nova.objects.instance [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'migration_context' on Instance uuid 81cb0055-c522-4ea7-b77e-e21e72c7782e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.910 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.911 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Ensure instance console log exists: /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.911 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.912 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:37 compute-0 nova_compute[256940]: 2025-10-02 12:39:37.912 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:38 compute-0 nova_compute[256940]: 2025-10-02 12:39:38.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:38 compute-0 ceph-mon[73668]: pgmap v1910: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:39.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:39 compute-0 nova_compute[256940]: 2025-10-02 12:39:39.571 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully created port: ab029d2e-6d57-498c-b55b-7f90b732cfd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:39:39 compute-0 sudo[323462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:39 compute-0 sudo[323462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:39 compute-0 sudo[323462]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:39 compute-0 sudo[323487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:39 compute-0 sudo[323487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:39 compute-0 sudo[323487]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007984984238321237 of space, bias 1.0, pg target 2.395495271496371 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:39:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:39:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:40 compute-0 nova_compute[256940]: 2025-10-02 12:39:40.837 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully created port: 10723c51-5f55-4285-a6e6-bee50993385c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:39:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:41.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:41 compute-0 ceph-mon[73668]: pgmap v1911: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:41 compute-0 nova_compute[256940]: 2025-10-02 12:39:41.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 504 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:39:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2123990606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/753406698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:43.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:43 compute-0 ceph-mon[73668]: pgmap v1912: 305 pgs: 305 active+clean; 504 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:39:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2694472616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 524 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Oct 02 12:39:43 compute-0 nova_compute[256940]: 2025-10-02 12:39:43.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:44 compute-0 ceph-mon[73668]: pgmap v1913: 305 pgs: 305 active+clean; 524 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Oct 02 12:39:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 542 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 951 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Oct 02 12:39:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:46 compute-0 nova_compute[256940]: 2025-10-02 12:39:46.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:46.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:47 compute-0 ceph-mon[73668]: pgmap v1914: 305 pgs: 305 active+clean; 542 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 951 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Oct 02 12:39:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:47.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.108 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully updated port: 8dd716ff-bca3-4034-8507-003cbbc312c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.249 2 DEBUG nova.compute.manager [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-changed-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.249 2 DEBUG nova.compute.manager [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing instance network info cache due to event network-changed-8dd716ff-bca3-4034-8507-003cbbc312c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.249 2 DEBUG oslo_concurrency.lockutils [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.249 2 DEBUG oslo_concurrency.lockutils [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.249 2 DEBUG nova.network.neutron [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing network info cache for port 8dd716ff-bca3-4034-8507-003cbbc312c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.648 2 DEBUG nova.network.neutron [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:39:48 compute-0 nova_compute[256940]: 2025-10-02 12:39:48.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:49.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:49 compute-0 ceph-mon[73668]: pgmap v1915: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:39:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 978 KiB/s rd, 3.7 MiB/s wr, 145 op/s
Oct 02 12:39:50 compute-0 nova_compute[256940]: 2025-10-02 12:39:50.095 2 DEBUG nova.network.neutron [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:50 compute-0 nova_compute[256940]: 2025-10-02 12:39:50.150 2 DEBUG oslo_concurrency.lockutils [req-6fcbe8e8-45a9-4a0e-a470-512fc60ea0b5 req-fc7ef732-60e7-4a73-b1eb-9be198a47e42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:50 compute-0 podman[323517]: 2025-10-02 12:39:50.377033793 +0000 UTC m=+0.053308953 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 02 12:39:50 compute-0 podman[323518]: 2025-10-02 12:39:50.506523823 +0000 UTC m=+0.180008930 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:39:50 compute-0 ceph-mon[73668]: pgmap v1916: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 978 KiB/s rd, 3.7 MiB/s wr, 145 op/s
Oct 02 12:39:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.360 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully updated port: ab029d2e-6d57-498c-b55b-7f90b732cfd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.662 2 DEBUG nova.compute.manager [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-changed-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.663 2 DEBUG nova.compute.manager [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing instance network info cache due to event network-changed-ab029d2e-6d57-498c-b55b-7f90b732cfd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.663 2 DEBUG oslo_concurrency.lockutils [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.663 2 DEBUG oslo_concurrency.lockutils [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:51 compute-0 nova_compute[256940]: 2025-10-02 12:39:51.663 2 DEBUG nova.network.neutron [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing network info cache for port ab029d2e-6d57-498c-b55b-7f90b732cfd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:39:52 compute-0 nova_compute[256940]: 2025-10-02 12:39:52.167 2 DEBUG nova.network.neutron [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:39:52 compute-0 ceph-mon[73668]: pgmap v1917: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:39:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:52 compute-0 nova_compute[256940]: 2025-10-02 12:39:52.973 2 DEBUG nova.network.neutron [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:52 compute-0 nova_compute[256940]: 2025-10-02 12:39:52.995 2 DEBUG oslo_concurrency.lockutils [req-d5dfae8f-fe3a-4957-8c0d-9b04b779b8be req-4fa2513a-8783-472a-9090-36f88a47ab87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.468 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Successfully updated port: 10723c51-5f55-4285-a6e6-bee50993385c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:39:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.578 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.578 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquired lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.578 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.887 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.944 2 DEBUG nova.compute.manager [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-changed-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.945 2 DEBUG nova.compute.manager [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing instance network info cache due to event network-changed-10723c51-5f55-4285-a6e6-bee50993385c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:53 compute-0 nova_compute[256940]: 2025-10-02 12:39:53.945 2 DEBUG oslo_concurrency.lockutils [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:54 compute-0 ceph-mon[73668]: pgmap v1918: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct 02 12:39:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:55.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 02 12:39:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2458645433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/717183492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:56 compute-0 nova_compute[256940]: 2025-10-02 12:39:56.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:57.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:57 compute-0 ceph-mon[73668]: pgmap v1919: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 02 12:39:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2398810375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3992094948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:57 compute-0 nova_compute[256940]: 2025-10-02 12:39:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 181 op/s
Oct 02 12:39:58 compute-0 nova_compute[256940]: 2025-10-02 12:39:58.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:58 compute-0 nova_compute[256940]: 2025-10-02 12:39:58.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:58 compute-0 nova_compute[256940]: 2025-10-02 12:39:58.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:39:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4133083068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:39:58 compute-0 nova_compute[256940]: 2025-10-02 12:39:58.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:39:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:39:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.265 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.266 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.266 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.266 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.266 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:59 compute-0 ceph-mon[73668]: pgmap v1920: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 181 op/s
Oct 02 12:39:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 379 KiB/s wr, 142 op/s
Oct 02 12:39:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023802060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:59 compute-0 sudo[323587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:59 compute-0 sudo[323587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:59 compute-0 sudo[323587]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.742 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:59 compute-0 sudo[323614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:59 compute-0 sudo[323614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:59 compute-0 sudo[323614]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.959 2 DEBUG nova.network.neutron [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.979 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:59 compute-0 nova_compute[256940]: 2025-10-02 12:39:59.979 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:40:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.121 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.122 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4363MB free_disk=20.776840209960938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.122 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.123 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.288 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Releasing lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.288 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance network_info: |[{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.289 2 DEBUG oslo_concurrency.lockutils [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.289 2 DEBUG nova.network.neutron [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Refreshing network info cache for port 10723c51-5f55-4285-a6e6-bee50993385c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.295 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Start _get_guest_xml network_info=[{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.301 2 WARNING nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.307 2 DEBUG nova.virt.libvirt.host [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.308 2 DEBUG nova.virt.libvirt.host [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.311 2 DEBUG nova.virt.libvirt.host [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.312 2 DEBUG nova.virt.libvirt.host [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.313 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.313 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.314 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.314 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.315 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.315 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.315 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.316 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.316 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.316 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.316 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.317 2 DEBUG nova.virt.hardware [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.320 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3023802060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.409 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 1546ac1d-4a04-4c5e-ae02-b005461c7731 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.409 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 81cb0055-c522-4ea7-b77e-e21e72c7782e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.410 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.410 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.517 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158801099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:00.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.836 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.869 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.874 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698379328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.990 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:00 compute-0 nova_compute[256940]: 2025-10-02 12:40:00.995 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:01.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.092 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.256 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482779898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.337 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.340 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.340 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.342 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.344 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.345 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.346 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.348 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.349 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.350 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.352 2 DEBUG nova.objects.instance [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81cb0055-c522-4ea7-b77e-e21e72c7782e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.379 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <uuid>81cb0055-c522-4ea7-b77e-e21e72c7782e</uuid>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <name>instance-0000006a</name>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersTestMultiNic-server-1228626942</nova:name>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:40:00</nova:creationTime>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:user uuid="9ef7a5dbc3524ee8a7efcd0d3ae36787">tempest-ServersTestMultiNic-2055566246-project-member</nova:user>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:project uuid="a82ed194b379425aa5e1f31b993eee81">tempest-ServersTestMultiNic-2055566246</nova:project>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:port uuid="8dd716ff-bca3-4034-8507-003cbbc312c9">
Oct 02 12:40:01 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.180" ipVersion="4"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:port uuid="ab029d2e-6d57-498c-b55b-7f90b732cfd4">
Oct 02 12:40:01 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.1.109" ipVersion="4"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <nova:port uuid="10723c51-5f55-4285-a6e6-bee50993385c">
Oct 02 12:40:01 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.32" ipVersion="4"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <system>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="serial">81cb0055-c522-4ea7-b77e-e21e72c7782e</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="uuid">81cb0055-c522-4ea7-b77e-e21e72c7782e</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </system>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <os>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </os>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <features>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </features>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/81cb0055-c522-4ea7-b77e-e21e72c7782e_disk">
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </source>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config">
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </source>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:40:01 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:52:1c:14"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <target dev="tap8dd716ff-bc"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:c3:1e:e8"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <target dev="tapab029d2e-6d"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:dd:d8:a2"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <target dev="tap10723c51-5f"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/console.log" append="off"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <video>
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </video>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:40:01 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:40:01 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:40:01 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:40:01 compute-0 nova_compute[256940]: </domain>
Oct 02 12:40:01 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.381 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Preparing to wait for external event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.381 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.382 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.382 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.382 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Preparing to wait for external event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.383 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.383 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.384 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.384 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Preparing to wait for external event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.384 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.385 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.385 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.386 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.387 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.388 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.388 2 DEBUG os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8dd716ff-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8dd716ff-bc, col_values=(('external_ids', {'iface-id': '8dd716ff-bca3-4034-8507-003cbbc312c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:1c:14', 'vm-uuid': '81cb0055-c522-4ea7-b77e-e21e72c7782e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 NetworkManager[44981]: <info>  [1759408801.4029] manager: (tap8dd716ff-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.414 2 INFO os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc')
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.415 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.415 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.416 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.416 2 DEBUG os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab029d2e-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 ceph-mon[73668]: pgmap v1921: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 379 KiB/s wr, 142 op/s
Oct 02 12:40:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3158801099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3170026607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1698379328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2482779898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab029d2e-6d, col_values=(('external_ids', {'iface-id': 'ab029d2e-6d57-498c-b55b-7f90b732cfd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:1e:e8', 'vm-uuid': '81cb0055-c522-4ea7-b77e-e21e72c7782e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 NetworkManager[44981]: <info>  [1759408801.4228] manager: (tapab029d2e-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.430 2 INFO os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d')
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.431 2 DEBUG nova.virt.libvirt.vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:36Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.431 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.432 2 DEBUG nova.network.os_vif_util [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.432 2 DEBUG os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10723c51-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10723c51-5f, col_values=(('external_ids', {'iface-id': '10723c51-5f55-4285-a6e6-bee50993385c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:d8:a2', 'vm-uuid': '81cb0055-c522-4ea7-b77e-e21e72c7782e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:01 compute-0 NetworkManager[44981]: <info>  [1759408801.4376] manager: (tap10723c51-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.448 2 INFO os_vif [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f')
Oct 02 12:40:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 579 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 168 op/s
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.603 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.603 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.603 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:52:1c:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.604 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:c3:1e:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.604 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:dd:d8:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.604 2 INFO nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Using config drive
Oct 02 12:40:01 compute-0 nova_compute[256940]: 2025-10-02 12:40:01.636 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:02 compute-0 nova_compute[256940]: 2025-10-02 12:40:02.257 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:02 compute-0 nova_compute[256940]: 2025-10-02 12:40:02.257 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:02.342 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:02.343 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:40:02 compute-0 nova_compute[256940]: 2025-10-02 12:40:02.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:02 compute-0 ceph-mon[73668]: pgmap v1922: 305 pgs: 305 active+clean; 579 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 168 op/s
Oct 02 12:40:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:03.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.138 2 INFO nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Creating config drive at /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.143 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzgegiuj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.276 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzgegiuj" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.309 2 DEBUG nova.storage.rbd_utils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.314 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 594 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.581 2 DEBUG nova.network.neutron [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updated VIF entry in instance network info cache for port 10723c51-5f55-4285-a6e6-bee50993385c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.582 2 DEBUG nova.network.neutron [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.679 2 DEBUG oslo_concurrency.lockutils [req-542e606d-bb17-4caf-af18-bb91d10591b7 req-f61fb2b1-5922-4638-8df3-c8b3c5005355 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-81cb0055-c522-4ea7-b77e-e21e72c7782e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.702 2 DEBUG oslo_concurrency.processutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config 81cb0055-c522-4ea7-b77e-e21e72c7782e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.703 2 INFO nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Deleting local config drive /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e/disk.config because it was imported into RBD.
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.7497] manager: (tap8dd716ff-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Oct 02 12:40:03 compute-0 kernel: tap8dd716ff-bc: entered promiscuous mode
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00490|binding|INFO|Claiming lport 8dd716ff-bca3-4034-8507-003cbbc312c9 for this chassis.
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00491|binding|INFO|8dd716ff-bca3-4034-8507-003cbbc312c9: Claiming fa:16:3e:52:1c:14 10.100.0.180
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.7678] manager: (tapab029d2e-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Oct 02 12:40:03 compute-0 systemd-udevd[323807]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:03 compute-0 systemd-udevd[323809]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.7844] manager: (tap10723c51-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.785 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:1c:14 10.100.0.180'], port_security=['fa:16:3e:52:1c:14 10.100.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.180/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1168d29-288f-4f27-9a1f-f792d581c8c6, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8dd716ff-bca3-4034-8507-003cbbc312c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:03 compute-0 systemd-udevd[323813]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.786 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8dd716ff-bca3-4034-8507-003cbbc312c9 in datapath 05d77b1e-d823-4ea1-8adf-0b9faac2499a bound to our chassis
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.789 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05d77b1e-d823-4ea1-8adf-0b9faac2499a
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.7988] device (tap8dd716ff-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.7998] device (tap8dd716ff-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.806 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90c78370-df97-40d0-b105-112a1d8cc282]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.808 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05d77b1e-d1 in ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.810 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05d77b1e-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.810 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[51b8dd61-f0c9-42cc-927e-bf7dc5681811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.811 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb29434-9282-4ce0-8304-d656089a3296]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.8137] device (tapab029d2e-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:40:03 compute-0 kernel: tapab029d2e-6d: entered promiscuous mode
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.8156] device (tap10723c51-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:40:03 compute-0 kernel: tap10723c51-5f: entered promiscuous mode
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.8168] device (tapab029d2e-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.8174] device (tap10723c51-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00492|binding|INFO|Claiming lport ab029d2e-6d57-498c-b55b-7f90b732cfd4 for this chassis.
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00493|binding|INFO|ab029d2e-6d57-498c-b55b-7f90b732cfd4: Claiming fa:16:3e:c3:1e:e8 10.100.1.109
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00494|binding|INFO|Claiming lport 10723c51-5f55-4285-a6e6-bee50993385c for this chassis.
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00495|binding|INFO|10723c51-5f55-4285-a6e6-bee50993385c: Claiming fa:16:3e:dd:d8:a2 10.100.0.32
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.826 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1185d3-70de-4b82-90b0-e6565296bea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00496|binding|INFO|Setting lport 8dd716ff-bca3-4034-8507-003cbbc312c9 ovn-installed in OVS
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-0 systemd-machined[210927]: New machine qemu-51-instance-0000006a.
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.851 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[419e484a-403f-4263-af5c-9831acab20cd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000006a.
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00497|binding|INFO|Setting lport 8dd716ff-bca3-4034-8507-003cbbc312c9 up in Southbound
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.864 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:1e:e8 10.100.1.109'], port_security=['fa:16:3e:c3:1e:e8 10.100.1.109'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.109/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d974926-8d89-4280-b731-7d249a8b4980', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f9ed708-7e4e-44c6-83e2-e16fb95e9139, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ab029d2e-6d57-498c-b55b-7f90b732cfd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.866 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:d8:a2 10.100.0.32'], port_security=['fa:16:3e:dd:d8:a2 10.100.0.32'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.32/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1168d29-288f-4f27-9a1f-f792d581c8c6, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=10723c51-5f55-4285-a6e6-bee50993385c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00498|binding|INFO|Setting lport 10723c51-5f55-4285-a6e6-bee50993385c ovn-installed in OVS
Oct 02 12:40:03 compute-0 ovn_controller[148123]: 2025-10-02T12:40:03Z|00499|binding|INFO|Setting lport ab029d2e-6d57-498c-b55b-7f90b732cfd4 ovn-installed in OVS
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.892 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[353538f0-f2ea-486c-929a-00d6a249fcbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 nova_compute[256940]: 2025-10-02 12:40:03.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.897 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f69b6611-4b6e-43dc-9787-4601c2536579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.8979] manager: (tap05d77b1e-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Oct 02 12:40:03 compute-0 systemd-udevd[323816]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:03 compute-0 sudo[323823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:03 compute-0 sudo[323823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:03 compute-0 sudo[323823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.942 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8a4672-3946-4967-94d4-947bb72c63fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.946 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f230be21-9b10-4008-b92f-75cb1122eecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 NetworkManager[44981]: <info>  [1759408803.9787] device (tap05d77b1e-d0): carrier: link connected
Oct 02 12:40:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:03.986 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5030d1-f931-44d3-a335-acf366c97331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:03 compute-0 sudo[323870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:03 compute-0 sudo[323870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:03 compute-0 sudo[323870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-0 ovn_controller[148123]: 2025-10-02T12:40:04Z|00500|binding|INFO|Setting lport 10723c51-5f55-4285-a6e6-bee50993385c up in Southbound
Oct 02 12:40:04 compute-0 ovn_controller[148123]: 2025-10-02T12:40:04Z|00501|binding|INFO|Setting lport ab029d2e-6d57-498c-b55b-7f90b732cfd4 up in Southbound
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.019 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[452db854-bcc8-489e-a212-5dedd5dca7fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05d77b1e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:26:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669946, 'reachable_time': 28774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323897, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[051f4abd-f052-40ef-94c6-425e1e781323]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:269b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669946, 'tstamp': 669946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323905, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.060 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dc6177-b1ee-4756-840a-a8766c2bc1e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05d77b1e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:26:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669946, 'reachable_time': 28774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323920, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 sudo[323899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:04 compute-0 sudo[323899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:04 compute-0 sudo[323899]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.105 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aad38939-3f98-44a2-bfe7-1df7cc9c3a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 sudo[323928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:40:04 compute-0 sudo[323928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.179 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b164b0-7ce3-42ff-a495-a12fb3a13854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.181 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d77b1e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.181 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.181 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05d77b1e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-0 NetworkManager[44981]: <info>  [1759408804.1841] manager: (tap05d77b1e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct 02 12:40:04 compute-0 kernel: tap05d77b1e-d0: entered promiscuous mode
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.186 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05d77b1e-d0, col_values=(('external_ids', {'iface-id': '3c165bbe-0a4c-4ff3-a272-ee3abbb82d68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.189 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05d77b1e-d823-4ea1-8adf-0b9faac2499a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05d77b1e-d823-4ea1-8adf-0b9faac2499a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.189 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4a716e-e2c8-4750-9a41-26c6ec9030b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.190 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-05d77b1e-d823-4ea1-8adf-0b9faac2499a
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/05d77b1e-d823-4ea1-8adf-0b9faac2499a.pid.haproxy
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 05d77b1e-d823-4ea1-8adf-0b9faac2499a
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:40:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:04.191 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'env', 'PROCESS_TAG=haproxy-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05d77b1e-d823-4ea1-8adf-0b9faac2499a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:40:04 compute-0 ovn_controller[148123]: 2025-10-02T12:40:04Z|00502|binding|INFO|Releasing lport 3c165bbe-0a4c-4ff3-a272-ee3abbb82d68 from this chassis (sb_readonly=0)
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.446 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:40:04 compute-0 podman[324043]: 2025-10-02 12:40:04.567007826 +0000 UTC m=+0.023611068 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:40:04 compute-0 sudo[323928]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:40:04 compute-0 podman[324043]: 2025-10-02 12:40:04.731476799 +0000 UTC m=+0.188080011 container create 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:04 compute-0 ceph-mon[73668]: pgmap v1923: 305 pgs: 305 active+clean; 594 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:40:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1075071985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/938358496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-0 systemd[1]: Started libpod-conmon-568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693.scope.
Oct 02 12:40:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:04.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.829 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408804.8284686, 81cb0055-c522-4ea7-b77e-e21e72c7782e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.829 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] VM Started (Lifecycle Event)
Oct 02 12:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ecd4fc79665f47984f7cf03e7d4dc50d22a85ab7f1817005cec19b0611a818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:04 compute-0 podman[324043]: 2025-10-02 12:40:04.920197685 +0000 UTC m=+0.376800927 container init 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:04 compute-0 podman[324070]: 2025-10-02 12:40:04.924852457 +0000 UTC m=+0.149197856 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:04 compute-0 podman[324043]: 2025-10-02 12:40:04.926455099 +0000 UTC m=+0.383058311 container start 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3cf4b5a0-f1f7-4f62-b89c-5a73a4c20a67 does not exist
Oct 02 12:40:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 72c896dc-a518-43b9-aaad-05ed6f9917cb does not exist
Oct 02 12:40:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6a3d7385-a27b-4b47-90e7-e4aaf75a0640 does not exist
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:40:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.940 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.944 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408804.8287828, 81cb0055-c522-4ea7-b77e-e21e72c7782e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:04 compute-0 nova_compute[256940]: 2025-10-02 12:40:04.944 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] VM Paused (Lifecycle Event)
Oct 02 12:40:04 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [NOTICE]   (324112) : New worker (324115) forked
Oct 02 12:40:04 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [NOTICE]   (324112) : Loading success.
Oct 02 12:40:04 compute-0 podman[324069]: 2025-10-02 12:40:04.982883132 +0000 UTC m=+0.207489338 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:40:05 compute-0 sudo[324114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:05 compute-0 sudo[324114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:05 compute-0 sudo[324114]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.016 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ab029d2e-6d57-498c-b55b-7f90b732cfd4 in datapath 4d974926-8d89-4280-b731-7d249a8b4980 unbound from our chassis
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.018 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d974926-8d89-4280-b731-7d249a8b4980
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.031 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09081e09-3512-4a36-873e-f82707722173]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.034 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d974926-81 in ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.037 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d974926-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef38d51-3003-4233-8000-aed90ffce436]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.038 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8640ee-49a0-4bd8-ad50-53b59c6dbd0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.051 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[383c606a-8fde-44da-9369-ad697a029bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 sudo[324149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:05 compute-0 sudo[324149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.078 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b7863679-82ad-4d2e-ba69-e3f80d8a55de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:05.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:05 compute-0 sudo[324149]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.110 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[04f357a8-75c0-406c-9ab5-fc25dcf26769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 NetworkManager[44981]: <info>  [1759408805.1174] manager: (tap4d974926-80): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.116 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d141719-653b-4277-8930-d74620770521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.138 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.139 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.139 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.139 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.152 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[90f18a35-70d5-4953-9b57-c767347c45ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 sudo[324178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.156 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ac476b4f-19e4-4c33-9d32-2a8451de3277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 sudo[324178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:05 compute-0 sudo[324178]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:05 compute-0 NetworkManager[44981]: <info>  [1759408805.1812] device (tap4d974926-80): carrier: link connected
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.189 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c23588b2-f252-4229-acf5-dea71d651104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.192 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.197 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.214 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09153dbb-ff54-4024-a2b7-05d8bae98ea5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d974926-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:33:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670066, 'reachable_time': 20136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324226, 'error': None, 'target': 'ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 sudo[324209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.235 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f324df0-bd44-4605-b9c3-fcb816fd70ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:33a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670066, 'tstamp': 670066}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324233, 'error': None, 'target': 'ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 sudo[324209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.256 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce496a2c-63f8-48cd-a2f8-ec3c25977ea5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d974926-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:33:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670066, 'reachable_time': 20136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324236, 'error': None, 'target': 'ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.288 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb5e9cb-9681-42b1-ba28-a40843238ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.313 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.339 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c766642b-5760-4f57-b658-a67c8dfbc64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.341 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d974926-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.341 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.342 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d974926-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:05 compute-0 NetworkManager[44981]: <info>  [1759408805.3457] manager: (tap4d974926-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Oct 02 12:40:05 compute-0 kernel: tap4d974926-80: entered promiscuous mode
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.358 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d974926-80, col_values=(('external_ids', {'iface-id': 'a9fdf9c2-832e-4b97-bb86-271b47a56e76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:05 compute-0 ovn_controller[148123]: 2025-10-02T12:40:05Z|00503|binding|INFO|Releasing lport a9fdf9c2-832e-4b97-bb86-271b47a56e76 from this chassis (sb_readonly=0)
Oct 02 12:40:05 compute-0 nova_compute[256940]: 2025-10-02 12:40:05.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.391 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d974926-8d89-4280-b731-7d249a8b4980.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d974926-8d89-4280-b731-7d249a8b4980.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.392 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[872bf8f3-8aa7-4437-8a06-7c548ec8c6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.393 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-4d974926-8d89-4280-b731-7d249a8b4980
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/4d974926-8d89-4280-b731-7d249a8b4980.pid.haproxy
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 4d974926-8d89-4280-b731-7d249a8b4980
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:40:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:05.394 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980', 'env', 'PROCESS_TAG=haproxy-4d974926-8d89-4280-b731-7d249a8b4980', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d974926-8d89-4280-b731-7d249a8b4980.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:40:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 02 12:40:05 compute-0 podman[324288]: 2025-10-02 12:40:05.682277809 +0000 UTC m=+0.100294199 container create 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:40:05 compute-0 podman[324288]: 2025-10-02 12:40:05.605983208 +0000 UTC m=+0.023999618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:05 compute-0 systemd[1]: Started libpod-conmon-13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83.scope.
Oct 02 12:40:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:05 compute-0 podman[324288]: 2025-10-02 12:40:05.948959111 +0000 UTC m=+0.366975501 container init 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:40:05 compute-0 podman[324288]: 2025-10-02 12:40:05.958525361 +0000 UTC m=+0.376541751 container start 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:40:05 compute-0 friendly_pasteur[324322]: 167 167
Oct 02 12:40:05 compute-0 systemd[1]: libpod-13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83.scope: Deactivated successfully.
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:06 compute-0 podman[324288]: 2025-10-02 12:40:06.036549488 +0000 UTC m=+0.454565888 container attach 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:40:06 compute-0 podman[324288]: 2025-10-02 12:40:06.037700048 +0000 UTC m=+0.455716448 container died 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:40:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.211 2 DEBUG nova.compute.manager [req-7bdb3939-6c9c-4115-9971-89b0b080baf2 req-8993e2ca-d6e8-4447-b9e0-ed98a3c10c0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.212 2 DEBUG oslo_concurrency.lockutils [req-7bdb3939-6c9c-4115-9971-89b0b080baf2 req-8993e2ca-d6e8-4447-b9e0-ed98a3c10c0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.212 2 DEBUG oslo_concurrency.lockutils [req-7bdb3939-6c9c-4115-9971-89b0b080baf2 req-8993e2ca-d6e8-4447-b9e0-ed98a3c10c0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.212 2 DEBUG oslo_concurrency.lockutils [req-7bdb3939-6c9c-4115-9971-89b0b080baf2 req-8993e2ca-d6e8-4447-b9e0-ed98a3c10c0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.213 2 DEBUG nova.compute.manager [req-7bdb3939-6c9c-4115-9971-89b0b080baf2 req-8993e2ca-d6e8-4447-b9e0-ed98a3c10c0a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Processing event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.258 2 DEBUG nova.compute.manager [req-beaddc40-b84e-4523-b78e-0967ae5803d8 req-79608859-7dfb-4c2f-9195-4a039c08c0fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.259 2 DEBUG oslo_concurrency.lockutils [req-beaddc40-b84e-4523-b78e-0967ae5803d8 req-79608859-7dfb-4c2f-9195-4a039c08c0fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.259 2 DEBUG oslo_concurrency.lockutils [req-beaddc40-b84e-4523-b78e-0967ae5803d8 req-79608859-7dfb-4c2f-9195-4a039c08c0fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.259 2 DEBUG oslo_concurrency.lockutils [req-beaddc40-b84e-4523-b78e-0967ae5803d8 req-79608859-7dfb-4c2f-9195-4a039c08c0fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.259 2 DEBUG nova.compute.manager [req-beaddc40-b84e-4523-b78e-0967ae5803d8 req-79608859-7dfb-4c2f-9195-4a039c08c0fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Processing event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:06 compute-0 nova_compute[256940]: 2025-10-02 12:40:06.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-adcce625a9ef257d5579d3ec5305d20c6cdd5ee01b702b4e0faa8966061fc5f9-merged.mount: Deactivated successfully.
Oct 02 12:40:06 compute-0 podman[324329]: 2025-10-02 12:40:06.653149814 +0000 UTC m=+0.862290601 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:40:06 compute-0 podman[324288]: 2025-10-02 12:40:06.750622878 +0000 UTC m=+1.168639268 container remove 13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:07 compute-0 systemd[1]: libpod-conmon-13c6712076b00c6f57e974263ff738bbd5a8218f8297d38b55a6b7048ecbfa83.scope: Deactivated successfully.
Oct 02 12:40:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:07 compute-0 ceph-mon[73668]: pgmap v1924: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 02 12:40:07 compute-0 podman[324329]: 2025-10-02 12:40:07.140423084 +0000 UTC m=+1.349563851 container create bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:40:07 compute-0 systemd[1]: Started libpod-conmon-bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46.scope.
Oct 02 12:40:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c3c704f4ece4223da346a222379e7ed3731acaf49ed32bbacadca28a2e3647/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 podman[324365]: 2025-10-02 12:40:07.271920637 +0000 UTC m=+0.209352676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:07 compute-0 podman[324365]: 2025-10-02 12:40:07.379636599 +0000 UTC m=+0.317068618 container create eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:40:07 compute-0 systemd[1]: Started libpod-conmon-eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729.scope.
Oct 02 12:40:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Oct 02 12:40:07 compute-0 podman[324329]: 2025-10-02 12:40:07.648257161 +0000 UTC m=+1.857398018 container init bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:40:07 compute-0 podman[324329]: 2025-10-02 12:40:07.659756611 +0000 UTC m=+1.868897418 container start bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:07 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [NOTICE]   (324390) : New worker (324392) forked
Oct 02 12:40:07 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [NOTICE]   (324390) : Loading success.
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.787 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 10723c51-5f55-4285-a6e6-bee50993385c in datapath 05d77b1e-d823-4ea1-8adf-0b9faac2499a unbound from our chassis
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.789 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05d77b1e-d823-4ea1-8adf-0b9faac2499a
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.806 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb06064-9466-47a6-a1ab-8dc54b39bb18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.850 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[75ba2607-bfad-4626-9953-819f1a6717df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.855 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f923dbfe-3d61-4f89-abf3-5651e40f8699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.891 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[58afdf42-9261-46a1-95ae-71c4fa1ee35a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 podman[324365]: 2025-10-02 12:40:07.909240064 +0000 UTC m=+0.846672133 container init eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.917 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[845b50bb-4bb3-4a94-aff0-db5958dbbc23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05d77b1e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:26:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669946, 'reachable_time': 28774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324406, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 podman[324365]: 2025-10-02 12:40:07.92516639 +0000 UTC m=+0.862598449 container start eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:40:07 compute-0 podman[324365]: 2025-10-02 12:40:07.953628453 +0000 UTC m=+0.891060512 container attach eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.954 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78bd6a6b-f3ed-43a0-91bd-f83b1b2cd652]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap05d77b1e-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669962, 'tstamp': 669962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324409, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap05d77b1e-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669965, 'tstamp': 669965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324409, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.956 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d77b1e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-0 nova_compute[256940]: 2025-10-02 12:40:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-0 nova_compute[256940]: 2025-10-02 12:40:07.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.960 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05d77b1e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.960 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.960 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05d77b1e-d0, col_values=(('external_ids', {'iface-id': '3c165bbe-0a4c-4ff3-a272-ee3abbb82d68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:07.960 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:08.345 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.522 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.522 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.523 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.523 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.523 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No event matching network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 in dict_keys([('network-vif-plugged', '10723c51-5f55-4285-a6e6-bee50993385c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.524 2 WARNING nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 for instance with vm_state building and task_state spawning.
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.524 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.524 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.524 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.525 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.525 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Processing event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.525 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.525 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.525 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.526 2 DEBUG oslo_concurrency.lockutils [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.526 2 DEBUG nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.526 2 WARNING nova.compute.manager [req-356debb7-0f5a-401e-bdf4-62864219a411 req-2aed557c-9305-4bf8-8d0f-33b837c1a667 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c for instance with vm_state building and task_state spawning.
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.527 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.532 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408808.5322268, 81cb0055-c522-4ea7-b77e-e21e72c7782e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.533 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] VM Resumed (Lifecycle Event)
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.535 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.543 2 INFO nova.virt.libvirt.driver [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance spawned successfully.
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.544 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.585 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.585 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.586 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.586 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.587 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.587 2 WARNING nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 for instance with vm_state building and task_state spawning.
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.596 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.605 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.609 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.609 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.610 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.610 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.611 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.611 2 DEBUG nova.virt.libvirt.driver [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-0 cranky_jennings[324386]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:40:08 compute-0 cranky_jennings[324386]: --> relative data size: 1.0
Oct 02 12:40:08 compute-0 cranky_jennings[324386]: --> All data devices are unavailable
Oct 02 12:40:08 compute-0 systemd[1]: libpod-eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729.scope: Deactivated successfully.
Oct 02 12:40:08 compute-0 podman[324365]: 2025-10-02 12:40:08.799059082 +0000 UTC m=+1.736491111 container died eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.815 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:40:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.930 2 INFO nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Took 32.17 seconds to spawn the instance on the hypervisor.
Oct 02 12:40:08 compute-0 nova_compute[256940]: 2025-10-02 12:40:08.931 2 DEBUG nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-37ad78fa6c74e815e2c3c06108ed74442682d03648c28275c444f0db1d917fee-merged.mount: Deactivated successfully.
Oct 02 12:40:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.198 2 INFO nova.compute.manager [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Took 33.84 seconds to build instance.
Oct 02 12:40:09 compute-0 ceph-mon[73668]: pgmap v1925: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.304 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:09 compute-0 podman[324365]: 2025-10-02 12:40:09.322803074 +0000 UTC m=+2.260235093 container remove eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:40:09 compute-0 sudo[324209]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:09 compute-0 systemd[1]: libpod-conmon-eebc59ef7fc64c9927ca6024ff439e5b592d90511bffc483e65fe0c0038b7729.scope: Deactivated successfully.
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.390 2 DEBUG oslo_concurrency.lockutils [None req-1bfaca40-c1f9-49b0-9816-fad0252a819a 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:09 compute-0 sudo[324433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:09 compute-0 sudo[324433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:09 compute-0 sudo[324433]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:09 compute-0 sudo[324458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:09 compute-0 sudo[324458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:09 compute-0 sudo[324458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.530 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.530 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:40:09 compute-0 nova_compute[256940]: 2025-10-02 12:40:09.531 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 498 KiB/s rd, 3.2 MiB/s wr, 106 op/s
Oct 02 12:40:09 compute-0 sudo[324483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:09 compute-0 sudo[324483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:09 compute-0 sudo[324483]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:09 compute-0 sudo[324508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:40:09 compute-0 sudo[324508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.01255632 +0000 UTC m=+0.023356461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.169352623 +0000 UTC m=+0.180152774 container create 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:40:10 compute-0 systemd[1]: Started libpod-conmon-44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef.scope.
Oct 02 12:40:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.39908608 +0000 UTC m=+0.409886221 container init 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.405776955 +0000 UTC m=+0.416577076 container start 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:40:10 compute-0 nifty_haibt[324587]: 167 167
Oct 02 12:40:10 compute-0 systemd[1]: libpod-44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef.scope: Deactivated successfully.
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.487918569 +0000 UTC m=+0.498718690 container attach 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.4898748 +0000 UTC m=+0.500674931 container died 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1bd798c5f149aaa4b5550231394d99df365a10087aa264632fbc43ef32e0db2-merged.mount: Deactivated successfully.
Oct 02 12:40:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:10.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:10 compute-0 podman[324573]: 2025-10-02 12:40:10.968361161 +0000 UTC m=+0.979161282 container remove 44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:40:10 compute-0 systemd[1]: libpod-conmon-44794eafc73c3c2de4d6f2c00b783df5d73a0340083e57586469b30418a72bef.scope: Deactivated successfully.
Oct 02 12:40:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:11.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:11 compute-0 podman[324616]: 2025-10-02 12:40:11.166162645 +0000 UTC m=+0.028894246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.300 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.301 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:11 compute-0 podman[324616]: 2025-10-02 12:40:11.304004543 +0000 UTC m=+0.166736154 container create 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:40:11 compute-0 systemd[1]: Started libpod-conmon-602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad.scope.
Oct 02 12:40:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812ccebe0cee0a78add9d80526ed38c282e4173f5c71fc1e2263e6a58961e5cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812ccebe0cee0a78add9d80526ed38c282e4173f5c71fc1e2263e6a58961e5cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812ccebe0cee0a78add9d80526ed38c282e4173f5c71fc1e2263e6a58961e5cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812ccebe0cee0a78add9d80526ed38c282e4173f5c71fc1e2263e6a58961e5cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:11 compute-0 ceph-mon[73668]: pgmap v1926: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 498 KiB/s rd, 3.2 MiB/s wr, 106 op/s
Oct 02 12:40:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2693718750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:11 compute-0 podman[324616]: 2025-10-02 12:40:11.4460092 +0000 UTC m=+0.308740791 container init 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:40:11 compute-0 podman[324616]: 2025-10-02 12:40:11.45253793 +0000 UTC m=+0.315269501 container start 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.473 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:40:11 compute-0 podman[324616]: 2025-10-02 12:40:11.482999265 +0000 UTC m=+0.345730867 container attach 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:40:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 162 op/s
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.654 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.656 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.666 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:40:11 compute-0 nova_compute[256940]: 2025-10-02 12:40:11.666 2 INFO nova.compute.claims [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.077 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.077 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.078 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.078 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.078 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.080 2 INFO nova.compute.manager [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Terminating instance
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.083 2 DEBUG nova.compute.manager [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:40:12 compute-0 kernel: tap8dd716ff-bc (unregistering): left promiscuous mode
Oct 02 12:40:12 compute-0 NetworkManager[44981]: <info>  [1759408812.1420] device (tap8dd716ff-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00504|binding|INFO|Releasing lport 8dd716ff-bca3-4034-8507-003cbbc312c9 from this chassis (sb_readonly=0)
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00505|binding|INFO|Setting lport 8dd716ff-bca3-4034-8507-003cbbc312c9 down in Southbound
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00506|binding|INFO|Removing iface tap8dd716ff-bc ovn-installed in OVS
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.160 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:12 compute-0 kernel: tapab029d2e-6d (unregistering): left promiscuous mode
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.176 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:1c:14 10.100.0.180'], port_security=['fa:16:3e:52:1c:14 10.100.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.180/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1168d29-288f-4f27-9a1f-f792d581c8c6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8dd716ff-bca3-4034-8507-003cbbc312c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.177 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8dd716ff-bca3-4034-8507-003cbbc312c9 in datapath 05d77b1e-d823-4ea1-8adf-0b9faac2499a unbound from our chassis
Oct 02 12:40:12 compute-0 NetworkManager[44981]: <info>  [1759408812.1789] device (tapab029d2e-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.179 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05d77b1e-d823-4ea1-8adf-0b9faac2499a
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00507|binding|INFO|Releasing lport ab029d2e-6d57-498c-b55b-7f90b732cfd4 from this chassis (sb_readonly=0)
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00508|binding|INFO|Setting lport ab029d2e-6d57-498c-b55b-7f90b732cfd4 down in Southbound
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00509|binding|INFO|Removing iface tapab029d2e-6d ovn-installed in OVS
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.202 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[faa152d7-a49e-4a64-bc7e-49e818ea64e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 kernel: tap10723c51-5f (unregistering): left promiscuous mode
Oct 02 12:40:12 compute-0 NetworkManager[44981]: <info>  [1759408812.2110] device (tap10723c51-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00510|binding|INFO|Releasing lport 10723c51-5f55-4285-a6e6-bee50993385c from this chassis (sb_readonly=1)
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00511|binding|INFO|Removing iface tap10723c51-5f ovn-installed in OVS
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00512|if_status|INFO|Dropped 2 log messages in last 127 seconds (most recently, 127 seconds ago) due to excessive rate
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00513|if_status|INFO|Not setting lport 10723c51-5f55-4285-a6e6-bee50993385c down as sb is readonly
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.237 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5516e4ce-930f-4263-af26-72b649ebe2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.239 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[387d0417-02d9-475c-a5cd-4c8bae043c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.264 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4b42d8e5-a570-42e6-90d1-22549db3ece9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 ovn_controller[148123]: 2025-10-02T12:40:12Z|00514|binding|INFO|Setting lport 10723c51-5f55-4285-a6e6-bee50993385c down in Southbound
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.280 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:1e:e8 10.100.1.109'], port_security=['fa:16:3e:c3:1e:e8 10.100.1.109'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.109/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d974926-8d89-4280-b731-7d249a8b4980', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f9ed708-7e4e-44c6-83e2-e16fb95e9139, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ab029d2e-6d57-498c-b55b-7f90b732cfd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.280 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[705759d0-497d-40fa-8fbe-df97160d550e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05d77b1e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:26:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669946, 'reachable_time': 28774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324662, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 02 12:40:12 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006a.scope: Consumed 4.509s CPU time.
Oct 02 12:40:12 compute-0 systemd-machined[210927]: Machine qemu-51-instance-0000006a terminated.
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.304 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1cbc91-b7b8-4b09-928d-362e0c03e538]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap05d77b1e-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669962, 'tstamp': 669962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324663, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap05d77b1e-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669965, 'tstamp': 669965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324663, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.306 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d77b1e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]: {
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:     "1": [
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:         {
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "devices": [
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "/dev/loop3"
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             ],
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "lv_name": "ceph_lv0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "lv_size": "7511998464",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "name": "ceph_lv0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "tags": {
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.cluster_name": "ceph",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.crush_device_class": "",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.encrypted": "0",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.osd_id": "1",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.type": "block",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:                 "ceph.vdo": "0"
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             },
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "type": "block",
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:             "vg_name": "ceph_vg0"
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:         }
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]:     ]
Oct 02 12:40:12 compute-0 funny_ramanujan[324632]: }
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 NetworkManager[44981]: <info>  [1759408812.3152] manager: (tapab029d2e-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.320 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:d8:a2 10.100.0.32'], port_security=['fa:16:3e:dd:d8:a2 10.100.0.32'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.32/24', 'neutron:device_id': '81cb0055-c522-4ea7-b77e-e21e72c7782e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1168d29-288f-4f27-9a1f-f792d581c8c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=10723c51-5f55-4285-a6e6-bee50993385c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:12 compute-0 NetworkManager[44981]: <info>  [1759408812.3251] manager: (tap10723c51-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.328 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05d77b1e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.328 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.328 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05d77b1e-d0, col_values=(('external_ids', {'iface-id': '3c165bbe-0a4c-4ff3-a272-ee3abbb82d68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.329 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.336 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ab029d2e-6d57-498c-b55b-7f90b732cfd4 in datapath 4d974926-8d89-4280-b731-7d249a8b4980 unbound from our chassis
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.339 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d974926-8d89-4280-b731-7d249a8b4980, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.340 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4f31dfb1-05e8-486d-bfef-0d278174e3d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:12.343 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980 namespace which is not needed anymore
Oct 02 12:40:12 compute-0 podman[324616]: 2025-10-02 12:40:12.345061008 +0000 UTC m=+1.207792599 container died 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:40:12 compute-0 systemd[1]: libpod-602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad.scope: Deactivated successfully.
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.352 2 INFO nova.virt.libvirt.driver [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Instance destroyed successfully.
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.352 2 DEBUG nova.objects.instance [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'resources' on Instance uuid 81cb0055-c522-4ea7-b77e-e21e72c7782e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.398 2 DEBUG nova.virt.libvirt.vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:09Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.399 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.400 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.400 2 DEBUG os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.402 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8dd716ff-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.413 2 INFO os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:1c:14,bridge_name='br-int',has_traffic_filtering=True,id=8dd716ff-bca3-4034-8507-003cbbc312c9,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8dd716ff-bc')
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.414 2 DEBUG nova.virt.libvirt.vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:09Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.414 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "address": "fa:16:3e:c3:1e:e8", "network": {"id": "4d974926-8d89-4280-b731-7d249a8b4980", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-235988702", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.109", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab029d2e-6d", "ovs_interfaceid": "ab029d2e-6d57-498c-b55b-7f90b732cfd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.415 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.415 2 DEBUG os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab029d2e-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.423 2 INFO os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1e:e8,bridge_name='br-int',has_traffic_filtering=True,id=ab029d2e-6d57-498c-b55b-7f90b732cfd4,network=Network(4d974926-8d89-4280-b731-7d249a8b4980),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab029d2e-6d')
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.424 2 DEBUG nova.virt.libvirt.vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1228626942',display_name='tempest-ServersTestMultiNic-server-1228626942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1228626942',id=106,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-orkrfypo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:09Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=81cb0055-c522-4ea7-b77e-e21e72c7782e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.424 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.424 2 DEBUG nova.network.os_vif_util [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.425 2 DEBUG os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10723c51-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.431 2 INFO os_vif [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:d8:a2,bridge_name='br-int',has_traffic_filtering=True,id=10723c51-5f55-4285-a6e6-bee50993385c,network=Network(05d77b1e-d823-4ea1-8adf-0b9faac2499a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10723c51-5f')
Oct 02 12:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-812ccebe0cee0a78add9d80526ed38c282e4173f5c71fc1e2263e6a58961e5cb-merged.mount: Deactivated successfully.
Oct 02 12:40:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/306191312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:12 compute-0 ceph-mon[73668]: pgmap v1927: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 162 op/s
Oct 02 12:40:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926463960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.703 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.708 2 DEBUG nova.compute.provider_tree [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:12 compute-0 podman[324616]: 2025-10-02 12:40:12.831856386 +0000 UTC m=+1.694587967 container remove 602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:40:12 compute-0 sudo[324508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:12 compute-0 nova_compute[256940]: 2025-10-02 12:40:12.919 2 DEBUG nova.scheduler.client.report [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [NOTICE]   (324390) : haproxy version is 2.8.14-c23fe91
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [NOTICE]   (324390) : path to executable is /usr/sbin/haproxy
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [WARNING]  (324390) : Exiting Master process...
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [WARNING]  (324390) : Exiting Master process...
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [ALERT]    (324390) : Current worker (324392) exited with code 143 (Terminated)
Oct 02 12:40:12 compute-0 neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980[324380]: [WARNING]  (324390) : All workers exited. Exiting... (0)
Oct 02 12:40:12 compute-0 systemd[1]: libpod-conmon-602b152a1cd2087e892f9eeb5f2e13bfc8c8f201cf3003fd89bc48f9d394adad.scope: Deactivated successfully.
Oct 02 12:40:12 compute-0 systemd[1]: libpod-bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46.scope: Deactivated successfully.
Oct 02 12:40:12 compute-0 podman[324770]: 2025-10-02 12:40:12.952220198 +0000 UTC m=+0.065360317 container died bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:12 compute-0 sudo[324771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:12 compute-0 sudo[324771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:12 compute-0 sudo[324771]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:13 compute-0 sudo[324821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:13 compute-0 sudo[324821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:13 compute-0 sudo[324821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:13 compute-0 sudo[324846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:13 compute-0 sudo[324846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:13 compute-0 sudo[324846]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:13.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.118 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.119 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46-userdata-shm.mount: Deactivated successfully.
Oct 02 12:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c3c704f4ece4223da346a222379e7ed3731acaf49ed32bbacadca28a2e3647-merged.mount: Deactivated successfully.
Oct 02 12:40:13 compute-0 sudo[324873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:40:13 compute-0 sudo[324873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:13 compute-0 podman[324770]: 2025-10-02 12:40:13.370799025 +0000 UTC m=+0.483939104 container cleanup bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.376 2 DEBUG nova.compute.manager [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.376 2 DEBUG oslo_concurrency.lockutils [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.377 2 DEBUG oslo_concurrency.lockutils [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.377 2 DEBUG oslo_concurrency.lockutils [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.377 2 DEBUG nova.compute.manager [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-unplugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.377 2 DEBUG nova.compute.manager [req-048a5823-637c-4345-bbb4-97a4e8753cbb req-85c590ed-727c-42a7-8c46-f2e8f945f1d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:40:13 compute-0 systemd[1]: libpod-conmon-bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46.scope: Deactivated successfully.
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.429 2 DEBUG nova.compute.manager [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.430 2 DEBUG oslo_concurrency.lockutils [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.430 2 DEBUG oslo_concurrency.lockutils [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.430 2 DEBUG oslo_concurrency.lockutils [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.430 2 DEBUG nova.compute.manager [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-unplugged-8dd716ff-bca3-4034-8507-003cbbc312c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.431 2 DEBUG nova.compute.manager [req-f1a8ca79-bece-414a-a559-5a070a42376f req-c251a329-7c7c-4c5f-8d4f-80c18886c57d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-8dd716ff-bca3-4034-8507-003cbbc312c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.523 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.524 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:40:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Oct 02 12:40:13 compute-0 podman[324925]: 2025-10-02 12:40:13.62342932 +0000 UTC m=+0.225930569 container remove bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.630 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d42cbb1f-c98c-42e9-a524-686d55eadab7]: (4, ('Thu Oct  2 12:40:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980 (bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46)\nbdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46\nThu Oct  2 12:40:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980 (bdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46)\nbdb7c540a4536cef12d8964a91117c22efdd214320e0ccf634104faa96505a46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.632 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0dafefd5-9812-4243-923f-38bbd8bead44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.634 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d974926-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-0 kernel: tap4d974926-80: left promiscuous mode
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.657 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7d55fb17-0744-4422-8afa-c230cf0d7e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.684 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8466c3f6-33f8-4564-9420-4d05d20b5503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.686 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fec8ea7d-6c9c-49af-bcb5-cc04d51a844a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.704 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[470634fc-40d4-4709-866e-321ea61e6afe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670058, 'reachable_time': 26241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324961, 'error': None, 'target': 'ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.706 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d974926-8d89-4280-b731-7d249a8b4980 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.706 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[26db0ca2-46b4-40bd-a929-eeb1a68d5983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.708 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 10723c51-5f55-4285-a6e6-bee50993385c in datapath 05d77b1e-d823-4ea1-8adf-0b9faac2499a unbound from our chassis
Oct 02 12:40:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d974926\x2d8d89\x2d4280\x2db731\x2d7d249a8b4980.mount: Deactivated successfully.
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.709 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05d77b1e-d823-4ea1-8adf-0b9faac2499a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.710 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[19c6565f-8bdb-48ed-a379-96d09f6cdeb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:13.711 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a namespace which is not needed anymore
Oct 02 12:40:13 compute-0 podman[324953]: 2025-10-02 12:40:13.713705017 +0000 UTC m=+0.031785481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:13 compute-0 nova_compute[256940]: 2025-10-02 12:40:13.836 2 INFO nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:40:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/926463960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:13 compute-0 podman[324953]: 2025-10-02 12:40:13.865355995 +0000 UTC m=+0.183436439 container create b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:40:13 compute-0 systemd[1]: Started libpod-conmon-b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc.scope.
Oct 02 12:40:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.048 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:40:14 compute-0 podman[324953]: 2025-10-02 12:40:14.059788651 +0000 UTC m=+0.377869115 container init b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:40:14 compute-0 podman[324953]: 2025-10-02 12:40:14.069975087 +0000 UTC m=+0.388055541 container start b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:40:14 compute-0 intelligent_shannon[324985]: 167 167
Oct 02 12:40:14 compute-0 systemd[1]: libpod-b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc.scope: Deactivated successfully.
Oct 02 12:40:14 compute-0 podman[324953]: 2025-10-02 12:40:14.154536674 +0000 UTC m=+0.472617138 container attach b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:40:14 compute-0 podman[324953]: 2025-10-02 12:40:14.155399357 +0000 UTC m=+0.473479801 container died b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.293 2 DEBUG nova.policy [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdbe447f49374937a828d6281949a2a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.306 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.307 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.307 2 INFO nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Creating image(s)
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.332 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebe86997ecfdf2c8b3f857da860f89f40c8f99d4036a38e67b6e1307a21afa4c-merged.mount: Deactivated successfully.
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.367 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.398 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.402 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.468 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.469 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.470 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.470 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.494 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.497 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:14 compute-0 podman[324953]: 2025-10-02 12:40:14.532204313 +0000 UTC m=+0.850284757 container remove b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:40:14 compute-0 systemd[1]: libpod-conmon-b03a3ec9d226155c4243e30591071d7a57d8eb2cbe3b7fc4d3964ab673fca5cc.scope: Deactivated successfully.
Oct 02 12:40:14 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [NOTICE]   (324112) : haproxy version is 2.8.14-c23fe91
Oct 02 12:40:14 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [NOTICE]   (324112) : path to executable is /usr/sbin/haproxy
Oct 02 12:40:14 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [WARNING]  (324112) : Exiting Master process...
Oct 02 12:40:14 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [ALERT]    (324112) : Current worker (324115) exited with code 143 (Terminated)
Oct 02 12:40:14 compute-0 neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a[324094]: [WARNING]  (324112) : All workers exited. Exiting... (0)
Oct 02 12:40:14 compute-0 systemd[1]: libpod-568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693.scope: Deactivated successfully.
Oct 02 12:40:14 compute-0 podman[324990]: 2025-10-02 12:40:14.688140304 +0000 UTC m=+0.699638675 container died 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:40:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:14.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693-userdata-shm.mount: Deactivated successfully.
Oct 02 12:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ecd4fc79665f47984f7cf03e7d4dc50d22a85ab7f1817005cec19b0611a818-merged.mount: Deactivated successfully.
Oct 02 12:40:14 compute-0 podman[325118]: 2025-10-02 12:40:14.896780431 +0000 UTC m=+0.228237640 container create 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:40:14 compute-0 podman[325118]: 2025-10-02 12:40:14.803728571 +0000 UTC m=+0.135185800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:40:14 compute-0 nova_compute[256940]: 2025-10-02 12:40:14.908 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:14 compute-0 podman[324990]: 2025-10-02 12:40:14.933693804 +0000 UTC m=+0.945192175 container cleanup 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:40:14 compute-0 ceph-mon[73668]: pgmap v1928: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Oct 02 12:40:14 compute-0 systemd[1]: Started libpod-conmon-2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df.scope.
Oct 02 12:40:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22bcb272b69e7f0d3f764e044e6c768d23ecc31cf75314043bd54ba14bd424f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22bcb272b69e7f0d3f764e044e6c768d23ecc31cf75314043bd54ba14bd424f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22bcb272b69e7f0d3f764e044e6c768d23ecc31cf75314043bd54ba14bd424f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22bcb272b69e7f0d3f764e044e6c768d23ecc31cf75314043bd54ba14bd424f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.065 2 INFO nova.virt.libvirt.driver [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Deleting instance files /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e_del
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.066 2 INFO nova.virt.libvirt.driver [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Deletion of /var/lib/nova/instances/81cb0055-c522-4ea7-b77e-e21e72c7782e_del complete
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.073 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] resizing rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:40:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:15 compute-0 podman[325118]: 2025-10-02 12:40:15.113534209 +0000 UTC m=+0.444991438 container init 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:40:15 compute-0 podman[325118]: 2025-10-02 12:40:15.120547562 +0000 UTC m=+0.452004771 container start 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:40:15 compute-0 podman[325118]: 2025-10-02 12:40:15.144308682 +0000 UTC m=+0.475765891 container attach 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:40:15 compute-0 podman[325166]: 2025-10-02 12:40:15.176956685 +0000 UTC m=+0.219213954 container remove 568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:15 compute-0 systemd[1]: libpod-conmon-568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693.scope: Deactivated successfully.
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.194 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c8468b11-e803-4e1c-82b7-8dbb0909bafd]: (4, ('Thu Oct  2 12:40:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a (568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693)\n568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693\nThu Oct  2 12:40:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a (568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693)\n568b64b032f0e952f51c5db3af1497b1aa40675accbf3f59af3fa2574c6d8693\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.196 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[56cbd7ef-0683-40e9-8c66-dbf8d39e80a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.196 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d77b1e-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:15 compute-0 kernel: tap05d77b1e-d0: left promiscuous mode
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.221 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[938d602e-81ed-44af-ae69-8b4af73e663f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.239 2 DEBUG nova.objects.instance [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.246 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[81425e7f-bad7-4eba-b824-def5ad4b5ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.248 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0c26f7a7-2244-4525-9f25-49720c16c979]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.265 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[91ff4382-2f50-4181-bc8b-81943ea6deb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669936, 'reachable_time': 40466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325242, 'error': None, 'target': 'ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.267 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05d77b1e-d823-4ea1-8adf-0b9faac2499a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:40:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:15.267 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba9ced9-ddb2-429b-998a-803f05288530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d05d77b1e\x2dd823\x2d4ea1\x2d8adf\x2d0b9faac2499a.mount: Deactivated successfully.
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.303 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.304 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Ensure instance console log exists: /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.304 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.304 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.305 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.498 2 INFO nova.compute.manager [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Took 3.42 seconds to destroy the instance on the hypervisor.
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.499 2 DEBUG oslo.service.loopingcall [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.499 2 DEBUG nova.compute.manager [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:40:15 compute-0 nova_compute[256940]: 2025-10-02 12:40:15.499 2 DEBUG nova.network.neutron [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:40:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 648 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.9 MiB/s wr, 249 op/s
Oct 02 12:40:15 compute-0 nervous_haslett[325192]: {
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:         "osd_id": 1,
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:         "type": "bluestore"
Oct 02 12:40:15 compute-0 nervous_haslett[325192]:     }
Oct 02 12:40:15 compute-0 nervous_haslett[325192]: }
Oct 02 12:40:16 compute-0 systemd[1]: libpod-2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df.scope: Deactivated successfully.
Oct 02 12:40:16 compute-0 podman[325118]: 2025-10-02 12:40:16.01460767 +0000 UTC m=+1.346064889 container died 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.090 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.091 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.091 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.091 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.091 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.092 2 WARNING nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-ab029d2e-6d57-498c-b55b-7f90b732cfd4 for instance with vm_state active and task_state deleting.
Oct 02 12:40:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22bcb272b69e7f0d3f764e044e6c768d23ecc31cf75314043bd54ba14bd424f-merged.mount: Deactivated successfully.
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.477 2 DEBUG nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.477 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.477 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 WARNING nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-8dd716ff-bca3-4034-8507-003cbbc312c9 for instance with vm_state active and task_state deleting.
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.478 2 DEBUG oslo_concurrency.lockutils [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.479 2 DEBUG nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-unplugged-10723c51-5f55-4285-a6e6-bee50993385c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.479 2 DEBUG nova.compute.manager [req-338493f9-37cf-4f89-803e-d7a3b0a4958a req-a3f4fa41-8227-4b78-8bda-20d81f1b9a9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-unplugged-10723c51-5f55-4285-a6e6-bee50993385c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:40:16 compute-0 nova_compute[256940]: 2025-10-02 12:40:16.526 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:16 compute-0 podman[325118]: 2025-10-02 12:40:16.533970608 +0000 UTC m=+1.865427827 container remove 2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:40:16 compute-0 sudo[324873]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:40:16 compute-0 systemd[1]: libpod-conmon-2c0d21e13919844340f4b216c4c1bd3e7be8c4593305cc3d723184f65f9b28df.scope: Deactivated successfully.
Oct 02 12:40:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:40:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d2b48c2f-c28f-45e3-9ed7-3226d96dfac1 does not exist
Oct 02 12:40:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0337fcc9-058a-47f5-8589-874e6d7e875e does not exist
Oct 02 12:40:16 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 59ff0c29-f988-4539-900a-7b2b0d6d81f7 does not exist
Oct 02 12:40:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:16.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:16 compute-0 sudo[325276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:16 compute-0 sudo[325276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:16 compute-0 sudo[325276]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:16 compute-0 sudo[325301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:40:16 compute-0 sudo[325301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:16 compute-0 sudo[325301]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:17 compute-0 ceph-mon[73668]: pgmap v1929: 305 pgs: 305 active+clean; 648 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.9 MiB/s wr, 249 op/s
Oct 02 12:40:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.431 2 DEBUG nova.compute.manager [req-cde3b5cc-5fa4-4359-9bff-9a6254dd90a0 req-f498e1ac-8802-4eff-ad6e-5c42bccbe42d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-deleted-ab029d2e-6d57-498c-b55b-7f90b732cfd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.431 2 INFO nova.compute.manager [req-cde3b5cc-5fa4-4359-9bff-9a6254dd90a0 req-f498e1ac-8802-4eff-ad6e-5c42bccbe42d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Neutron deleted interface ab029d2e-6d57-498c-b55b-7f90b732cfd4; detaching it from the instance and deleting it from the info cache
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.431 2 DEBUG nova.network.neutron [req-cde3b5cc-5fa4-4359-9bff-9a6254dd90a0 req-f498e1ac-8802-4eff-ad6e-5c42bccbe42d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "10723c51-5f55-4285-a6e6-bee50993385c", "address": "fa:16:3e:dd:d8:a2", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10723c51-5f", "ovs_interfaceid": "10723c51-5f55-4285-a6e6-bee50993385c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 264 op/s
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.658 2 DEBUG nova.compute.manager [req-cde3b5cc-5fa4-4359-9bff-9a6254dd90a0 req-f498e1ac-8802-4eff-ad6e-5c42bccbe42d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Detach interface failed, port_id=ab029d2e-6d57-498c-b55b-7f90b732cfd4, reason: Instance 81cb0055-c522-4ea7-b77e-e21e72c7782e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:40:17 compute-0 nova_compute[256940]: 2025-10-02 12:40:17.752 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Successfully created port: 1613b7ad-4ed9-456e-bd64-5de6348252d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:40:18 compute-0 ceph-mon[73668]: pgmap v1930: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 264 op/s
Oct 02 12:40:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:18.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.077 2 DEBUG nova.compute.manager [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.077 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.078 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.078 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.078 2 DEBUG nova.compute.manager [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] No waiting events found dispatching network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.078 2 WARNING nova.compute.manager [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received unexpected event network-vif-plugged-10723c51-5f55-4285-a6e6-bee50993385c for instance with vm_state active and task_state deleting.
Oct 02 12:40:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.597 2 DEBUG nova.compute.manager [req-3c71877f-4e20-44d0-9d6d-d447fc0829b8 req-6eaec2be-f8f6-477f-aa90-68a2fafd7c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-deleted-10723c51-5f55-4285-a6e6-bee50993385c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.597 2 INFO nova.compute.manager [req-3c71877f-4e20-44d0-9d6d-d447fc0829b8 req-6eaec2be-f8f6-477f-aa90-68a2fafd7c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Neutron deleted interface 10723c51-5f55-4285-a6e6-bee50993385c; detaching it from the instance and deleting it from the info cache
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.598 2 DEBUG nova.network.neutron [req-3c71877f-4e20-44d0-9d6d-d447fc0829b8 req-6eaec2be-f8f6-477f-aa90-68a2fafd7c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [{"id": "8dd716ff-bca3-4034-8507-003cbbc312c9", "address": "fa:16:3e:52:1c:14", "network": {"id": "05d77b1e-d823-4ea1-8adf-0b9faac2499a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1296872912", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8dd716ff-bc", "ovs_interfaceid": "8dd716ff-bca3-4034-8507-003cbbc312c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.825 2 DEBUG nova.network.neutron [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:19 compute-0 sudo[325327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:19 compute-0 sudo[325327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:19 compute-0 sudo[325327]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:19 compute-0 nova_compute[256940]: 2025-10-02 12:40:19.921 2 DEBUG nova.compute.manager [req-3c71877f-4e20-44d0-9d6d-d447fc0829b8 req-6eaec2be-f8f6-477f-aa90-68a2fafd7c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Detach interface failed, port_id=10723c51-5f55-4285-a6e6-bee50993385c, reason: Instance 81cb0055-c522-4ea7-b77e-e21e72c7782e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:40:19 compute-0 sudo[325352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:19 compute-0 sudo[325352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:19 compute-0 sudo[325352]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.013 2 INFO nova.compute.manager [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Took 4.51 seconds to deallocate network for instance.
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.106 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.107 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.222 2 DEBUG oslo_concurrency.processutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.255 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Successfully updated port: 1613b7ad-4ed9-456e-bd64-5de6348252d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.393 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.394 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.394 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.563 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:40:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894032499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.653 2 DEBUG oslo_concurrency.processutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.663 2 DEBUG nova.compute.provider_tree [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.712 2 DEBUG nova.scheduler.client.report [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.856 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:20 compute-0 nova_compute[256940]: 2025-10-02 12:40:20.939 2 INFO nova.scheduler.client.report [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Deleted allocations for instance 81cb0055-c522-4ea7-b77e-e21e72c7782e
Oct 02 12:40:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:21.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:21 compute-0 ceph-mon[73668]: pgmap v1931: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct 02 12:40:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2894032499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1813702915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.179 2 DEBUG oslo_concurrency.lockutils [None req-aaa03f58-7cf7-43ff-9ca8-c8fcaab5878c 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "81cb0055-c522-4ea7-b77e-e21e72c7782e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.326 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.327 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing instance network info cache due to event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.327 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.374 2 DEBUG nova.network.neutron [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:21 compute-0 podman[325400]: 2025-10-02 12:40:21.403988428 +0000 UTC m=+0.066390404 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:21 compute-0 podman[325401]: 2025-10-02 12:40:21.431693861 +0000 UTC m=+0.093449890 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.464 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.465 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance network_info: |[{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.465 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.465 2 DEBUG nova.network.neutron [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.468 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start _get_guest_xml network_info=[{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.473 2 WARNING nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.478 2 DEBUG nova.virt.libvirt.host [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.478 2 DEBUG nova.virt.libvirt.host [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.481 2 DEBUG nova.virt.libvirt.host [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.482 2 DEBUG nova.virt.libvirt.host [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.483 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.483 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.484 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.484 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.484 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.484 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.485 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.485 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.485 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.485 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.486 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.486 2 DEBUG nova.virt.hardware [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.488 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.742 2 DEBUG nova.compute.manager [req-9fa4bd85-4e87-4daf-8564-c3cc85b6637e req-c772ff64-6075-4ded-bcd1-34b3447a6cbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Received event network-vif-deleted-8dd716ff-bca3-4034-8507-003cbbc312c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646677738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.934 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.965 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:21 compute-0 nova_compute[256940]: 2025-10-02 12:40:21.969 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/646677738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568843289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.421 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.423 2 DEBUG nova.virt.libvirt.vif [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:40:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-169288015',display_name='tempest-ServerStableDeviceRescueTest-server-169288015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-169288015',id=109,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNu9tPmj4bBUF1u39kUhNMdjiWifamHbOuNkgYWVMmgjxafglnyiQJEpPAJoRypDdX6VcTNTXPo4P81OuLi9mgNJNDIlKFU24K/KSUL24g28w4rHKYOFD6uTdH/rAn40Q==',key_name='tempest-keypair-1658267838',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-uken2igj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fdbe447f49374937a828d6281949a2a4',uuid=e5acbf2d-74bd-452c-8732-db2f1c0739ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.424 2 DEBUG nova.network.os_vif_util [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.425 2 DEBUG nova.network.os_vif_util [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.427 2 DEBUG nova.objects.instance [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.821 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <uuid>e5acbf2d-74bd-452c-8732-db2f1c0739ff</uuid>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <name>instance-0000006d</name>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-169288015</nova:name>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:40:21</nova:creationTime>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <nova:port uuid="1613b7ad-4ed9-456e-bd64-5de6348252d5">
Oct 02 12:40:22 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <system>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="serial">e5acbf2d-74bd-452c-8732-db2f1c0739ff</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="uuid">e5acbf2d-74bd-452c-8732-db2f1c0739ff</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </system>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <os>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </os>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <features>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </features>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk">
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config">
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:40:22 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d3:f4:17"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <target dev="tap1613b7ad-4e"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/console.log" append="off"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <video>
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </video>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:40:22 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:40:22 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:40:22 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:40:22 compute-0 nova_compute[256940]: </domain>
Oct 02 12:40:22 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.823 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Preparing to wait for external event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.824 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.824 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.825 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.825 2 DEBUG nova.virt.libvirt.vif [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:40:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-169288015',display_name='tempest-ServerStableDeviceRescueTest-server-169288015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-169288015',id=109,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNu9tPmj4bBUF1u39kUhNMdjiWifamHbOuNkgYWVMmgjxafglnyiQJEpPAJoRypDdX6VcTNTXPo4P81OuLi9mgNJNDIlKFU24K/KSUL24g28w4rHKYOFD6uTdH/rAn40Q==',key_name='tempest-keypair-1658267838',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-uken2igj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fdbe447f49374937a828d6281949a2a4',uuid=e5acbf2d-74bd-452c-8732-db2f1c0739ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.826 2 DEBUG nova.network.os_vif_util [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.826 2 DEBUG nova.network.os_vif_util [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.827 2 DEBUG os_vif [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1613b7ad-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1613b7ad-4e, col_values=(('external_ids', {'iface-id': '1613b7ad-4ed9-456e-bd64-5de6348252d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:f4:17', 'vm-uuid': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:22 compute-0 NetworkManager[44981]: <info>  [1759408822.8370] manager: (tap1613b7ad-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:40:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:22 compute-0 nova_compute[256940]: 2025-10-02 12:40:22.848 2 INFO os_vif [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e')
Oct 02 12:40:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:23.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:23 compute-0 ceph-mon[73668]: pgmap v1932: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Oct 02 12:40:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2568843289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Oct 02 12:40:23 compute-0 nova_compute[256940]: 2025-10-02 12:40:23.615 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:23 compute-0 nova_compute[256940]: 2025-10-02 12:40:23.616 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:23 compute-0 nova_compute[256940]: 2025-10-02 12:40:23.616 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:d3:f4:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:40:23 compute-0 nova_compute[256940]: 2025-10-02 12:40:23.617 2 INFO nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Using config drive
Oct 02 12:40:23 compute-0 nova_compute[256940]: 2025-10-02 12:40:23.642 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:24 compute-0 ceph-mon[73668]: pgmap v1933: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Oct 02 12:40:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:25.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Oct 02 12:40:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:26 compute-0 nova_compute[256940]: 2025-10-02 12:40:26.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:26.475 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:26.475 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:26.476 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:26.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:27 compute-0 nova_compute[256940]: 2025-10-02 12:40:27.346 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408812.3454192, 81cb0055-c522-4ea7-b77e-e21e72c7782e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:27 compute-0 nova_compute[256940]: 2025-10-02 12:40:27.347 2 INFO nova.compute.manager [-] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] VM Stopped (Lifecycle Event)
Oct 02 12:40:27 compute-0 ceph-mon[73668]: pgmap v1934: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Oct 02 12:40:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 KiB/s wr, 123 op/s
Oct 02 12:40:27 compute-0 nova_compute[256940]: 2025-10-02 12:40:27.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:40:28
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root']
Oct 02 12:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:40:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:28.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:28 compute-0 ceph-mon[73668]: pgmap v1935: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 KiB/s wr, 123 op/s
Oct 02 12:40:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:29.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:40:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:30.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:31.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:31 compute-0 ceph-mon[73668]: pgmap v1936: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:31 compute-0 nova_compute[256940]: 2025-10-02 12:40:31.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:32 compute-0 nova_compute[256940]: 2025-10-02 12:40:32.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:33.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:33 compute-0 ceph-mon[73668]: pgmap v1937: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 597 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 375 KiB/s wr, 56 op/s
Oct 02 12:40:34 compute-0 ceph-mon[73668]: pgmap v1938: 305 pgs: 305 active+clean; 597 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 375 KiB/s wr, 56 op/s
Oct 02 12:40:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:34.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:35 compute-0 podman[325536]: 2025-10-02 12:40:35.410543053 +0000 UTC m=+0.068767366 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_managed=true)
Oct 02 12:40:35 compute-0 podman[325537]: 2025-10-02 12:40:35.416047047 +0000 UTC m=+0.070558693 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:40:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 610 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 813 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct 02 12:40:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:36 compute-0 nova_compute[256940]: 2025-10-02 12:40:36.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:36.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:36 compute-0 nova_compute[256940]: 2025-10-02 12:40:36.919 2 DEBUG nova.compute.manager [None req-42ac8a6b-a6fe-40eb-a6c9-70921bf505fb - - - - - -] [instance: 81cb0055-c522-4ea7-b77e-e21e72c7782e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:37 compute-0 ceph-mon[73668]: pgmap v1939: 305 pgs: 305 active+clean; 610 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 813 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct 02 12:40:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:37 compute-0 nova_compute[256940]: 2025-10-02 12:40:37.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:38.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.444 2 INFO nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Creating config drive at /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.450 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0g_bcg1r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:39 compute-0 ceph-mon[73668]: pgmap v1940: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.589 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0g_bcg1r" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.629 2 DEBUG nova.storage.rbd_utils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.633 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.664 2 DEBUG nova.network.neutron [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updated VIF entry in instance network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.665 2 DEBUG nova.network.neutron [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:39 compute-0 ovn_controller[148123]: 2025-10-02T12:40:39Z|00515|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:39 compute-0 nova_compute[256940]: 2025-10-02 12:40:39.822 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:40 compute-0 sudo[325613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:40 compute-0 sudo[325613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:40 compute-0 sudo[325613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:40 compute-0 sudo[325638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:40 compute-0 sudo[325638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:40 compute-0 sudo[325638]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01179817635631863 of space, bias 1.0, pg target 3.5394529068955887 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4463626727386936 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:40:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:40:40 compute-0 ceph-mon[73668]: pgmap v1941: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:40:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:40:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.168 2 DEBUG oslo_concurrency.processutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.169 2 INFO nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Deleting local config drive /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config because it was imported into RBD.
Oct 02 12:40:41 compute-0 kernel: tap1613b7ad-4e: entered promiscuous mode
Oct 02 12:40:41 compute-0 ovn_controller[148123]: 2025-10-02T12:40:41Z|00516|binding|INFO|Claiming lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 for this chassis.
Oct 02 12:40:41 compute-0 ovn_controller[148123]: 2025-10-02T12:40:41Z|00517|binding|INFO|1613b7ad-4ed9-456e-bd64-5de6348252d5: Claiming fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 NetworkManager[44981]: <info>  [1759408841.2487] manager: (tap1613b7ad-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Oct 02 12:40:41 compute-0 ovn_controller[148123]: 2025-10-02T12:40:41Z|00518|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 ovn-installed in OVS
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 ovn_controller[148123]: 2025-10-02T12:40:41Z|00519|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 up in Southbound
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.276 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.277 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.279 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:40:41 compute-0 systemd-udevd[325678]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.295 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71330a50-4c68-44a3-b42b-9d55212abbf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 systemd-machined[210927]: New machine qemu-52-instance-0000006d.
Oct 02 12:40:41 compute-0 NetworkManager[44981]: <info>  [1759408841.3033] device (tap1613b7ad-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:40:41 compute-0 NetworkManager[44981]: <info>  [1759408841.3046] device (tap1613b7ad-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:40:41 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000006d.
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.336 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[17c7dddf-f772-43a9-99f3-c1843a492155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.340 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[83b1669b-a3e4-4e2b-99e7-e9589e0c124d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.375 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[64c0b078-4c9e-47ff-934a-9dca22bb2251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.394 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[18590d4f-d9c0-4556-8e5e-ac9fe44f861c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325694, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.415 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffe84b6-96e3-409f-80c1-f7684a7bb433]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325695, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325695, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.417 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.421 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.421 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.422 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:40:41.422 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 628 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.825 2 DEBUG nova.compute.manager [req-93bf0406-e8b2-4c7a-97b9-d31052c2c57e req-6081e23f-1299-4000-b851-9f6aeb05914c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.826 2 DEBUG oslo_concurrency.lockutils [req-93bf0406-e8b2-4c7a-97b9-d31052c2c57e req-6081e23f-1299-4000-b851-9f6aeb05914c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.827 2 DEBUG oslo_concurrency.lockutils [req-93bf0406-e8b2-4c7a-97b9-d31052c2c57e req-6081e23f-1299-4000-b851-9f6aeb05914c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.827 2 DEBUG oslo_concurrency.lockutils [req-93bf0406-e8b2-4c7a-97b9-d31052c2c57e req-6081e23f-1299-4000-b851-9f6aeb05914c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:41 compute-0 nova_compute[256940]: 2025-10-02 12:40:41.828 2 DEBUG nova.compute.manager [req-93bf0406-e8b2-4c7a-97b9-d31052c2c57e req-6081e23f-1299-4000-b851-9f6aeb05914c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Processing event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.443 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.444 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408842.4428294, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.444 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Started (Lifecycle Event)
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.449 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.453 2 INFO nova.virt.libvirt.driver [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance spawned successfully.
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.453 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.626 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.630 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.659 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.659 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.659 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.660 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.660 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.660 2 DEBUG nova.virt.libvirt.driver [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.754 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.754 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408842.4463737, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.754 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Paused (Lifecycle Event)
Oct 02 12:40:42 compute-0 nova_compute[256940]: 2025-10-02 12:40:42.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:43 compute-0 ceph-mon[73668]: pgmap v1942: 305 pgs: 305 active+clean; 628 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:40:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.235 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.240 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408842.448566, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.240 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Resumed (Lifecycle Event)
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.445 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.451 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.512 2 INFO nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Took 29.21 seconds to spawn the instance on the hypervisor.
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.512 2 DEBUG nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.618 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.842 2 INFO nova.compute.manager [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Took 32.23 seconds to build instance.
Oct 02 12:40:43 compute-0 nova_compute[256940]: 2025-10-02 12:40:43.894 2 DEBUG oslo_concurrency.lockutils [None req-86209844-5a9d-4304-8cb8-1257be94aa89 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.216 2 DEBUG nova.compute.manager [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.217 2 DEBUG oslo_concurrency.lockutils [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.217 2 DEBUG oslo_concurrency.lockutils [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.217 2 DEBUG oslo_concurrency.lockutils [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.217 2 DEBUG nova.compute.manager [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:44 compute-0 nova_compute[256940]: 2025-10-02 12:40:44.218 2 WARNING nova.compute.manager [req-1df5d308-895c-479a-bf4d-6b87e769efd1 req-fef86297-5107-4055-9433-1132a5b8d9b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state active and task_state None.
Oct 02 12:40:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:45 compute-0 ceph-mon[73668]: pgmap v1943: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 12:40:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 02 12:40:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:46 compute-0 nova_compute[256940]: 2025-10-02 12:40:46.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:46 compute-0 ceph-mon[73668]: pgmap v1944: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 02 12:40:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:40:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:40:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 878 KiB/s wr, 122 op/s
Oct 02 12:40:47 compute-0 nova_compute[256940]: 2025-10-02 12:40:47.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:48 compute-0 NetworkManager[44981]: <info>  [1759408848.1898] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct 02 12:40:48 compute-0 NetworkManager[44981]: <info>  [1759408848.1906] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Oct 02 12:40:48 compute-0 nova_compute[256940]: 2025-10-02 12:40:48.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:48 compute-0 nova_compute[256940]: 2025-10-02 12:40:48.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:48 compute-0 ovn_controller[148123]: 2025-10-02T12:40:48Z|00520|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:40:48 compute-0 nova_compute[256940]: 2025-10-02 12:40:48.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:48 compute-0 ceph-mon[73668]: pgmap v1945: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 878 KiB/s wr, 122 op/s
Oct 02 12:40:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:40:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:48.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:40:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:49.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:50 compute-0 ceph-mon[73668]: pgmap v1946: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.197 2 DEBUG nova.compute.manager [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.197 2 DEBUG nova.compute.manager [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing instance network info cache due to event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.198 2 DEBUG oslo_concurrency.lockutils [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.198 2 DEBUG oslo_concurrency.lockutils [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.198 2 DEBUG nova.network.neutron [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:40:51 compute-0 nova_compute[256940]: 2025-10-02 12:40:51.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:52 compute-0 podman[325744]: 2025-10-02 12:40:52.394131732 +0000 UTC m=+0.057899463 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:52 compute-0 podman[325745]: 2025-10-02 12:40:52.424204257 +0000 UTC m=+0.086199251 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:52 compute-0 ceph-mon[73668]: pgmap v1947: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:52 compute-0 nova_compute[256940]: 2025-10-02 12:40:52.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 114 KiB/s wr, 89 op/s
Oct 02 12:40:54 compute-0 ceph-mon[73668]: pgmap v1948: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 114 KiB/s wr, 89 op/s
Oct 02 12:40:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:55.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 616 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct 02 12:40:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4238417375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:56 compute-0 nova_compute[256940]: 2025-10-02 12:40:56.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:56 compute-0 ceph-mon[73668]: pgmap v1949: 305 pgs: 305 active+clean; 616 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct 02 12:40:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/813156201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:40:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:40:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:57.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:57 compute-0 ovn_controller[148123]: 2025-10-02T12:40:57Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:40:57 compute-0 ovn_controller[148123]: 2025-10-02T12:40:57Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:40:57 compute-0 nova_compute[256940]: 2025-10-02 12:40:57.396 2 DEBUG nova.network.neutron [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updated VIF entry in instance network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:40:57 compute-0 nova_compute[256940]: 2025-10-02 12:40:57.397 2 DEBUG nova.network.neutron [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 02 12:40:57 compute-0 nova_compute[256940]: 2025-10-02 12:40:57.819 2 DEBUG oslo_concurrency.lockutils [req-49fb4bbc-58e9-4b7c-bac0-b5524663687e req-5f56e2e8-7691-42fd-bc94-d41b1cfc88b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:57 compute-0 nova_compute[256940]: 2025-10-02 12:40:57.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/598766264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3880091232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.940393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857940449, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2190, "num_deletes": 254, "total_data_size": 3821856, "memory_usage": 3883392, "flush_reason": "Manual Compaction"}
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857963605, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3742892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41549, "largest_seqno": 43738, "table_properties": {"data_size": 3733017, "index_size": 6241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21101, "raw_average_key_size": 20, "raw_value_size": 3713052, "raw_average_value_size": 3658, "num_data_blocks": 271, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408657, "oldest_key_time": 1759408657, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 23439 microseconds, and 9411 cpu microseconds.
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.963827) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3742892 bytes OK
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.963896) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.965843) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.965862) EVENT_LOG_v1 {"time_micros": 1759408857965854, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.965889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3812884, prev total WAL file size 3812884, number of live WAL files 2.
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.967680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3655KB)], [92(9402KB)]
Oct 02 12:40:57 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857967797, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13370922, "oldest_snapshot_seqno": -1}
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7131 keys, 11442871 bytes, temperature: kUnknown
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858047138, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11442871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11394352, "index_size": 29591, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183639, "raw_average_key_size": 25, "raw_value_size": 11266136, "raw_average_value_size": 1579, "num_data_blocks": 1173, "num_entries": 7131, "num_filter_entries": 7131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.047602) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11442871 bytes
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.051169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.2 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7659, records dropped: 528 output_compression: NoCompression
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.051193) EVENT_LOG_v1 {"time_micros": 1759408858051180, "job": 54, "event": "compaction_finished", "compaction_time_micros": 79494, "compaction_time_cpu_micros": 30084, "output_level": 6, "num_output_files": 1, "total_output_size": 11442871, "num_input_records": 7659, "num_output_records": 7131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858052932, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858055567, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:57.967464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.055640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.055647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.055649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.055651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:40:58.055653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:40:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:58.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:58 compute-0 ceph-mon[73668]: pgmap v1950: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 02 12:40:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/395680106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:40:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:59.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.393 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.394 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.394 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.394 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.395 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:40:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3658988662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:59 compute-0 nova_compute[256940]: 2025-10-02 12:40:59.849 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:00 compute-0 ovn_controller[148123]: 2025-10-02T12:41:00Z|00521|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:41:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2487088342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3658988662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:00 compute-0 sudo[325816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:00 compute-0 sudo[325816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:00 compute-0 sudo[325816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.244 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.245 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.250 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.251 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:00 compute-0 sudo[325841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:00 compute-0 sudo[325841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:00 compute-0 sudo[325841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.431 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.433 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=20.740081787109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.433 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.434 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.946 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 1546ac1d-4a04-4c5e-ae02-b005461c7731 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.947 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e5acbf2d-74bd-452c-8732-db2f1c0739ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.947 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:41:00 compute-0 nova_compute[256940]: 2025-10-02 12:41:00.948 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.066 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:41:01 compute-0 ceph-mon[73668]: pgmap v1951: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.137 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.137 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:41:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.151 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:41:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.176 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.288 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:01.377 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:01.378 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 02 12:41:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/960821142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.745 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.751 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.821 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.881 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:41:01 compute-0 nova_compute[256940]: 2025-10-02 12:41:01.881 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/960821142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:02.380 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:02 compute-0 nova_compute[256940]: 2025-10-02 12:41:02.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:02 compute-0 nova_compute[256940]: 2025-10-02 12:41:02.877 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:02 compute-0 nova_compute[256940]: 2025-10-02 12:41:02.877 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:02 compute-0 nova_compute[256940]: 2025-10-02 12:41:02.878 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:02 compute-0 nova_compute[256940]: 2025-10-02 12:41:02.878 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:41:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:03.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:03 compute-0 nova_compute[256940]: 2025-10-02 12:41:03.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:03 compute-0 ceph-mon[73668]: pgmap v1952: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 02 12:41:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:41:04 compute-0 nova_compute[256940]: 2025-10-02 12:41:04.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:04 compute-0 nova_compute[256940]: 2025-10-02 12:41:04.216 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3647303231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:41:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:41:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:41:05 compute-0 ceph-mon[73668]: pgmap v1953: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:41:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.451 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.451 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.451 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:41:05 compute-0 nova_compute[256940]: 2025-10-02 12:41:05.452 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Oct 02 12:41:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:06 compute-0 podman[325892]: 2025-10-02 12:41:06.4146198 +0000 UTC m=+0.081800907 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:41:06 compute-0 podman[325891]: 2025-10-02 12:41:06.415120373 +0000 UTC m=+0.086961881 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:41:06 compute-0 nova_compute[256940]: 2025-10-02 12:41:06.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:06 compute-0 ceph-mon[73668]: pgmap v1954: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Oct 02 12:41:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:06.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 964 KiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.609 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [{"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.640 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-1546ac1d-4a04-4c5e-ae02-b005461c7731" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.641 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.735 2 DEBUG nova.compute.manager [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.844 2 INFO nova.compute.manager [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] instance snapshotting
Oct 02 12:41:07 compute-0 nova_compute[256940]: 2025-10-02 12:41:07.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:08 compute-0 nova_compute[256940]: 2025-10-02 12:41:08.268 2 INFO nova.virt.libvirt.driver [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Beginning live snapshot process
Oct 02 12:41:08 compute-0 nova_compute[256940]: 2025-10-02 12:41:08.521 2 DEBUG nova.virt.libvirt.imagebackend [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:41:08 compute-0 ceph-mon[73668]: pgmap v1955: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 964 KiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 02 12:41:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:09.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:09 compute-0 nova_compute[256940]: 2025-10-02 12:41:09.283 2 DEBUG nova.storage.rbd_utils [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(a3f16445d27b4aa181fe077b108ef9e2) on rbd image(e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:41:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 776 KiB/s rd, 1.3 MiB/s wr, 76 op/s
Oct 02 12:41:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 02 12:41:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 02 12:41:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 02 12:41:10 compute-0 nova_compute[256940]: 2025-10-02 12:41:10.374 2 DEBUG nova.storage.rbd_utils [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk@a3f16445d27b4aa181fe077b108ef9e2 to images/2d2735c8-c9f0-487a-8b58-3e359fca2f77 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:41:10 compute-0 nova_compute[256940]: 2025-10-02 12:41:10.533 2 DEBUG nova.storage.rbd_utils [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] flattening images/2d2735c8-c9f0-487a-8b58-3e359fca2f77 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:41:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:11 compute-0 ceph-mon[73668]: pgmap v1956: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 776 KiB/s rd, 1.3 MiB/s wr, 76 op/s
Oct 02 12:41:11 compute-0 ceph-mon[73668]: osdmap e264: 3 total, 3 up, 3 in
Oct 02 12:41:11 compute-0 nova_compute[256940]: 2025-10-02 12:41:11.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 646 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 12:41:11 compute-0 nova_compute[256940]: 2025-10-02 12:41:11.636 2 DEBUG nova.storage.rbd_utils [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] removing snapshot(a3f16445d27b4aa181fe077b108ef9e2) on rbd image(e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:41:12 compute-0 nova_compute[256940]: 2025-10-02 12:41:12.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:12 compute-0 nova_compute[256940]: 2025-10-02 12:41:12.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:41:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 02 12:41:12 compute-0 nova_compute[256940]: 2025-10-02 12:41:12.320 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:41:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 02 12:41:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 02 12:41:12 compute-0 nova_compute[256940]: 2025-10-02 12:41:12.388 2 DEBUG nova.storage.rbd_utils [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(snap) on rbd image(2d2735c8-c9f0-487a-8b58-3e359fca2f77) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:41:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/791234106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:12 compute-0 nova_compute[256940]: 2025-10-02 12:41:12.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 02 12:41:13 compute-0 ceph-mon[73668]: pgmap v1958: 305 pgs: 305 active+clean; 646 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 12:41:13 compute-0 ceph-mon[73668]: osdmap e265: 3 total, 3 up, 3 in
Oct 02 12:41:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 657 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 121 op/s
Oct 02 12:41:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 02 12:41:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 02 12:41:14 compute-0 ceph-mon[73668]: pgmap v1960: 305 pgs: 305 active+clean; 657 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 121 op/s
Oct 02 12:41:14 compute-0 ceph-mon[73668]: osdmap e266: 3 total, 3 up, 3 in
Oct 02 12:41:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 688 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.2 MiB/s wr, 182 op/s
Oct 02 12:41:15 compute-0 nova_compute[256940]: 2025-10-02 12:41:15.722 2 INFO nova.virt.libvirt.driver [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Snapshot image upload complete
Oct 02 12:41:15 compute-0 nova_compute[256940]: 2025-10-02 12:41:15.723 2 INFO nova.compute.manager [None req-cabc6450-0b96-4e21-9a3a-0b579594a7f0 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Took 7.88 seconds to snapshot the instance on the hypervisor.
Oct 02 12:41:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:16 compute-0 nova_compute[256940]: 2025-10-02 12:41:16.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:16 compute-0 ceph-mon[73668]: pgmap v1962: 305 pgs: 305 active+clean; 688 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.2 MiB/s wr, 182 op/s
Oct 02 12:41:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-0 sudo[326075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:17 compute-0 sudo[326075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-0 sudo[326075]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-0 sudo[326100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:17 compute-0 sudo[326100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-0 sudo[326100]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-0 sudo[326125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:17 compute-0 sudo[326125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-0 sudo[326125]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-0 sudo[326150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:41:17 compute-0 sudo[326150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 12:41:17 compute-0 nova_compute[256940]: 2025-10-02 12:41:17.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-0 podman[326248]: 2025-10-02 12:41:18.145691844 +0000 UTC m=+0.129708457 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:18 compute-0 podman[326248]: 2025-10-02 12:41:18.287598578 +0000 UTC m=+0.271615171 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:41:18 compute-0 ceph-mon[73668]: pgmap v1963: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 12:41:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1150966713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2588003642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:18 compute-0 podman[326383]: 2025-10-02 12:41:18.982782006 +0000 UTC m=+0.106273875 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:41:19 compute-0 podman[326405]: 2025-10-02 12:41:19.120297786 +0000 UTC m=+0.113833583 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:41:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:19 compute-0 podman[326383]: 2025-10-02 12:41:19.19130626 +0000 UTC m=+0.314798109 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:41:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:41:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:41:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:19 compute-0 podman[326450]: 2025-10-02 12:41:19.558172467 +0000 UTC m=+0.071144889 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 02 12:41:19 compute-0 podman[326450]: 2025-10-02 12:41:19.578586499 +0000 UTC m=+0.091558921 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, release=1793)
Oct 02 12:41:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Oct 02 12:41:19 compute-0 sudo[326150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:41:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:41:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 sudo[326502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:20 compute-0 sudo[326502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 sudo[326502]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-0 sudo[326525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:20 compute-0 sudo[326525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 sudo[326525]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-0 sudo[326548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:20 compute-0 sudo[326548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 sudo[326548]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-0 sudo[326575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:20 compute-0 sudo[326575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 sudo[326575]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-0 sudo[326595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:20 compute-0 sudo[326595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 sudo[326595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-0 sudo[326627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:41:20 compute-0 sudo[326627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:41:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:20.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:41:20 compute-0 sudo[326627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e37110e2-7c68-4baf-806b-8c0ff383b34f does not exist
Oct 02 12:41:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 65be17c2-695b-4c78-9e15-3bea0864d0d2 does not exist
Oct 02 12:41:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3e165fe7-d028-4eda-9d2e-2fa2a56f7daa does not exist
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-0 sudo[326685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:21 compute-0 sudo[326685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:21 compute-0 sudo[326685]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 02 12:41:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 02 12:41:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 02 12:41:21 compute-0 sudo[326710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:21 compute-0 sudo[326710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:21 compute-0 sudo[326710]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:21 compute-0 sudo[326735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:21 compute-0 sudo[326735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:21 compute-0 sudo[326735]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:21 compute-0 sudo[326760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:41:21 compute-0 sudo[326760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:21 compute-0 ceph-mon[73668]: pgmap v1964: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-0 ceph-mon[73668]: osdmap e267: 3 total, 3 up, 3 in
Oct 02 12:41:21 compute-0 nova_compute[256940]: 2025-10-02 12:41:21.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.770721164 +0000 UTC m=+0.049216116 container create e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:41:21 compute-0 systemd[1]: Started libpod-conmon-e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0.scope.
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.74645747 +0000 UTC m=+0.024952452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.896881957 +0000 UTC m=+0.175376929 container init e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.905527433 +0000 UTC m=+0.184022385 container start e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:21 compute-0 gifted_einstein[326843]: 167 167
Oct 02 12:41:21 compute-0 systemd[1]: libpod-e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0.scope: Deactivated successfully.
Oct 02 12:41:21 compute-0 conmon[326843]: conmon e73b35b087a1f1179b6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0.scope/container/memory.events
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.920825202 +0000 UTC m=+0.199320154 container attach e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:41:21 compute-0 podman[326826]: 2025-10-02 12:41:21.922504316 +0000 UTC m=+0.200999268 container died e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-57340c8b0f0fb7dbdbf9319e53fef2045f76c5e72dc838a6a785715a407cbe10-merged.mount: Deactivated successfully.
Oct 02 12:41:22 compute-0 podman[326826]: 2025-10-02 12:41:22.055520378 +0000 UTC m=+0.334015370 container remove e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 12:41:22 compute-0 systemd[1]: libpod-conmon-e73b35b087a1f1179b6b90102cd877f01e03a2ca449617dfd00591dde32b23f0.scope: Deactivated successfully.
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.176 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.177 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.232 2 DEBUG nova.objects.instance [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.325 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:22 compute-0 podman[326869]: 2025-10-02 12:41:22.329509351 +0000 UTC m=+0.094002055 container create 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:41:22 compute-0 podman[326869]: 2025-10-02 12:41:22.280513622 +0000 UTC m=+0.045006376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:22 compute-0 systemd[1]: Started libpod-conmon-826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da.scope.
Oct 02 12:41:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-0 podman[326869]: 2025-10-02 12:41:22.449888153 +0000 UTC m=+0.214380877 container init 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:22 compute-0 podman[326869]: 2025-10-02 12:41:22.456791273 +0000 UTC m=+0.221283987 container start 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:41:22 compute-0 podman[326869]: 2025-10-02 12:41:22.472872113 +0000 UTC m=+0.237364847 container attach 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:22 compute-0 podman[326889]: 2025-10-02 12:41:22.525939369 +0000 UTC m=+0.083003908 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:41:22 compute-0 ceph-mon[73668]: pgmap v1966: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Oct 02 12:41:22 compute-0 podman[326910]: 2025-10-02 12:41:22.652202065 +0000 UTC m=+0.094219171 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.879 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.879 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:22 compute-0 nova_compute[256940]: 2025-10-02 12:41:22.880 2 INFO nova.compute.manager [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Attaching volume 76444907-9f59-47f1-99d8-aa820d2e0f3a to /dev/vdb
Oct 02 12:41:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.005 2 DEBUG os_brick.utils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.008 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.024 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.025 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[0653d900-9725-44b9-b854-4ab178d5c26d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.028 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.040 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.040 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[293ffa54-e996-4914-aa9e-0cbe07c6eda7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.043 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.056 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.058 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[0052e9c4-8851-4e8d-a9c0-975a7c1ba8dd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.060 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1337446a-d1ec-4bb1-8353-5c97c01c8de2]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.061 2 DEBUG oslo_concurrency.processutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.105 2 DEBUG oslo_concurrency.processutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.109 2 DEBUG os_brick.initiator.connectors.lightos [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.110 2 DEBUG os_brick.initiator.connectors.lightos [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.110 2 DEBUG os_brick.initiator.connectors.lightos [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.111 2 DEBUG os_brick.utils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:41:23 compute-0 nova_compute[256940]: 2025-10-02 12:41:23.112 2 DEBUG nova.virt.block_device [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating existing volume attachment record: ed37b226-2e1a-4152-bdb2-a08ff6bfc01b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:41:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:23 compute-0 fervent_mcnulty[326886]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:41:23 compute-0 fervent_mcnulty[326886]: --> relative data size: 1.0
Oct 02 12:41:23 compute-0 fervent_mcnulty[326886]: --> All data devices are unavailable
Oct 02 12:41:23 compute-0 systemd[1]: libpod-826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da.scope: Deactivated successfully.
Oct 02 12:41:23 compute-0 podman[326869]: 2025-10-02 12:41:23.379202182 +0000 UTC m=+1.143694916 container died 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9099b14f6543b5eb0498a068a28d0e92e8e9358ee44f8a2d4372e5ee321e48e4-merged.mount: Deactivated successfully.
Oct 02 12:41:23 compute-0 podman[326869]: 2025-10-02 12:41:23.503714502 +0000 UTC m=+1.268207206 container remove 826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:41:23 compute-0 systemd[1]: libpod-conmon-826eef7d2d9bad84f21df78ad5a0f3082b8443cd744bac05a348fe11be3383da.scope: Deactivated successfully.
Oct 02 12:41:23 compute-0 sudo[326760]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:23 compute-0 sudo[326969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:23 compute-0 sudo[326969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct 02 12:41:23 compute-0 sudo[326969]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:23 compute-0 sudo[326994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:23 compute-0 sudo[326994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:23 compute-0 sudo[326994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:23 compute-0 sudo[327019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:23 compute-0 sudo[327019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:23 compute-0 sudo[327019]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:23 compute-0 sudo[327044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:41:23 compute-0 sudo[327044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.191465136 +0000 UTC m=+0.053571150 container create 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:41:24 compute-0 systemd[1]: Started libpod-conmon-303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73.scope.
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.162273044 +0000 UTC m=+0.024379078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.318952394 +0000 UTC m=+0.181058438 container init 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.325753681 +0000 UTC m=+0.187859695 container start 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:41:24 compute-0 charming_brown[327125]: 167 167
Oct 02 12:41:24 compute-0 systemd[1]: libpod-303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73.scope: Deactivated successfully.
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.369430422 +0000 UTC m=+0.231536436 container attach 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.370412377 +0000 UTC m=+0.232518401 container died 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:41:24 compute-0 nova_compute[256940]: 2025-10-02 12:41:24.372 2 DEBUG nova.objects.instance [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf2c845f57af352284f1fe317126e7becd063860497dad0032e8bd30a09024e-merged.mount: Deactivated successfully.
Oct 02 12:41:24 compute-0 nova_compute[256940]: 2025-10-02 12:41:24.470 2 DEBUG nova.virt.libvirt.driver [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Attempting to attach volume 76444907-9f59-47f1-99d8-aa820d2e0f3a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:41:24 compute-0 nova_compute[256940]: 2025-10-02 12:41:24.476 2 DEBUG nova.virt.libvirt.guest [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:41:24 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-76444907-9f59-47f1-99d8-aa820d2e0f3a">
Oct 02 12:41:24 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   </source>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:41:24 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:41:24 compute-0 nova_compute[256940]:   <serial>76444907-9f59-47f1-99d8-aa820d2e0f3a</serial>
Oct 02 12:41:24 compute-0 nova_compute[256940]: </disk>
Oct 02 12:41:24 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:41:24 compute-0 podman[327108]: 2025-10-02 12:41:24.515954867 +0000 UTC m=+0.378060881 container remove 303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:24 compute-0 systemd[1]: libpod-conmon-303de32ccbd2b3d5a668ca381a6b22a2b6afa18e0225ba73182d877cacc9ff73.scope: Deactivated successfully.
Oct 02 12:41:24 compute-0 podman[327172]: 2025-10-02 12:41:24.765173752 +0000 UTC m=+0.094783095 container create 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:24 compute-0 ceph-mon[73668]: pgmap v1967: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct 02 12:41:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/468430068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-0 podman[327172]: 2025-10-02 12:41:24.698679877 +0000 UTC m=+0.028289250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:24 compute-0 systemd[1]: Started libpod-conmon-17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23.scope.
Oct 02 12:41:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fe78a790ac95d2cb20cc69156686d944cbba8fcabeb5a183bcea08ddc6789f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fe78a790ac95d2cb20cc69156686d944cbba8fcabeb5a183bcea08ddc6789f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fe78a790ac95d2cb20cc69156686d944cbba8fcabeb5a183bcea08ddc6789f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fe78a790ac95d2cb20cc69156686d944cbba8fcabeb5a183bcea08ddc6789f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:24 compute-0 podman[327172]: 2025-10-02 12:41:24.871491288 +0000 UTC m=+0.201100681 container init 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:41:24 compute-0 podman[327172]: 2025-10-02 12:41:24.879657531 +0000 UTC m=+0.209266874 container start 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:41:24 compute-0 podman[327172]: 2025-10-02 12:41:24.896877861 +0000 UTC m=+0.226487204 container attach 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:41:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:24.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:25 compute-0 nova_compute[256940]: 2025-10-02 12:41:25.019 2 DEBUG nova.virt.libvirt.driver [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:25 compute-0 nova_compute[256940]: 2025-10-02 12:41:25.020 2 DEBUG nova.virt.libvirt.driver [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:25 compute-0 nova_compute[256940]: 2025-10-02 12:41:25.020 2 DEBUG nova.virt.libvirt.driver [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:25 compute-0 nova_compute[256940]: 2025-10-02 12:41:25.020 2 DEBUG nova.virt.libvirt.driver [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:d3:f4:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:25 compute-0 nova_compute[256940]: 2025-10-02 12:41:25.480 2 DEBUG oslo_concurrency.lockutils [None req-00342800-9c5a-4317-892b-8e1da5a3521a fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 992 KiB/s wr, 53 op/s
Oct 02 12:41:25 compute-0 brave_keller[327188]: {
Oct 02 12:41:25 compute-0 brave_keller[327188]:     "1": [
Oct 02 12:41:25 compute-0 brave_keller[327188]:         {
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "devices": [
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "/dev/loop3"
Oct 02 12:41:25 compute-0 brave_keller[327188]:             ],
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "lv_name": "ceph_lv0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "lv_size": "7511998464",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "name": "ceph_lv0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "tags": {
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.cluster_name": "ceph",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.crush_device_class": "",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.encrypted": "0",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.osd_id": "1",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.type": "block",
Oct 02 12:41:25 compute-0 brave_keller[327188]:                 "ceph.vdo": "0"
Oct 02 12:41:25 compute-0 brave_keller[327188]:             },
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "type": "block",
Oct 02 12:41:25 compute-0 brave_keller[327188]:             "vg_name": "ceph_vg0"
Oct 02 12:41:25 compute-0 brave_keller[327188]:         }
Oct 02 12:41:25 compute-0 brave_keller[327188]:     ]
Oct 02 12:41:25 compute-0 brave_keller[327188]: }
Oct 02 12:41:25 compute-0 systemd[1]: libpod-17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23.scope: Deactivated successfully.
Oct 02 12:41:25 compute-0 podman[327172]: 2025-10-02 12:41:25.82733754 +0000 UTC m=+1.156946893 container died 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fe78a790ac95d2cb20cc69156686d944cbba8fcabeb5a183bcea08ddc6789f-merged.mount: Deactivated successfully.
Oct 02 12:41:25 compute-0 podman[327172]: 2025-10-02 12:41:25.982553562 +0000 UTC m=+1.312162905 container remove 17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:41:25 compute-0 systemd[1]: libpod-conmon-17b403c48748dde50bf93a2055dd127ec63a83674b595b41f5ebab1285e48b23.scope: Deactivated successfully.
Oct 02 12:41:26 compute-0 sudo[327044]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:26 compute-0 sudo[327211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:26 compute-0 sudo[327211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:26 compute-0 sudo[327211]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:26 compute-0 sudo[327236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:26 compute-0 sudo[327236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:26 compute-0 sudo[327236]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:26 compute-0 sudo[327261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:26 compute-0 sudo[327261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:26 compute-0 sudo[327261]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:26 compute-0 sudo[327286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:41:26 compute-0 sudo[327286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:26.476 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:26.477 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:26.478 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:26 compute-0 nova_compute[256940]: 2025-10-02 12:41:26.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 nova_compute[256940]: 2025-10-02 12:41:26.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-0 podman[327353]: 2025-10-02 12:41:26.665782807 +0000 UTC m=+0.022257412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:26 compute-0 podman[327353]: 2025-10-02 12:41:26.828776091 +0000 UTC m=+0.185250696 container create 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:41:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:41:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:41:26 compute-0 ceph-mon[73668]: pgmap v1968: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 992 KiB/s wr, 53 op/s
Oct 02 12:41:26 compute-0 systemd[1]: Started libpod-conmon-00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee.scope.
Oct 02 12:41:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:27.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:27 compute-0 podman[327353]: 2025-10-02 12:41:27.226491034 +0000 UTC m=+0.582965649 container init 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 12:41:27 compute-0 podman[327353]: 2025-10-02 12:41:27.235818037 +0000 UTC m=+0.592292632 container start 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:41:27 compute-0 stoic_villani[327370]: 167 167
Oct 02 12:41:27 compute-0 systemd[1]: libpod-00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee.scope: Deactivated successfully.
Oct 02 12:41:27 compute-0 nova_compute[256940]: 2025-10-02 12:41:27.306 2 INFO nova.compute.manager [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Rescuing
Oct 02 12:41:27 compute-0 nova_compute[256940]: 2025-10-02 12:41:27.307 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:27 compute-0 nova_compute[256940]: 2025-10-02 12:41:27.308 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:27 compute-0 nova_compute[256940]: 2025-10-02 12:41:27.308 2 DEBUG nova.network.neutron [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:27 compute-0 podman[327353]: 2025-10-02 12:41:27.488680358 +0000 UTC m=+0.845154943 container attach 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:27 compute-0 podman[327353]: 2025-10-02 12:41:27.489818768 +0000 UTC m=+0.846293353 container died 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:41:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-74891977be1d10c6901cddd85c1ea0b593be796aeab634d3a8091b275641cc7e-merged.mount: Deactivated successfully.
Oct 02 12:41:27 compute-0 nova_compute[256940]: 2025-10-02 12:41:27.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:27 compute-0 podman[327353]: 2025-10-02 12:41:27.997187552 +0000 UTC m=+1.353662137 container remove 00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:41:28 compute-0 systemd[1]: libpod-conmon-00b5df8248666cbcb6e81b33a87485467c04d09a801983d8245b1e855507b5ee.scope: Deactivated successfully.
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:41:28 compute-0 podman[327394]: 2025-10-02 12:41:28.247428145 +0000 UTC m=+0.072485763 container create d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:41:28 compute-0 podman[327394]: 2025-10-02 12:41:28.211787475 +0000 UTC m=+0.036845123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:41:28 compute-0 systemd[1]: Started libpod-conmon-d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b.scope.
Oct 02 12:41:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f4cd44b629ca1089432d2b54e10546837a1ab1840d82b191500f1269d80039/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f4cd44b629ca1089432d2b54e10546837a1ab1840d82b191500f1269d80039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f4cd44b629ca1089432d2b54e10546837a1ab1840d82b191500f1269d80039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f4cd44b629ca1089432d2b54e10546837a1ab1840d82b191500f1269d80039/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:28 compute-0 podman[327394]: 2025-10-02 12:41:28.379263797 +0000 UTC m=+0.204321435 container init d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:28 compute-0 podman[327394]: 2025-10-02 12:41:28.390752476 +0000 UTC m=+0.215810094 container start d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:41:28 compute-0 podman[327394]: 2025-10-02 12:41:28.430204536 +0000 UTC m=+0.255262174 container attach d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:41:28
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'images']
Oct 02 12:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.798 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.799 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.817 2 DEBUG nova.network.neutron [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.951 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:28 compute-0 nova_compute[256940]: 2025-10-02 12:41:28.981 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:41:29 compute-0 ceph-mon[73668]: pgmap v1969: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:29.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:29 compute-0 cranky_mclean[327410]: {
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:         "osd_id": 1,
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:         "type": "bluestore"
Oct 02 12:41:29 compute-0 cranky_mclean[327410]:     }
Oct 02 12:41:29 compute-0 cranky_mclean[327410]: }
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:41:29 compute-0 podman[327394]: 2025-10-02 12:41:29.232066159 +0000 UTC m=+1.057123767 container died d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:41:29 compute-0 systemd[1]: libpod-d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b.scope: Deactivated successfully.
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.273 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.286 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.287 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.294 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.295 2 INFO nova.compute.claims [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-10f4cd44b629ca1089432d2b54e10546837a1ab1840d82b191500f1269d80039-merged.mount: Deactivated successfully.
Oct 02 12:41:29 compute-0 podman[327394]: 2025-10-02 12:41:29.385256068 +0000 UTC m=+1.210313676 container remove d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:41:29 compute-0 systemd[1]: libpod-conmon-d71ae119a6c37c63f09a50b8c925b260c119597a722f61e3087f0dea88a6f24b.scope: Deactivated successfully.
Oct 02 12:41:29 compute-0 sudo[327286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:41:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:41:29 compute-0 nova_compute[256940]: 2025-10-02 12:41:29.585 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 74bc8f4c-f436-4cd7-b8e5-f488bdbf757e does not exist
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8e5a0d8e-8c91-447c-9dba-b1f8ea8eb35d does not exist
Oct 02 12:41:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 96d5856f-366f-49e5-9042-149d03449bd2 does not exist
Oct 02 12:41:29 compute-0 sudo[327447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:29 compute-0 sudo[327447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:29 compute-0 sudo[327447]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:29 compute-0 sudo[327472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:41:29 compute-0 sudo[327472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:29 compute-0 sudo[327472]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066901759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.053 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.063 2 DEBUG nova.compute.provider_tree [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.235 2 DEBUG nova.scheduler.client.report [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.507 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.508 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:30 compute-0 ceph-mon[73668]: pgmap v1970: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4066901759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.778 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.779 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.839 2 INFO nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.886 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:41:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:30 compute-0 nova_compute[256940]: 2025-10-02 12:41:30.981 2 DEBUG nova.policy [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.079 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.080 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.081 2 INFO nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating image(s)
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.111 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.145 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.174 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.179 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.248 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.249 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.249 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.250 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.277 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.281 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 680 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 19 KiB/s wr, 103 op/s
Oct 02 12:41:31 compute-0 kernel: tap1613b7ad-4e (unregistering): left promiscuous mode
Oct 02 12:41:31 compute-0 NetworkManager[44981]: <info>  [1759408891.9066] device (tap1613b7ad-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:41:31 compute-0 ovn_controller[148123]: 2025-10-02T12:41:31Z|00522|binding|INFO|Releasing lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 from this chassis (sb_readonly=0)
Oct 02 12:41:31 compute-0 ovn_controller[148123]: 2025-10-02T12:41:31Z|00523|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 down in Southbound
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:31 compute-0 ovn_controller[148123]: 2025-10-02T12:41:31Z|00524|binding|INFO|Removing iface tap1613b7ad-4e ovn-installed in OVS
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:31 compute-0 nova_compute[256940]: 2025-10-02 12:41:31.966 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:31.975 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:31.976 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:41:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:31.978 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:41:31 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 02 12:41:31 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006d.scope: Consumed 15.473s CPU time.
Oct 02 12:41:31 compute-0 systemd-machined[210927]: Machine qemu-52-instance-0000006d terminated.
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.004 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22c1cd93-3940-4304-b955-e78ab9495726]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.042 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a98c52e4-34d7-4d5b-b6ae-d4e5abffdf0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.047 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f3372b17-921a-419b-a3b3-bc2bcf9b4e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.056 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.086 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4828fa-a386-4e5f-9287-601d02d81128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.109 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10b7fa75-fb9d-44dd-ad1d-ad0a7048dc2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327676, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.131 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac33bf1-93db-4896-ad5d-281525532fae]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327679, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327679, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.133 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.142 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.142 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.143 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:32.143 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.287 2 DEBUG nova.objects.instance [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.315 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance shutdown successfully after 3 seconds.
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.322 2 INFO nova.virt.libvirt.driver [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance destroyed successfully.
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.322 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.367 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.368 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Ensure instance console log exists: /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.368 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.369 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.369 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.459 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Attempting a stable device rescue
Oct 02 12:41:32 compute-0 nova_compute[256940]: 2025-10-02 12:41:32.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.169 2 DEBUG nova.compute.manager [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.170 2 DEBUG oslo_concurrency.lockutils [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.170 2 DEBUG oslo_concurrency.lockutils [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.170 2 DEBUG oslo_concurrency.lockutils [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.171 2 DEBUG nova.compute.manager [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.171 2 WARNING nova.compute.manager [req-07a0ab07-6c5b-4e35-912b-add22a9ee7c1 req-fd32a21e-5b81-48c8-a67b-8cbde07cba5c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state active and task_state rescuing.
Oct 02 12:41:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.429 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Successfully created port: da2c94a9-a16d-4ec8-9797-c39c6c89ec26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.445 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.450 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.450 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Creating image(s)
Oct 02 12:41:33 compute-0 ceph-mon[73668]: pgmap v1971: 305 pgs: 305 active+clean; 680 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 19 KiB/s wr, 103 op/s
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.547 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.552 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 678 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 376 KiB/s wr, 118 op/s
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.661 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.697 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.702 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "a4fc5f03806ff71af63131023a10d6064464e77f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:33 compute-0 nova_compute[256940]: 2025-10-02 12:41:33.704 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "a4fc5f03806ff71af63131023a10d6064464e77f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.248 2 DEBUG nova.virt.libvirt.imagebackend [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/2d2735c8-c9f0-487a-8b58-3e359fca2f77/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/2d2735c8-c9f0-487a-8b58-3e359fca2f77/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.322 2 DEBUG nova.virt.libvirt.imagebackend [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/2d2735c8-c9f0-487a-8b58-3e359fca2f77/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.323 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning images/2d2735c8-c9f0-487a-8b58-3e359fca2f77@snap to None/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.525 2 DEBUG oslo_concurrency.lockutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "a4fc5f03806ff71af63131023a10d6064464e77f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:34 compute-0 ceph-mon[73668]: pgmap v1972: 305 pgs: 305 active+clean; 678 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 376 KiB/s wr, 118 op/s
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.582 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.634 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.637 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start _get_guest_xml network_info=[{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:d3:f4:17"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '2d2735c8-c9f0-487a-8b58-3e359fca2f77', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-76444907-9f59-47f1-99d8-aa820d2e0f3a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '76444907-9f59-47f1-99d8-aa820d2e0f3a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'attached_at': '', 'detached_at': '', 'volume_id': '76444907-9f59-47f1-99d8-aa820d2e0f3a', 'serial': '76444907-9f59-47f1-99d8-aa820d2e0f3a'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': 'ed37b226-2e1a-4152-bdb2-a08ff6bfc01b', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.637 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.702 2 WARNING nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.707 2 DEBUG nova.virt.libvirt.host [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.708 2 DEBUG nova.virt.libvirt.host [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.711 2 DEBUG nova.virt.libvirt.host [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.712 2 DEBUG nova.virt.libvirt.host [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.712 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.713 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.713 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.713 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.713 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.714 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.714 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.714 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.714 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.715 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.715 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.716 2 DEBUG nova.virt.hardware [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.716 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:34 compute-0 nova_compute[256940]: 2025-10-02 12:41:34.738 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242631889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.222 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.265 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.299 2 DEBUG nova.compute.manager [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.300 2 DEBUG oslo_concurrency.lockutils [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.300 2 DEBUG oslo_concurrency.lockutils [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.300 2 DEBUG oslo_concurrency.lockutils [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.300 2 DEBUG nova.compute.manager [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.302 2 WARNING nova.compute.manager [req-e5f9d04a-d565-4b1c-b187-fcd3d94508bd req-1de954ba-43cd-40b8-8139-6b3f31a8f9c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state active and task_state rescuing.
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.396 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Successfully updated port: da2c94a9-a16d-4ec8-9797-c39c6c89ec26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.443 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.444 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.444 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 689 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 113 op/s
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.678 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:41:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147114996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.708 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1242631889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-0 nova_compute[256940]: 2025-10-02 12:41:35.746 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/913548972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/147114996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244284231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.203 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.206 2 DEBUG nova.virt.libvirt.vif [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:40:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-169288015',display_name='tempest-ServerStableDeviceRescueTest-server-169288015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-169288015',id=109,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNu9tPmj4bBUF1u39kUhNMdjiWifamHbOuNkgYWVMmgjxafglnyiQJEpPAJoRypDdX6VcTNTXPo4P81OuLi9mgNJNDIlKFU24K/KSUL24g28w4rHKYOFD6uTdH/rAn40Q==',key_name='tempest-keypair-1658267838',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-uken2igj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fdbe447f49374937a828d6281949a2a4',uuid=e5acbf2d-74bd-452c-8732-db2f1c0739ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:d3:f4:17"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.206 2 DEBUG nova.network.os_vif_util [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:d3:f4:17"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.207 2 DEBUG nova.network.os_vif_util [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.209 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.252 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <uuid>e5acbf2d-74bd-452c-8732-db2f1c0739ff</uuid>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <name>instance-0000006d</name>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-169288015</nova:name>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:41:34</nova:creationTime>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <nova:port uuid="1613b7ad-4ed9-456e-bd64-5de6348252d5">
Oct 02 12:41:36 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <system>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="serial">e5acbf2d-74bd-452c-8732-db2f1c0739ff</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="uuid">e5acbf2d-74bd-452c-8732-db2f1c0739ff</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </system>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <os>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </os>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <features>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </features>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-76444907-9f59-47f1-99d8-aa820d2e0f3a">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <serial>76444907-9f59-47f1-99d8-aa820d2e0f3a</serial>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.rescue">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:36 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <target dev="vdc" bus="virtio"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <boot order="1"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d3:f4:17"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <target dev="tap1613b7ad-4e"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/console.log" append="off"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <video>
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </video>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:41:36 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:41:36 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:41:36 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:41:36 compute-0 nova_compute[256940]: </domain>
Oct 02 12:41:36 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.263 2 INFO nova.virt.libvirt.driver [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance destroyed successfully.
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.381 2 DEBUG nova.network.neutron [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating instance_info_cache with network_info: [{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.424 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.425 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.425 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.425 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.426 2 DEBUG nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:d3:f4:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.426 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Using config drive
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.459 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.468 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.468 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance network_info: |[{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.471 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start _get_guest_xml network_info=[{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.474 2 WARNING nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.480 2 DEBUG nova.virt.libvirt.host [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.481 2 DEBUG nova.virt.libvirt.host [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.493 2 DEBUG nova.virt.libvirt.host [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.494 2 DEBUG nova.virt.libvirt.host [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.495 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.495 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.495 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.495 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.495 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.496 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.496 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.496 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.496 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.497 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.497 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.497 2 DEBUG nova.virt.hardware [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.499 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.538 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.584 2 DEBUG nova.objects.instance [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'keypairs' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:36 compute-0 ceph-mon[73668]: pgmap v1973: 305 pgs: 305 active+clean; 689 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 113 op/s
Oct 02 12:41:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3244284231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756480052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.966 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:36 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.995 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:36.999 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.149 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Creating config drive at /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.154 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j50o552 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.294 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j50o552" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.341 2 DEBUG nova.storage.rbd_utils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.348 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.390 2 DEBUG nova.compute.manager [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-changed-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.390 2 DEBUG nova.compute.manager [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Refreshing instance network info cache due to event network-changed-da2c94a9-a16d-4ec8-9797-c39c6c89ec26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.391 2 DEBUG oslo_concurrency.lockutils [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.391 2 DEBUG oslo_concurrency.lockutils [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.392 2 DEBUG nova.network.neutron [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Refreshing network info cache for port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:37 compute-0 podman[328024]: 2025-10-02 12:41:37.401130589 +0000 UTC m=+0.069547387 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:41:37 compute-0 podman[328033]: 2025-10-02 12:41:37.402385582 +0000 UTC m=+0.070861711 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:41:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659227039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.459 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.460 2 DEBUG nova.virt.libvirt.vif [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-tempest.common.compute-instance-406908592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.461 2 DEBUG nova.network.os_vif_util [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.462 2 DEBUG nova.network.os_vif_util [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.463 2 DEBUG nova.objects.instance [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.571 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <uuid>0cda84d2-b983-447e-b9ef-6888fdeb2df5</uuid>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <name>instance-0000006f</name>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:name>tempest-tempest.common.compute-instance-406908592</nova:name>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:41:36</nova:creationTime>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <nova:port uuid="da2c94a9-a16d-4ec8-9797-c39c6c89ec26">
Oct 02 12:41:37 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <system>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="serial">0cda84d2-b983-447e-b9ef-6888fdeb2df5</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="uuid">0cda84d2-b983-447e-b9ef-6888fdeb2df5</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </system>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <os>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </os>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <features>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </features>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk">
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config">
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </source>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:41:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:3e:c8:4b"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <target dev="tapda2c94a9-a1"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/console.log" append="off"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <video>
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </video>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:41:37 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:41:37 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:41:37 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:41:37 compute-0 nova_compute[256940]: </domain>
Oct 02 12:41:37 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.572 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Preparing to wait for external event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.572 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.572 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.573 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.573 2 DEBUG nova.virt.libvirt.vif [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-tempest.common.compute-instance-406908592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.574 2 DEBUG nova.network.os_vif_util [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.574 2 DEBUG nova.network.os_vif_util [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.575 2 DEBUG os_vif [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda2c94a9-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda2c94a9-a1, col_values=(('external_ids', {'iface-id': 'da2c94a9-a16d-4ec8-9797-c39c6c89ec26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:c8:4b', 'vm-uuid': '0cda84d2-b983-447e-b9ef-6888fdeb2df5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 NetworkManager[44981]: <info>  [1759408897.6386] manager: (tapda2c94a9-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.649 2 INFO os_vif [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1')
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.731 2 DEBUG oslo_concurrency.processutils [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue e5acbf2d-74bd-452c-8732-db2f1c0739ff_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.731 2 INFO nova.virt.libvirt.driver [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Deleting local config drive /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff/disk.config.rescue because it was imported into RBD.
Oct 02 12:41:37 compute-0 kernel: tap1613b7ad-4e: entered promiscuous mode
Oct 02 12:41:37 compute-0 NetworkManager[44981]: <info>  [1759408897.7858] manager: (tap1613b7ad-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Oct 02 12:41:37 compute-0 ovn_controller[148123]: 2025-10-02T12:41:37Z|00525|binding|INFO|Claiming lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 for this chassis.
Oct 02 12:41:37 compute-0 ovn_controller[148123]: 2025-10-02T12:41:37Z|00526|binding|INFO|1613b7ad-4ed9-456e-bd64-5de6348252d5: Claiming fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 ovn_controller[148123]: 2025-10-02T12:41:37Z|00527|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 ovn-installed in OVS
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 nova_compute[256940]: 2025-10-02 12:41:37.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:37 compute-0 systemd-udevd[328110]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:37 compute-0 NetworkManager[44981]: <info>  [1759408897.8320] device (tap1613b7ad-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:37 compute-0 NetworkManager[44981]: <info>  [1759408897.8333] device (tap1613b7ad-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:37 compute-0 systemd-machined[210927]: New machine qemu-53-instance-0000006d.
Oct 02 12:41:37 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000006d.
Oct 02 12:41:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1756480052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2659227039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-0 ovn_controller[148123]: 2025-10-02T12:41:37Z|00528|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 up in Southbound
Oct 02 12:41:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:37.969 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:37.971 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis
Oct 02 12:41:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:37.973 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:41:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:37.996 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c98436-1db6-4b11-9184-667a51cb049e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.036 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.036 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.037 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:3e:c8:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.038 2 INFO nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Using config drive
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.054 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c96da5-f0f0-4538-9465-89bd1697c7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.058 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7552ba91-f361-42f7-9faa-f5b30dea4504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.091 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.103 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[68be5930-e037-49d2-ba7c-3394099d367d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.126 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[da2bf1c9-1c6c-4b92-9803-cc528d5f872e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328145, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.145 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b06c8487-570b-4c01-b889-3194a1fb7222]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328146, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328146, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.147 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.158 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.159 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.159 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:38.160 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:38 compute-0 ceph-mon[73668]: pgmap v1974: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.953 2 INFO nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating config drive at /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config
Oct 02 12:41:38 compute-0 nova_compute[256940]: 2025-10-02 12:41:38.960 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7gk_klo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.100 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7gk_klo" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.138 2 DEBUG nova.storage.rbd_utils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.144 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.184 2 DEBUG nova.network.neutron [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updated VIF entry in instance network info cache for port da2c94a9-a16d-4ec8-9797-c39c6c89ec26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.185 2 DEBUG nova.network.neutron [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating instance_info_cache with network_info: [{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.221 2 DEBUG oslo_concurrency.lockutils [req-08c29bf0-ea2c-4118-a547-33f1b0d8b414 req-075263d4-822e-4a1f-92b8-c85670821e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.382 2 DEBUG oslo_concurrency.processutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.383 2 INFO nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deleting local config drive /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config because it was imported into RBD.
Oct 02 12:41:39 compute-0 kernel: tapda2c94a9-a1: entered promiscuous mode
Oct 02 12:41:39 compute-0 NetworkManager[44981]: <info>  [1759408899.4517] manager: (tapda2c94a9-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 02 12:41:39 compute-0 systemd-udevd[328115]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:39 compute-0 ovn_controller[148123]: 2025-10-02T12:41:39Z|00529|binding|INFO|Claiming lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for this chassis.
Oct 02 12:41:39 compute-0 ovn_controller[148123]: 2025-10-02T12:41:39Z|00530|binding|INFO|da2c94a9-a16d-4ec8-9797-c39c6c89ec26: Claiming fa:16:3e:3e:c8:4b 10.100.0.6
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:39 compute-0 NetworkManager[44981]: <info>  [1759408899.4706] device (tapda2c94a9-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:39 compute-0 NetworkManager[44981]: <info>  [1759408899.4720] device (tapda2c94a9-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:39 compute-0 ovn_controller[148123]: 2025-10-02T12:41:39Z|00531|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 ovn-installed in OVS
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:39 compute-0 systemd-machined[210927]: New machine qemu-54-instance-0000006f.
Oct 02 12:41:39 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-0000006f.
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.605 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for e5acbf2d-74bd-452c-8732-db2f1c0739ff due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.606 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408899.6051795, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.606 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Resumed (Lifecycle Event)
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.612 2 DEBUG nova.compute.manager [None req-d600407c-bb2d-4c43-b964-15b73ffe31d5 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.647 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:c8:4b 10.100.0.6'], port_security=['fa:16:3e:3e:c8:4b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0cda84d2-b983-447e-b9ef-6888fdeb2df5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4443be83-b7f7-4207-b668-d7bb46101dd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=da2c94a9-a16d-4ec8-9797-c39c6c89ec26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:39 compute-0 ovn_controller[148123]: 2025-10-02T12:41:39Z|00532|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 up in Southbound
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.648 158104 INFO neutron.agent.ovn.metadata.agent [-] Port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.649 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.664 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[772994a0-c29b-4918-93a4-4d35a1c966c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.665 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.666 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.667 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[56a5975c-23d9-452c-ba08-e2ccaf809352]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.667 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a632e7a7-fab8-42b8-b10f-3b62bd7df5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.686 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca3e69a-65cc-4e48-8951-0caedaa08c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.704 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71774510-ca4e-4939-b900-f278a54c9417]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.734 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[82f5c977-fbbf-4274-92af-174bb37f371d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 NetworkManager[44981]: <info>  [1759408899.7430] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.742 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5eca9107-1df3-438d-9858-39f5253466b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.789 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[90e5545e-d3aa-4499-aea8-a6d925cfc1be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.793 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ab165bbe-3c04-41ca-9ddf-4e68dc3d418b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 NetworkManager[44981]: <info>  [1759408899.8245] device (tapf3643647-70): carrier: link connected
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.835 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e2b20-5723-49fb-bc0e-c0366da94c71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.856 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13698563-fa24-40de-bafe-49314d651e43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679530, 'reachable_time': 29921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328347, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.869 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[135cc47b-9593-4c7f-b410-0bd7e64953c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679530, 'tstamp': 679530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328350, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.888 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[220e96bd-b518-417f-a593-590658efe9dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679530, 'reachable_time': 29921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328352, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:39.921 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0448ba-63fc-4a6d-a1bb-d4cd3c6a1669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.954 2 DEBUG nova.compute.manager [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.954 2 DEBUG oslo_concurrency.lockutils [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.954 2 DEBUG oslo_concurrency.lockutils [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.955 2 DEBUG oslo_concurrency.lockutils [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.955 2 DEBUG nova.compute.manager [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:39 compute-0 nova_compute[256940]: 2025-10-02 12:41:39.955 2 WARNING nova.compute.manager [req-15c3a0ef-9de6-4478-a35f-08c39aea1336 req-182c3aca-989b-4557-a278-b27474c98fee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state active and task_state rescuing.
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.004 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9c7153-5296-42e7-81e2-a5af5756deff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.007 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.008 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.009 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:40 compute-0 NetworkManager[44981]: <info>  [1759408900.0132] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct 02 12:41:40 compute-0 kernel: tapf3643647-70: entered promiscuous mode
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.015 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:40 compute-0 ovn_controller[148123]: 2025-10-02T12:41:40Z|00533|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.019 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.020 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[86a7e930-ef9e-4092-839e-a45133af35d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.021 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.024 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:40.025 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.028 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011845977573050438 of space, bias 1.0, pg target 3.553793271915131 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007032776730876605 of space, bias 1.0, pg target 2.0887346890703515 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:41:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Oct 02 12:41:40 compute-0 sudo[328399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:40 compute-0 sudo[328399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:40 compute-0 sudo[328399]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:40 compute-0 podman[328386]: 2025-10-02 12:41:40.397509848 +0000 UTC m=+0.029241434 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:41:40 compute-0 sudo[328424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:40 compute-0 sudo[328424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:40 compute-0 sudo[328424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.651 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408899.6090379, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.652 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Started (Lifecycle Event)
Oct 02 12:41:40 compute-0 podman[328386]: 2025-10-02 12:41:40.734513055 +0000 UTC m=+0.366244661 container create 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:41:40 compute-0 systemd[1]: Started libpod-conmon-055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728.scope.
Oct 02 12:41:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fd4236dab70fbf7ec8ac81836695654982bf6d36235c52f253a7335c3bb49c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.904 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:40 compute-0 nova_compute[256940]: 2025-10-02 12:41:40.908 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:40 compute-0 ceph-mon[73668]: pgmap v1975: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:41:40 compute-0 podman[328386]: 2025-10-02 12:41:40.98558203 +0000 UTC m=+0.617313616 container init 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:41:40 compute-0 podman[328386]: 2025-10-02 12:41:40.997913531 +0000 UTC m=+0.629645097 container start 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:41:41 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [NOTICE]   (328457) : New worker (328459) forked
Oct 02 12:41:41 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [NOTICE]   (328457) : Loading success.
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.122 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408900.4753017, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.123 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Started (Lifecycle Event)
Oct 02 12:41:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.194 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.198 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408900.4753907, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.198 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Paused (Lifecycle Event)
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.253 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.256 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.341 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.555 2 DEBUG nova.compute.manager [req-08bd1995-f419-427c-b8b8-09c87247e7f6 req-301d181c-18ef-466b-8347-e87e4f2f6dd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.556 2 DEBUG oslo_concurrency.lockutils [req-08bd1995-f419-427c-b8b8-09c87247e7f6 req-301d181c-18ef-466b-8347-e87e4f2f6dd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.556 2 DEBUG oslo_concurrency.lockutils [req-08bd1995-f419-427c-b8b8-09c87247e7f6 req-301d181c-18ef-466b-8347-e87e4f2f6dd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.557 2 DEBUG oslo_concurrency.lockutils [req-08bd1995-f419-427c-b8b8-09c87247e7f6 req-301d181c-18ef-466b-8347-e87e4f2f6dd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.557 2 DEBUG nova.compute.manager [req-08bd1995-f419-427c-b8b8-09c87247e7f6 req-301d181c-18ef-466b-8347-e87e4f2f6dd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Processing event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.558 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.561 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408901.5613377, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.562 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Resumed (Lifecycle Event)
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.564 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.567 2 INFO nova.virt.libvirt.driver [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance spawned successfully.
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.567 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:41:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.691 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.696 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.697 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.698 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.698 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.699 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.699 2 DEBUG nova.virt.libvirt.driver [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.704 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:41 compute-0 nova_compute[256940]: 2025-10-02 12:41:41.915 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.034 2 INFO nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Took 10.95 seconds to spawn the instance on the hypervisor.
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.034 2 DEBUG nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.173 2 INFO nova.compute.manager [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Unrescuing
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.173 2 DEBUG oslo_concurrency.lockutils [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.174 2 DEBUG oslo_concurrency.lockutils [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.174 2 DEBUG nova.network.neutron [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.321 2 DEBUG nova.compute.manager [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.322 2 DEBUG oslo_concurrency.lockutils [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.322 2 DEBUG oslo_concurrency.lockutils [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.322 2 DEBUG oslo_concurrency.lockutils [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.322 2 DEBUG nova.compute.manager [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.323 2 WARNING nova.compute.manager [req-17110461-fde9-420d-88e2-9539dffdb363 req-bc33860f-73ed-471a-a779-7dcc5089295c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.428 2 INFO nova.compute.manager [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Took 13.17 seconds to build instance.
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.457 2 DEBUG oslo_concurrency.lockutils [None req-4c561cd1-2383-4d2b-939f-4f13e8f46191 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:42 compute-0 nova_compute[256940]: 2025-10-02 12:41:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:42.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:43 compute-0 ceph-mon[73668]: pgmap v1976: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 12:41:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.687 2 DEBUG nova.compute.manager [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.687 2 DEBUG oslo_concurrency.lockutils [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.688 2 DEBUG oslo_concurrency.lockutils [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.688 2 DEBUG oslo_concurrency.lockutils [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.688 2 DEBUG nova.compute.manager [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:43 compute-0 nova_compute[256940]: 2025-10-02 12:41:43.689 2 WARNING nova.compute.manager [req-2072f990-3870-4464-9318-ef399fa4232c req-755950eb-46ea-44eb-b323-502aa258edfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state None.
Oct 02 12:41:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:45 compute-0 ceph-mon[73668]: pgmap v1977: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 12:41:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:45.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 156 op/s
Oct 02 12:41:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.251 2 DEBUG nova.network.neutron [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.271 2 DEBUG oslo_concurrency.lockutils [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.272 2 DEBUG nova.objects.instance [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:46 compute-0 kernel: tap1613b7ad-4e (unregistering): left promiscuous mode
Oct 02 12:41:46 compute-0 NetworkManager[44981]: <info>  [1759408906.4094] device (tap1613b7ad-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00534|binding|INFO|Releasing lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 from this chassis (sb_readonly=0)
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00535|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 down in Southbound
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00536|binding|INFO|Removing iface tap1613b7ad-4e ovn-installed in OVS
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.463 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.464 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.466 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:41:46 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 02 12:41:46 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006d.scope: Consumed 8.621s CPU time.
Oct 02 12:41:46 compute-0 systemd-machined[210927]: Machine qemu-53-instance-0000006d terminated.
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.483 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[08efeddd-5c3c-44c1-bcb4-090f53bd846e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.550 2 DEBUG nova.compute.manager [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.552 2 DEBUG nova.compute.manager [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing instance network info cache due to event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.552 2 DEBUG oslo_concurrency.lockutils [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.552 2 DEBUG oslo_concurrency.lockutils [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.553 2 DEBUG nova.network.neutron [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.555 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5eedec4c-cc21-4bcd-a7e2-4630583634d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.558 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ed81cba3-a0d0-4b7a-97e1-381860de541c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.572 2 INFO nova.virt.libvirt.driver [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance destroyed successfully.
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.573 2 DEBUG nova.objects.instance [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.592 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8719f75b-baf6-4ef3-b171-303ff825406c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.614 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[feba3a1b-25fd-4029-947c-c9a229ccb4fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328490, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.633 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[42fa08f1-cedd-4a60-a198-0587a79fabf0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328492, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328492, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.635 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.646 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.647 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.648 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.648 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:46 compute-0 kernel: tap1613b7ad-4e: entered promiscuous mode
Oct 02 12:41:46 compute-0 NetworkManager[44981]: <info>  [1759408906.6956] manager: (tap1613b7ad-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00537|binding|INFO|Claiming lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 for this chassis.
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00538|binding|INFO|1613b7ad-4ed9-456e-bd64-5de6348252d5: Claiming fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 systemd-udevd[328472]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:46 compute-0 NetworkManager[44981]: <info>  [1759408906.7145] device (tap1613b7ad-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:46 compute-0 NetworkManager[44981]: <info>  [1759408906.7171] device (tap1613b7ad-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.712 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.714 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.719 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00539|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 ovn-installed in OVS
Oct 02 12:41:46 compute-0 ovn_controller[148123]: 2025-10-02T12:41:46Z|00540|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 up in Southbound
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 systemd-machined[210927]: New machine qemu-55-instance-0000006d.
Oct 02 12:41:46 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000006d.
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.743 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23eb8df4-9abd-47a4-8160-10e4a377e914]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.789 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f38bd91a-d676-4a08-9713-4107ed0daedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.792 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[95fed49d-5efe-4b55-b2e3-90ec7da91ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.821 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[accb8741-1fb1-4865-8b5f-f61da5ce577a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.841 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3674bb03-744e-4bd0-a363-79d7bec19c53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328518, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.860 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdcc889-b20e-426f-ac43-3386930744aa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328519, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328519, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.863 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 nova_compute[256940]: 2025-10-02 12:41:46.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.866 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.867 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.868 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:41:46.868 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.189 2 DEBUG nova.compute.manager [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.189 2 DEBUG oslo_concurrency.lockutils [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.189 2 DEBUG oslo_concurrency.lockutils [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.190 2 DEBUG oslo_concurrency.lockutils [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.190 2 DEBUG nova.compute.manager [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.190 2 WARNING nova.compute.manager [req-90c2e46e-8b1f-4e1d-a372-09e836134c55 req-086cde5e-15f6-40da-9fb0-f53948a0f158 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:41:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:47.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:47 compute-0 ceph-mon[73668]: pgmap v1978: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 156 op/s
Oct 02 12:41:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1208385054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 187 op/s
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.682 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for e5acbf2d-74bd-452c-8732-db2f1c0739ff due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.687 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408907.68171, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.687 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Resumed (Lifecycle Event)
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.797 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.802 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.912 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.912 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408907.6890988, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:47 compute-0 nova_compute[256940]: 2025-10-02 12:41:47.913 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Started (Lifecycle Event)
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.085 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.088 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.199 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.648 2 DEBUG nova.compute.manager [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.649 2 DEBUG nova.compute.manager [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing instance network info cache due to event network-changed-1613b7ad-4ed9-456e-bd64-5de6348252d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.649 2 DEBUG oslo_concurrency.lockutils [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.864 2 DEBUG nova.network.neutron [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updated VIF entry in instance network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.865 2 DEBUG nova.network.neutron [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.960 2 DEBUG oslo_concurrency.lockutils [req-0c4db887-4ef0-48e9-ad94-e0f7a2d83225 req-843486a4-2baa-491a-b486-9e8eabf9340a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.961 2 DEBUG oslo_concurrency.lockutils [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:48 compute-0 nova_compute[256940]: 2025-10-02 12:41:48.961 2 DEBUG nova.network.neutron [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Refreshing network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:48.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:48 compute-0 ovn_controller[148123]: 2025-10-02T12:41:48Z|00541|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:48 compute-0 ovn_controller[148123]: 2025-10-02T12:41:48Z|00542|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.142 2 DEBUG nova.compute.manager [None req-1a62e7d4-c524-4a5f-82ad-51e8da4de89d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.335 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-changed-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.335 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Refreshing instance network info cache due to event network-changed-da2c94a9-a16d-4ec8-9797-c39c6c89ec26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.336 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.336 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:49 compute-0 nova_compute[256940]: 2025-10-02 12:41:49.336 2 DEBUG nova.network.neutron [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Refreshing network info cache for port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:49 compute-0 ceph-mon[73668]: pgmap v1979: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 187 op/s
Oct 02 12:41:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 157 op/s
Oct 02 12:41:50 compute-0 ceph-mon[73668]: pgmap v1980: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 157 op/s
Oct 02 12:41:50 compute-0 nova_compute[256940]: 2025-10-02 12:41:50.514 2 DEBUG nova.network.neutron [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updated VIF entry in instance network info cache for port 1613b7ad-4ed9-456e-bd64-5de6348252d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:50 compute-0 nova_compute[256940]: 2025-10-02 12:41:50.516 2 DEBUG nova.network.neutron [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:50 compute-0 nova_compute[256940]: 2025-10-02 12:41:50.546 2 DEBUG oslo_concurrency.lockutils [req-71200065-5059-4829-a13e-2a4aa74810a3 req-2a0ff66f-3ad3-4cf2-ba9d-9ddf687b22b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:50.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.428 2 DEBUG nova.network.neutron [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updated VIF entry in instance network info cache for port da2c94a9-a16d-4ec8-9797-c39c6c89ec26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.429 2 DEBUG nova.network.neutron [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating instance_info_cache with network_info: [{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.540 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0cda84d2-b983-447e-b9ef-6888fdeb2df5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.541 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.541 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.541 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.541 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.542 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.542 2 WARNING nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.542 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.542 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.543 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.543 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.543 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.543 2 WARNING nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.544 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.544 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.544 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.544 2 DEBUG oslo_concurrency.lockutils [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.545 2 DEBUG nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:51 compute-0 nova_compute[256940]: 2025-10-02 12:41:51.545 2 WARNING nova.compute.manager [req-3b9ec886-c4f8-45b1-ae78-d73838383011 req-e34c18aa-6565-4887-832e-be9cf6f0e72e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:41:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 738 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 248 op/s
Oct 02 12:41:52 compute-0 nova_compute[256940]: 2025-10-02 12:41:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:52 compute-0 ceph-mon[73668]: pgmap v1981: 305 pgs: 305 active+clean; 738 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 248 op/s
Oct 02 12:41:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/281551588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:52.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:53 compute-0 podman[328602]: 2025-10-02 12:41:53.400356291 +0000 UTC m=+0.064763192 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:41:53 compute-0 podman[328603]: 2025-10-02 12:41:53.433807114 +0000 UTC m=+0.093185864 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:41:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 754 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 02 12:41:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4130622140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:54.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:55 compute-0 ceph-mon[73668]: pgmap v1982: 305 pgs: 305 active+clean; 754 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 02 12:41:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 764 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Oct 02 12:41:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:56 compute-0 nova_compute[256940]: 2025-10-02 12:41:56.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:56 compute-0 ceph-mon[73668]: pgmap v1983: 305 pgs: 305 active+clean; 764 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Oct 02 12:41:56 compute-0 ovn_controller[148123]: 2025-10-02T12:41:56Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:c8:4b 10.100.0.6
Oct 02 12:41:56 compute-0 ovn_controller[148123]: 2025-10-02T12:41:56Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:c8:4b 10.100.0.6
Oct 02 12:41:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:41:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:56.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:41:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Oct 02 12:41:57 compute-0 nova_compute[256940]: 2025-10-02 12:41:57.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3154034574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:41:58 compute-0 nova_compute[256940]: 2025-10-02 12:41:58.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:58 compute-0 ceph-mon[73668]: pgmap v1984: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Oct 02 12:41:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3672207254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3482794670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:58.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:41:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:59 compute-0 nova_compute[256940]: 2025-10-02 12:41:59.232 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct 02 12:41:59 compute-0 ovn_controller[148123]: 2025-10-02T12:41:59Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:f4:17 10.100.0.12
Oct 02 12:42:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1881695082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:00 compute-0 sudo[328646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:00 compute-0 sudo[328646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:00 compute-0 sudo[328646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:00 compute-0 sudo[328672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:00 compute-0 sudo[328672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:00 compute-0 sudo[328672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:00.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.244 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.244 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.244 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.244 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:01 compute-0 ceph-mon[73668]: pgmap v1985: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Oct 02 12:42:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350922912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.714 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.810 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.810 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.813 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.814 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.814 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.816 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.816 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:01.903 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:01 compute-0 nova_compute[256940]: 2025-10-02 12:42:01.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:01.904 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.010 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.012 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3906MB free_disk=20.69430923461914GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.012 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.012 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.093 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 1546ac1d-4a04-4c5e-ae02-b005461c7731 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.094 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e5acbf2d-74bd-452c-8732-db2f1c0739ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.094 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.094 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.094 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.263 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486233126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.701 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.706 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.731 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.757 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:42:02 compute-0 nova_compute[256940]: 2025-10-02 12:42:02.757 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:02.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2350922912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Oct 02 12:42:03 compute-0 nova_compute[256940]: 2025-10-02 12:42:03.758 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:03 compute-0 nova_compute[256940]: 2025-10-02 12:42:03.759 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:03 compute-0 nova_compute[256940]: 2025-10-02 12:42:03.759 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:42:04 compute-0 ceph-mon[73668]: pgmap v1986: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Oct 02 12:42:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3486233126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.542 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.542 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.558 2 DEBUG nova.objects.instance [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'flavor' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.598 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.773 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.773 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.774 2 INFO nova.compute.manager [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attaching volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497 to /dev/vdb
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.982 2 DEBUG os_brick.utils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.983 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.996 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.996 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[33098afe-955d-4674-92a0-ba59625e8721]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:04 compute-0 nova_compute[256940]: 2025-10-02 12:42:04.998 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:04.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.004 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.005 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[f7351c52-3146-49e7-bd00-e64991f193b3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.006 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.013 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.013 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4900119d-b319-4737-b3de-b0830e4d5c2a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.014 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[432f71ab-632d-4f39-878e-96a6df64a64c]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.015 2 DEBUG oslo_concurrency.processutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.055 2 DEBUG oslo_concurrency.processutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.058 2 DEBUG os_brick.utils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.059 2 DEBUG nova.virt.block_device [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating existing volume attachment record: 6a767a41-7929-4e78-ab39-6475cae18c68 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:05 compute-0 nova_compute[256940]: 2025-10-02 12:42:05.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:05 compute-0 ceph-mon[73668]: pgmap v1987: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Oct 02 12:42:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:42:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:42:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:42:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.563 2 DEBUG nova.objects.instance [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'flavor' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.587 2 DEBUG nova.virt.libvirt.driver [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attempting to attach volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.589 2 DEBUG nova.virt.libvirt.guest [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:42:06 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497">
Oct 02 12:42:06 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:42:06 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:06 compute-0 nova_compute[256940]:   <serial>13e98bdf-a59c-4cec-a1d4-6b0f09d31497</serial>
Oct 02 12:42:06 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:06 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:42:06 compute-0 ceph-mon[73668]: pgmap v1988: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:42:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3270305858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.787 2 DEBUG nova.virt.libvirt.driver [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.787 2 DEBUG nova.virt.libvirt.driver [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.788 2 DEBUG nova.virt.libvirt.driver [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:06 compute-0 nova_compute[256940]: 2025-10-02 12:42:06.788 2 DEBUG nova.virt.libvirt.driver [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:3e:c8:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:06.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:07 compute-0 nova_compute[256940]: 2025-10-02 12:42:07.035 2 DEBUG oslo_concurrency.lockutils [None req-50fa3686-c253-4afe-8bfd-fe081fe525d8 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:07 compute-0 nova_compute[256940]: 2025-10-02 12:42:07.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:07 compute-0 nova_compute[256940]: 2025-10-02 12:42:07.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:42:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:07.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Oct 02 12:42:07 compute-0 nova_compute[256940]: 2025-10-02 12:42:07.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-0 nova_compute[256940]: 2025-10-02 12:42:08.212 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:08 compute-0 nova_compute[256940]: 2025-10-02 12:42:08.212 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:08 compute-0 nova_compute[256940]: 2025-10-02 12:42:08.212 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:42:08 compute-0 podman[328772]: 2025-10-02 12:42:08.392160865 +0000 UTC m=+0.056925067 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:42:08 compute-0 podman[328773]: 2025-10-02 12:42:08.394295651 +0000 UTC m=+0.057214865 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:42:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:08.906 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:09.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:09 compute-0 ceph-mon[73668]: pgmap v1989: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Oct 02 12:42:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:09.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 83 KiB/s wr, 141 op/s
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.255 2 INFO nova.compute.manager [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Rebuilding instance
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.312 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [{"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.333 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e5acbf2d-74bd-452c-8732-db2f1c0739ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.334 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.510 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.537 2 DEBUG nova.compute.manager [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.581 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_requests' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.593 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.604 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.616 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.629 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:42:10 compute-0 nova_compute[256940]: 2025-10-02 12:42:10.633 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:42:10 compute-0 ceph-mon[73668]: pgmap v1990: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 83 KiB/s wr, 141 op/s
Oct 02 12:42:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:11.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:11 compute-0 nova_compute[256940]: 2025-10-02 12:42:11.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 800 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 162 op/s
Oct 02 12:42:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2821617500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:12 compute-0 nova_compute[256940]: 2025-10-02 12:42:12.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:13 compute-0 ceph-mon[73668]: pgmap v1991: 305 pgs: 305 active+clean; 800 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 162 op/s
Oct 02 12:42:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 817 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 885 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Oct 02 12:42:14 compute-0 nova_compute[256940]: 2025-10-02 12:42:14.653 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance shutdown successfully after 4 seconds.
Oct 02 12:42:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:15 compute-0 ceph-mon[73668]: pgmap v1992: 305 pgs: 305 active+clean; 817 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 885 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Oct 02 12:42:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:15 compute-0 kernel: tapda2c94a9-a1 (unregistering): left promiscuous mode
Oct 02 12:42:15 compute-0 NetworkManager[44981]: <info>  [1759408935.5646] device (tapda2c94a9-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:15 compute-0 ovn_controller[148123]: 2025-10-02T12:42:15Z|00543|binding|INFO|Releasing lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 from this chassis (sb_readonly=0)
Oct 02 12:42:15 compute-0 ovn_controller[148123]: 2025-10-02T12:42:15Z|00544|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 down in Southbound
Oct 02 12:42:15 compute-0 nova_compute[256940]: 2025-10-02 12:42:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-0 ovn_controller[148123]: 2025-10-02T12:42:15Z|00545|binding|INFO|Removing iface tapda2c94a9-a1 ovn-installed in OVS
Oct 02 12:42:15 compute-0 nova_compute[256940]: 2025-10-02 12:42:15.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:15.589 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:c8:4b 10.100.0.6'], port_security=['fa:16:3e:3e:c8:4b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0cda84d2-b983-447e-b9ef-6888fdeb2df5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4443be83-b7f7-4207-b668-d7bb46101dd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=da2c94a9-a16d-4ec8-9797-c39c6c89ec26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:15.590 158104 INFO neutron.agent.ovn.metadata.agent [-] Port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:42:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:15.593 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:15.596 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5d92db-d6a3-4267-8b28-5eb8958da63a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:15.598 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore
Oct 02 12:42:15 compute-0 nova_compute[256940]: 2025-10-02 12:42:15.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 02 12:42:15 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006f.scope: Consumed 14.456s CPU time.
Oct 02 12:42:15 compute-0 systemd-machined[210927]: Machine qemu-54-instance-0000006f terminated.
Oct 02 12:42:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 844 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 02 12:42:15 compute-0 nova_compute[256940]: 2025-10-02 12:42:15.685 2 INFO nova.virt.libvirt.driver [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance destroyed successfully.
Oct 02 12:42:15 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [NOTICE]   (328457) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:15 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [NOTICE]   (328457) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:15 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [WARNING]  (328457) : Exiting Master process...
Oct 02 12:42:15 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [ALERT]    (328457) : Current worker (328459) exited with code 143 (Terminated)
Oct 02 12:42:15 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[328453]: [WARNING]  (328457) : All workers exited. Exiting... (0)
Oct 02 12:42:15 compute-0 systemd[1]: libpod-055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728.scope: Deactivated successfully.
Oct 02 12:42:15 compute-0 podman[328848]: 2025-10-02 12:42:15.813619739 +0000 UTC m=+0.111656266 container died 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:42:15 compute-0 nova_compute[256940]: 2025-10-02 12:42:15.954 2 INFO nova.compute.manager [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Detaching volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497
Oct 02 12:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fd4236dab70fbf7ec8ac81836695654982bf6d36235c52f253a7335c3bb49c-merged.mount: Deactivated successfully.
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.193 2 INFO nova.virt.block_device [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attempting to driver detach volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497 from mountpoint /dev/vdb
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.202 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Attempting to detach device vdb from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.203 2 DEBUG nova.virt.libvirt.guest [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:16 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497">
Oct 02 12:42:16 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:16 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]:   <serial>13e98bdf-a59c-4cec-a1d4-6b0f09d31497</serial>
Oct 02 12:42:16 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:42:16 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:16 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.246 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully detached device vdb from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the persistent domain config.
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.267 2 DEBUG nova.compute.manager [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.267 2 DEBUG oslo_concurrency.lockutils [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.267 2 DEBUG oslo_concurrency.lockutils [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.268 2 DEBUG oslo_concurrency.lockutils [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.268 2 DEBUG nova.compute.manager [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.268 2 WARNING nova.compute.manager [req-f2e66a64-9d2c-42ae-9552-5129d2baac42 req-c625b8a2-2cd1-421b-be75-eeb63a240175 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state rebuilding.
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.502 2 INFO nova.virt.libvirt.driver [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance destroyed successfully.
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.503 2 DEBUG nova.virt.libvirt.vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-ServerActionsTestOtherA-server-1705103539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.503 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.504 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.504 2 DEBUG os_vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda2c94a9-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.515 2 INFO os_vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1')
Oct 02 12:42:16 compute-0 nova_compute[256940]: 2025-10-02 12:42:16.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1654915291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4124274956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:16 compute-0 podman[328848]: 2025-10-02 12:42:16.661869252 +0000 UTC m=+0.959905819 container cleanup 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:16 compute-0 systemd[1]: libpod-conmon-055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728.scope: Deactivated successfully.
Oct 02 12:42:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:17.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:17 compute-0 podman[328903]: 2025-10-02 12:42:17.228645727 +0000 UTC m=+0.531592917 container remove 055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc672a84-ec50-477d-82c7-b640755f9fdd]: (4, ('Thu Oct  2 12:42:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728)\n055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728\nThu Oct  2 12:42:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728)\n055514cebafacaf33a0368d36edb193506e071ccdcdbd1e50b1b096449a59728\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.239 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d62922-8993-4590-9dcb-4ca946f1ece1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.240 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:17 compute-0 nova_compute[256940]: 2025-10-02 12:42:17.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-0 kernel: tapf3643647-70: left promiscuous mode
Oct 02 12:42:17 compute-0 nova_compute[256940]: 2025-10-02 12:42:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-0 nova_compute[256940]: 2025-10-02 12:42:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.262 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d6e7c8-8850-4b0c-b6eb-8051fe1a6956]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.295 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13f57c7f-6cb1-47a1-b536-0f86707db7ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.297 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[669991ba-c671-4dbb-9191-f243469454a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.319 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e020750a-4c13-49e4-b256-f475fa939e23]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679521, 'reachable_time': 21213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328919, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.324 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:17.324 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[47d1a19a-bf66-473a-bc55-84782c559917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-0 ceph-mon[73668]: pgmap v1993: 305 pgs: 305 active+clean; 844 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 02 12:42:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.020 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deleting instance files /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5_del
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.021 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deletion of /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5_del complete
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.216 2 INFO nova.virt.block_device [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Booting with volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497 at /dev/vdb
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.329 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.355 2 DEBUG os_brick.utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.357 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.371 2 DEBUG nova.compute.manager [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.371 2 DEBUG oslo_concurrency.lockutils [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.371 2 DEBUG oslo_concurrency.lockutils [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.372 2 DEBUG oslo_concurrency.lockutils [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.372 2 DEBUG nova.compute.manager [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.372 2 WARNING nova.compute.manager [req-990e127d-daec-4748-9fe5-6115646305c7 req-6ca1c11e-0cfb-4845-b26b-c5f817f2ef2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state rebuild_block_device_mapping.
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.370 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.370 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[9f143043-67cd-414e-93d5-d856cb7f5c79]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.374 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.384 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.385 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[b907e670-b074-41ef-8045-805f3cd357cb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.387 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.398 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.398 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[f57507fb-7a44-47c0-af6a-8e6a8bc1790e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.402 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[d57b12e3-c51a-45cb-9405-b7c6ade5fb60]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.403 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.438 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.440 2 DEBUG os_brick.initiator.connectors.lightos [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.440 2 DEBUG os_brick.initiator.connectors.lightos [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.440 2 DEBUG os_brick.initiator.connectors.lightos [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.441 2 DEBUG os_brick.utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:18 compute-0 nova_compute[256940]: 2025-10-02 12:42:18.441 2 DEBUG nova.virt.block_device [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating existing volume attachment record: 21d085da-e46b-4e90-bddb-ca7351a2dfe5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:42:18 compute-0 ceph-mon[73668]: pgmap v1994: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 02 12:42:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:19.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.245 2 DEBUG oslo_concurrency.lockutils [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.246 2 DEBUG oslo_concurrency.lockutils [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.288 2 INFO nova.compute.manager [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Detaching volume 76444907-9f59-47f1-99d8-aa820d2e0f3a
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.427 2 INFO nova.virt.block_device [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Attempting to driver detach volume 76444907-9f59-47f1-99d8-aa820d2e0f3a from mountpoint /dev/vdb
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.443 2 DEBUG nova.virt.libvirt.driver [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Attempting to detach device vdb from instance e5acbf2d-74bd-452c-8732-db2f1c0739ff from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.444 2 DEBUG nova.virt.libvirt.guest [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-76444907-9f59-47f1-99d8-aa820d2e0f3a">
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <serial>76444907-9f59-47f1-99d8-aa820d2e0f3a</serial>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:19 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.456 2 INFO nova.virt.libvirt.driver [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully detached device vdb from instance e5acbf2d-74bd-452c-8732-db2f1c0739ff from the persistent domain config.
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.457 2 DEBUG nova.virt.libvirt.driver [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e5acbf2d-74bd-452c-8732-db2f1c0739ff from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.458 2 DEBUG nova.virt.libvirt.guest [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-76444907-9f59-47f1-99d8-aa820d2e0f3a">
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <serial>76444907-9f59-47f1-99d8-aa820d2e0f3a</serial>
Oct 02 12:42:19 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:42:19 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:19 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.572 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408939.5715175, e5acbf2d-74bd-452c-8732-db2f1c0739ff => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.575 2 DEBUG nova.virt.libvirt.driver [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e5acbf2d-74bd-452c-8732-db2f1c0739ff _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.578 2 INFO nova.virt.libvirt.driver [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully detached device vdb from instance e5acbf2d-74bd-452c-8732-db2f1c0739ff from the live domain config.
Oct 02 12:42:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.852 2 DEBUG nova.objects.instance [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.946 2 DEBUG oslo_concurrency.lockutils [None req-a3d3f8bf-6e73-478e-9257-2c3d2d2ed513 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.970 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:42:19 compute-0 nova_compute[256940]: 2025-10-02 12:42:19.971 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating image(s)
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.011 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.074 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.102 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.107 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.193 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.194 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.194 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.195 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.224 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:20 compute-0 nova_compute[256940]: 2025-10-02 12:42:20.228 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2044087468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:20 compute-0 sudo[329023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:20 compute-0 sudo[329023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:20 compute-0 sudo[329023]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:20 compute-0 sudo[329048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:20 compute-0 sudo[329048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:20 compute-0 sudo[329048]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:21.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:21.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:21 compute-0 nova_compute[256940]: 2025-10-02 12:42:21.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:21 compute-0 nova_compute[256940]: 2025-10-02 12:42:21.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 816 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 02 12:42:21 compute-0 ceph-mon[73668]: pgmap v1995: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:42:21 compute-0 nova_compute[256940]: 2025-10-02 12:42:21.894 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:21 compute-0 nova_compute[256940]: 2025-10-02 12:42:21.968 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:42:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 02 12:42:22 compute-0 ceph-mon[73668]: pgmap v1996: 305 pgs: 305 active+clean; 816 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 02 12:42:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 02 12:42:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:23.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.034 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.034 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Ensure instance console log exists: /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.035 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.035 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.035 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.038 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start _get_guest_xml network_info=[{"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '13e98bdf-a59c-4cec-a1d4-6b0f09d31497', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0cda84d2-b983-447e-b9ef-6888fdeb2df5', 'attached_at': '', 'detached_at': '', 'volume_id': '13e98bdf-a59c-4cec-a1d4-6b0f09d31497', 'serial': '13e98bdf-a59c-4cec-a1d4-6b0f09d31497'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '21d085da-e46b-4e90-bddb-ca7351a2dfe5', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.043 2 WARNING nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.052 2 DEBUG nova.virt.libvirt.host [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.052 2 DEBUG nova.virt.libvirt.host [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.055 2 DEBUG nova.virt.libvirt.host [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.055 2 DEBUG nova.virt.libvirt.host [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.057 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.057 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.058 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.058 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.058 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.058 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.058 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.059 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.059 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.059 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.059 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.060 2 DEBUG nova.virt.hardware [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.060 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:23.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 808 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 233 op/s
Oct 02 12:42:23 compute-0 nova_compute[256940]: 2025-10-02 12:42:23.754 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:24 compute-0 ceph-mon[73668]: osdmap e268: 3 total, 3 up, 3 in
Oct 02 12:42:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049197200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.210 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.248 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.254 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:24 compute-0 podman[329190]: 2025-10-02 12:42:24.416225055 +0000 UTC m=+0.084378334 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:42:24 compute-0 podman[329191]: 2025-10-02 12:42:24.444691198 +0000 UTC m=+0.110145766 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133103057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.710 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.862 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.863 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.865 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.866 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.866 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.868 2 INFO nova.compute.manager [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Terminating instance
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.869 2 DEBUG nova.compute.manager [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.893 2 DEBUG nova.virt.libvirt.vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-ServerActionsTestOtherA-server-1705103539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.894 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.894 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.897 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <uuid>0cda84d2-b983-447e-b9ef-6888fdeb2df5</uuid>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <name>instance-0000006f</name>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherA-server-1705103539</nova:name>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:42:23</nova:creationTime>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <nova:port uuid="da2c94a9-a16d-4ec8-9797-c39c6c89ec26">
Oct 02 12:42:24 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <system>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="serial">0cda84d2-b983-447e-b9ef-6888fdeb2df5</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="uuid">0cda84d2-b983-447e-b9ef-6888fdeb2df5</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </system>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <os>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </os>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <features>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </features>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </source>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </source>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </source>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:42:24 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <serial>13e98bdf-a59c-4cec-a1d4-6b0f09d31497</serial>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:3e:c8:4b"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <target dev="tapda2c94a9-a1"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/console.log" append="off"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <video>
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </video>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:42:24 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:42:24 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:42:24 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:42:24 compute-0 nova_compute[256940]: </domain>
Oct 02 12:42:24 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.899 2 DEBUG nova.virt.libvirt.vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-ServerActionsTestOtherA-server-1705103539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.899 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.900 2 DEBUG nova.network.os_vif_util [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.900 2 DEBUG os_vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda2c94a9-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda2c94a9-a1, col_values=(('external_ids', {'iface-id': 'da2c94a9-a16d-4ec8-9797-c39c6c89ec26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:c8:4b', 'vm-uuid': '0cda84d2-b983-447e-b9ef-6888fdeb2df5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 NetworkManager[44981]: <info>  [1759408944.9468] manager: (tapda2c94a9-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:24 compute-0 kernel: tap1613b7ad-4e (unregistering): left promiscuous mode
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.955 2 INFO os_vif [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1')
Oct 02 12:42:24 compute-0 NetworkManager[44981]: <info>  [1759408944.9604] device (tap1613b7ad-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 ovn_controller[148123]: 2025-10-02T12:42:24Z|00546|binding|INFO|Releasing lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 from this chassis (sb_readonly=0)
Oct 02 12:42:24 compute-0 ovn_controller[148123]: 2025-10-02T12:42:24Z|00547|binding|INFO|Setting lport 1613b7ad-4ed9-456e-bd64-5de6348252d5 down in Southbound
Oct 02 12:42:24 compute-0 ovn_controller[148123]: 2025-10-02T12:42:24Z|00548|binding|INFO|Removing iface tap1613b7ad-4e ovn-installed in OVS
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:24 compute-0 nova_compute[256940]: 2025-10-02 12:42:24.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.013 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:f4:17 10.100.0.12'], port_security=['fa:16:3e:d3:f4:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e5acbf2d-74bd-452c-8732-db2f1c0739ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a45bfea3-3640-4750-bd91-9e9e392369a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1613b7ad-4ed9-456e-bd64-5de6348252d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.014 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1613b7ad-4ed9-456e-bd64-5de6348252d5 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:42:25 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.016 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2
Oct 02 12:42:25 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006d.scope: Consumed 14.692s CPU time.
Oct 02 12:42:25 compute-0 systemd-machined[210927]: Machine qemu-55-instance-0000006d terminated.
Oct 02 12:42:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5872fb37-5859-4fc8-94bb-bfd60f6200ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.065 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c1814a-aad9-42b3-8f3f-1fbf5b64d7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.068 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[879ecb4c-1102-443b-be25-afad09a9c822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.096 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8707fe-80ee-4dd6-88e0-2074030f0f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.110 2 INFO nova.virt.libvirt.driver [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Instance destroyed successfully.
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.111 2 DEBUG nova.objects.instance [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid e5acbf2d-74bd-452c-8732-db2f1c0739ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.117 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cda1197c-bf9c-4cf3-9b6a-ac0a4f48dd4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 1084, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 1084, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661241, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329279, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.136 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccc885e-df39-44e2-b7fd-93d8e2c6577e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661254, 'tstamp': 661254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329282, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661257, 'tstamp': 661257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329282, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.138 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:25 compute-0 ceph-mon[73668]: pgmap v1998: 305 pgs: 305 active+clean; 808 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 233 op/s
Oct 02 12:42:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4049197200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3133103057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.140 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.141 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.141 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.141 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:3e:c8:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.141 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Using config drive
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.147 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.148 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.148 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:25.148 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.173 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.182 2 DEBUG nova.virt.libvirt.vif [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:40:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-169288015',display_name='tempest-ServerStableDeviceRescueTest-server-169288015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-169288015',id=109,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNu9tPmj4bBUF1u39kUhNMdjiWifamHbOuNkgYWVMmgjxafglnyiQJEpPAJoRypDdX6VcTNTXPo4P81OuLi9mgNJNDIlKFU24K/KSUL24g28w4rHKYOFD6uTdH/rAn40Q==',key_name='tempest-keypair-1658267838',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-uken2igj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fdbe447f49374937a828d6281949a2a4',uuid=e5acbf2d-74bd-452c-8732-db2f1c0739ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.182 2 DEBUG nova.network.os_vif_util [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "address": "fa:16:3e:d3:f4:17", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1613b7ad-4e", "ovs_interfaceid": "1613b7ad-4ed9-456e-bd64-5de6348252d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.183 2 DEBUG nova.network.os_vif_util [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.183 2 DEBUG os_vif [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1613b7ad-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.191 2 INFO os_vif [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:f4:17,bridge_name='br-int',has_traffic_filtering=True,id=1613b7ad-4ed9-456e-bd64-5de6348252d5,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1613b7ad-4e')
Oct 02 12:42:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.337 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.416 2 DEBUG nova.compute.manager [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.417 2 DEBUG oslo_concurrency.lockutils [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.417 2 DEBUG oslo_concurrency.lockutils [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.417 2 DEBUG oslo_concurrency.lockutils [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.418 2 DEBUG nova.compute.manager [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.418 2 DEBUG nova.compute.manager [req-1262848f-f689-4b0f-af5c-85a32f7282ce req-67d6457b-dbdc-4c93-8f49-d9c98b25f3d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-unplugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.434 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'keypairs' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 774 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 235 op/s
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.719 2 INFO nova.virt.libvirt.driver [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Deleting instance files /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff_del
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.720 2 INFO nova.virt.libvirt.driver [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Deletion of /var/lib/nova/instances/e5acbf2d-74bd-452c-8732-db2f1c0739ff_del complete
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.778 2 INFO nova.compute.manager [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.779 2 DEBUG oslo.service.loopingcall [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.779 2 DEBUG nova.compute.manager [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:25 compute-0 nova_compute[256940]: 2025-10-02 12:42:25.779 2 DEBUG nova.network.neutron [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:26.477 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:26.478 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:26.478 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:26 compute-0 nova_compute[256940]: 2025-10-02 12:42:26.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:27.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:27 compute-0 ceph-mon[73668]: pgmap v1999: 305 pgs: 305 active+clean; 774 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 235 op/s
Oct 02 12:42:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.217 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Creating config drive at /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.225 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsq6w6bvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.385 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsq6w6bvh" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.431 2 DEBUG nova.storage.rbd_utils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.435 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.608 2 DEBUG oslo_concurrency.processutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config 0cda84d2-b983-447e-b9ef-6888fdeb2df5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.610 2 INFO nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deleting local config drive /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5/disk.config because it was imported into RBD.
Oct 02 12:42:28 compute-0 kernel: tapda2c94a9-a1: entered promiscuous mode
Oct 02 12:42:28 compute-0 NetworkManager[44981]: <info>  [1759408948.6787] manager: (tapda2c94a9-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Oct 02 12:42:28 compute-0 ovn_controller[148123]: 2025-10-02T12:42:28Z|00549|binding|INFO|Claiming lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for this chassis.
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:28 compute-0 ovn_controller[148123]: 2025-10-02T12:42:28Z|00550|binding|INFO|da2c94a9-a16d-4ec8-9797-c39c6c89ec26: Claiming fa:16:3e:3e:c8:4b 10.100.0.6
Oct 02 12:42:28 compute-0 ovn_controller[148123]: 2025-10-02T12:42:28Z|00551|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 ovn-installed in OVS
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:28 compute-0 systemd-machined[210927]: New machine qemu-56-instance-0000006f.
Oct 02 12:42:28 compute-0 systemd-udevd[329379]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:42:28 compute-0 NetworkManager[44981]: <info>  [1759408948.7250] device (tapda2c94a9-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:42:28 compute-0 NetworkManager[44981]: <info>  [1759408948.7257] device (tapda2c94a9-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:42:28 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-0000006f.
Oct 02 12:42:28 compute-0 ovn_controller[148123]: 2025-10-02T12:42:28Z|00552|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 up in Southbound
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.769 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:c8:4b 10.100.0.6'], port_security=['fa:16:3e:3e:c8:4b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0cda84d2-b983-447e-b9ef-6888fdeb2df5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4443be83-b7f7-4207-b668-d7bb46101dd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=da2c94a9-a16d-4ec8-9797-c39c6c89ec26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.771 158104 INFO neutron.agent.ovn.metadata.agent [-] Port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.773 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.778 2 DEBUG nova.compute.manager [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.778 2 DEBUG oslo_concurrency.lockutils [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.779 2 DEBUG oslo_concurrency.lockutils [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.779 2 DEBUG oslo_concurrency.lockutils [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.779 2 DEBUG nova.compute.manager [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] No waiting events found dispatching network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:28 compute-0 nova_compute[256940]: 2025-10-02 12:42:28.779 2 WARNING nova.compute.manager [req-f396e419-61d6-4fcb-828a-fda32413b1b2 req-a91d4068-8688-4d71-800b-b0bd32bac455 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received unexpected event network-vif-plugged-1613b7ad-4ed9-456e-bd64-5de6348252d5 for instance with vm_state active and task_state deleting.
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.788 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc61ff8-74fd-45ca-9bb6-6d0cba758ac9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.789 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.790 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.790 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee942cec-bdc7-43ec-94d0-31ca3ac19fc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.793 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[678e7e1b-d83e-49a6-a9c2-2de8dbbe6457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:42:28
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct 02 12:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.805 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe976bb-a915-4bef-98ed-927e0030a9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.829 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1dff0546-6182-4779-99a4-b849aad3f75c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.859 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1426db99-8f88-4203-97ec-9bb7c78a15e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 NetworkManager[44981]: <info>  [1759408948.8701] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.869 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d064801-311f-4b89-8deb-c040479f5e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 systemd-udevd[329381]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.911 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6a9721-3e94-420a-975b-77b2494fd7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.915 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bb69ccb6-6181-4a23-9217-e205e9780c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 NetworkManager[44981]: <info>  [1759408948.9388] device (tapf3643647-70): carrier: link connected
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.945 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cc4bdc56-8e1e-48af-9821-b46ea3872b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.966 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[626166e8-db05-4667-b659-659c8b4d44af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684442, 'reachable_time': 41195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329447, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:28.984 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[64d0d7ba-80cc-4a12-892c-d3593a32b524]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 684442, 'tstamp': 684442}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329451, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.011 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8073d266-c066-421f-8fc4-ca12c8feb10d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684442, 'reachable_time': 41195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329467, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:29.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.047 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0061cbb2-7e37-4b83-850a-b888705baffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.103 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e19156b3-3efd-4c5c-8f58-03d173fcd786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.104 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.104 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.105 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:29 compute-0 kernel: tapf3643647-70: entered promiscuous mode
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 NetworkManager[44981]: <info>  [1759408949.1070] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.110 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:29 compute-0 ovn_controller[148123]: 2025-10-02T12:42:29Z|00553|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.128 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.129 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[edae64b3-a9f0-479b-a87d-d63475c1084a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.129 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:42:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:29.130 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:42:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:29 compute-0 ceph-mon[73668]: pgmap v2000: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/104167137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:29 compute-0 podman[329508]: 2025-10-02 12:42:29.526656501 +0000 UTC m=+0.063628802 container create c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.555 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 0cda84d2-b983-447e-b9ef-6888fdeb2df5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.556 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408949.5544586, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.556 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Resumed (Lifecycle Event)
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.558 2 DEBUG nova.compute.manager [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.559 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.562 2 INFO nova.virt.libvirt.driver [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance spawned successfully.
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.563 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:42:29 compute-0 systemd[1]: Started libpod-conmon-c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0.scope.
Oct 02 12:42:29 compute-0 podman[329508]: 2025-10-02 12:42:29.492282163 +0000 UTC m=+0.029254514 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:42:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f9784db446c39288ff6fb6c2f38947a51aec417c61b10ed44afa6c1e88d23f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:29 compute-0 podman[329508]: 2025-10-02 12:42:29.638963592 +0000 UTC m=+0.175935893 container init c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:42:29 compute-0 podman[329508]: 2025-10-02 12:42:29.645473542 +0000 UTC m=+0.182445843 container start c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:42:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:29 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [NOTICE]   (329527) : New worker (329529) forked
Oct 02 12:42:29 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [NOTICE]   (329527) : Loading success.
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.735 2 DEBUG nova.compute.manager [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.736 2 DEBUG oslo_concurrency.lockutils [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.737 2 DEBUG oslo_concurrency.lockutils [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.737 2 DEBUG oslo_concurrency.lockutils [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.738 2 DEBUG nova.compute.manager [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.739 2 WARNING nova.compute.manager [req-b9a760ee-5621-477a-8f27-79e3affb3a1b req-168b861c-2e8b-496c-ac85-bfd42bbca5f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.770 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.778 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.779 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.779 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.780 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.780 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.780 2 DEBUG nova.virt.libvirt.driver [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:29 compute-0 nova_compute[256940]: 2025-10-02 12:42:29.784 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:30 compute-0 sudo[329538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:30 compute-0 sudo[329538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-0 sudo[329538]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:30 compute-0 sudo[329563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:30 compute-0 sudo[329563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-0 sudo[329563]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-0 sudo[329588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:30 compute-0 sudo[329588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-0 sudo[329588]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.297 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.297 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408949.556562, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.297 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Started (Lifecycle Event)
Oct 02 12:42:30 compute-0 sudo[329613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:42:30 compute-0 sudo[329613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.418 2 DEBUG nova.compute.manager [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.440 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.444 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.532 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.566 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.567 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.567 2 DEBUG nova.objects.instance [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:42:30 compute-0 sudo[329613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-0 nova_compute[256940]: 2025-10-02 12:42:30.823 2 DEBUG oslo_concurrency.lockutils [None req-08f7aa89-0a18-4be7-b3bc-94137ecaf0cb 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:42:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:42:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:42:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d93b4b06-991f-4aaf-a4d6-49a0d60c44ea does not exist
Oct 02 12:42:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3ee1acc3-b2a5-4f9a-9e4b-6178f5711265 does not exist
Oct 02 12:42:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6fbb831f-f0fa-46ab-87d5-2035276f3aad does not exist
Oct 02 12:42:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:31.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.041 2 DEBUG nova.network.neutron [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:31 compute-0 sudo[329669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:31 compute-0 sudo[329669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 sudo[329669]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:31 compute-0 sudo[329694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:31 compute-0 sudo[329694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 sudo[329694]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.201 2 INFO nova.compute.manager [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Took 5.42 seconds to deallocate network for instance.
Oct 02 12:42:31 compute-0 sudo[329719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:31 compute-0 sudo[329719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 sudo[329719]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 02 12:42:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:31.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 02 12:42:31 compute-0 sudo[329744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:42:31 compute-0 sudo[329744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.293 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.293 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.380 2 DEBUG oslo_concurrency.processutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:31 compute-0 ceph-mon[73668]: pgmap v2001: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:31 compute-0 ceph-mon[73668]: osdmap e269: 3 total, 3 up, 3 in
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:31 compute-0 podman[329826]: 2025-10-02 12:42:31.592411786 +0000 UTC m=+0.039416000 container create 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:42:31 compute-0 systemd[1]: Started libpod-conmon-9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375.scope.
Oct 02 12:42:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 1.1 MiB/s wr, 141 op/s
Oct 02 12:42:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:31 compute-0 podman[329826]: 2025-10-02 12:42:31.574005015 +0000 UTC m=+0.021009259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:31 compute-0 podman[329826]: 2025-10-02 12:42:31.678334069 +0000 UTC m=+0.125338313 container init 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:42:31 compute-0 podman[329826]: 2025-10-02 12:42:31.689809448 +0000 UTC m=+0.136813672 container start 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:42:31 compute-0 podman[329826]: 2025-10-02 12:42:31.693232398 +0000 UTC m=+0.140236612 container attach 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:42:31 compute-0 agitated_archimedes[329842]: 167 167
Oct 02 12:42:31 compute-0 systemd[1]: libpod-9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375.scope: Deactivated successfully.
Oct 02 12:42:31 compute-0 conmon[329842]: conmon 9cae009c05350393b606 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375.scope/container/memory.events
Oct 02 12:42:31 compute-0 podman[329847]: 2025-10-02 12:42:31.741232711 +0000 UTC m=+0.029516742 container died 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f044a4bb00bc14e3a269e6ad5f3d4052863b5395779d854ff20dd3954e62e3-merged.mount: Deactivated successfully.
Oct 02 12:42:31 compute-0 podman[329847]: 2025-10-02 12:42:31.83620177 +0000 UTC m=+0.124485761 container remove 9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:42:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437680059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:31 compute-0 systemd[1]: libpod-conmon-9cae009c05350393b60673fa590040f454b55d334780674252f7134f962cf375.scope: Deactivated successfully.
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.864 2 DEBUG oslo_concurrency.processutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.870 2 DEBUG nova.compute.provider_tree [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.891 2 DEBUG nova.scheduler.client.report [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.901 2 DEBUG nova.compute.manager [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Received event network-vif-deleted-1613b7ad-4ed9-456e-bd64-5de6348252d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.902 2 DEBUG nova.compute.manager [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.904 2 DEBUG oslo_concurrency.lockutils [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.905 2 DEBUG oslo_concurrency.lockutils [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.906 2 DEBUG oslo_concurrency.lockutils [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.906 2 DEBUG nova.compute.manager [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.907 2 WARNING nova.compute.manager [req-567ecbf4-def3-4083-aa85-ac6a0f8ac764 req-c2b15748-f884-40a8-8ede-5e90f99a568a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state None.
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.922 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:31 compute-0 nova_compute[256940]: 2025-10-02 12:42:31.961 2 INFO nova.scheduler.client.report [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Deleted allocations for instance e5acbf2d-74bd-452c-8732-db2f1c0739ff
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.031207451 +0000 UTC m=+0.047819520 container create a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:42:32 compute-0 nova_compute[256940]: 2025-10-02 12:42:32.058 2 DEBUG oslo_concurrency.lockutils [None req-bcd98287-3eab-4729-9107-96bc4f3db101 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "e5acbf2d-74bd-452c-8732-db2f1c0739ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:32 compute-0 systemd[1]: Started libpod-conmon-a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd.scope.
Oct 02 12:42:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.011984379 +0000 UTC m=+0.028596468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.132074614 +0000 UTC m=+0.148686703 container init a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.139255821 +0000 UTC m=+0.155867890 container start a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.146070409 +0000 UTC m=+0.162682508 container attach a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:32 compute-0 ceph-mon[73668]: pgmap v2003: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 1.1 MiB/s wr, 141 op/s
Oct 02 12:42:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/437680059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:32 compute-0 nova_compute[256940]: 2025-10-02 12:42:32.924 2 DEBUG oslo_concurrency.lockutils [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:32 compute-0 nova_compute[256940]: 2025-10-02 12:42:32.925 2 DEBUG oslo_concurrency.lockutils [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:32 compute-0 nova_compute[256940]: 2025-10-02 12:42:32.938 2 INFO nova.compute.manager [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Detaching volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497
Oct 02 12:42:32 compute-0 lucid_dirac[329887]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:42:32 compute-0 lucid_dirac[329887]: --> relative data size: 1.0
Oct 02 12:42:32 compute-0 lucid_dirac[329887]: --> All data devices are unavailable
Oct 02 12:42:32 compute-0 systemd[1]: libpod-a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd.scope: Deactivated successfully.
Oct 02 12:42:32 compute-0 podman[329870]: 2025-10-02 12:42:32.981623161 +0000 UTC m=+0.998235240 container died a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-66122d3accc8b9dbee78be47f0aa875488df15e359620cd23722b590dd341355-merged.mount: Deactivated successfully.
Oct 02 12:42:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:33.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:33 compute-0 podman[329870]: 2025-10-02 12:42:33.048848726 +0000 UTC m=+1.065460795 container remove a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:42:33 compute-0 systemd[1]: libpod-conmon-a18ad59b0679708b8359b8f94b71bd4ee5ae8012a94bfb6e73076d78b4fa00cd.scope: Deactivated successfully.
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.081 2 INFO nova.virt.block_device [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Attempting to driver detach volume 13e98bdf-a59c-4cec-a1d4-6b0f09d31497 from mountpoint /dev/vdb
Oct 02 12:42:33 compute-0 sudo[329744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.091 2 DEBUG nova.virt.libvirt.driver [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Attempting to detach device vdb from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.092 2 DEBUG nova.virt.libvirt.guest [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497">
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <serial>13e98bdf-a59c-4cec-a1d4-6b0f09d31497</serial>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:33 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.101 2 INFO nova.virt.libvirt.driver [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully detached device vdb from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the persistent domain config.
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.101 2 DEBUG nova.virt.libvirt.driver [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:42:33 compute-0 nova_compute[256940]: 2025-10-02 12:42:33.102 2 DEBUG nova.virt.libvirt.guest [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-13e98bdf-a59c-4cec-a1d4-6b0f09d31497">
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <serial>13e98bdf-a59c-4cec-a1d4-6b0f09d31497</serial>
Oct 02 12:42:33 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 12:42:33 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:33 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:42:33 compute-0 sudo[329916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:33 compute-0 sudo[329916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:33 compute-0 sudo[329916]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:33 compute-0 sudo[329941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:33 compute-0 sudo[329941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:33 compute-0 sudo[329941]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:33 compute-0 sudo[329966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:33 compute-0 sudo[329966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:33 compute-0 sudo[329966]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:33 compute-0 sudo[329991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:42:33 compute-0 sudo[329991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1011 KiB/s wr, 168 op/s
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.713951228 +0000 UTC m=+0.053892898 container create 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:42:33 compute-0 systemd[1]: Started libpod-conmon-82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2.scope.
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.687125998 +0000 UTC m=+0.027067698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.806901315 +0000 UTC m=+0.146843005 container init 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.815272713 +0000 UTC m=+0.155214413 container start 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:42:33 compute-0 nifty_murdock[330074]: 167 167
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.819226256 +0000 UTC m=+0.159167956 container attach 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:33 compute-0 systemd[1]: libpod-82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2.scope: Deactivated successfully.
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.820573262 +0000 UTC m=+0.160514942 container died 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2488ffbe85bc2098954fb6ddc35f4c53ef1b116c9204403b8fbb376f3bd4278a-merged.mount: Deactivated successfully.
Oct 02 12:42:33 compute-0 podman[330057]: 2025-10-02 12:42:33.855644917 +0000 UTC m=+0.195586587 container remove 82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:42:33 compute-0 systemd[1]: libpod-conmon-82380eb341740fa42c4f610d92db920f5936572569c147edddc440813a4099a2.scope: Deactivated successfully.
Oct 02 12:42:34 compute-0 podman[330098]: 2025-10-02 12:42:34.057222999 +0000 UTC m=+0.046681699 container create cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:42:34 compute-0 systemd[1]: Started libpod-conmon-cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0.scope.
Oct 02 12:42:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e59c1e810d50fac069741677996ce5ab66fd3f6a40c07c25fd67d0499366c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e59c1e810d50fac069741677996ce5ab66fd3f6a40c07c25fd67d0499366c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e59c1e810d50fac069741677996ce5ab66fd3f6a40c07c25fd67d0499366c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e59c1e810d50fac069741677996ce5ab66fd3f6a40c07c25fd67d0499366c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:34 compute-0 podman[330098]: 2025-10-02 12:42:34.039007364 +0000 UTC m=+0.028466094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:34 compute-0 podman[330098]: 2025-10-02 12:42:34.141204982 +0000 UTC m=+0.130663702 container init cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:34 compute-0 podman[330098]: 2025-10-02 12:42:34.14994273 +0000 UTC m=+0.139401430 container start cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:34 compute-0 podman[330098]: 2025-10-02 12:42:34.153692728 +0000 UTC m=+0.143151428 container attach cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.372 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.376 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.393 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.450 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.450 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.458 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.458 2 INFO nova.compute.claims [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:42:34 compute-0 nova_compute[256940]: 2025-10-02 12:42:34.581 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:34 compute-0 ceph-mon[73668]: pgmap v2004: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1011 KiB/s wr, 168 op/s
Oct 02 12:42:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:35.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207719170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]: {
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:     "1": [
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:         {
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "devices": [
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "/dev/loop3"
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             ],
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "lv_name": "ceph_lv0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "lv_size": "7511998464",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "name": "ceph_lv0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "tags": {
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.cluster_name": "ceph",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.crush_device_class": "",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.encrypted": "0",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.osd_id": "1",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.type": "block",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:                 "ceph.vdo": "0"
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             },
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "type": "block",
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:             "vg_name": "ceph_vg0"
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:         }
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]:     ]
Oct 02 12:42:35 compute-0 mystifying_ganguly[330114]: }
Oct 02 12:42:35 compute-0 systemd[1]: libpod-cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0.scope: Deactivated successfully.
Oct 02 12:42:35 compute-0 podman[330098]: 2025-10-02 12:42:35.097377961 +0000 UTC m=+1.086836671 container died cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.105 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.116 2 DEBUG nova.compute.provider_tree [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e59c1e810d50fac069741677996ce5ab66fd3f6a40c07c25fd67d0499366c6-merged.mount: Deactivated successfully.
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.135 2 DEBUG nova.scheduler.client.report [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:35 compute-0 podman[330098]: 2025-10-02 12:42:35.162266415 +0000 UTC m=+1.151725115 container remove cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.167 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.168 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:42:35 compute-0 systemd[1]: libpod-conmon-cb800e0c5a4da49fe17ddbf10dd4565fc161b152b11077b737c84615cec8bad0.scope: Deactivated successfully.
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:35 compute-0 sudo[329991]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.215 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.215 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.237 2 INFO nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:42:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.254 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:42:35 compute-0 sudo[330160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:35 compute-0 sudo[330160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:35 compute-0 sudo[330160]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.329 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.330 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.331 2 INFO nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Creating image(s)
Oct 02 12:42:35 compute-0 sudo[330185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:35 compute-0 sudo[330185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:35 compute-0 sudo[330185]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.359 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.395 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:35 compute-0 sudo[330226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:35 compute-0 sudo[330226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:35 compute-0 sudo[330226]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.443 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.450 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:35 compute-0 sudo[330278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:42:35 compute-0 sudo[330278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.493 2 DEBUG nova.policy [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.542 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.543 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.544 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.544 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.581 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:35 compute-0 nova_compute[256940]: 2025-10-02 12:42:35.589 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 367 KiB/s wr, 162 op/s
Oct 02 12:42:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/207719170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.847161544 +0000 UTC m=+0.038776613 container create 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:42:35 compute-0 systemd[1]: Started libpod-conmon-886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129.scope.
Oct 02 12:42:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.830827478 +0000 UTC m=+0.022442567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.944770742 +0000 UTC m=+0.136385871 container init 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.954198419 +0000 UTC m=+0.145813488 container start 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.957464294 +0000 UTC m=+0.149079383 container attach 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:42:35 compute-0 recursing_archimedes[330407]: 167 167
Oct 02 12:42:35 compute-0 systemd[1]: libpod-886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129.scope: Deactivated successfully.
Oct 02 12:42:35 compute-0 podman[330389]: 2025-10-02 12:42:35.96461159 +0000 UTC m=+0.156226659 container died 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5fb0889cb0b95cefb128c12abdf7b0fa9af814f091d0fd6d2d6e5e5e2f9a2d4-merged.mount: Deactivated successfully.
Oct 02 12:42:36 compute-0 podman[330389]: 2025-10-02 12:42:36.004793379 +0000 UTC m=+0.196408448 container remove 886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:42:36 compute-0 systemd[1]: libpod-conmon-886d3892883040c3106223d64b2d1a7350186ced8a7816d85267365e24105129.scope: Deactivated successfully.
Oct 02 12:42:36 compute-0 podman[330432]: 2025-10-02 12:42:36.195131338 +0000 UTC m=+0.042543952 container create 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:42:36 compute-0 systemd[1]: Started libpod-conmon-73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852.scope.
Oct 02 12:42:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d306bfdcac8d8488ddaf7f3f22a66a0c80696edb21c0e43a90ac19e8415532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d306bfdcac8d8488ddaf7f3f22a66a0c80696edb21c0e43a90ac19e8415532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d306bfdcac8d8488ddaf7f3f22a66a0c80696edb21c0e43a90ac19e8415532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d306bfdcac8d8488ddaf7f3f22a66a0c80696edb21c0e43a90ac19e8415532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:36 compute-0 podman[330432]: 2025-10-02 12:42:36.175590738 +0000 UTC m=+0.023003392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.296 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:36 compute-0 podman[330432]: 2025-10-02 12:42:36.308334803 +0000 UTC m=+0.155747447 container init 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:42:36 compute-0 podman[330432]: 2025-10-02 12:42:36.316141287 +0000 UTC m=+0.163553901 container start 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:42:36 compute-0 podman[330432]: 2025-10-02 12:42:36.34461718 +0000 UTC m=+0.192029894 container attach 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.384 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.547 2 DEBUG nova.objects.instance [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 1e02b9cd-d03a-4fd7-bece-d6aac4461748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.563 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.563 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Ensure instance console log exists: /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.564 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.565 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.565 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:36 compute-0 nova_compute[256940]: 2025-10-02 12:42:36.630 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Successfully created port: 4042a669-02b9-444c-9cd0-50a675aa7adf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:42:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 02 12:42:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 02 12:42:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.000 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759408956.9987984, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.001 2 DEBUG nova.virt.libvirt.driver [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.008 2 INFO nova.virt.libvirt.driver [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully detached device vdb from instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 from the live domain config.
Oct 02 12:42:37 compute-0 ceph-mon[73668]: pgmap v2005: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 367 KiB/s wr, 162 op/s
Oct 02 12:42:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:42:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:37.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]: {
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:         "osd_id": 1,
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:         "type": "bluestore"
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]:     }
Oct 02 12:42:37 compute-0 amazing_hypatia[330448]: }
Oct 02 12:42:37 compute-0 systemd[1]: libpod-73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852.scope: Deactivated successfully.
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.221 2 DEBUG nova.objects.instance [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'flavor' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:37 compute-0 podman[330544]: 2025-10-02 12:42:37.240543928 +0000 UTC m=+0.023576896 container died 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:42:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d306bfdcac8d8488ddaf7f3f22a66a0c80696edb21c0e43a90ac19e8415532-merged.mount: Deactivated successfully.
Oct 02 12:42:37 compute-0 podman[330544]: 2025-10-02 12:42:37.295312798 +0000 UTC m=+0.078345746 container remove 73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:42:37 compute-0 systemd[1]: libpod-conmon-73dec7e5414674972593d82a08b97483c7b09ff8be96f2cd8c05c9c1f22ff852.scope: Deactivated successfully.
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.302 2 DEBUG oslo_concurrency.lockutils [None req-dedbae34-5ce5-4922-afeb-3f74705b800a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 4.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:37 compute-0 sudo[330278]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:42:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:42:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f8673810-2b31-48fa-a82d-05a201376289 does not exist
Oct 02 12:42:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6fcafe1c-d78d-42eb-8e1f-59cf2de453e2 does not exist
Oct 02 12:42:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev dfe3b945-8893-4d43-ae31-db8a0da47a3d does not exist
Oct 02 12:42:37 compute-0 sudo[330560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:37 compute-0 sudo[330560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:37 compute-0 sudo[330560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:37 compute-0 sudo[330585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:42:37 compute-0 sudo[330585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:37 compute-0 sudo[330585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 214 KiB/s wr, 206 op/s
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.982 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.982 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.983 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.983 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.983 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.984 2 INFO nova.compute.manager [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Terminating instance
Oct 02 12:42:37 compute-0 nova_compute[256940]: 2025-10-02 12:42:37.985 2 DEBUG nova.compute.manager [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:38 compute-0 kernel: tapda2c94a9-a1 (unregistering): left promiscuous mode
Oct 02 12:42:38 compute-0 NetworkManager[44981]: <info>  [1759408958.0334] device (tapda2c94a9-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 ovn_controller[148123]: 2025-10-02T12:42:38Z|00554|binding|INFO|Releasing lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 from this chassis (sb_readonly=0)
Oct 02 12:42:38 compute-0 ovn_controller[148123]: 2025-10-02T12:42:38Z|00555|binding|INFO|Setting lport da2c94a9-a16d-4ec8-9797-c39c6c89ec26 down in Southbound
Oct 02 12:42:38 compute-0 ovn_controller[148123]: 2025-10-02T12:42:38Z|00556|binding|INFO|Removing iface tapda2c94a9-a1 ovn-installed in OVS
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.063 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:c8:4b 10.100.0.6'], port_security=['fa:16:3e:3e:c8:4b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0cda84d2-b983-447e-b9ef-6888fdeb2df5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4443be83-b7f7-4207-b668-d7bb46101dd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=da2c94a9-a16d-4ec8-9797-c39c6c89ec26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.065 158104 INFO neutron.agent.ovn.metadata.agent [-] Port da2c94a9-a16d-4ec8-9797-c39c6c89ec26 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.067 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.069 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aeffba91-160d-4fbe-9f82-15ec68f2475f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.070 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore
Oct 02 12:42:38 compute-0 ceph-mon[73668]: osdmap e270: 3 total, 3 up, 3 in
Oct 02 12:42:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:38 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:38 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 02 12:42:38 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000006f.scope: Consumed 9.140s CPU time.
Oct 02 12:42:38 compute-0 systemd-machined[210927]: Machine qemu-56-instance-0000006f terminated.
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.223 2 INFO nova.virt.libvirt.driver [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Instance destroyed successfully.
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.223 2 DEBUG nova.objects.instance [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 0cda84d2-b983-447e-b9ef-6888fdeb2df5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:38 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [NOTICE]   (329527) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:38 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [NOTICE]   (329527) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:38 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [WARNING]  (329527) : Exiting Master process...
Oct 02 12:42:38 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [ALERT]    (329527) : Current worker (329529) exited with code 143 (Terminated)
Oct 02 12:42:38 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[329523]: [WARNING]  (329527) : All workers exited. Exiting... (0)
Oct 02 12:42:38 compute-0 systemd[1]: libpod-c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0.scope: Deactivated successfully.
Oct 02 12:42:38 compute-0 conmon[329523]: conmon c06f82b09b4b613dee1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0.scope/container/memory.events
Oct 02 12:42:38 compute-0 podman[330634]: 2025-10-02 12:42:38.249831916 +0000 UTC m=+0.068468209 container died c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-68f9784db446c39288ff6fb6c2f38947a51aec417c61b10ed44afa6c1e88d23f-merged.mount: Deactivated successfully.
Oct 02 12:42:38 compute-0 podman[330634]: 2025-10-02 12:42:38.290137978 +0000 UTC m=+0.108774271 container cleanup c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.291 2 DEBUG nova.virt.libvirt.vif [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-406908592',display_name='tempest-ServerActionsTestOtherA-server-1705103539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-406908592',id=111,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCSsWXrBNZ81Oa5/v3uLFZPIFpDy17i7Kz6ewOeuJDVAk5R+HQoVoDBkiXST2AxzkbF7tTgAwuMLAPF53S6z+CYB6oTV67eUJLHZsTPnYcmhNeSTiY83xjd4+pIhpS58A==',key_name='tempest-keypair-1887452136',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-11xfc9yw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=0cda84d2-b983-447e-b9ef-6888fdeb2df5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.292 2 DEBUG nova.network.os_vif_util [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "address": "fa:16:3e:3e:c8:4b", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda2c94a9-a1", "ovs_interfaceid": "da2c94a9-a16d-4ec8-9797-c39c6c89ec26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.292 2 DEBUG nova.network.os_vif_util [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.293 2 DEBUG os_vif [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda2c94a9-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 systemd[1]: libpod-conmon-c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0.scope: Deactivated successfully.
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.301 2 INFO os_vif [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:c8:4b,bridge_name='br-int',has_traffic_filtering=True,id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda2c94a9-a1')
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.325 2 DEBUG nova.compute.manager [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.325 2 DEBUG oslo_concurrency.lockutils [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.326 2 DEBUG oslo_concurrency.lockutils [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.326 2 DEBUG oslo_concurrency.lockutils [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.326 2 DEBUG nova.compute.manager [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.326 2 DEBUG nova.compute.manager [req-154f1abe-5340-4828-aeca-7e1c63a7982b req-58cbf29a-fdd0-486c-bdbb-284b93e75d4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-unplugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:38 compute-0 podman[330671]: 2025-10-02 12:42:38.354618661 +0000 UTC m=+0.042944872 container remove c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.360 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb26e54-f162-4c82-b99a-a340438ea01d]: (4, ('Thu Oct  2 12:42:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0)\nc06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0\nThu Oct  2 12:42:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (c06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0)\nc06f82b09b4b613dee1c3a1e90c4e3a86517d53aac68a0c1295b53e5758d76a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.362 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93943c9c-d36d-4dd9-ab08-a3410fe34a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.364 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 kernel: tapf3643647-70: left promiscuous mode
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.384 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[81b6aac2-cd50-4463-8830-853af2010ab8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.419 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2044b1-7bc1-4a51-a426-fa5118e84133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.420 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1ca04e-cec1-4492-8635-63610434c9d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.437 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7b732c3a-0512-46f0-a1c5-986086c2bde1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684433, 'reachable_time': 17265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330704, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.439 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:38.439 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[28bb779b-57f8-41ba-b686-a62bbc675f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:38 compute-0 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.524 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Successfully updated port: 4042a669-02b9-444c-9cd0-50a675aa7adf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:42:38 compute-0 podman[330706]: 2025-10-02 12:42:38.539859466 +0000 UTC m=+0.065195153 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:42:38 compute-0 podman[330705]: 2025-10-02 12:42:38.541148399 +0000 UTC m=+0.066485796 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.608 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.608 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.608 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:42:38 compute-0 nova_compute[256940]: 2025-10-02 12:42:38.923 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:42:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 02 12:42:39 compute-0 ceph-mon[73668]: pgmap v2007: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 214 KiB/s wr, 206 op/s
Oct 02 12:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 02 12:42:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 02 12:42:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 213 KiB/s wr, 146 op/s
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.718 2 DEBUG nova.network.neutron [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updating instance_info_cache with network_info: [{"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.846 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.846 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Instance network_info: |[{"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.851 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Start _get_guest_xml network_info=[{"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.856 2 WARNING nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.862 2 DEBUG nova.virt.libvirt.host [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.862 2 DEBUG nova.virt.libvirt.host [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.875 2 DEBUG nova.virt.libvirt.host [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.876 2 DEBUG nova.virt.libvirt.host [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.878 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.879 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.880 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.880 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.880 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.881 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.881 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.881 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.882 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.882 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.882 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.883 2 DEBUG nova.virt.hardware [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:42:39 compute-0 nova_compute[256940]: 2025-10-02 12:42:39.888 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408945.1084578, e5acbf2d-74bd-452c-8732-db2f1c0739ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.110 2 INFO nova.compute.manager [-] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] VM Stopped (Lifecycle Event)
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.205 2 INFO nova.virt.libvirt.driver [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deleting instance files /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5_del
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.206 2 INFO nova.virt.libvirt.driver [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deletion of /var/lib/nova/instances/0cda84d2-b983-447e-b9ef-6888fdeb2df5_del complete
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.220 2 DEBUG nova.compute.manager [None req-0817bf24-0006-4f01-8169-d689ab0a235c - - - - - -] [instance: e5acbf2d-74bd-452c-8732-db2f1c0739ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:40 compute-0 ceph-mon[73668]: osdmap e271: 3 total, 3 up, 3 in
Oct 02 12:42:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744660793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.375 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011958119591010608 of space, bias 1.0, pg target 3.5874358773031823 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004869544481667597 of space, bias 1.0, pg target 1.4462547110552764 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:42:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.403 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.407 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.538 2 DEBUG nova.compute.manager [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-changed-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.539 2 DEBUG nova.compute.manager [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Refreshing instance network info cache due to event network-changed-4042a669-02b9-444c-9cd0-50a675aa7adf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.540 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.540 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.541 2 DEBUG nova.network.neutron [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Refreshing network info cache for port 4042a669-02b9-444c-9cd0-50a675aa7adf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.557 2 INFO nova.compute.manager [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Took 2.57 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.558 2 DEBUG oslo.service.loopingcall [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.558 2 DEBUG nova.compute.manager [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.558 2 DEBUG nova.network.neutron [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813736802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.866 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.868 2 DEBUG nova.virt.libvirt.vif [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-883237223',display_name='tempest-DeleteServersTestJSON-server-883237223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-883237223',id=114,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-s5l3s7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298
646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:35Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1e02b9cd-d03a-4fd7-bece-d6aac4461748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.868 2 DEBUG nova.network.os_vif_util [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.869 2 DEBUG nova.network.os_vif_util [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.870 2 DEBUG nova.objects.instance [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e02b9cd-d03a-4fd7-bece-d6aac4461748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:40 compute-0 sudo[330808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:40 compute-0 sudo[330808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.925 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <uuid>1e02b9cd-d03a-4fd7-bece-d6aac4461748</uuid>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <name>instance-00000072</name>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:42:40 compute-0 sudo[330808]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:name>tempest-DeleteServersTestJSON-server-883237223</nova:name>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:42:39</nova:creationTime>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <nova:port uuid="4042a669-02b9-444c-9cd0-50a675aa7adf">
Oct 02 12:42:40 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <system>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="serial">1e02b9cd-d03a-4fd7-bece-d6aac4461748</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="uuid">1e02b9cd-d03a-4fd7-bece-d6aac4461748</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </system>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <os>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </os>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <features>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </features>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk">
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </source>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config">
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </source>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:42:40 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:1f:3d:02"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <target dev="tap4042a669-02"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/console.log" append="off"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <video>
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </video>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:42:40 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:42:40 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:42:40 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:42:40 compute-0 nova_compute[256940]: </domain>
Oct 02 12:42:40 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.925 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Preparing to wait for external event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.925 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.926 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.926 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.926 2 DEBUG nova.virt.libvirt.vif [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-883237223',display_name='tempest-DeleteServersTestJSON-server-883237223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-883237223',id=114,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-s5l3s7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:35Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1e02b9cd-d03a-4fd7-bece-d6aac4461748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.927 2 DEBUG nova.network.os_vif_util [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.927 2 DEBUG nova.network.os_vif_util [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.927 2 DEBUG os_vif [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.929 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4042a669-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.932 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4042a669-02, col_values=(('external_ids', {'iface-id': '4042a669-02b9-444c-9cd0-50a675aa7adf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:3d:02', 'vm-uuid': '1e02b9cd-d03a-4fd7-bece-d6aac4461748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:40 compute-0 NetworkManager[44981]: <info>  [1759408960.9343] manager: (tap4042a669-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:40 compute-0 nova_compute[256940]: 2025-10-02 12:42:40.942 2 INFO os_vif [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02')
Oct 02 12:42:40 compute-0 sudo[330833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:40 compute-0 sudo[330833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:40 compute-0 sudo[330833]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:42:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:42:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.284 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.284 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.285 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:1f:3d:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.286 2 INFO nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Using config drive
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.330 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 02 12:42:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 02 12:42:41 compute-0 ceph-mon[73668]: pgmap v2009: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 213 KiB/s wr, 146 op/s
Oct 02 12:42:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2744660793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3813736802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:41 compute-0 nova_compute[256940]: 2025-10-02 12:42:41.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 587 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Oct 02 12:42:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 02 12:42:42 compute-0 ceph-mon[73668]: osdmap e272: 3 total, 3 up, 3 in
Oct 02 12:42:42 compute-0 ceph-mon[73668]: pgmap v2011: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 587 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Oct 02 12:42:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 02 12:42:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 02 12:42:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:43.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:43 compute-0 nova_compute[256940]: 2025-10-02 12:42:43.230 2 INFO nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Creating config drive at /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config
Oct 02 12:42:43 compute-0 nova_compute[256940]: 2025-10-02 12:42:43.235 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprgpqoj3s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:43 compute-0 nova_compute[256940]: 2025-10-02 12:42:43.368 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprgpqoj3s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:43 compute-0 nova_compute[256940]: 2025-10-02 12:42:43.395 2 DEBUG nova.storage.rbd_utils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:43 compute-0 nova_compute[256940]: 2025-10-02 12:42:43.398 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 271 op/s
Oct 02 12:42:43 compute-0 ceph-mon[73668]: osdmap e273: 3 total, 3 up, 3 in
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.617 2 DEBUG oslo_concurrency.processutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config 1e02b9cd-d03a-4fd7-bece-d6aac4461748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.618 2 INFO nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Deleting local config drive /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748/disk.config because it was imported into RBD.
Oct 02 12:42:44 compute-0 kernel: tap4042a669-02: entered promiscuous mode
Oct 02 12:42:44 compute-0 NetworkManager[44981]: <info>  [1759408964.6799] manager: (tap4042a669-02): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Oct 02 12:42:44 compute-0 ovn_controller[148123]: 2025-10-02T12:42:44Z|00557|binding|INFO|Claiming lport 4042a669-02b9-444c-9cd0-50a675aa7adf for this chassis.
Oct 02 12:42:44 compute-0 ovn_controller[148123]: 2025-10-02T12:42:44Z|00558|binding|INFO|4042a669-02b9-444c-9cd0-50a675aa7adf: Claiming fa:16:3e:1f:3d:02 10.100.0.8
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:44 compute-0 ovn_controller[148123]: 2025-10-02T12:42:44Z|00559|binding|INFO|Setting lport 4042a669-02b9-444c-9cd0-50a675aa7adf ovn-installed in OVS
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:44 compute-0 systemd-udevd[330934]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:42:44 compute-0 systemd-machined[210927]: New machine qemu-57-instance-00000072.
Oct 02 12:42:44 compute-0 NetworkManager[44981]: <info>  [1759408964.7267] device (tap4042a669-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:42:44 compute-0 NetworkManager[44981]: <info>  [1759408964.7277] device (tap4042a669-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:42:44 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000072.
Oct 02 12:42:44 compute-0 ovn_controller[148123]: 2025-10-02T12:42:44Z|00560|binding|INFO|Setting lport 4042a669-02b9-444c-9cd0-50a675aa7adf up in Southbound
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.812 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:3d:02 10.100.0.8'], port_security=['fa:16:3e:1f:3d:02 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e02b9cd-d03a-4fd7-bece-d6aac4461748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4042a669-02b9-444c-9cd0-50a675aa7adf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.813 2 DEBUG nova.compute.manager [req-33697409-17fe-41ee-ba17-d4eb9f81783c req-4d724d19-43f0-443d-aa34-1a36acdf78d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-deleted-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.814 2 INFO nova.compute.manager [req-33697409-17fe-41ee-ba17-d4eb9f81783c req-4d724d19-43f0-443d-aa34-1a36acdf78d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Neutron deleted interface da2c94a9-a16d-4ec8-9797-c39c6c89ec26; detaching it from the instance and deleting it from the info cache
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.814 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4042a669-02b9-444c-9cd0-50a675aa7adf in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.814 2 DEBUG nova.network.neutron [req-33697409-17fe-41ee-ba17-d4eb9f81783c req-4d724d19-43f0-443d-aa34-1a36acdf78d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.816 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.833 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6660b2-02e6-411c-b3c4-2738cd11b907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.835 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.837 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.838 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f44f4593-23bb-4be1-af14-7050c93f0f4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.838 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d27beb73-a508-4275-bd9a-d6ba8416a1f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.852 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c33600df-4811-4ff9-931d-f1af7b5ab637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.859 2 DEBUG nova.network.neutron [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.869 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7a075a64-e114-42f9-af00-315eccd3013e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.883 2 DEBUG nova.network.neutron [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updated VIF entry in instance network info cache for port 4042a669-02b9-444c-9cd0-50a675aa7adf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:44 compute-0 nova_compute[256940]: 2025-10-02 12:42:44.884 2 DEBUG nova.network.neutron [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updating instance_info_cache with network_info: [{"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.903 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[050a8952-92c8-4bb9-800e-3de78a420e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 NetworkManager[44981]: <info>  [1759408964.9141] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.912 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[849a5d3b-fde1-4268-8cec-f5eb695074dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.949 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4948adbe-0449-4adf-b376-7015f200a98d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.954 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dc96b003-b35d-4656-b663-d40ef7c0a6d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:44 compute-0 NetworkManager[44981]: <info>  [1759408964.9805] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:42:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:44.990 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[af2871ec-d768-48d4-ab30-a9837889cca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.007 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[658d3c98-60b7-493c-8d32-6f5d2cdc7485]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686046, 'reachable_time': 34717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330967, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.021 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ebec2c3c-b5c2-40e4-b692-2611287b2fc4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686046, 'tstamp': 686046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330968, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9152aa-3fec-471e-b257-1b2928d89562]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686046, 'reachable_time': 34717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330969, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.074 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1265a420-1d14-4ca9-a724-c528fa295679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.083 2 DEBUG nova.compute.manager [req-33697409-17fe-41ee-ba17-d4eb9f81783c req-4d724d19-43f0-443d-aa34-1a36acdf78d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Detach interface failed, port_id=da2c94a9-a16d-4ec8-9797-c39c6c89ec26, reason: Instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.117 2 INFO nova.compute.manager [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Took 4.56 seconds to deallocate network for instance.
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.121 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1e02b9cd-d03a-4fd7-bece-d6aac4461748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.122 2 DEBUG nova.compute.manager [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.122 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.122 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.122 2 DEBUG oslo_concurrency.lockutils [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.122 2 DEBUG nova.compute.manager [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] No waiting events found dispatching network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.123 2 WARNING nova.compute.manager [req-69ff42cd-1695-4fe0-9ae6-d42d9d3d4321 req-140135da-30e4-4e32-a1f4-3816a3444663 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Received unexpected event network-vif-plugged-da2c94a9-a16d-4ec8-9797-c39c6c89ec26 for instance with vm_state active and task_state deleting.
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.141 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a11179f-9fbe-4b83-a702-fd51df5f4f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.143 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.143 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.144 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:45 compute-0 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:45 compute-0 NetworkManager[44981]: <info>  [1759408965.1469] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.155 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:45 compute-0 ovn_controller[148123]: 2025-10-02T12:42:45Z|00561|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.165 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.166 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3a029f45-0f47-4c7b-a4d0-0d17478c2b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.170 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:42:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:45.170 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:45.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:45 compute-0 ceph-mon[73668]: pgmap v2013: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 271 op/s
Oct 02 12:42:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4163876452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.396 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.398 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.512 2 DEBUG oslo_concurrency.processutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:45 compute-0 podman[331001]: 2025-10-02 12:42:45.558678779 +0000 UTC m=+0.060954382 container create c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:45 compute-0 systemd[1]: Started libpod-conmon-c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52.scope.
Oct 02 12:42:45 compute-0 podman[331001]: 2025-10-02 12:42:45.527939917 +0000 UTC m=+0.030215550 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:42:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c99f988d4a147378a7e8d1235301b022ed96f210215ee1778aec8720b07234/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:45 compute-0 podman[331001]: 2025-10-02 12:42:45.648696519 +0000 UTC m=+0.150972122 container init c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:42:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 301 op/s
Oct 02 12:42:45 compute-0 podman[331001]: 2025-10-02 12:42:45.655806214 +0000 UTC m=+0.158081817 container start c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:42:45 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [NOTICE]   (331031) : New worker (331042) forked
Oct 02 12:42:45 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [NOTICE]   (331031) : Loading success.
Oct 02 12:42:45 compute-0 nova_compute[256940]: 2025-10-02 12:42:45.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947320957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.017 2 DEBUG oslo_concurrency.processutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.027 2 DEBUG nova.compute.provider_tree [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.177 2 DEBUG nova.scheduler.client.report [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 02 12:42:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 02 12:42:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.305 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408966.3037708, 1e02b9cd-d03a-4fd7-bece-d6aac4461748 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.305 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] VM Started (Lifecycle Event)
Oct 02 12:42:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2947320957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:46 compute-0 ceph-mon[73668]: osdmap e274: 3 total, 3 up, 3 in
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.407 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.417 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.421 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408966.3040168, 1e02b9cd-d03a-4fd7-bece-d6aac4461748 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.422 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] VM Paused (Lifecycle Event)
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.476 2 INFO nova.scheduler.client.report [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 0cda84d2-b983-447e-b9ef-6888fdeb2df5
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.480 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.484 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.625 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:42:46 compute-0 nova_compute[256940]: 2025-10-02 12:42:46.875 2 DEBUG oslo_concurrency.lockutils [None req-9823867b-5d9d-4298-a343-84f687f33774 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "0cda84d2-b983-447e-b9ef-6888fdeb2df5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:47.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.095 2 DEBUG nova.compute.manager [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.096 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.096 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.096 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.096 2 DEBUG nova.compute.manager [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Processing event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.096 2 DEBUG nova.compute.manager [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.097 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.097 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.097 2 DEBUG oslo_concurrency.lockutils [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.097 2 DEBUG nova.compute.manager [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] No waiting events found dispatching network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.097 2 WARNING nova.compute.manager [req-e60d7ebb-455f-4066-96ee-3ff1337ab88a req-5f055e07-7390-4b99-a109-516befe3f1b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received unexpected event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf for instance with vm_state building and task_state spawning.
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.098 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.102 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759408967.1024766, 1e02b9cd-d03a-4fd7-bece-d6aac4461748 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.103 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] VM Resumed (Lifecycle Event)
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.105 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.108 2 INFO nova.virt.libvirt.driver [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Instance spawned successfully.
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.109 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.178 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.183 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.184 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.184 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.185 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.185 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.186 2 DEBUG nova.virt.libvirt.driver [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.189 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.252 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:42:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:47.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.301 2 INFO nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Took 11.97 seconds to spawn the instance on the hypervisor.
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.302 2 DEBUG nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.384 2 INFO nova.compute.manager [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Took 12.95 seconds to build instance.
Oct 02 12:42:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 02 12:42:47 compute-0 nova_compute[256940]: 2025-10-02 12:42:47.406 2 DEBUG oslo_concurrency.lockutils [None req-ab258070-c27d-46af-8a19-93aebdda8d0c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:47 compute-0 ceph-mon[73668]: pgmap v2014: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 301 op/s
Oct 02 12:42:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 02 12:42:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 02 12:42:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 223 op/s
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.388 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.389 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.389 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.389 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.389 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.390 2 INFO nova.compute.manager [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Terminating instance
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.391 2 DEBUG nova.compute.manager [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:48 compute-0 kernel: tap422859c2-b5 (unregistering): left promiscuous mode
Oct 02 12:42:48 compute-0 NetworkManager[44981]: <info>  [1759408968.4629] device (tap422859c2-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 ovn_controller[148123]: 2025-10-02T12:42:48Z|00562|binding|INFO|Releasing lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 from this chassis (sb_readonly=0)
Oct 02 12:42:48 compute-0 ovn_controller[148123]: 2025-10-02T12:42:48Z|00563|binding|INFO|Setting lport 422859c2-b5cd-467a-85cf-ff82d92d7d87 down in Southbound
Oct 02 12:42:48 compute-0 ovn_controller[148123]: 2025-10-02T12:42:48Z|00564|binding|INFO|Removing iface tap422859c2-b5 ovn-installed in OVS
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:48.538 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2f:ac 10.100.0.13'], port_security=['fa:16:3e:51:2f:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1546ac1d-4a04-4c5e-ae02-b005461c7731', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=422859c2-b5cd-467a-85cf-ff82d92d7d87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:48.540 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 422859c2-b5cd-467a-85cf-ff82d92d7d87 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis
Oct 02 12:42:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:48.541 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 494beff4-7fba-4749-8998-3432c91ac5d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:48.542 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a19a3525-3d94-47bb-a64b-b5444863496a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:48.543 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace which is not needed anymore
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 02 12:42:48 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000065.scope: Consumed 23.113s CPU time.
Oct 02 12:42:48 compute-0 systemd-machined[210927]: Machine qemu-50-instance-00000065 terminated.
Oct 02 12:42:48 compute-0 ceph-mon[73668]: osdmap e275: 3 total, 3 up, 3 in
Oct 02 12:42:48 compute-0 ceph-mon[73668]: pgmap v2017: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 223 op/s
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.626 2 INFO nova.virt.libvirt.driver [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Instance destroyed successfully.
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.627 2 DEBUG nova.objects.instance [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid 1546ac1d-4a04-4c5e-ae02-b005461c7731 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.646 2 DEBUG nova.virt.libvirt.vif [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185783620',display_name='tempest-ServerStableDeviceRescueTest-server-185783620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185783620',id=101,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:38:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-300aqtzl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:46Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=1546ac1d-4a04-4c5e-ae02-b005461c7731,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.647 2 DEBUG nova.network.os_vif_util [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "address": "fa:16:3e:51:2f:ac", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422859c2-b5", "ovs_interfaceid": "422859c2-b5cd-467a-85cf-ff82d92d7d87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.648 2 DEBUG nova.network.os_vif_util [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.648 2 DEBUG os_vif [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap422859c2-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.656 2 INFO os_vif [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:2f:ac,bridge_name='br-int',has_traffic_filtering=True,id=422859c2-b5cd-467a-85cf-ff82d92d7d87,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422859c2-b5')
Oct 02 12:42:48 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [NOTICE]   (321780) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:48 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [NOTICE]   (321780) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:48 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [WARNING]  (321780) : Exiting Master process...
Oct 02 12:42:48 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [ALERT]    (321780) : Current worker (321784) exited with code 143 (Terminated)
Oct 02 12:42:48 compute-0 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[321755]: [WARNING]  (321780) : All workers exited. Exiting... (0)
Oct 02 12:42:48 compute-0 systemd[1]: libpod-77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43.scope: Deactivated successfully.
Oct 02 12:42:48 compute-0 conmon[321755]: conmon 77e41f814565ce908bf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43.scope/container/memory.events
Oct 02 12:42:48 compute-0 podman[331129]: 2025-10-02 12:42:48.697409325 +0000 UTC m=+0.050851499 container died 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b79d83966034ee8b76bccc2345b269744517dfdfd16133f2b6e0e75d9aabd75-merged.mount: Deactivated successfully.
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.764 2 DEBUG nova.compute.manager [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.764 2 DEBUG oslo_concurrency.lockutils [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.764 2 DEBUG oslo_concurrency.lockutils [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.765 2 DEBUG oslo_concurrency.lockutils [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.765 2 DEBUG nova.compute.manager [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:48 compute-0 nova_compute[256940]: 2025-10-02 12:42:48.766 2 DEBUG nova.compute.manager [req-bcd20982-475f-4037-bccf-e8dc38b2cb6d req-359532a3-6aa7-48fb-b158-0e15bbcff76a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-unplugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:48 compute-0 podman[331129]: 2025-10-02 12:42:48.844469754 +0000 UTC m=+0.197911958 container cleanup 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:42:48 compute-0 systemd[1]: libpod-conmon-77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43.scope: Deactivated successfully.
Oct 02 12:42:49 compute-0 podman[331175]: 2025-10-02 12:42:49.009655496 +0000 UTC m=+0.138561538 container remove 77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.018 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f309d75-52c5-449b-9a93-d2cc3c9c2cff]: (4, ('Thu Oct  2 12:42:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43)\n77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43\nThu Oct  2 12:42:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43)\n77e41f814565ce908bf5afc4ad5fddf1a17c4b08c5bf714ebed3cae6752dee43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.020 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dcbfc7-2a1a-43e8-9ebb-d58ce6fd3fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.021 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:49 compute-0 nova_compute[256940]: 2025-10-02 12:42:49.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 kernel: tap494beff4-70: left promiscuous mode
Oct 02 12:42:49 compute-0 nova_compute[256940]: 2025-10-02 12:42:49.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.041 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eac793b1-f93e-4804-99a3-5216b066b34d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.076 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ac29a-7239-42bf-883f-f16fae0a856b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.077 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[45c72e80-cd76-4a6d-a81c-e040374e27d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.093 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcd941c-85dd-4581-966d-021f4653dce8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661233, 'reachable_time': 41213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331190, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d494beff4\x2d7fba\x2d4749\x2d8998\x2d3432c91ac5d2.mount: Deactivated successfully.
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.096 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:49.096 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[52412b32-9771-4faa-bad3-015e78f76f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:49.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.079 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.079 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.095 2 DEBUG nova.objects.instance [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'flavor' on Instance uuid 1e02b9cd-d03a-4fd7-bece-d6aac4461748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.135 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.318 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.319 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.319 2 INFO nova.compute.manager [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Attaching volume 542f01f7-7f03-41d9-ba07-cf427bf3007f to /dev/vdb
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.488 2 DEBUG os_brick.utils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.489 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.502 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.503 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4e7c9a-f95e-4dd9-9028-d4b651da8fee]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.504 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.520 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.520 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc3cb9-552f-4ecd-a4cd-6a9b6aa04d89]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.522 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.534 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.534 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[022a3642-049f-4098-81b0-43e56ec636a0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.535 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[38a4ac2e-3e1e-49b6-9500-a3865f121527]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.536 2 DEBUG oslo_concurrency.processutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.567 2 DEBUG oslo_concurrency.processutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.570 2 DEBUG os_brick.initiator.connectors.lightos [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.570 2 DEBUG os_brick.initiator.connectors.lightos [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.570 2 DEBUG os_brick.initiator.connectors.lightos [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.570 2 DEBUG os_brick.utils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.571 2 DEBUG nova.virt.block_device [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updating existing volume attachment record: 39f71c1c-2f92-451b-8d2c-d3fca7d63b21 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:42:50 compute-0 ceph-mon[73668]: pgmap v2018: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.926 2 DEBUG nova.compute.manager [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.927 2 DEBUG oslo_concurrency.lockutils [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.927 2 DEBUG oslo_concurrency.lockutils [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.928 2 DEBUG oslo_concurrency.lockutils [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.928 2 DEBUG nova.compute.manager [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] No waiting events found dispatching network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:50 compute-0 nova_compute[256940]: 2025-10-02 12:42:50.928 2 WARNING nova.compute.manager [req-a23f8cd3-a2bc-4d9c-a3a4-f070b219580b req-46ae0af8-d397-43ce-ad54-b7302614cb6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received unexpected event network-vif-plugged-422859c2-b5cd-467a-85cf-ff82d92d7d87 for instance with vm_state active and task_state deleting.
Oct 02 12:42:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:51.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 02 12:42:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:42:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:42:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 02 12:42:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 02 12:42:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2351080038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.384 2 DEBUG nova.objects.instance [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'flavor' on Instance uuid 1e02b9cd-d03a-4fd7-bece-d6aac4461748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.407 2 DEBUG nova.virt.libvirt.driver [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Attempting to attach volume 542f01f7-7f03-41d9-ba07-cf427bf3007f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.411 2 DEBUG nova.virt.libvirt.guest [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:42:51 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-542f01f7-7f03-41d9-ba07-cf427bf3007f">
Oct 02 12:42:51 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   </source>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:42:51 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:42:51 compute-0 nova_compute[256940]:   <serial>542f01f7-7f03-41d9-ba07-cf427bf3007f</serial>
Oct 02 12:42:51 compute-0 nova_compute[256940]: </disk>
Oct 02 12:42:51 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 KiB/s wr, 252 op/s
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.787 2 DEBUG nova.virt.libvirt.driver [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.787 2 DEBUG nova.virt.libvirt.driver [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.787 2 DEBUG nova.virt.libvirt.driver [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:51 compute-0 nova_compute[256940]: 2025-10-02 12:42:51.788 2 DEBUG nova.virt.libvirt.driver [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:1f:3d:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.018 2 DEBUG oslo_concurrency.lockutils [None req-12efe944-d45b-40de-926f-42d9af7b071b ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:52 compute-0 ceph-mon[73668]: osdmap e276: 3 total, 3 up, 3 in
Oct 02 12:42:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2351080038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.563 2 INFO nova.virt.libvirt.driver [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Deleting instance files /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731_del
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.564 2 INFO nova.virt.libvirt.driver [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Deletion of /var/lib/nova/instances/1546ac1d-4a04-4c5e-ae02-b005461c7731_del complete
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.622 2 INFO nova.compute.manager [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Took 4.23 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.623 2 DEBUG oslo.service.loopingcall [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.623 2 DEBUG nova.compute.manager [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.623 2 DEBUG nova.network.neutron [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.893 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.893 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.894 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.894 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.894 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.895 2 INFO nova.compute.manager [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Terminating instance
Oct 02 12:42:52 compute-0 nova_compute[256940]: 2025-10-02 12:42:52.897 2 DEBUG nova.compute.manager [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:53.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.221 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408958.2207844, 0cda84d2-b983-447e-b9ef-6888fdeb2df5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.221 2 INFO nova.compute.manager [-] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] VM Stopped (Lifecycle Event)
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.248 2 DEBUG nova.compute.manager [None req-faddda7b-70d4-48fb-a59a-9fd6bca6398f - - - - - -] [instance: 0cda84d2-b983-447e-b9ef-6888fdeb2df5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.251 2 DEBUG nova.network.neutron [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.285 2 INFO nova.compute.manager [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Took 0.66 seconds to deallocate network for instance.
Oct 02 12:42:53 compute-0 kernel: tap4042a669-02 (unregistering): left promiscuous mode
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.334 2 DEBUG nova.compute.manager [req-62d9481c-70e5-4683-a502-e92bd24b8e6b req-2e15452f-4423-4e02-bf3a-08c5c2465f64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Received event network-vif-deleted-422859c2-b5cd-467a-85cf-ff82d92d7d87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:53 compute-0 NetworkManager[44981]: <info>  [1759408973.3359] device (tap4042a669-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.338 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.339 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:53 compute-0 ovn_controller[148123]: 2025-10-02T12:42:53Z|00565|binding|INFO|Releasing lport 4042a669-02b9-444c-9cd0-50a675aa7adf from this chassis (sb_readonly=0)
Oct 02 12:42:53 compute-0 ovn_controller[148123]: 2025-10-02T12:42:53Z|00566|binding|INFO|Setting lport 4042a669-02b9-444c-9cd0-50a675aa7adf down in Southbound
Oct 02 12:42:53 compute-0 ovn_controller[148123]: 2025-10-02T12:42:53Z|00567|binding|INFO|Removing iface tap4042a669-02 ovn-installed in OVS
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.355 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:3d:02 10.100.0.8'], port_security=['fa:16:3e:1f:3d:02 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e02b9cd-d03a-4fd7-bece-d6aac4461748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4042a669-02b9-444c-9cd0-50a675aa7adf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.356 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4042a669-02b9-444c-9cd0-50a675aa7adf in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.358 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.360 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea894f4-53fd-4d2e-a09c-da74fd0203da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.361 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 02 12:42:53 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000072.scope: Consumed 7.142s CPU time.
Oct 02 12:42:53 compute-0 systemd-machined[210927]: Machine qemu-57-instance-00000072 terminated.
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.409 2 DEBUG oslo_concurrency.processutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:53 compute-0 ceph-mon[73668]: pgmap v2020: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 KiB/s wr, 252 op/s
Oct 02 12:42:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3290921538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:53 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [NOTICE]   (331031) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:53 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [NOTICE]   (331031) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:53 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [WARNING]  (331031) : Exiting Master process...
Oct 02 12:42:53 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [ALERT]    (331031) : Current worker (331042) exited with code 143 (Terminated)
Oct 02 12:42:53 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[331017]: [WARNING]  (331031) : All workers exited. Exiting... (0)
Oct 02 12:42:53 compute-0 systemd[1]: libpod-c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52.scope: Deactivated successfully.
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.535 2 INFO nova.virt.libvirt.driver [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Instance destroyed successfully.
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.536 2 DEBUG nova.objects.instance [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 1e02b9cd-d03a-4fd7-bece-d6aac4461748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:53 compute-0 podman[331247]: 2025-10-02 12:42:53.541595009 +0000 UTC m=+0.095643968 container died c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.550 2 DEBUG nova.virt.libvirt.vif [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-883237223',display_name='tempest-DeleteServersTestJSON-server-883237223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-883237223',id=114,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-s5l3s7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:47Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1e02b9cd-d03a-4fd7-bece-d6aac4461748,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.551 2 DEBUG nova.network.os_vif_util [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4042a669-02b9-444c-9cd0-50a675aa7adf", "address": "fa:16:3e:1f:3d:02", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4042a669-02", "ovs_interfaceid": "4042a669-02b9-444c-9cd0-50a675aa7adf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.552 2 DEBUG nova.network.os_vif_util [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.552 2 DEBUG os_vif [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4042a669-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.560 2 INFO os_vif [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3d:02,bridge_name='br-int',has_traffic_filtering=True,id=4042a669-02b9-444c-9cd0-50a675aa7adf,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4042a669-02')
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.585 2 DEBUG nova.compute.manager [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-unplugged-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.586 2 DEBUG oslo_concurrency.lockutils [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.586 2 DEBUG oslo_concurrency.lockutils [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.586 2 DEBUG oslo_concurrency.lockutils [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.586 2 DEBUG nova.compute.manager [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] No waiting events found dispatching network-vif-unplugged-4042a669-02b9-444c-9cd0-50a675aa7adf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.587 2 DEBUG nova.compute.manager [req-6352b276-edde-4eac-a4e8-984c6f516db5 req-5b816ffa-3706-417b-95f1-89f3f74496bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-unplugged-4042a669-02b9-444c-9cd0-50a675aa7adf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 KiB/s wr, 231 op/s
Oct 02 12:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c99f988d4a147378a7e8d1235301b022ed96f210215ee1778aec8720b07234-merged.mount: Deactivated successfully.
Oct 02 12:42:53 compute-0 podman[331247]: 2025-10-02 12:42:53.739312681 +0000 UTC m=+0.293361650 container cleanup c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:42:53 compute-0 systemd[1]: libpod-conmon-c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52.scope: Deactivated successfully.
Oct 02 12:42:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315279011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:53 compute-0 podman[331324]: 2025-10-02 12:42:53.853041379 +0000 UTC m=+0.086334804 container remove c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.854 2 DEBUG oslo_concurrency.processutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.859 2 DEBUG nova.compute.provider_tree [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.862 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cef690bf-64c2-4de0-8ae2-ca55672b9ca3]: (4, ('Thu Oct  2 12:42:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52)\nc3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52\nThu Oct  2 12:42:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (c3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52)\nc3b7d94764676618a22102d017a8f62974d728f1d148df08c6fa627f12760e52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.863 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f05c248-ded1-4717-8c72-a52284d1af9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.864 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.875 2 DEBUG nova.scheduler.client.report [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.885 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3268f5f1-3e79-49c8-b422-addeeeb1fa89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.897 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.909 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90f1b04a-7e60-46fa-bcaf-0f909584c956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.910 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[72d3d3e4-39f6-4dec-bdad-a12bcd9bfdf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 nova_compute[256940]: 2025-10-02 12:42:53.924 2 INFO nova.scheduler.client.report [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Deleted allocations for instance 1546ac1d-4a04-4c5e-ae02-b005461c7731
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.927 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e605a4-76a7-4846-87f3-b745f3233e7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686038, 'reachable_time': 19155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331341, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.932 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:42:53.932 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[23c2c8d9-b2c9-4b4e-9f31-500bf9b650c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:54 compute-0 nova_compute[256940]: 2025-10-02 12:42:54.023 2 DEBUG oslo_concurrency.lockutils [None req-b593cdd1-290a-41b1-a8cf-a4cc346ff07e fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1546ac1d-4a04-4c5e-ae02-b005461c7731" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:54 compute-0 ceph-mon[73668]: pgmap v2021: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 KiB/s wr, 231 op/s
Oct 02 12:42:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1315279011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/829567821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:55 compute-0 podman[331343]: 2025-10-02 12:42:55.41141799 +0000 UTC m=+0.080884272 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:42:55 compute-0 podman[331344]: 2025-10-02 12:42:55.411495042 +0000 UTC m=+0.079568838 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:42:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 374 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 KiB/s wr, 223 op/s
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.707 2 DEBUG nova.compute.manager [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.708 2 DEBUG oslo_concurrency.lockutils [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.708 2 DEBUG oslo_concurrency.lockutils [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.709 2 DEBUG oslo_concurrency.lockutils [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.709 2 DEBUG nova.compute.manager [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] No waiting events found dispatching network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:55 compute-0 nova_compute[256940]: 2025-10-02 12:42:55.709 2 WARNING nova.compute.manager [req-22fb1ac7-2c24-45e7-acb1-3ffaba4f9d32 req-5d64e5d8-fb18-4018-9c46-e7801369a559 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received unexpected event network-vif-plugged-4042a669-02b9-444c-9cd0-50a675aa7adf for instance with vm_state active and task_state deleting.
Oct 02 12:42:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 02 12:42:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 02 12:42:56 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 02 12:42:56 compute-0 nova_compute[256940]: 2025-10-02 12:42:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:56 compute-0 ceph-mon[73668]: pgmap v2022: 305 pgs: 305 active+clean; 374 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 KiB/s wr, 223 op/s
Oct 02 12:42:56 compute-0 ceph-mon[73668]: osdmap e277: 3 total, 3 up, 3 in
Oct 02 12:42:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:57.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 267 op/s
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.047 2 INFO nova.virt.libvirt.driver [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Deleting instance files /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748_del
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.048 2 INFO nova.virt.libvirt.driver [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Deletion of /var/lib/nova/instances/1e02b9cd-d03a-4fd7-bece-d6aac4461748_del complete
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.363 2 INFO nova.compute.manager [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Took 5.46 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.364 2 DEBUG oslo.service.loopingcall [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.365 2 DEBUG nova.compute.manager [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.365 2 DEBUG nova.network.neutron [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:42:58 compute-0 nova_compute[256940]: 2025-10-02 12:42:58.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:59 compute-0 ceph-mon[73668]: pgmap v2024: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 267 op/s
Oct 02 12:42:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3834112678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1302167254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:42:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:42:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:59.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:42:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.3 MiB/s wr, 96 op/s
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.757 2 DEBUG nova.network.neutron [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.810 2 DEBUG nova.compute.manager [req-88ba5748-29a4-45cb-9f5b-5438a8aafa2e req-194101c7-2e78-4616-8572-e19314657d7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Received event network-vif-deleted-4042a669-02b9-444c-9cd0-50a675aa7adf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.811 2 INFO nova.compute.manager [req-88ba5748-29a4-45cb-9f5b-5438a8aafa2e req-194101c7-2e78-4616-8572-e19314657d7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Neutron deleted interface 4042a669-02b9-444c-9cd0-50a675aa7adf; detaching it from the instance and deleting it from the info cache
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.811 2 DEBUG nova.network.neutron [req-88ba5748-29a4-45cb-9f5b-5438a8aafa2e req-194101c7-2e78-4616-8572-e19314657d7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.871 2 INFO nova.compute.manager [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Took 1.51 seconds to deallocate network for instance.
Oct 02 12:42:59 compute-0 nova_compute[256940]: 2025-10-02 12:42:59.890 2 DEBUG nova.compute.manager [req-88ba5748-29a4-45cb-9f5b-5438a8aafa2e req-194101c7-2e78-4616-8572-e19314657d7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Detach interface failed, port_id=4042a669-02b9-444c-9cd0-50a675aa7adf, reason: Instance 1e02b9cd-d03a-4fd7-bece-d6aac4461748 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:43:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2706070081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/534810139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.137 2 INFO nova.compute.manager [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Took 0.27 seconds to detach 1 volumes for instance.
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.234 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.234 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.286 2 DEBUG oslo_concurrency.processutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206392360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.712 2 DEBUG oslo_concurrency.processutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.719 2 DEBUG nova.compute.provider_tree [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.793 2 DEBUG nova.scheduler.client.report [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.890 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:00 compute-0 nova_compute[256940]: 2025-10-02 12:43:00.944 2 INFO nova.scheduler.client.report [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 1e02b9cd-d03a-4fd7-bece-d6aac4461748
Oct 02 12:43:01 compute-0 ceph-mon[73668]: pgmap v2025: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.3 MiB/s wr, 96 op/s
Oct 02 12:43:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/889090069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3206392360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-0 sudo[331413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:01 compute-0 sudo[331413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:01 compute-0 sudo[331413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.108 2 DEBUG oslo_concurrency.lockutils [None req-cc58a0e2-314f-4efd-9416-dcf505643b17 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1e02b9cd-d03a-4fd7-bece-d6aac4461748" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:01 compute-0 sudo[331438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:01 compute-0 sudo[331438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:01 compute-0 sudo[331438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.288 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.288 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.289 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.289 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.289 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 332 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 02 12:43:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311222988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.724 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.875 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.876 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4446MB free_disk=20.868064880371094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.877 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.877 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.955 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.956 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:43:01 compute-0 nova_compute[256940]: 2025-10-02 12:43:01.974 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/311222988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3097588909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1798938763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231430811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-0 nova_compute[256940]: 2025-10-02 12:43:02.418 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:02 compute-0 nova_compute[256940]: 2025-10-02 12:43:02.424 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:02 compute-0 nova_compute[256940]: 2025-10-02 12:43:02.443 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:02 compute-0 nova_compute[256940]: 2025-10-02 12:43:02.467 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:43:02 compute-0 nova_compute[256940]: 2025-10-02 12:43:02.468 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:03.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:03 compute-0 ceph-mon[73668]: pgmap v2026: 305 pgs: 305 active+clean; 332 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 02 12:43:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2835615240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3231430811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2220759351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-0 nova_compute[256940]: 2025-10-02 12:43:03.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-0 nova_compute[256940]: 2025-10-02 12:43:03.625 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408968.6243856, 1546ac1d-4a04-4c5e-ae02-b005461c7731 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:03 compute-0 nova_compute[256940]: 2025-10-02 12:43:03.626 2 INFO nova.compute.manager [-] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] VM Stopped (Lifecycle Event)
Oct 02 12:43:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 276 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:43:03 compute-0 nova_compute[256940]: 2025-10-02 12:43:03.850 2 DEBUG nova.compute.manager [None req-8704c1c5-6a01-498b-b554-c92b49ccc016 - - - - - -] [instance: 1546ac1d-4a04-4c5e-ae02-b005461c7731] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:04 compute-0 nova_compute[256940]: 2025-10-02 12:43:04.464 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-0 nova_compute[256940]: 2025-10-02 12:43:04.465 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-0 nova_compute[256940]: 2025-10-02 12:43:04.465 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-0 nova_compute[256940]: 2025-10-02 12:43:04.466 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:43:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:04.895 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:04.895 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:43:04 compute-0 nova_compute[256940]: 2025-10-02 12:43:04.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:05.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:05 compute-0 ceph-mon[73668]: pgmap v2027: 305 pgs: 305 active+clean; 276 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:43:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 12:43:06 compute-0 nova_compute[256940]: 2025-10-02 12:43:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:06 compute-0 nova_compute[256940]: 2025-10-02 12:43:06.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 02 12:43:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 02 12:43:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 02 12:43:06 compute-0 nova_compute[256940]: 2025-10-02 12:43:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:07 compute-0 nova_compute[256940]: 2025-10-02 12:43:07.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:07.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:07 compute-0 ceph-mon[73668]: pgmap v2028: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 12:43:07 compute-0 ceph-mon[73668]: osdmap e278: 3 total, 3 up, 3 in
Oct 02 12:43:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:08 compute-0 nova_compute[256940]: 2025-10-02 12:43:08.534 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408973.5331264, 1e02b9cd-d03a-4fd7-bece-d6aac4461748 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:08 compute-0 nova_compute[256940]: 2025-10-02 12:43:08.534 2 INFO nova.compute.manager [-] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] VM Stopped (Lifecycle Event)
Oct 02 12:43:08 compute-0 ceph-mon[73668]: pgmap v2030: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:08 compute-0 nova_compute[256940]: 2025-10-02 12:43:08.553 2 DEBUG nova.compute.manager [None req-440bdefa-495a-4d32-89e5-efb3922af7b3 - - - - - -] [instance: 1e02b9cd-d03a-4fd7-bece-d6aac4461748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:08 compute-0 nova_compute[256940]: 2025-10-02 12:43:08.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:09.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.236 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:43:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:09 compute-0 podman[331512]: 2025-10-02 12:43:09.401470704 +0000 UTC m=+0.066504687 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:43:09 compute-0 podman[331513]: 2025-10-02 12:43:09.40822945 +0000 UTC m=+0.063489638 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:43:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.667 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.668 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3519200454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.685 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.848 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.849 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.858 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.858 2 INFO nova.compute.claims [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:09 compute-0 nova_compute[256940]: 2025-10-02 12:43:09.974 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255502040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.401 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.408 2 DEBUG nova.compute.provider_tree [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.425 2 DEBUG nova.scheduler.client.report [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.453 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.454 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.513 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.514 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.535 2 INFO nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.566 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.707 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.709 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.710 2 INFO nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Creating image(s)
Oct 02 12:43:10 compute-0 ceph-mon[73668]: pgmap v2031: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3432120987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/255502040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.751 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.778 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.807 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.812 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.886 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.887 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.888 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.888 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:10.897 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.928 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.931 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:10 compute-0 nova_compute[256940]: 2025-10-02 12:43:10.967 2 DEBUG nova.policy [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:43:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:11.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.264 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.348 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.536 2 DEBUG nova.objects.instance [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 686e5112-9d36-4b52-9162-cb3cfcbbab74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.623 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.624 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Ensure instance console log exists: /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.624 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.625 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:11 compute-0 nova_compute[256940]: 2025-10-02 12:43:11.625 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 292 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 12:43:12 compute-0 nova_compute[256940]: 2025-10-02 12:43:12.217 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Successfully created port: 78484beb-0c30-4b51-b9d5-55ff21fae85a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:43:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 02 12:43:12 compute-0 ceph-mon[73668]: pgmap v2032: 305 pgs: 305 active+clean; 292 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 12:43:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 02 12:43:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 02 12:43:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:13.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:13.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:13 compute-0 nova_compute[256940]: 2025-10-02 12:43:13.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 353 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 7.4 MiB/s wr, 252 op/s
Oct 02 12:43:13 compute-0 ceph-mon[73668]: osdmap e279: 3 total, 3 up, 3 in
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.453 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Successfully updated port: 78484beb-0c30-4b51-b9d5-55ff21fae85a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.498 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.498 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.498 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.661 2 DEBUG nova.compute.manager [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-changed-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.662 2 DEBUG nova.compute.manager [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Refreshing instance network info cache due to event network-changed-78484beb-0c30-4b51-b9d5-55ff21fae85a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:14 compute-0 nova_compute[256940]: 2025-10-02 12:43:14.662 2 DEBUG oslo_concurrency.lockutils [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:15.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:15 compute-0 ceph-mon[73668]: pgmap v2034: 305 pgs: 305 active+clean; 353 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 7.4 MiB/s wr, 252 op/s
Oct 02 12:43:15 compute-0 nova_compute[256940]: 2025-10-02 12:43:15.231 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:43:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:15.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 336 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 239 op/s
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.316 2 DEBUG nova.network.neutron [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Updating instance_info_cache with network_info: [{"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.371 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.371 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Instance network_info: |[{"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.372 2 DEBUG oslo_concurrency.lockutils [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.372 2 DEBUG nova.network.neutron [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Refreshing network info cache for port 78484beb-0c30-4b51-b9d5-55ff21fae85a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.375 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Start _get_guest_xml network_info=[{"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.381 2 WARNING nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.387 2 DEBUG nova.virt.libvirt.host [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.388 2 DEBUG nova.virt.libvirt.host [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.392 2 DEBUG nova.virt.libvirt.host [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.393 2 DEBUG nova.virt.libvirt.host [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.395 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.396 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.396 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.396 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.397 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.397 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.397 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.397 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.398 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.398 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.398 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.398 2 DEBUG nova.virt.hardware [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.402 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601498084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.839 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.882 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:16 compute-0 nova_compute[256940]: 2025-10-02 12:43:16.888 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:17.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738523268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.379 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.381 2 DEBUG nova.virt.libvirt.vif [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-35961646',display_name='tempest-DeleteServersTestJSON-server-35961646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-35961646',id=116,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-z96l11hx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:10Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=686e5112-9d36-4b52-9162-cb3cfcbbab74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.381 2 DEBUG nova.network.os_vif_util [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.382 2 DEBUG nova.network.os_vif_util [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.383 2 DEBUG nova.objects.instance [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 686e5112-9d36-4b52-9162-cb3cfcbbab74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:17 compute-0 ceph-mon[73668]: pgmap v2035: 305 pgs: 305 active+clean; 336 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 239 op/s
Oct 02 12:43:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2601498084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.411 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <uuid>686e5112-9d36-4b52-9162-cb3cfcbbab74</uuid>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <name>instance-00000074</name>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:name>tempest-DeleteServersTestJSON-server-35961646</nova:name>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:43:16</nova:creationTime>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <nova:port uuid="78484beb-0c30-4b51-b9d5-55ff21fae85a">
Oct 02 12:43:17 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <system>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="serial">686e5112-9d36-4b52-9162-cb3cfcbbab74</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="uuid">686e5112-9d36-4b52-9162-cb3cfcbbab74</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </system>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <os>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </os>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <features>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </features>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/686e5112-9d36-4b52-9162-cb3cfcbbab74_disk">
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </source>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config">
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </source>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:43:17 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9b:49:22"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <target dev="tap78484beb-0c"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/console.log" append="off"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <video>
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </video>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:43:17 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:43:17 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:43:17 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:43:17 compute-0 nova_compute[256940]: </domain>
Oct 02 12:43:17 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.413 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Preparing to wait for external event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.414 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.414 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.414 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.415 2 DEBUG nova.virt.libvirt.vif [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-35961646',display_name='tempest-DeleteServersTestJSON-server-35961646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-35961646',id=116,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-z96l11hx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-
1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:10Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=686e5112-9d36-4b52-9162-cb3cfcbbab74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.415 2 DEBUG nova.network.os_vif_util [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.416 2 DEBUG nova.network.os_vif_util [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.416 2 DEBUG os_vif [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78484beb-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78484beb-0c, col_values=(('external_ids', {'iface-id': '78484beb-0c30-4b51-b9d5-55ff21fae85a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:49:22', 'vm-uuid': '686e5112-9d36-4b52-9162-cb3cfcbbab74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:17 compute-0 NetworkManager[44981]: <info>  [1759408997.4234] manager: (tap78484beb-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.431 2 INFO os_vif [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c')
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.529 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.530 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.530 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:9b:49:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.531 2 INFO nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Using config drive
Oct 02 12:43:17 compute-0 nova_compute[256940]: 2025-10-02 12:43:17.561 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.487 2 INFO nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Creating config drive at /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.495 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpadk45fx_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/738523268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.634 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpadk45fx_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.909 2 DEBUG nova.storage.rbd_utils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.913 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.947 2 DEBUG nova.network.neutron [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Updated VIF entry in instance network info cache for port 78484beb-0c30-4b51-b9d5-55ff21fae85a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.948 2 DEBUG nova.network.neutron [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Updating instance_info_cache with network_info: [{"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:18 compute-0 nova_compute[256940]: 2025-10-02 12:43:18.976 2 DEBUG oslo_concurrency.lockutils [req-d80861d4-2f00-43e0-98ad-34adfcfcd9f5 req-22f4c97a-40f6-41a2-8f47-5582aa2ef148 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-686e5112-9d36-4b52-9162-cb3cfcbbab74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:19.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:19 compute-0 ceph-mon[73668]: pgmap v2036: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:20 compute-0 ceph-mon[73668]: pgmap v2037: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:21 compute-0 sudo[331870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:21 compute-0 sudo[331870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:21 compute-0 sudo[331870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:21 compute-0 sudo[331895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:21 compute-0 sudo[331895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:21 compute-0 sudo[331895]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:21.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 02 12:43:21 compute-0 nova_compute[256940]: 2025-10-02 12:43:21.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 307 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Oct 02 12:43:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 02 12:43:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 02 12:43:22 compute-0 nova_compute[256940]: 2025-10-02 12:43:22.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:22 compute-0 ceph-mon[73668]: pgmap v2038: 305 pgs: 305 active+clean; 307 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Oct 02 12:43:22 compute-0 ceph-mon[73668]: osdmap e280: 3 total, 3 up, 3 in
Oct 02 12:43:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.440 2 DEBUG oslo_concurrency.processutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config 686e5112-9d36-4b52-9162-cb3cfcbbab74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.441 2 INFO nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Deleting local config drive /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74/disk.config because it was imported into RBD.
Oct 02 12:43:23 compute-0 kernel: tap78484beb-0c: entered promiscuous mode
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.5127] manager: (tap78484beb-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct 02 12:43:23 compute-0 ovn_controller[148123]: 2025-10-02T12:43:23Z|00568|binding|INFO|Claiming lport 78484beb-0c30-4b51-b9d5-55ff21fae85a for this chassis.
Oct 02 12:43:23 compute-0 ovn_controller[148123]: 2025-10-02T12:43:23Z|00569|binding|INFO|78484beb-0c30-4b51-b9d5-55ff21fae85a: Claiming fa:16:3e:9b:49:22 10.100.0.3
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.536 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:49:22 10.100.0.3'], port_security=['fa:16:3e:9b:49:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '686e5112-9d36-4b52-9162-cb3cfcbbab74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=78484beb-0c30-4b51-b9d5-55ff21fae85a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.537 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 78484beb-0c30-4b51-b9d5-55ff21fae85a in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.540 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:43:23 compute-0 systemd-udevd[331933]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.551 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc64a7-17ba-4513-a9b3-2976f0461c72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.552 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:43:23 compute-0 systemd-machined[210927]: New machine qemu-58-instance-00000074.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.556 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.556 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8872fb56-88cc-40a9-807d-5a7e7dbabf6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.5589] device (tap78484beb-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.558 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48d0d9de-2db5-433a-afb9-452acbe852c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.5603] device (tap78484beb-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:23 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000074.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.578 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[458fbbca-38c3-40d0-8a00-92333d41df0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.605 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a42b1103-b875-443b-87ba-863f11168b01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_controller[148123]: 2025-10-02T12:43:23Z|00570|binding|INFO|Setting lport 78484beb-0c30-4b51-b9d5-55ff21fae85a ovn-installed in OVS
Oct 02 12:43:23 compute-0 ovn_controller[148123]: 2025-10-02T12:43:23Z|00571|binding|INFO|Setting lport 78484beb-0c30-4b51-b9d5-55ff21fae85a up in Southbound
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.654 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6870b7-f388-4a4d-b38f-18dbfc2667e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.6611] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Oct 02 12:43:23 compute-0 systemd-udevd[331937]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.661 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10fcb5b3-bb59-40f3-ab84-a1cfe58df581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 182 op/s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.697 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e43b6eb2-f059-493f-89ec-470722ce4f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.703 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[db6dc642-aec2-4aaf-a552-2c97f4780d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.7312] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.738 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5666260e-f701-46b1-bdc4-bf73c4e0a4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.762 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b47dbfd7-7271-404a-881c-a45942591776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689921, 'reachable_time': 27464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331967, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.779 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[365b2504-2545-4558-96c5-2701089235f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689921, 'tstamp': 689921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331968, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.800 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea68f9e-29a7-42c6-a78e-abe91a5c57d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689921, 'reachable_time': 27464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331969, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.841 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf017bf-8176-4e64-80e9-462dc3c8dbcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.913 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3312c92-68ad-4593-96fc-d98943e061ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.915 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.915 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.915 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 NetworkManager[44981]: <info>  [1759409003.9180] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 02 12:43:23 compute-0 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.922 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_controller[148123]: 2025-10-02T12:43:23Z|00572|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:43:23 compute-0 nova_compute[256940]: 2025-10-02 12:43:23.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.943 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.944 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ec801bb0-17b2-4754-9149-3bef2c80c4fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.945 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:43:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:23.947 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:43:24 compute-0 podman[332037]: 2025-10-02 12:43:24.312670834 +0000 UTC m=+0.022484778 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:43:24 compute-0 podman[332037]: 2025-10-02 12:43:24.575069914 +0000 UTC m=+0.284883838 container create 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:43:24 compute-0 systemd[1]: Started libpod-conmon-7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f.scope.
Oct 02 12:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d0a0a52809071a1124011db08b6469d068226fe4f6c46ab75049e3841d8c36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:24 compute-0 podman[332037]: 2025-10-02 12:43:24.818796627 +0000 UTC m=+0.528610581 container init 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:43:24 compute-0 podman[332037]: 2025-10-02 12:43:24.826076667 +0000 UTC m=+0.535890591 container start 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:43:24 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [NOTICE]   (332063) : New worker (332065) forked
Oct 02 12:43:24 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [NOTICE]   (332063) : Loading success.
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.862 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409004.8618498, 686e5112-9d36-4b52-9162-cb3cfcbbab74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.863 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] VM Started (Lifecycle Event)
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.885 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.889 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409004.8621635, 686e5112-9d36-4b52-9162-cb3cfcbbab74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.889 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] VM Paused (Lifecycle Event)
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.904 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.908 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:24 compute-0 nova_compute[256940]: 2025-10-02 12:43:24.928 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:25 compute-0 ceph-mon[73668]: pgmap v2040: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 182 op/s
Oct 02 12:43:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 167 op/s
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.876 2 DEBUG nova.compute.manager [req-6042c1b2-8fce-4b4d-98ce-1bf83f5f0294 req-91e58e66-aa4a-4019-a224-a1bd397ac1b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.877 2 DEBUG oslo_concurrency.lockutils [req-6042c1b2-8fce-4b4d-98ce-1bf83f5f0294 req-91e58e66-aa4a-4019-a224-a1bd397ac1b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.877 2 DEBUG oslo_concurrency.lockutils [req-6042c1b2-8fce-4b4d-98ce-1bf83f5f0294 req-91e58e66-aa4a-4019-a224-a1bd397ac1b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.877 2 DEBUG oslo_concurrency.lockutils [req-6042c1b2-8fce-4b4d-98ce-1bf83f5f0294 req-91e58e66-aa4a-4019-a224-a1bd397ac1b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.877 2 DEBUG nova.compute.manager [req-6042c1b2-8fce-4b4d-98ce-1bf83f5f0294 req-91e58e66-aa4a-4019-a224-a1bd397ac1b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Processing event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.878 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.882 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409005.8821945, 686e5112-9d36-4b52-9162-cb3cfcbbab74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.882 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] VM Resumed (Lifecycle Event)
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.884 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.889 2 INFO nova.virt.libvirt.driver [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Instance spawned successfully.
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.890 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.912 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.919 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.923 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.923 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.924 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.924 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.925 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.925 2 DEBUG nova.virt.libvirt.driver [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.947 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.980 2 INFO nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Took 15.27 seconds to spawn the instance on the hypervisor.
Oct 02 12:43:25 compute-0 nova_compute[256940]: 2025-10-02 12:43:25.981 2 DEBUG nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:26 compute-0 nova_compute[256940]: 2025-10-02 12:43:26.055 2 INFO nova.compute.manager [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Took 16.24 seconds to build instance.
Oct 02 12:43:26 compute-0 nova_compute[256940]: 2025-10-02 12:43:26.072 2 DEBUG oslo_concurrency.lockutils [None req-2dcd6394-d2a6-4485-9ff5-69304886010c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:26 compute-0 podman[332074]: 2025-10-02 12:43:26.40604987 +0000 UTC m=+0.080821230 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:43:26 compute-0 podman[332075]: 2025-10-02 12:43:26.417680184 +0000 UTC m=+0.089954519 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:26.478 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:26.479 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:26.479 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:26 compute-0 nova_compute[256940]: 2025-10-02 12:43:26.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.028 2 INFO nova.compute.manager [None req-4af42090-2585-4a38-89cf-bb05f61c993f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Pausing
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.029 2 DEBUG nova.objects.instance [None req-4af42090-2585-4a38-89cf-bb05f61c993f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'flavor' on Instance uuid 686e5112-9d36-4b52-9162-cb3cfcbbab74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.056 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409007.0559962, 686e5112-9d36-4b52-9162-cb3cfcbbab74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.056 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] VM Paused (Lifecycle Event)
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.057 2 DEBUG nova.compute.manager [None req-4af42090-2585-4a38-89cf-bb05f61c993f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.076 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.079 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.110 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:43:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:27.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:27 compute-0 ceph-mon[73668]: pgmap v2041: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 167 op/s
Oct 02 12:43:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.965 2 DEBUG nova.compute.manager [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.966 2 DEBUG oslo_concurrency.lockutils [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.966 2 DEBUG oslo_concurrency.lockutils [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.966 2 DEBUG oslo_concurrency.lockutils [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.966 2 DEBUG nova.compute.manager [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] No waiting events found dispatching network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:27 compute-0 nova_compute[256940]: 2025-10-02 12:43:27.967 2 WARNING nova.compute.manager [req-36d2fead-93c1-4838-90d4-6452a6bef604 req-dfd5d0ef-d022-4c5c-8da1-2faee7abf9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received unexpected event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a for instance with vm_state paused and task_state None.
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:43:28
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.control']
Oct 02 12:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:43:28 compute-0 ceph-mon[73668]: pgmap v2042: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:29.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:43:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:29.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.535 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.536 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.536 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.536 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.536 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.538 2 INFO nova.compute.manager [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Terminating instance
Oct 02 12:43:30 compute-0 nova_compute[256940]: 2025-10-02 12:43:30.539 2 DEBUG nova.compute.manager [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:43:31 compute-0 kernel: tap78484beb-0c (unregistering): left promiscuous mode
Oct 02 12:43:31 compute-0 NetworkManager[44981]: <info>  [1759409011.0100] device (tap78484beb-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:43:31 compute-0 ceph-mon[73668]: pgmap v2043: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:31 compute-0 ovn_controller[148123]: 2025-10-02T12:43:31Z|00573|binding|INFO|Releasing lport 78484beb-0c30-4b51-b9d5-55ff21fae85a from this chassis (sb_readonly=0)
Oct 02 12:43:31 compute-0 ovn_controller[148123]: 2025-10-02T12:43:31Z|00574|binding|INFO|Setting lport 78484beb-0c30-4b51-b9d5-55ff21fae85a down in Southbound
Oct 02 12:43:31 compute-0 ovn_controller[148123]: 2025-10-02T12:43:31Z|00575|binding|INFO|Removing iface tap78484beb-0c ovn-installed in OVS
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:31.035 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:49:22 10.100.0.3'], port_security=['fa:16:3e:9b:49:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '686e5112-9d36-4b52-9162-cb3cfcbbab74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=78484beb-0c30-4b51-b9d5-55ff21fae85a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:31.036 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 78484beb-0c30-4b51-b9d5-55ff21fae85a in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:43:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:31.037 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:43:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:31.039 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ec94f6-3592-40e2-9cb0-6a9a3207843e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:31.039 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 02 12:43:31 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000074.scope: Consumed 2.145s CPU time.
Oct 02 12:43:31 compute-0 systemd-machined[210927]: Machine qemu-58-instance-00000074 terminated.
Oct 02 12:43:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:43:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:31.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.172 2 INFO nova.virt.libvirt.driver [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Instance destroyed successfully.
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.173 2 DEBUG nova.objects.instance [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 686e5112-9d36-4b52-9162-cb3cfcbbab74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.188 2 DEBUG nova.virt.libvirt.vif [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-35961646',display_name='tempest-DeleteServersTestJSON-server-35961646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-35961646',id=116,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-z96l11hx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:27Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=686e5112-9d36-4b52-9162-cb3cfcbbab74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.189 2 DEBUG nova.network.os_vif_util [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "address": "fa:16:3e:9b:49:22", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78484beb-0c", "ovs_interfaceid": "78484beb-0c30-4b51-b9d5-55ff21fae85a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.190 2 DEBUG nova.network.os_vif_util [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.190 2 DEBUG os_vif [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78484beb-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.198 2 INFO os_vif [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:49:22,bridge_name='br-int',has_traffic_filtering=True,id=78484beb-0c30-4b51-b9d5-55ff21fae85a,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78484beb-0c')
Oct 02 12:43:31 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [NOTICE]   (332063) : haproxy version is 2.8.14-c23fe91
Oct 02 12:43:31 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [NOTICE]   (332063) : path to executable is /usr/sbin/haproxy
Oct 02 12:43:31 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [WARNING]  (332063) : Exiting Master process...
Oct 02 12:43:31 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [ALERT]    (332063) : Current worker (332065) exited with code 143 (Terminated)
Oct 02 12:43:31 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[332059]: [WARNING]  (332063) : All workers exited. Exiting... (0)
Oct 02 12:43:31 compute-0 systemd[1]: libpod-7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f.scope: Deactivated successfully.
Oct 02 12:43:31 compute-0 podman[332146]: 2025-10-02 12:43:31.260036382 +0000 UTC m=+0.128137826 container died 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:43:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:43:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:31.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:43:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.587 2 DEBUG nova.compute.manager [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-unplugged-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.588 2 DEBUG oslo_concurrency.lockutils [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.588 2 DEBUG oslo_concurrency.lockutils [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.588 2 DEBUG oslo_concurrency.lockutils [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.589 2 DEBUG nova.compute.manager [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] No waiting events found dispatching network-vif-unplugged-78484beb-0c30-4b51-b9d5-55ff21fae85a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.589 2 DEBUG nova.compute.manager [req-7ea70bd9-734c-42b1-a6e8-8d01109ad90d req-c316d023-3c45-4687-b1f3-8237e46bef14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-unplugged-78484beb-0c30-4b51-b9d5-55ff21fae85a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:43:31 compute-0 nova_compute[256940]: 2025-10-02 12:43:31.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-72d0a0a52809071a1124011db08b6469d068226fe4f6c46ab75049e3841d8c36-merged.mount: Deactivated successfully.
Oct 02 12:43:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 348 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct 02 12:43:31 compute-0 podman[332146]: 2025-10-02 12:43:31.942710233 +0000 UTC m=+0.810811677 container cleanup 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:43:31 compute-0 systemd[1]: libpod-conmon-7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f.scope: Deactivated successfully.
Oct 02 12:43:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3590946673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:32 compute-0 podman[332205]: 2025-10-02 12:43:32.280613302 +0000 UTC m=+0.309864698 container remove 7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.287 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd06b1-e3e0-4424-81b0-5cf9f868c3c8]: (4, ('Thu Oct  2 12:43:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f)\n7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f\nThu Oct  2 12:43:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f)\n7ab4fba8570d4659962bd317c19b418f2792e3071ee903e5771f3095d5c0d06f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.290 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a8acf6-d46e-4204-8306-20de618382b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.291 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:32 compute-0 nova_compute[256940]: 2025-10-02 12:43:32.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:32 compute-0 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:43:32 compute-0 nova_compute[256940]: 2025-10-02 12:43:32.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.312 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a72107c3-ef8a-4892-b451-72924c765009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.341 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d96f22-903a-4107-82ff-44e55b096ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.342 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5cac2d28-6de2-477f-a839-519c7bd4f24c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.359 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[723ffa83-4096-413c-8faf-68b975c864c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689912, 'reachable_time': 32816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332222, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.362 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:43:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:32.362 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[55fc227c-d8f7-404a-87fd-9031f27a7532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:32 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.078 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.078 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.099 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:43:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:33.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.177 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.177 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.184 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.184 2 INFO nova.compute.claims [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:33.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.324 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:33 compute-0 ceph-mon[73668]: pgmap v2044: 305 pgs: 305 active+clean; 348 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct 02 12:43:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 372 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.3 MiB/s wr, 175 op/s
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.681 2 DEBUG nova.compute.manager [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.682 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.682 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.683 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.683 2 DEBUG nova.compute.manager [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] No waiting events found dispatching network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.683 2 WARNING nova.compute.manager [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received unexpected event network-vif-plugged-78484beb-0c30-4b51-b9d5-55ff21fae85a for instance with vm_state paused and task_state deleting.
Oct 02 12:43:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065390548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.793 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.799 2 DEBUG nova.compute.provider_tree [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.817 2 DEBUG nova.scheduler.client.report [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.840 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.841 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.895 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.896 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.920 2 INFO nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:43:33 compute-0 nova_compute[256940]: 2025-10-02 12:43:33.947 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.036 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.037 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.037 2 INFO nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Creating image(s)
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.066 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.101 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.132 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.136 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.217 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.219 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.220 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.220 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.307 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.313 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f3cb3218-0640-497e-94de-2549ed7da8e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:34 compute-0 nova_compute[256940]: 2025-10-02 12:43:34.348 2 DEBUG nova.policy [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b168e90f7c0c414ba26c576fb8706a80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:43:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3065390548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:35 compute-0 nova_compute[256940]: 2025-10-02 12:43:35.303 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Successfully created port: 94d27ec1-617c-4911-bfe7-441d064785a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:43:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:35.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 393 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.377 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Successfully updated port: 94d27ec1-617c-4911-bfe7-441d064785a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.399 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.400 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.400 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.468 2 DEBUG nova.compute.manager [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-changed-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.469 2 DEBUG nova.compute.manager [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Refreshing instance network info cache due to event network-changed-94d27ec1-617c-4911-bfe7-441d064785a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.469 2 DEBUG oslo_concurrency.lockutils [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:36 compute-0 ceph-mon[73668]: pgmap v2045: 305 pgs: 305 active+clean; 372 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.3 MiB/s wr, 175 op/s
Oct 02 12:43:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/469806506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.574 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:43:36 compute-0 nova_compute[256940]: 2025-10-02 12:43:36.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:37.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:37.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct 02 12:43:37 compute-0 ceph-mon[73668]: pgmap v2046: 305 pgs: 305 active+clean; 393 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Oct 02 12:43:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3507850382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:37 compute-0 sudo[332342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:37 compute-0 sudo[332342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-0 sudo[332342]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:37 compute-0 sudo[332367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:37 compute-0 sudo[332367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-0 sudo[332367]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:37 compute-0 sudo[332392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:37 compute-0 sudo[332392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-0 nova_compute[256940]: 2025-10-02 12:43:37.963 2 DEBUG nova.network.neutron [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:37 compute-0 sudo[332392]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:37 compute-0 nova_compute[256940]: 2025-10-02 12:43:37.987 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:37 compute-0 nova_compute[256940]: 2025-10-02 12:43:37.988 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance network_info: |[{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:43:37 compute-0 nova_compute[256940]: 2025-10-02 12:43:37.988 2 DEBUG oslo_concurrency.lockutils [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:37 compute-0 nova_compute[256940]: 2025-10-02 12:43:37.988 2 DEBUG nova.network.neutron [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Refreshing network info cache for port 94d27ec1-617c-4911-bfe7-441d064785a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:38 compute-0 sudo[332417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:43:38 compute-0 sudo[332417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:38 compute-0 sudo[332417]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:38 compute-0 nova_compute[256940]: 2025-10-02 12:43:38.514 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f3cb3218-0640-497e-94de-2549ed7da8e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:38 compute-0 nova_compute[256940]: 2025-10-02 12:43:38.621 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] resizing rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ae458bc2-af6f-49e1-911c-3f6d82f93813 does not exist
Oct 02 12:43:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 04a2a0e0-cfb9-435c-8c5d-3b042f4c30ea does not exist
Oct 02 12:43:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f28613bb-7c43-459d-bdc1-d1e20eb9ee24 does not exist
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:43:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:43:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:38 compute-0 sudo[332527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:38 compute-0 sudo[332527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:38 compute-0 sudo[332527]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:38 compute-0 sudo[332552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:38 compute-0 sudo[332552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:38 compute-0 sudo[332552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:39 compute-0 sudo[332577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:39 compute-0 sudo[332577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:39 compute-0 sudo[332577]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:39 compute-0 sudo[332602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:43:39 compute-0 sudo[332602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:39.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1679 writes, 7184 keys, 1679 commit groups, 1.0 writes per commit group, ingest: 10.82 MB, 0.02 MB/s
                                           Interval WAL: 1678 writes, 1678 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.2      1.30              0.18        27    0.048       0      0       0.0       0.0
                                             L6      1/0   10.91 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1     73.4     61.3      3.90              0.74        26    0.150    149K    14K       0.0       0.0
                                            Sum      1/0   10.91 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     55.0     57.1      5.20              0.92        53    0.098    149K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.0     58.2     59.4      1.03              0.19        10    0.103     36K   2592       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     73.4     61.3      3.90              0.74        26    0.150    149K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     44.3      1.30              0.18        26    0.050       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.056, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 5.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 33.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000316 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1956,31.94 MB,10.5071%) FilterBlock(54,429.42 KB,0.137946%) IndexBlock(54,738.20 KB,0.237139%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:43:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:39 compute-0 podman[332667]: 2025-10-02 12:43:39.430712733 +0000 UTC m=+0.022763246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:39 compute-0 ceph-mon[73668]: pgmap v2047: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/783646978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4191999098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:39 compute-0 podman[332667]: 2025-10-02 12:43:39.631389491 +0000 UTC m=+0.223439974 container create 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 02 12:43:39 compute-0 nova_compute[256940]: 2025-10-02 12:43:39.673 2 DEBUG nova.network.neutron [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updated VIF entry in instance network info cache for port 94d27ec1-617c-4911-bfe7-441d064785a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:39 compute-0 nova_compute[256940]: 2025-10-02 12:43:39.674 2 DEBUG nova.network.neutron [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:39 compute-0 nova_compute[256940]: 2025-10-02 12:43:39.695 2 DEBUG oslo_concurrency.lockutils [req-b224fcef-d3ae-453f-b547-091e8b010d85 req-28e1f374-c5f9-4ade-9e90-324cfcdf3212 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:39 compute-0 systemd[1]: Started libpod-conmon-8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7.scope.
Oct 02 12:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:40 compute-0 podman[332667]: 2025-10-02 12:43:40.072569898 +0000 UTC m=+0.664620461 container init 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 12:43:40 compute-0 podman[332667]: 2025-10-02 12:43:40.082316393 +0000 UTC m=+0.674366876 container start 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:43:40 compute-0 romantic_burnell[332704]: 167 167
Oct 02 12:43:40 compute-0 systemd[1]: libpod-8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7.scope: Deactivated successfully.
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.099 2 DEBUG nova.objects.instance [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.114 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.115 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Ensure instance console log exists: /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.115 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.116 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.116 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.118 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start _get_guest_xml network_info=[{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.124 2 WARNING nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.130 2 DEBUG nova.virt.libvirt.host [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.132 2 DEBUG nova.virt.libvirt.host [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.141 2 DEBUG nova.virt.libvirt.host [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.141 2 DEBUG nova.virt.libvirt.host [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.142 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.142 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.143 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.143 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.143 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.143 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.144 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.144 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.144 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.144 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.145 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.145 2 DEBUG nova.virt.hardware [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.147 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:40 compute-0 podman[332667]: 2025-10-02 12:43:40.191307108 +0000 UTC m=+0.783357611 container attach 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:43:40 compute-0 podman[332667]: 2025-10-02 12:43:40.192825297 +0000 UTC m=+0.784875770 container died 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-116b879a7d7a199830f854f27540d491db32c411059668fb368c2ecc86cc8554-merged.mount: Deactivated successfully.
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008681646077610274 of space, bias 1.0, pg target 2.604493823283082 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29502365705844036 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:43:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:43:40 compute-0 podman[332682]: 2025-10-02 12:43:40.571805221 +0000 UTC m=+0.892859019 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 02 12:43:40 compute-0 podman[332681]: 2025-10-02 12:43:40.579309367 +0000 UTC m=+0.904883413 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230281663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.670 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.711 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:40 compute-0 nova_compute[256940]: 2025-10-02 12:43:40.716 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:40 compute-0 podman[332667]: 2025-10-02 12:43:40.731341385 +0000 UTC m=+1.323391868 container remove 8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:43:40 compute-0 systemd[1]: libpod-conmon-8c0ab87163e8bbc908ec0d0e26ede9db4e0620e242d47371db344cd8f47767d7.scope: Deactivated successfully.
Oct 02 12:43:40 compute-0 ceph-mon[73668]: pgmap v2048: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 02 12:43:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3663326302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:40 compute-0 podman[332823]: 2025-10-02 12:43:40.890687755 +0000 UTC m=+0.028497875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:40 compute-0 podman[332823]: 2025-10-02 12:43:40.988811477 +0000 UTC m=+0.126621577 container create d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:41 compute-0 systemd[1]: Started libpod-conmon-d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb.scope.
Oct 02 12:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821274135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:41.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.156 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.158 2 DEBUG nova.virt.libvirt.vif [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-604679211',display_name='tempest-ServerRescueNegativeTestJSON-server-604679211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-604679211',id=118,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-h4w9h3c8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:33Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=f3cb3218-0640-497e-94de-2549ed7da8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.159 2 DEBUG nova.network.os_vif_util [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.159 2 DEBUG nova.network.os_vif_util [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.161 2 DEBUG nova.objects.instance [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.186 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <uuid>f3cb3218-0640-497e-94de-2549ed7da8e4</uuid>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <name>instance-00000076</name>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-604679211</nova:name>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:43:40</nova:creationTime>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <nova:port uuid="94d27ec1-617c-4911-bfe7-441d064785a1">
Oct 02 12:43:41 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <system>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="serial">f3cb3218-0640-497e-94de-2549ed7da8e4</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="uuid">f3cb3218-0640-497e-94de-2549ed7da8e4</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </system>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <os>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </os>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <features>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </features>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/f3cb3218-0640-497e-94de-2549ed7da8e4_disk">
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </source>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config">
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </source>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:43:41 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:68:5b:10"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <target dev="tap94d27ec1-61"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/console.log" append="off"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <video>
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </video>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:43:41 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:43:41 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:43:41 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:43:41 compute-0 nova_compute[256940]: </domain>
Oct 02 12:43:41 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.187 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Preparing to wait for external event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.188 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.189 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.189 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.190 2 DEBUG nova.virt.libvirt.vif [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-604679211',display_name='tempest-ServerRescueNegativeTestJSON-server-604679211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-604679211',id=118,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-h4w9h3c8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:33Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=f3cb3218-0640-497e-94de-2549ed7da8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.191 2 DEBUG nova.network.os_vif_util [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.191 2 DEBUG nova.network.os_vif_util [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.192 2 DEBUG os_vif [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.193 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d27ec1-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94d27ec1-61, col_values=(('external_ids', {'iface-id': '94d27ec1-617c-4911-bfe7-441d064785a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:5b:10', 'vm-uuid': 'f3cb3218-0640-497e-94de-2549ed7da8e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 NetworkManager[44981]: <info>  [1759409021.2528] manager: (tap94d27ec1-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.261 2 INFO os_vif [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61')
Oct 02 12:43:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:41.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:41 compute-0 podman[332823]: 2025-10-02 12:43:41.322920928 +0000 UTC m=+0.460731098 container init d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:43:41 compute-0 podman[332823]: 2025-10-02 12:43:41.336497813 +0000 UTC m=+0.474307913 container start d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:41 compute-0 sudo[332847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:41 compute-0 sudo[332847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:41 compute-0 sudo[332847]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:41 compute-0 sudo[332875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:41 compute-0 sudo[332875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:41 compute-0 sudo[332875]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 392 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 922 KiB/s rd, 5.3 MiB/s wr, 129 op/s
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.678 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.679 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.679 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:68:5b:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.679 2 INFO nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Using config drive
Oct 02 12:43:41 compute-0 nova_compute[256940]: 2025-10-02 12:43:41.715 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:41 compute-0 podman[332823]: 2025-10-02 12:43:41.803039212 +0000 UTC m=+0.940849332 container attach d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:43:41 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 02 12:43:41 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:41.830701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:43:41 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 02 12:43:41 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021830738, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1992, "num_deletes": 263, "total_data_size": 3207293, "memory_usage": 3259896, "flush_reason": "Manual Compaction"}
Oct 02 12:43:41 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022103560, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3143689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43740, "largest_seqno": 45730, "table_properties": {"data_size": 3134738, "index_size": 5509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19614, "raw_average_key_size": 20, "raw_value_size": 3116370, "raw_average_value_size": 3290, "num_data_blocks": 239, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408858, "oldest_key_time": 1759408858, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 272907 microseconds, and 7341 cpu microseconds.
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:42 compute-0 clever_bassi[332840]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:43:42 compute-0 clever_bassi[332840]: --> relative data size: 1.0
Oct 02 12:43:42 compute-0 clever_bassi[332840]: --> All data devices are unavailable
Oct 02 12:43:42 compute-0 systemd[1]: libpod-d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb.scope: Deactivated successfully.
Oct 02 12:43:42 compute-0 podman[332823]: 2025-10-02 12:43:42.199791609 +0000 UTC m=+1.337601729 container died d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.103605) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3143689 bytes OK
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.103625) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.216653) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.216693) EVENT_LOG_v1 {"time_micros": 1759409022216685, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.216748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3198918, prev total WAL file size 3227994, number of live WAL files 2.
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.217864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3070KB)], [95(10MB)]
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022217901, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 14586560, "oldest_snapshot_seqno": -1}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2230281663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1821274135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7538 keys, 14437470 bytes, temperature: kUnknown
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022853066, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 14437470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14382979, "index_size": 34539, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 193435, "raw_average_key_size": 25, "raw_value_size": 14244481, "raw_average_value_size": 1889, "num_data_blocks": 1379, "num_entries": 7538, "num_filter_entries": 7538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.900 2 INFO nova.virt.libvirt.driver [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Deleting instance files /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74_del
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.902 2 INFO nova.virt.libvirt.driver [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Deletion of /var/lib/nova/instances/686e5112-9d36-4b52-9162-cb3cfcbbab74_del complete
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.853459) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14437470 bytes
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.939990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.0 rd, 22.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8078, records dropped: 540 output_compression: NoCompression
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.940040) EVENT_LOG_v1 {"time_micros": 1759409022940022, "job": 56, "event": "compaction_finished", "compaction_time_micros": 635388, "compaction_time_cpu_micros": 29417, "output_level": 6, "num_output_files": 1, "total_output_size": 14437470, "num_input_records": 8078, "num_output_records": 7538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022941065, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022943629, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.217807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.943688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.943692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.943694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.943695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.943697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:42.944014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022944062, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 266, "num_deletes": 250, "total_data_size": 47703, "memory_usage": 53680, "flush_reason": "Manual Compaction"}
Oct 02 12:43:42 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.950 2 INFO nova.compute.manager [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Took 12.41 seconds to destroy the instance on the hypervisor.
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.952 2 DEBUG oslo.service.loopingcall [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.953 2 DEBUG nova.compute.manager [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:43:42 compute-0 nova_compute[256940]: 2025-10-02 12:43:42.953 2 DEBUG nova.network.neutron [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409023019340, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 47268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45731, "largest_seqno": 45996, "table_properties": {"data_size": 45383, "index_size": 114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5312, "raw_average_key_size": 20, "raw_value_size": 41753, "raw_average_value_size": 158, "num_data_blocks": 5, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 75368 microseconds, and 919 cpu microseconds.
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-54327cb5e5e5432f5c11f9973a226b3b387ab2dbd6081fcb9c71cbf4108305eb-merged.mount: Deactivated successfully.
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.019384) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 47268 bytes OK
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.019403) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.051005) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.051044) EVENT_LOG_v1 {"time_micros": 1759409023051035, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.051065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 45639, prev total WAL file size 45639, number of live WAL files 2.
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.051698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(46KB)], [98(13MB)]
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409023051727, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14484738, "oldest_snapshot_seqno": -1}
Oct 02 12:43:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:43.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7294 keys, 10655960 bytes, temperature: kUnknown
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409023245926, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10655960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10608014, "index_size": 28634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188562, "raw_average_key_size": 25, "raw_value_size": 10478644, "raw_average_value_size": 1436, "num_data_blocks": 1132, "num_entries": 7294, "num_filter_entries": 7294, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.292 2 INFO nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Creating config drive at /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.297 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgte2ug5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:43.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.246753) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10655960 bytes
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.406338) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 54.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(531.9) write-amplify(225.4) OK, records in: 7801, records dropped: 507 output_compression: NoCompression
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.406416) EVENT_LOG_v1 {"time_micros": 1759409023406380, "job": 58, "event": "compaction_finished", "compaction_time_micros": 194285, "compaction_time_cpu_micros": 24399, "output_level": 6, "num_output_files": 1, "total_output_size": 10655960, "num_input_records": 7801, "num_output_records": 7294, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409023406855, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409023410245, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.051624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.410331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.410338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.410340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.410342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:43:43.410344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.429 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgte2ug5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.461 2 DEBUG nova.storage.rbd_utils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.465 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:43 compute-0 podman[332823]: 2025-10-02 12:43:43.504339073 +0000 UTC m=+2.642149173 container remove d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:43:43 compute-0 sudo[332602]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:43 compute-0 sudo[332978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:43 compute-0 sudo[332978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:43 compute-0 systemd[1]: libpod-conmon-d25c18cedb4a9ddfc4ca2344512a4cb990e341132793dd88c031422ecebfb2bb.scope: Deactivated successfully.
Oct 02 12:43:43 compute-0 sudo[332978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:43 compute-0 sudo[333006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:43 compute-0 sudo[333006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:43 compute-0 sudo[333006]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 339 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 4.6 MiB/s wr, 104 op/s
Oct 02 12:43:43 compute-0 sudo[333031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:43 compute-0 sudo[333031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:43 compute-0 sudo[333031]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.760 2 DEBUG oslo_concurrency.processutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.761 2 INFO nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Deleting local config drive /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config because it was imported into RBD.
Oct 02 12:43:43 compute-0 sudo[333056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:43:43 compute-0 sudo[333056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:43 compute-0 kernel: tap94d27ec1-61: entered promiscuous mode
Oct 02 12:43:43 compute-0 ceph-mon[73668]: pgmap v2049: 305 pgs: 305 active+clean; 392 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 922 KiB/s rd, 5.3 MiB/s wr, 129 op/s
Oct 02 12:43:43 compute-0 NetworkManager[44981]: <info>  [1759409023.8210] manager: (tap94d27ec1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct 02 12:43:43 compute-0 ovn_controller[148123]: 2025-10-02T12:43:43Z|00576|binding|INFO|Claiming lport 94d27ec1-617c-4911-bfe7-441d064785a1 for this chassis.
Oct 02 12:43:43 compute-0 ovn_controller[148123]: 2025-10-02T12:43:43Z|00577|binding|INFO|94d27ec1-617c-4911-bfe7-441d064785a1: Claiming fa:16:3e:68:5b:10 10.100.0.4
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-0 systemd-udevd[333094]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.851 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5b:10 10.100.0.4'], port_security=['fa:16:3e:68:5b:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3cb3218-0640-497e-94de-2549ed7da8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=94d27ec1-617c-4911-bfe7-441d064785a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.852 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 94d27ec1-617c-4911-bfe7-441d064785a1 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.854 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:43:43 compute-0 systemd-machined[210927]: New machine qemu-59-instance-00000076.
Oct 02 12:43:43 compute-0 NetworkManager[44981]: <info>  [1759409023.8650] device (tap94d27ec1-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:43 compute-0 NetworkManager[44981]: <info>  [1759409023.8659] device (tap94d27ec1-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.866 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[484dd1de-c5ed-482f-a77f-e0245fe086d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.867 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.869 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.869 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc684af-f900-440a-9c00-856fe36fff75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.870 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3993158f-3fe7-4c49-9dd7-7f5b920371dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000076.
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.883 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8414f6-5b31-451f-adce-a27165829f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-0 ovn_controller[148123]: 2025-10-02T12:43:43Z|00578|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 ovn-installed in OVS
Oct 02 12:43:43 compute-0 ovn_controller[148123]: 2025-10-02T12:43:43Z|00579|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 up in Southbound
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.907 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b413f5ff-5072-4927-9e06-b5d37a550f6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 nova_compute[256940]: 2025-10-02 12:43:43.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.939 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e1aed164-c34a-4fe3-ab84-2244f4be6f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 NetworkManager[44981]: <info>  [1759409023.9472] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.949 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2257d0a-17c3-45af-925d-da9a02b5d30b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.980 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6c329201-50f6-40f7-97f0-bbfa411c9f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:43.984 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1f3f94-5499-448d-a8e9-248c2142bfc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 NetworkManager[44981]: <info>  [1759409024.0112] device (tapf3934261-b0): carrier: link connected
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.017 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b68de5-a12a-427b-8439-4cc465c458dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.035 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c193ad-675b-4fca-84d7-a0b8ea5806a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691949, 'reachable_time': 43385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333152, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.053 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cd0529-1b88-42c7-94cd-1334ef0c92c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691949, 'tstamp': 691949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333155, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.071 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e03b2a54-797c-4893-aeee-2ef43f2bc05a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691949, 'reachable_time': 43385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333167, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.102 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9550608-f50d-4d17-8a42-3a8f56c8b1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.162 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c37ae37c-5ae4-4923-9538-d16d243b58d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.163 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.163 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.164 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:44 compute-0 kernel: tapf3934261-b0: entered promiscuous mode
Oct 02 12:43:44 compute-0 NetworkManager[44981]: <info>  [1759409024.1660] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 ovn_controller[148123]: 2025-10-02T12:43:44Z|00580|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.178 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.183 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a09eed54-1188-4f75-8733-24d7d6265982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.183 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:43:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:43:44.184 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:44 compute-0 podman[333171]: 2025-10-02 12:43:44.126608117 +0000 UTC m=+0.024817469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:44 compute-0 podman[333171]: 2025-10-02 12:43:44.504676956 +0000 UTC m=+0.402886288 container create 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.593 2 DEBUG nova.compute.manager [req-9cf183ae-2b3a-4bde-a2f6-2b649e209b51 req-0a5644db-c251-420c-9b30-0dbe8b4a7eb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.594 2 DEBUG oslo_concurrency.lockutils [req-9cf183ae-2b3a-4bde-a2f6-2b649e209b51 req-0a5644db-c251-420c-9b30-0dbe8b4a7eb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.594 2 DEBUG oslo_concurrency.lockutils [req-9cf183ae-2b3a-4bde-a2f6-2b649e209b51 req-0a5644db-c251-420c-9b30-0dbe8b4a7eb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.594 2 DEBUG oslo_concurrency.lockutils [req-9cf183ae-2b3a-4bde-a2f6-2b649e209b51 req-0a5644db-c251-420c-9b30-0dbe8b4a7eb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.595 2 DEBUG nova.compute.manager [req-9cf183ae-2b3a-4bde-a2f6-2b649e209b51 req-0a5644db-c251-420c-9b30-0dbe8b4a7eb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Processing event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:44 compute-0 systemd[1]: Started libpod-conmon-0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7.scope.
Oct 02 12:43:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.759 2 DEBUG nova.network.neutron [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.780 2 INFO nova.compute.manager [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Took 1.83 seconds to deallocate network for instance.
Oct 02 12:43:44 compute-0 podman[333256]: 2025-10-02 12:43:44.689814669 +0000 UTC m=+0.027282103 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.839 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.839 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:44 compute-0 nova_compute[256940]: 2025-10-02 12:43:44.913 2 DEBUG oslo_concurrency.processutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:45 compute-0 podman[333171]: 2025-10-02 12:43:45.061496972 +0000 UTC m=+0.959706324 container init 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:45 compute-0 podman[333171]: 2025-10-02 12:43:45.068953277 +0000 UTC m=+0.967162609 container start 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:45 compute-0 cool_maxwell[333248]: 167 167
Oct 02 12:43:45 compute-0 systemd[1]: libpod-0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7.scope: Deactivated successfully.
Oct 02 12:43:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:45.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.173 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409025.1734526, f3cb3218-0640-497e-94de-2549ed7da8e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.174 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Started (Lifecycle Event)
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.177 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.181 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.183 2 INFO nova.virt.libvirt.driver [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance spawned successfully.
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.184 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.205 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.214 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.219 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.219 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.220 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.220 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.220 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.221 2 DEBUG nova.virt.libvirt.driver [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.261 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.262 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409025.17361, f3cb3218-0640-497e-94de-2549ed7da8e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.262 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Paused (Lifecycle Event)
Oct 02 12:43:45 compute-0 ceph-mon[73668]: pgmap v2050: 305 pgs: 305 active+clean; 339 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 4.6 MiB/s wr, 104 op/s
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.301 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.307 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409025.1801536, f3cb3218-0640-497e-94de-2549ed7da8e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:45 compute-0 podman[333171]: 2025-10-02 12:43:45.308689315 +0000 UTC m=+1.206898667 container attach 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:43:45 compute-0 podman[333171]: 2025-10-02 12:43:45.310164644 +0000 UTC m=+1.208373976 container died 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.310 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Resumed (Lifecycle Event)
Oct 02 12:43:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:43:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:45.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.321 2 INFO nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Took 11.28 seconds to spawn the instance on the hypervisor.
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.323 2 DEBUG nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.337 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.341 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.373 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.396 2 INFO nova.compute.manager [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Took 12.25 seconds to build instance.
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.414 2 DEBUG oslo_concurrency.lockutils [None req-a8d761de-5e9a-471f-a55a-c65535f7baa3 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636903337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.475 2 DEBUG oslo_concurrency.processutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.482 2 DEBUG nova.compute.provider_tree [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.507 2 DEBUG nova.scheduler.client.report [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.542 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.595 2 INFO nova.scheduler.client.report [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 686e5112-9d36-4b52-9162-cb3cfcbbab74
Oct 02 12:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-384325bbe10b41245611649ad93eb02532d70c040e938ef0ea410011a3e9f9c6-merged.mount: Deactivated successfully.
Oct 02 12:43:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 340 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 02 12:43:45 compute-0 nova_compute[256940]: 2025-10-02 12:43:45.688 2 DEBUG oslo_concurrency.lockutils [None req-b9311b2c-01d6-4d1b-b37d-7e922071aefb ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "686e5112-9d36-4b52-9162-cb3cfcbbab74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.172 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409011.1710656, 686e5112-9d36-4b52-9162-cb3cfcbbab74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.173 2 INFO nova.compute.manager [-] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] VM Stopped (Lifecycle Event)
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.209 2 DEBUG nova.compute.manager [None req-5e5b0835-d2cd-428b-8ec0-bf09237707b6 - - - - - -] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:46 compute-0 podman[333171]: 2025-10-02 12:43:46.302315293 +0000 UTC m=+2.200524665 container remove 0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.446 2 INFO nova.compute.manager [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Rescuing
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.448 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.449 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.449 2 DEBUG nova.network.neutron [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:46 compute-0 podman[333256]: 2025-10-02 12:43:46.471436077 +0000 UTC m=+1.808903491 container create ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1636903337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3103788107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:46 compute-0 systemd[1]: libpod-conmon-0eec15ae546e2ad2c30ff1bc1e92197ea93fd91dbe89b14e319562f3e3d1f3e7.scope: Deactivated successfully.
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.696 2 DEBUG nova.compute.manager [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 686e5112-9d36-4b52-9162-cb3cfcbbab74] Received event network-vif-deleted-78484beb-0c30-4b51-b9d5-55ff21fae85a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.696 2 DEBUG nova.compute.manager [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.697 2 DEBUG oslo_concurrency.lockutils [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:46 compute-0 systemd[1]: Started libpod-conmon-ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42.scope.
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.699 2 DEBUG oslo_concurrency.lockutils [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.699 2 DEBUG oslo_concurrency.lockutils [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.700 2 DEBUG nova.compute.manager [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:46 compute-0 nova_compute[256940]: 2025-10-02 12:43:46.700 2 WARNING nova.compute.manager [req-4d29fdf8-f87e-4ae9-a908-20fef3636d64 req-d49569dd-7911-4563-9b9c-bd87ddccbbe0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state active and task_state rescuing.
Oct 02 12:43:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b417dbc733a8ff6524a964adb086eb25d2976b50f324f636cf6f5bdd5e94094f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:46 compute-0 podman[333316]: 2025-10-02 12:43:46.680451264 +0000 UTC m=+0.238088297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:46 compute-0 podman[333316]: 2025-10-02 12:43:46.924543586 +0000 UTC m=+0.482180599 container create 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:46 compute-0 systemd[1]: Started libpod-conmon-581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c.scope.
Oct 02 12:43:47 compute-0 podman[333256]: 2025-10-02 12:43:47.003347313 +0000 UTC m=+2.340814737 container init ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:43:47 compute-0 podman[333256]: 2025-10-02 12:43:47.013155379 +0000 UTC m=+2.350622793 container start ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:43:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75962bf3e05bc6a4ccdc1f730f2076f49a47eafee40ba826a53cca4912424759/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75962bf3e05bc6a4ccdc1f730f2076f49a47eafee40ba826a53cca4912424759/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75962bf3e05bc6a4ccdc1f730f2076f49a47eafee40ba826a53cca4912424759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75962bf3e05bc6a4ccdc1f730f2076f49a47eafee40ba826a53cca4912424759/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:47 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [NOTICE]   (333343) : New worker (333345) forked
Oct 02 12:43:47 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [NOTICE]   (333343) : Loading success.
Oct 02 12:43:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:47.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:47 compute-0 podman[333316]: 2025-10-02 12:43:47.266133593 +0000 UTC m=+0.823770626 container init 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 02 12:43:47 compute-0 podman[333316]: 2025-10-02 12:43:47.275411935 +0000 UTC m=+0.833048938 container start 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:43:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:47 compute-0 podman[333316]: 2025-10-02 12:43:47.449000406 +0000 UTC m=+1.006637509 container attach 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 250 op/s
Oct 02 12:43:47 compute-0 ceph-mon[73668]: pgmap v2051: 305 pgs: 305 active+clean; 340 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]: {
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:     "1": [
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:         {
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "devices": [
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "/dev/loop3"
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             ],
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "lv_name": "ceph_lv0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "lv_size": "7511998464",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "name": "ceph_lv0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "tags": {
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.cluster_name": "ceph",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.crush_device_class": "",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.encrypted": "0",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.osd_id": "1",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.type": "block",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:                 "ceph.vdo": "0"
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             },
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "type": "block",
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:             "vg_name": "ceph_vg0"
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:         }
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]:     ]
Oct 02 12:43:48 compute-0 blissful_dijkstra[333339]: }
Oct 02 12:43:48 compute-0 systemd[1]: libpod-581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c.scope: Deactivated successfully.
Oct 02 12:43:48 compute-0 podman[333316]: 2025-10-02 12:43:48.088417428 +0000 UTC m=+1.646054431 container died 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-75962bf3e05bc6a4ccdc1f730f2076f49a47eafee40ba826a53cca4912424759-merged.mount: Deactivated successfully.
Oct 02 12:43:48 compute-0 podman[333316]: 2025-10-02 12:43:48.97527507 +0000 UTC m=+2.532912073 container remove 581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:43:49 compute-0 sudo[333056]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:49 compute-0 nova_compute[256940]: 2025-10-02 12:43:49.043 2 DEBUG nova.network.neutron [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:49 compute-0 systemd[1]: libpod-conmon-581626606ec639e570eedf806a106cb8a61ebacbca807994cf5c680966939f3c.scope: Deactivated successfully.
Oct 02 12:43:49 compute-0 sudo[333372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:49 compute-0 sudo[333372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:49 compute-0 sudo[333372]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:49 compute-0 nova_compute[256940]: 2025-10-02 12:43:49.097 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:49 compute-0 sudo[333397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:49 compute-0 sudo[333397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:49 compute-0 sudo[333397]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:49.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:49 compute-0 sudo[333422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:49 compute-0 sudo[333422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:49 compute-0 sudo[333422]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:49 compute-0 sudo[333447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:43:49 compute-0 sudo[333447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:49.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:49 compute-0 nova_compute[256940]: 2025-10-02 12:43:49.398 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:43:49 compute-0 ceph-mon[73668]: pgmap v2052: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 250 op/s
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.578713752 +0000 UTC m=+0.027532299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 223 op/s
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.689355401 +0000 UTC m=+0.138173918 container create 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:49 compute-0 systemd[1]: Started libpod-conmon-8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984.scope.
Oct 02 12:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.926891151 +0000 UTC m=+0.375709688 container init 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.935587128 +0000 UTC m=+0.384405645 container start 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:49 compute-0 compassionate_benz[333529]: 167 167
Oct 02 12:43:49 compute-0 systemd[1]: libpod-8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984.scope: Deactivated successfully.
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.984227227 +0000 UTC m=+0.433045764 container attach 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:43:49 compute-0 podman[333513]: 2025-10-02 12:43:49.988095468 +0000 UTC m=+0.436913985 container died 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e2b07302dd39c1e35bfcbef0dafff2faeb42c66411e30a83e9e62de77f3a444-merged.mount: Deactivated successfully.
Oct 02 12:43:50 compute-0 podman[333513]: 2025-10-02 12:43:50.659885705 +0000 UTC m=+1.108704252 container remove 8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_benz, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:43:50 compute-0 systemd[1]: libpod-conmon-8001480af608e14e4c64edfb08071e885458476f61ac80846a2bed0247390984.scope: Deactivated successfully.
Oct 02 12:43:50 compute-0 ceph-mon[73668]: pgmap v2053: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 223 op/s
Oct 02 12:43:50 compute-0 podman[333555]: 2025-10-02 12:43:50.907285973 +0000 UTC m=+0.097420304 container create 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:43:50 compute-0 podman[333555]: 2025-10-02 12:43:50.835713495 +0000 UTC m=+0.025847886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:43:50 compute-0 systemd[1]: Started libpod-conmon-02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1.scope.
Oct 02 12:43:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ad23b528d5adac6522bc02e100900b41766c6327af817edbb7d0fdff113333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ad23b528d5adac6522bc02e100900b41766c6327af817edbb7d0fdff113333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ad23b528d5adac6522bc02e100900b41766c6327af817edbb7d0fdff113333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ad23b528d5adac6522bc02e100900b41766c6327af817edbb7d0fdff113333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:43:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:51.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:51 compute-0 podman[333555]: 2025-10-02 12:43:51.175958847 +0000 UTC m=+0.366093218 container init 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:43:51 compute-0 podman[333555]: 2025-10-02 12:43:51.184722616 +0000 UTC m=+0.374856957 container start 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:43:51 compute-0 nova_compute[256940]: 2025-10-02 12:43:51.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:51 compute-0 podman[333555]: 2025-10-02 12:43:51.424706871 +0000 UTC m=+0.614841332 container attach 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:43:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.6 MiB/s wr, 294 op/s
Oct 02 12:43:51 compute-0 nova_compute[256940]: 2025-10-02 12:43:51.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]: {
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:         "osd_id": 1,
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:         "type": "bluestore"
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]:     }
Oct 02 12:43:52 compute-0 zealous_lehmann[333572]: }
Oct 02 12:43:52 compute-0 systemd[1]: libpod-02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1.scope: Deactivated successfully.
Oct 02 12:43:52 compute-0 podman[333555]: 2025-10-02 12:43:52.088308694 +0000 UTC m=+1.278443055 container died 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3030946270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.081 2 DEBUG nova.compute.manager [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:43:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:43:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:53.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1ad23b528d5adac6522bc02e100900b41766c6327af817edbb7d0fdff113333-merged.mount: Deactivated successfully.
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.281 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.282 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:53.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.361 2 DEBUG nova.objects.instance [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_requests' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.459 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.460 2 INFO nova.compute.claims [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.460 2 DEBUG nova.objects.instance [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:53 compute-0 nova_compute[256940]: 2025-10-02 12:43:53.539 2 DEBUG nova.objects.instance [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 109 KiB/s wr, 263 op/s
Oct 02 12:43:53 compute-0 ceph-mon[73668]: pgmap v2054: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.6 MiB/s wr, 294 op/s
Oct 02 12:43:53 compute-0 podman[333555]: 2025-10-02 12:43:53.850480174 +0000 UTC m=+3.040614515 container remove 02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lehmann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:43:53 compute-0 sudo[333447]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:53 compute-0 systemd[1]: libpod-conmon-02880f75599d51e62366994cdafc02ce8b73def22b3f44f93c005ccad1ca38e1.scope: Deactivated successfully.
Oct 02 12:43:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:43:54 compute-0 nova_compute[256940]: 2025-10-02 12:43:54.122 2 INFO nova.compute.resource_tracker [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating resource usage from migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d
Oct 02 12:43:54 compute-0 nova_compute[256940]: 2025-10-02 12:43:54.122 2 DEBUG nova.compute.resource_tracker [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting to track incoming migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:43:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:43:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 85805ead-4295-4252-883e-22d0de643d8f does not exist
Oct 02 12:43:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 805d5af8-a709-46ca-8254-2687105b5602 does not exist
Oct 02 12:43:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 606313d7-386b-44b8-9c1e-3941c28f317b does not exist
Oct 02 12:43:54 compute-0 sudo[333609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:54 compute-0 sudo[333609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:54 compute-0 sudo[333609]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:54 compute-0 sudo[333635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:43:54 compute-0 sudo[333635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:54 compute-0 sudo[333635]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:55.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.240 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:55.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:55 compute-0 ceph-mon[73668]: pgmap v2055: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 109 KiB/s wr, 263 op/s
Oct 02 12:43:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.543 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.544 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.591 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:43:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 689 KiB/s wr, 234 op/s
Oct 02 12:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386052185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.684 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.700 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.709 2 DEBUG nova.compute.provider_tree [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.740 2 DEBUG nova.scheduler.client.report [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.793 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.794 2 INFO nova.compute.manager [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Migrating
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.804 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.811 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:55 compute-0 nova_compute[256940]: 2025-10-02 12:43:55.811 2 INFO nova.compute.claims [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.067 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784953652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.526 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.533 2 DEBUG nova.compute.provider_tree [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.551 2 DEBUG nova.scheduler.client.report [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.580 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.581 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.631 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.632 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.660 2 INFO nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.692 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:43:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.829 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.831 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.832 2 INFO nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating image(s)
Oct 02 12:43:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3386052185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:56 compute-0 nova_compute[256940]: 2025-10-02 12:43:56.991 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.021 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.069 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.073 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.145 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.146 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.147 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.147 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:57.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.174 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.179 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:43:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:43:57 compute-0 nova_compute[256940]: 2025-10-02 12:43:57.379 2 DEBUG nova.policy [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3151966e941f4652ba984616bfa760c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:43:57 compute-0 podman[333796]: 2025-10-02 12:43:57.417986822 +0000 UTC m=+0.066002344 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:43:57 compute-0 podman[333800]: 2025-10-02 12:43:57.446774973 +0000 UTC m=+0.094813416 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:43:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 245 op/s
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:43:58 compute-0 ceph-mon[73668]: pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 689 KiB/s wr, 234 op/s
Oct 02 12:43:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1784953652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:58 compute-0 nova_compute[256940]: 2025-10-02 12:43:58.691 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.103 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] resizing rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:43:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:59.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.213 2 DEBUG nova.objects.instance [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'migration_context' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.227 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.227 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Ensure instance console log exists: /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.228 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.228 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.228 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:43:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:59.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.437 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Successfully created port: fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:43:59 compute-0 nova_compute[256940]: 2025-10-02 12:43:59.451 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:43:59 compute-0 sshd-session[333913]: Accepted publickey for nova from 192.168.122.101 port 53268 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:43:59 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:43:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:43:59 compute-0 systemd-logind[820]: New session 63 of user nova.
Oct 02 12:43:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:43:59 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:43:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 126 op/s
Oct 02 12:43:59 compute-0 systemd[333917]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:43:59 compute-0 systemd[333917]: Queued start job for default target Main User Target.
Oct 02 12:43:59 compute-0 systemd[333917]: Created slice User Application Slice.
Oct 02 12:43:59 compute-0 systemd[333917]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:43:59 compute-0 systemd[333917]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:43:59 compute-0 systemd[333917]: Reached target Paths.
Oct 02 12:43:59 compute-0 systemd[333917]: Reached target Timers.
Oct 02 12:43:59 compute-0 systemd[333917]: Starting D-Bus User Message Bus Socket...
Oct 02 12:43:59 compute-0 systemd[333917]: Starting Create User's Volatile Files and Directories...
Oct 02 12:43:59 compute-0 systemd[333917]: Finished Create User's Volatile Files and Directories.
Oct 02 12:43:59 compute-0 systemd[333917]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:43:59 compute-0 systemd[333917]: Reached target Sockets.
Oct 02 12:43:59 compute-0 systemd[333917]: Reached target Basic System.
Oct 02 12:43:59 compute-0 systemd[333917]: Reached target Main User Target.
Oct 02 12:43:59 compute-0 systemd[333917]: Startup finished in 175ms.
Oct 02 12:43:59 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:43:59 compute-0 ceph-mon[73668]: pgmap v2057: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 245 op/s
Oct 02 12:43:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/707952787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-0 systemd[1]: Started Session 63 of User nova.
Oct 02 12:43:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3154617667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1830993686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-0 sshd-session[333913]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:43:59 compute-0 sshd-session[333932]: Received disconnect from 192.168.122.101 port 53268:11: disconnected by user
Oct 02 12:43:59 compute-0 sshd-session[333932]: Disconnected from user nova 192.168.122.101 port 53268
Oct 02 12:43:59 compute-0 sshd-session[333913]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:43:59 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Oct 02 12:43:59 compute-0 systemd-logind[820]: Session 63 logged out. Waiting for processes to exit.
Oct 02 12:43:59 compute-0 systemd-logind[820]: Removed session 63.
Oct 02 12:44:00 compute-0 sshd-session[333934]: Accepted publickey for nova from 192.168.122.101 port 53274 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:44:00 compute-0 systemd-logind[820]: New session 65 of user nova.
Oct 02 12:44:00 compute-0 systemd[1]: Started Session 65 of User nova.
Oct 02 12:44:00 compute-0 sshd-session[333934]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:44:00 compute-0 sshd-session[333937]: Received disconnect from 192.168.122.101 port 53274:11: disconnected by user
Oct 02 12:44:00 compute-0 sshd-session[333937]: Disconnected from user nova 192.168.122.101 port 53274
Oct 02 12:44:00 compute-0 sshd-session[333934]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:44:00 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Oct 02 12:44:00 compute-0 systemd-logind[820]: Session 65 logged out. Waiting for processes to exit.
Oct 02 12:44:00 compute-0 systemd-logind[820]: Removed session 65.
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.646 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Successfully updated port: fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.677 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.677 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.677 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.798 2 DEBUG nova.compute.manager [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.799 2 DEBUG nova.compute.manager [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing instance network info cache due to event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.799 2 DEBUG oslo_concurrency.lockutils [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:00 compute-0 nova_compute[256940]: 2025-10-02 12:44:00.972 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:44:01 compute-0 ceph-mon[73668]: pgmap v2058: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 126 op/s
Oct 02 12:44:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3350584005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:01.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:01 compute-0 nova_compute[256940]: 2025-10-02 12:44:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:01 compute-0 nova_compute[256940]: 2025-10-02 12:44:01.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:01.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:01 compute-0 sudo[333940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:01 compute-0 sudo[333940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:01 compute-0 sudo[333940]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:01 compute-0 sudo[333965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:01 compute-0 sudo[333965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:01 compute-0 sudo[333965]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:44:01 compute-0 nova_compute[256940]: 2025-10-02 12:44:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:01 compute-0 ovn_controller[148123]: 2025-10-02T12:44:01Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:5b:10 10.100.0.4
Oct 02 12:44:01 compute-0 ovn_controller[148123]: 2025-10-02T12:44:01Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:5b:10 10.100.0.4
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.242 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.242 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387513762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.691 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.815 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.816 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.997 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.999 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4287MB free_disk=20.781471252441406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.999 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:02 compute-0 nova_compute[256940]: 2025-10-02 12:44:02.999 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.046 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration for instance 173830cb-12bb-4e1a-ba80-088da01ad107 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.067 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating resource usage from migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.068 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting to track incoming migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.095 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance f3cb3218-0640-497e-94de-2549ed7da8e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.114 2 WARNING nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 173830cb-12bb-4e1a-ba80-088da01ad107 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}.
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.115 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 9aff2d67-195f-4081-9a1c-ba173a39af9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.115 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.115 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.176 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:03.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:03 compute-0 ceph-mon[73668]: pgmap v2059: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:44:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1387513762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/771119567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.298 2 DEBUG nova.network.neutron [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.321 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.321 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance network_info: |[{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.322 2 DEBUG oslo_concurrency.lockutils [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.322 2 DEBUG nova.network.neutron [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.325 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start _get_guest_xml network_info=[{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.329 2 WARNING nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:44:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:03.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.333 2 DEBUG nova.virt.libvirt.host [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.333 2 DEBUG nova.virt.libvirt.host [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.337 2 DEBUG nova.virt.libvirt.host [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.338 2 DEBUG nova.virt.libvirt.host [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.339 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.339 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.339 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.339 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.340 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.340 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.340 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.340 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.341 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.341 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.341 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.341 2 DEBUG nova.virt.hardware [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.344 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/908961569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.632 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.638 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.653 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.675 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.676 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 9.4 MiB/s wr, 201 op/s
Oct 02 12:44:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452370904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.790 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.819 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:03 compute-0 nova_compute[256940]: 2025-10-02 12:44:03.823 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164478413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.258 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.259 2 DEBUG nova.virt.libvirt.vif [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJqnSC5dEFNVxJNe6UAIIaljTk9QXiRqWs9XkOwP1Uo3z0m7kLVKnpN3LhUWVriRnpghb9/lFHsZ1jstgRcNV8lxwQ1W9dxXw6nQRynMb+rfh9iIE+CgS9POWn5d32lCvQ==',key_name='tempest-keypair-1498791303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.260 2 DEBUG nova.network.os_vif_util [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.260 2 DEBUG nova.network.os_vif_util [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.262 2 DEBUG nova.objects.instance [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.344 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <uuid>9aff2d67-195f-4081-9a1c-ba173a39af9d</uuid>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <name>instance-00000079</name>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-1355261775</nova:name>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:44:03</nova:creationTime>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:user uuid="3151966e941f4652ba984616bfa760c7">tempest-AttachVolumeShelveTestJSON-1943710095-project-member</nova:user>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:project uuid="f7e2edef094b4ba5a56a5ec5ffce911e">tempest-AttachVolumeShelveTestJSON-1943710095</nova:project>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <nova:port uuid="fbc8e36a-6d1e-4928-ae02-cc1c07215c0c">
Oct 02 12:44:04 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <system>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="serial">9aff2d67-195f-4081-9a1c-ba173a39af9d</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="uuid">9aff2d67-195f-4081-9a1c-ba173a39af9d</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </system>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <os>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </os>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <features>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </features>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk">
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config">
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:04 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9c:a6:e5"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <target dev="tapfbc8e36a-6d"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/console.log" append="off"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <video>
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </video>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:44:04 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:44:04 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:44:04 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:44:04 compute-0 nova_compute[256940]: </domain>
Oct 02 12:44:04 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.344 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Preparing to wait for external event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.345 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.345 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.345 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.346 2 DEBUG nova.virt.libvirt.vif [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJqnSC5dEFNVxJNe6UAIIaljTk9QXiRqWs9XkOwP1Uo3z0m7kLVKnpN3LhUWVriRnpghb9/lFHsZ1jstgRcNV8lxwQ1W9dxXw6nQRynMb+rfh9iIE+CgS9POWn5d32lCvQ==',key_name='tempest-keypair-1498791303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.346 2 DEBUG nova.network.os_vif_util [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.347 2 DEBUG nova.network.os_vif_util [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.347 2 DEBUG os_vif [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbc8e36a-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbc8e36a-6d, col_values=(('external_ids', {'iface-id': 'fbc8e36a-6d1e-4928-ae02-cc1c07215c0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:a6:e5', 'vm-uuid': '9aff2d67-195f-4081-9a1c-ba173a39af9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:04 compute-0 NetworkManager[44981]: <info>  [1759409044.3551] manager: (tapfbc8e36a-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.362 2 INFO os_vif [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d')
Oct 02 12:44:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/908961569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2152615919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3452370904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/164478413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.467 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.467 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.467 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No VIF found with MAC fa:16:3e:9c:a6:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.468 2 INFO nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Using config drive
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.494 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.676 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.725 2 DEBUG nova.network.neutron [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updated VIF entry in instance network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.725 2 DEBUG nova.network.neutron [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:04 compute-0 nova_compute[256940]: 2025-10-02 12:44:04.770 2 DEBUG oslo_concurrency.lockutils [req-32985907-d49a-4438-a4bd-77ee3dde8a75 req-be4b6e88-090e-4547-a50b-6ef3bbd28335 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:05.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.272 2 INFO nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating config drive at /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.277 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwvn8miwj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:05.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.412 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwvn8miwj" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.450 2 DEBUG nova.storage.rbd_utils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:05 compute-0 nova_compute[256940]: 2025-10-02 12:44:05.454 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:05 compute-0 ceph-mon[73668]: pgmap v2060: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 9.4 MiB/s wr, 201 op/s
Oct 02 12:44:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 528 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 9.9 MiB/s wr, 262 op/s
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.660 2 DEBUG oslo_concurrency.processutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.660 2 INFO nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deleting local config drive /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config because it was imported into RBD.
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:06 compute-0 ceph-mon[73668]: pgmap v2061: 305 pgs: 305 active+clean; 528 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 9.9 MiB/s wr, 262 op/s
Oct 02 12:44:06 compute-0 kernel: tapfbc8e36a-6d: entered promiscuous mode
Oct 02 12:44:06 compute-0 NetworkManager[44981]: <info>  [1759409046.7174] manager: (tapfbc8e36a-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct 02 12:44:06 compute-0 ovn_controller[148123]: 2025-10-02T12:44:06Z|00581|binding|INFO|Claiming lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for this chassis.
Oct 02 12:44:06 compute-0 ovn_controller[148123]: 2025-10-02T12:44:06Z|00582|binding|INFO|fbc8e36a-6d1e-4928-ae02-cc1c07215c0c: Claiming fa:16:3e:9c:a6:e5 10.100.0.12
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.736 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:a6:e5 10.100.0.12'], port_security=['fa:16:3e:9c:a6:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9aff2d67-195f-4081-9a1c-ba173a39af9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e5d4b7f8-4549-4722-8356-487047feb0fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.738 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c in datapath 385a384c-5df0-4b04-b928-517a46df04f4 bound to our chassis
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.741 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.755 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2857ca6f-82da-4f4a-a833-cb249a64991a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.756 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385a384c-51 in ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:44:06 compute-0 systemd-machined[210927]: New machine qemu-60-instance-00000079.
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.759 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385a384c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.759 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[17b27399-f34e-419a-87a5-ebc38f43a2d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 systemd-udevd[334174]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.760 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[653c2dbc-1ee6-4f07-ad5f-50f7b18d3c4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.771 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdd3967-fd99-4cd4-b423-b3818caeef31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 NetworkManager[44981]: <info>  [1759409046.7755] device (tapfbc8e36a-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:44:06 compute-0 NetworkManager[44981]: <info>  [1759409046.7764] device (tapfbc8e36a-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:44:06 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000079.
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.803 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f05bb05-e0cb-407e-bf9c-88fef32ce1b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:06 compute-0 ovn_controller[148123]: 2025-10-02T12:44:06Z|00583|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c ovn-installed in OVS
Oct 02 12:44:06 compute-0 ovn_controller[148123]: 2025-10-02T12:44:06Z|00584|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c up in Southbound
Oct 02 12:44:06 compute-0 nova_compute[256940]: 2025-10-02 12:44:06.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.835 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[510affbe-51f0-4472-b574-b8ed7d6359c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.840 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ac185cc0-5744-43c8-bdea-7b05595adffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 systemd-udevd[334177]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:06 compute-0 NetworkManager[44981]: <info>  [1759409046.8424] manager: (tap385a384c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.874 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e791b475-dc66-48cf-8a8e-2cbc70fee20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.878 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[19bcd82a-99e0-4882-acf0-ec43543afc7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.888269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046888332, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 489, "num_deletes": 251, "total_data_size": 470142, "memory_usage": 479288, "flush_reason": "Manual Compaction"}
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 02 12:44:06 compute-0 NetworkManager[44981]: <info>  [1759409046.8994] device (tap385a384c-50): carrier: link connected
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.907 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0de1fafd-e0f4-47f0-a27a-e9bc0c51d9fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.926 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a98d2303-d2fc-4997-923a-a17a56836597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694238, 'reachable_time': 43444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334206, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046931450, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 465207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45997, "largest_seqno": 46485, "table_properties": {"data_size": 462450, "index_size": 793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6874, "raw_average_key_size": 19, "raw_value_size": 456830, "raw_average_value_size": 1283, "num_data_blocks": 34, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409023, "oldest_key_time": 1759409023, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 43209 microseconds, and 2376 cpu microseconds.
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.944 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[27b54d00-7d67-417b-a339-758ed5a8095b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:d461'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694238, 'tstamp': 694238}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334207, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.931491) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 465207 bytes OK
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.931507) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.965668) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.965713) EVENT_LOG_v1 {"time_micros": 1759409046965702, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.965738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 467272, prev total WAL file size 467272, number of live WAL files 2.
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.965 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[326ffa9a-ab9a-409f-b9d3-c6b27bc0c704]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694238, 'reachable_time': 43444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334208, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.967344) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(454KB)], [101(10MB)]
Oct 02 12:44:06 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046967407, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11121167, "oldest_snapshot_seqno": -1}
Oct 02 12:44:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:06.994 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fed92114-86b7-447d-9396-214ad02fc002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.037 2 DEBUG nova.compute.manager [req-91761469-ef78-4372-951b-beabfcde4597 req-e15e1236-49db-4a2b-8ccb-970b11f1de07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.038 2 DEBUG oslo_concurrency.lockutils [req-91761469-ef78-4372-951b-beabfcde4597 req-e15e1236-49db-4a2b-8ccb-970b11f1de07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.038 2 DEBUG oslo_concurrency.lockutils [req-91761469-ef78-4372-951b-beabfcde4597 req-e15e1236-49db-4a2b-8ccb-970b11f1de07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.038 2 DEBUG oslo_concurrency.lockutils [req-91761469-ef78-4372-951b-beabfcde4597 req-e15e1236-49db-4a2b-8ccb-970b11f1de07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.038 2 DEBUG nova.compute.manager [req-91761469-ef78-4372-951b-beabfcde4597 req-e15e1236-49db-4a2b-8ccb-970b11f1de07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Processing event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7137 keys, 9260306 bytes, temperature: kUnknown
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047046316, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9260306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9214667, "index_size": 26711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 186049, "raw_average_key_size": 26, "raw_value_size": 9089235, "raw_average_value_size": 1273, "num_data_blocks": 1045, "num_entries": 7137, "num_filter_entries": 7137, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.046628) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9260306 bytes
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.050589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.8 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(43.8) write-amplify(19.9) OK, records in: 7650, records dropped: 513 output_compression: NoCompression
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.050618) EVENT_LOG_v1 {"time_micros": 1759409047050606, "job": 60, "event": "compaction_finished", "compaction_time_micros": 78997, "compaction_time_cpu_micros": 25645, "output_level": 6, "num_output_files": 1, "total_output_size": 9260306, "num_input_records": 7650, "num_output_records": 7137, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047050860, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047053195, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.053334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.053342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.053344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.053345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:44:07.053347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.059 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[91306276-be1c-4f22-a237-caace61acf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.060 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.060 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.061 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385a384c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:07 compute-0 NetworkManager[44981]: <info>  [1759409047.0633] manager: (tap385a384c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct 02 12:44:07 compute-0 kernel: tap385a384c-50: entered promiscuous mode
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.070 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385a384c-50, col_values=(('external_ids', {'iface-id': '12496c3c-f50d-4104-bfb7-81f1aa24617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:07 compute-0 ovn_controller[148123]: 2025-10-02T12:44:07Z|00585|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.075 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.076 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37ff13fa-3673-41ca-8140-b9b25279f3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.078 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.078 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'env', 'PROCESS_TAG=haproxy-385a384c-5df0-4b04-b928-517a46df04f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385a384c-5df0-4b04-b928-517a46df04f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:07.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:07.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:07 compute-0 podman[334240]: 2025-10-02 12:44:07.442377995 +0000 UTC m=+0.022827947 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:44:07 compute-0 nova_compute[256940]: 2025-10-02 12:44:07.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.560 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.3 MiB/s wr, 318 op/s
Oct 02 12:44:07 compute-0 podman[334240]: 2025-10-02 12:44:07.712429364 +0000 UTC m=+0.292879286 container create 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:44:07 compute-0 systemd[1]: Started libpod-conmon-9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1.scope.
Oct 02 12:44:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e853ca4ae7fa98cdac00bf12e8eef35c97fd3efd44c644991f1aff15f871183f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:07 compute-0 podman[334240]: 2025-10-02 12:44:07.830016914 +0000 UTC m=+0.410466846 container init 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:44:07 compute-0 podman[334240]: 2025-10-02 12:44:07.840120698 +0000 UTC m=+0.420570620 container start 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:44:07 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [NOTICE]   (334260) : New worker (334262) forked
Oct 02 12:44:07 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [NOTICE]   (334260) : Loading success.
Oct 02 12:44:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:07.922 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.078 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409049.0775456, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.078 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Started (Lifecycle Event)
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.081 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.086 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.089 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance spawned successfully.
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.090 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.152 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.155 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:09.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.194 2 DEBUG nova.compute.manager [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.194 2 DEBUG oslo_concurrency.lockutils [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.194 2 DEBUG oslo_concurrency.lockutils [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.195 2 DEBUG oslo_concurrency.lockutils [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.195 2 DEBUG nova.compute.manager [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.195 2 WARNING nova.compute.manager [req-83cfb476-2023-42a0-b43d-826e710f08b2 req-14c75504-c32f-437e-9493-a23ac1656dd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received unexpected event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with vm_state building and task_state spawning.
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.208 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.209 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.209 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.210 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.210 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.211 2 DEBUG nova.virt.libvirt.driver [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.217 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.217 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409049.0785034, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.217 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Paused (Lifecycle Event)
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.301 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.302 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.302 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.302 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.302 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.306 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.309 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409049.0857956, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.309 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Resumed (Lifecycle Event)
Oct 02 12:44:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.384 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.389 2 INFO nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Took 12.56 seconds to spawn the instance on the hypervisor.
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.389 2 DEBUG nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.395 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.488 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:44:09 compute-0 ceph-mon[73668]: pgmap v2062: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.3 MiB/s wr, 318 op/s
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.552 2 INFO nova.compute.manager [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Took 13.89 seconds to build instance.
Oct 02 12:44:09 compute-0 nova_compute[256940]: 2025-10-02 12:44:09.633 2 DEBUG oslo_concurrency.lockutils [None req-afa7c44c-4921-485c-902a-93dddbe81e3d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Oct 02 12:44:10 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:44:10 compute-0 systemd[333917]: Activating special unit Exit the Session...
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped target Main User Target.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped target Basic System.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped target Paths.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped target Sockets.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped target Timers.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:44:10 compute-0 systemd[333917]: Closed D-Bus User Message Bus Socket.
Oct 02 12:44:10 compute-0 systemd[333917]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:44:10 compute-0 systemd[333917]: Removed slice User Application Slice.
Oct 02 12:44:10 compute-0 systemd[333917]: Reached target Shutdown.
Oct 02 12:44:10 compute-0 systemd[333917]: Finished Exit the Session.
Oct 02 12:44:10 compute-0 systemd[333917]: Reached target Exit the Session.
Oct 02 12:44:10 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:44:10 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:44:10 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:44:10 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:44:10 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:44:10 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:44:10 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:44:10 compute-0 nova_compute[256940]: 2025-10-02 12:44:10.531 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:44:10 compute-0 ceph-mon[73668]: pgmap v2063: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Oct 02 12:44:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:11.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:11 compute-0 nova_compute[256940]: 2025-10-02 12:44:11.210 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:11 compute-0 nova_compute[256940]: 2025-10-02 12:44:11.291 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:11 compute-0 nova_compute[256940]: 2025-10-02 12:44:11.292 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:44:11 compute-0 nova_compute[256940]: 2025-10-02 12:44:11.293 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:44:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:44:11 compute-0 podman[334317]: 2025-10-02 12:44:11.393853576 +0000 UTC m=+0.056793103 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:44:11 compute-0 podman[334316]: 2025-10-02 12:44:11.41622127 +0000 UTC m=+0.080708258 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:44:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 310 op/s
Oct 02 12:44:11 compute-0 nova_compute[256940]: 2025-10-02 12:44:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:12.924 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:13.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:13 compute-0 ceph-mon[73668]: pgmap v2064: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 310 op/s
Oct 02 12:44:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 245 op/s
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.571 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.571 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.572 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.573 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.573 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:14 compute-0 nova_compute[256940]: 2025-10-02 12:44:14.574 2 WARNING nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrating.
Oct 02 12:44:14 compute-0 ceph-mon[73668]: pgmap v2065: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 245 op/s
Oct 02 12:44:15 compute-0 kernel: tap94d27ec1-61 (unregistering): left promiscuous mode
Oct 02 12:44:15 compute-0 NetworkManager[44981]: <info>  [1759409055.1347] device (tap94d27ec1-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:15 compute-0 ovn_controller[148123]: 2025-10-02T12:44:15Z|00586|binding|INFO|Releasing lport 94d27ec1-617c-4911-bfe7-441d064785a1 from this chassis (sb_readonly=0)
Oct 02 12:44:15 compute-0 ovn_controller[148123]: 2025-10-02T12:44:15Z|00587|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 down in Southbound
Oct 02 12:44:15 compute-0 ovn_controller[148123]: 2025-10-02T12:44:15Z|00588|binding|INFO|Removing iface tap94d27ec1-61 ovn-installed in OVS
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:15.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:15 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 02 12:44:15 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000076.scope: Consumed 14.865s CPU time.
Oct 02 12:44:15 compute-0 systemd-machined[210927]: Machine qemu-59-instance-00000076 terminated.
Oct 02 12:44:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:15.233 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5b:10 10.100.0.4'], port_security=['fa:16:3e:68:5b:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3cb3218-0640-497e-94de-2549ed7da8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=94d27ec1-617c-4911-bfe7-441d064785a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:15.235 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 94d27ec1-617c-4911-bfe7-441d064785a1 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:44:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:15.237 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:44:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:15.238 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2276c399-7445-4848-943d-3156ba7d3acf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:15.238 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore
Oct 02 12:44:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:15.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:15 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [NOTICE]   (333343) : haproxy version is 2.8.14-c23fe91
Oct 02 12:44:15 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [NOTICE]   (333343) : path to executable is /usr/sbin/haproxy
Oct 02 12:44:15 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [WARNING]  (333343) : Exiting Master process...
Oct 02 12:44:15 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [ALERT]    (333343) : Current worker (333345) exited with code 143 (Terminated)
Oct 02 12:44:15 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[333334]: [WARNING]  (333343) : All workers exited. Exiting... (0)
Oct 02 12:44:15 compute-0 systemd[1]: libpod-ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42.scope: Deactivated successfully.
Oct 02 12:44:15 compute-0 podman[334381]: 2025-10-02 12:44:15.399592033 +0000 UTC m=+0.054807762 container died ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.561 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance shutdown successfully after 26 seconds.
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.565 2 INFO nova.virt.libvirt.driver [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance destroyed successfully.
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.566 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.679 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Attempting rescue
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.680 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.684 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.684 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Creating image(s)
Oct 02 12:44:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 547 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 207 op/s
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.707 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.712 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.789 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.818 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.822 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.893 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.894 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.895 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:15 compute-0 nova_compute[256940]: 2025-10-02 12:44:15.895 2 DEBUG oslo_concurrency.lockutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b417dbc733a8ff6524a964adb086eb25d2976b50f324f636cf6f5bdd5e94094f-merged.mount: Deactivated successfully.
Oct 02 12:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42-userdata-shm.mount: Deactivated successfully.
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.048 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.055 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:16 compute-0 podman[334381]: 2025-10-02 12:44:16.135773601 +0000 UTC m=+0.790989330 container cleanup ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:44:16 compute-0 systemd[1]: libpod-conmon-ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42.scope: Deactivated successfully.
Oct 02 12:44:16 compute-0 podman[334508]: 2025-10-02 12:44:16.495469001 +0000 UTC m=+0.329945954 container remove ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.504 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[74a1fa8c-1df5-4bd9-8c60-ce6fc9f6fee7]: (4, ('Thu Oct  2 12:44:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42)\ned0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42\nThu Oct  2 12:44:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (ed0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42)\ned0d259319cda424f18c08cb0404ac0dbb7aa5f25d4565d5244aaaafc2ccbe42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.506 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[43720e67-602d-420c-90db-27a156265c9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.509 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:16 compute-0 kernel: tapf3934261-b0: left promiscuous mode
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.538 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e4dac853-f4e5-46ea-ab7b-f37ce84ff619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.564 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d087c211-7946-4d83-909c-1fbab22b8c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.566 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2556887d-2c1c-4703-9e29-99305bf38989]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.584 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bad33920-4eec-4eea-9136-9e09c3c22dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691941, 'reachable_time': 26157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334533, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.586 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:44:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:16.586 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7f0afa-ea45-4f8b-8ab1-4e880bca99f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.775 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.775 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.776 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.776 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.776 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:16 compute-0 nova_compute[256940]: 2025-10-02 12:44:16.777 2 WARNING nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrated.
Oct 02 12:44:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:17 compute-0 ceph-mon[73668]: pgmap v2066: 305 pgs: 305 active+clean; 547 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 207 op/s
Oct 02 12:44:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:17.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:17.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 175 op/s
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.697 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.698 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.903 2 DEBUG nova.compute.manager [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.904 2 DEBUG oslo_concurrency.lockutils [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.904 2 DEBUG oslo_concurrency.lockutils [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.904 2 DEBUG oslo_concurrency.lockutils [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.905 2 DEBUG nova.compute.manager [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:17 compute-0 nova_compute[256940]: 2025-10-02 12:44:17.905 2 WARNING nova.compute.manager [req-c734f739-fee9-45ce-ba64-f24f124a819e req-524e0348-b526-44a2-86c1-6c827d6a0cd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state active and task_state rescuing.
Oct 02 12:44:18 compute-0 NetworkManager[44981]: <info>  [1759409058.1271] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 02 12:44:18 compute-0 NetworkManager[44981]: <info>  [1759409058.1280] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.138 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.144 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start _get_guest_xml network_info=[{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:68:5b:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.144 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:18 compute-0 ovn_controller[148123]: 2025-10-02T12:44:18Z|00589|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.285 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.568 2 WARNING nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.576 2 DEBUG nova.virt.libvirt.host [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.577 2 DEBUG nova.virt.libvirt.host [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.580 2 DEBUG nova.virt.libvirt.host [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.580 2 DEBUG nova.virt.libvirt.host [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.582 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.582 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.582 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.583 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.583 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.583 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.583 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.584 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.584 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.584 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.585 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.585 2 DEBUG nova.virt.hardware [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.585 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.888 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:18 compute-0 nova_compute[256940]: 2025-10-02 12:44:18.932 2 INFO nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:44:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:19.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215505735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:19.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:19 compute-0 nova_compute[256940]: 2025-10-02 12:44:19.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:19 compute-0 nova_compute[256940]: 2025-10-02 12:44:19.369 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:19 compute-0 nova_compute[256940]: 2025-10-02 12:44:19.370 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:19 compute-0 ceph-mon[73668]: pgmap v2067: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 175 op/s
Oct 02 12:44:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3215505735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 115 op/s
Oct 02 12:44:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001607952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:19 compute-0 nova_compute[256940]: 2025-10-02 12:44:19.822 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:19 compute-0 nova_compute[256940]: 2025-10-02 12:44:19.823 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558284503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.331 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.333 2 DEBUG nova.virt.libvirt.vif [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-604679211',display_name='tempest-ServerRescueNegativeTestJSON-server-604679211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-604679211',id=118,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-h4w9h3c8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:45Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=f3cb3218-0640-497e-94de-2549ed7da8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:68:5b:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.334 2 DEBUG nova.network.os_vif_util [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:68:5b:10"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.335 2 DEBUG nova.network.os_vif_util [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.336 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.823 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <uuid>f3cb3218-0640-497e-94de-2549ed7da8e4</uuid>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <name>instance-00000076</name>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-604679211</nova:name>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:44:18</nova:creationTime>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <nova:port uuid="94d27ec1-617c-4911-bfe7-441d064785a1">
Oct 02 12:44:20 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <system>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="serial">f3cb3218-0640-497e-94de-2549ed7da8e4</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="uuid">f3cb3218-0640-497e-94de-2549ed7da8e4</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </system>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <os>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </os>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <features>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </features>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/f3cb3218-0640-497e-94de-2549ed7da8e4_disk.rescue">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/f3cb3218-0640-497e-94de-2549ed7da8e4_disk">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config.rescue">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:20 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:68:5b:10"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <target dev="tap94d27ec1-61"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/console.log" append="off"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <video>
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </video>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:44:20 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:44:20 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:44:20 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:44:20 compute-0 nova_compute[256940]: </domain>
Oct 02 12:44:20 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:44:20 compute-0 nova_compute[256940]: 2025-10-02 12:44:20.831 2 INFO nova.virt.libvirt.driver [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance destroyed successfully.
Oct 02 12:44:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:21.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:21 compute-0 ceph-mon[73668]: pgmap v2068: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 115 op/s
Oct 02 12:44:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1001607952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1558284503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:21.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.615 2 DEBUG nova.compute.manager [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.616 2 DEBUG oslo_concurrency.lockutils [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.616 2 DEBUG oslo_concurrency.lockutils [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.616 2 DEBUG oslo_concurrency.lockutils [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.616 2 DEBUG nova.compute.manager [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.616 2 WARNING nova.compute.manager [req-9237a903-f3ad-4f7b-9f8b-8a2b68689d31 req-6496f05b-8b8c-4041-a6c0-efbd766742c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state active and task_state rescuing.
Oct 02 12:44:21 compute-0 sudo[334610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:21 compute-0 sudo[334610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:21 compute-0 sudo[334610]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 609 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 155 op/s
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:21 compute-0 sudo[334635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:21 compute-0 sudo[334635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:21 compute-0 sudo[334635]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.983 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.984 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.984 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.984 2 DEBUG nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:68:5b:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:44:21 compute-0 nova_compute[256940]: 2025-10-02 12:44:21.985 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Using config drive
Oct 02 12:44:22 compute-0 nova_compute[256940]: 2025-10-02 12:44:22.018 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:22 compute-0 nova_compute[256940]: 2025-10-02 12:44:22.168 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:22 compute-0 nova_compute[256940]: 2025-10-02 12:44:22.300 2 DEBUG nova.objects.instance [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'keypairs' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:22 compute-0 nova_compute[256940]: 2025-10-02 12:44:22.915 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Creating config drive at /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue
Oct 02 12:44:22 compute-0 nova_compute[256940]: 2025-10-02 12:44:22.920 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0s8hnw7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:23 compute-0 nova_compute[256940]: 2025-10-02 12:44:23.055 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0s8hnw7" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:23 compute-0 nova_compute[256940]: 2025-10-02 12:44:23.090 2 DEBUG nova.storage.rbd_utils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:23 compute-0 nova_compute[256940]: 2025-10-02 12:44:23.094 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:23.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:23 compute-0 ceph-mon[73668]: pgmap v2069: 305 pgs: 305 active+clean; 609 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 155 op/s
Oct 02 12:44:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.4 MiB/s wr, 121 op/s
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:a6:e5 10.100.0.12
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:a6:e5 10.100.0.12
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.220 2 DEBUG nova.compute.manager [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.220 2 DEBUG nova.compute.manager [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing instance network info cache due to event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.221 2 DEBUG oslo_concurrency.lockutils [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.221 2 DEBUG oslo_concurrency.lockutils [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.221 2 DEBUG nova.network.neutron [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.245 2 DEBUG oslo_concurrency.processutils [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue f3cb3218-0640-497e-94de-2549ed7da8e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.246 2 INFO nova.virt.libvirt.driver [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Deleting local config drive /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4/disk.config.rescue because it was imported into RBD.
Oct 02 12:44:24 compute-0 kernel: tap94d27ec1-61: entered promiscuous mode
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.2980] manager: (tap94d27ec1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00590|binding|INFO|Claiming lport 94d27ec1-617c-4911-bfe7-441d064785a1 for this chassis.
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00591|binding|INFO|94d27ec1-617c-4911-bfe7-441d064785a1: Claiming fa:16:3e:68:5b:10 10.100.0.4
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.314 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.316 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.316 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00592|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 ovn-installed in OVS
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00593|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 up in Southbound
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.324 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5b:10 10.100.0.4'], port_security=['fa:16:3e:68:5b:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3cb3218-0640-497e-94de-2549ed7da8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=94d27ec1-617c-4911-bfe7-441d064785a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.326 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 94d27ec1-617c-4911-bfe7-441d064785a1 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.327 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:44:24 compute-0 systemd-udevd[334731]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:24 compute-0 systemd-machined[210927]: New machine qemu-61-instance-00000076.
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.340 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[112d7912-0724-415c-9f2b-8bb0316d46c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.341 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.343 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.343 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c25028c-34de-471d-b509-dad6e2c84369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.344 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0cdf94-9c06-4e9e-a149-e234ac7b311f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.3462] device (tap94d27ec1-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.3473] device (tap94d27ec1-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:44:24 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000076.
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.359 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b267aa56-c009-4220-a3ce-1ed0158d6612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.391 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f5885e-2105-4a3a-a798-4e6a9a8a128f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.433 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9139c66a-3167-4961-b72a-414f41284d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.4419] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Oct 02 12:44:24 compute-0 systemd-udevd[334735]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.445 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9845ee1b-31b9-4d08-9572-ac747706ed2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.489 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[362e6dca-c3ff-42e8-8000-e8e58f8780fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.495 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fbec066a-9cb3-4e5d-83aa-3d6509430cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.502 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.502 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.502 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.5302] device (tapf3934261-b0): carrier: link connected
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.538 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[88a5ecae-7bf2-4f8f-b55d-77ff9661a83e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.563 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e284d8-3401-44c0-b71b-8cc984035d84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696001, 'reachable_time': 23895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334765, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.586 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5145c606-fbbb-447e-bd81-ab3aff9725f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696001, 'tstamp': 696001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334766, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.607 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[376524dd-811a-4f48-9720-5386df3ce3ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696001, 'reachable_time': 23895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334768, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.645 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b959d5ed-6158-413d-8dc5-c2ca030c1055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[074f0f90-b43e-4318-b1bc-c21702253a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.731 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.731 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.732 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:24 compute-0 kernel: tapf3934261-b0: entered promiscuous mode
Oct 02 12:44:24 compute-0 NetworkManager[44981]: <info>  [1759409064.7923] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.796 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.799 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:44:24 compute-0 ovn_controller[148123]: 2025-10-02T12:44:24Z|00594|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.800 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23779c79-e9cd-4586-acf1-2bae548c9a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.801 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:44:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:24.803 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:44:24 compute-0 nova_compute[256940]: 2025-10-02 12:44:24.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-0 ceph-mon[73668]: pgmap v2070: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.4 MiB/s wr, 121 op/s
Oct 02 12:44:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:25.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:25 compute-0 podman[334800]: 2025-10-02 12:44:25.180307665 +0000 UTC m=+0.027376676 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:44:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:25.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:25 compute-0 podman[334800]: 2025-10-02 12:44:25.410292789 +0000 UTC m=+0.257361780 container create 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:44:25 compute-0 systemd[1]: Started libpod-conmon-5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20.scope.
Oct 02 12:44:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00f3c1346ce8398951db5dc4297454dc63763e8a0cdf455a258e0194c797935/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:25 compute-0 podman[334800]: 2025-10-02 12:44:25.635456607 +0000 UTC m=+0.482525608 container init 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:44:25 compute-0 podman[334800]: 2025-10-02 12:44:25.641693469 +0000 UTC m=+0.488762460 container start 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:44:25 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [NOTICE]   (334834) : New worker (334839) forked
Oct 02 12:44:25 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [NOTICE]   (334834) : Loading success.
Oct 02 12:44:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 625 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 685 KiB/s rd, 5.5 MiB/s wr, 125 op/s
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:26.479 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:26.479 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:26.480 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:26 compute-0 nova_compute[256940]: 2025-10-02 12:44:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:27.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:27 compute-0 ceph-mon[73668]: pgmap v2071: 305 pgs: 305 active+clean; 625 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 685 KiB/s rd, 5.5 MiB/s wr, 125 op/s
Oct 02 12:44:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:27.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:27 compute-0 nova_compute[256940]: 2025-10-02 12:44:27.391 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for f3cb3218-0640-497e-94de-2549ed7da8e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:44:27 compute-0 nova_compute[256940]: 2025-10-02 12:44:27.392 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409067.3915033, f3cb3218-0640-497e-94de-2549ed7da8e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:27 compute-0 nova_compute[256940]: 2025-10-02 12:44:27.392 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Resumed (Lifecycle Event)
Oct 02 12:44:27 compute-0 nova_compute[256940]: 2025-10-02 12:44:27.396 2 DEBUG nova.compute.manager [None req-4fa684cd-2ab1-45fc-88f5-48c2b1079d6a b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 4.7 MiB/s wr, 138 op/s
Oct 02 12:44:28 compute-0 podman[334891]: 2025-10-02 12:44:28.390196738 +0000 UTC m=+0.056830845 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:44:28 compute-0 podman[334892]: 2025-10-02 12:44:28.444330991 +0000 UTC m=+0.109736816 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:44:28
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 02 12:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:44:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.293 2 DEBUG nova.compute.manager [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.293 2 DEBUG oslo_concurrency.lockutils [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.294 2 DEBUG oslo_concurrency.lockutils [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.294 2 DEBUG oslo_concurrency.lockutils [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.294 2 DEBUG nova.compute.manager [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.294 2 WARNING nova.compute.manager [req-291bfbee-3228-49e0-bf3b-50e9b8bbae00 req-a71be6d5-9a48-4d6c-9173-07fee2520b50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state active and task_state rescuing.
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.311 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.314 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.355 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409067.3917017, f3cb3218-0640-497e-94de-2549ed7da8e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.355 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Started (Lifecycle Event)
Oct 02 12:44:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:29.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:29 compute-0 ceph-mon[73668]: pgmap v2072: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 4.7 MiB/s wr, 138 op/s
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.392 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:29 compute-0 nova_compute[256940]: 2025-10-02 12:44:29.395 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:44:30 compute-0 nova_compute[256940]: 2025-10-02 12:44:30.866 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:30 compute-0 nova_compute[256940]: 2025-10-02 12:44:30.919 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:30 compute-0 nova_compute[256940]: 2025-10-02 12:44:30.922 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:30 compute-0 nova_compute[256940]: 2025-10-02 12:44:30.923 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.145 2 DEBUG os_brick.utils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.145 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.158 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.158 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[89376311-fcf4-4035-b3d4-f61b3d1e8c71]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.159 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.168 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.169 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[016e8208-0408-4ae4-b718-9c5728cc5b09]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.170 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.181 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.181 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[adedd6d9-2d59-4d06-a377-b22ffdd52eeb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.183 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8c0704-8b86-4c36-a543-539340ce9c51]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.183 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.216 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.219 2 DEBUG os_brick.initiator.connectors.lightos [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.219 2 DEBUG os_brick.initiator.connectors.lightos [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.220 2 DEBUG os_brick.initiator.connectors.lightos [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.220 2 DEBUG os_brick.utils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:44:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:31 compute-0 ceph-mon[73668]: pgmap v2073: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.511 2 DEBUG nova.network.neutron [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updated VIF entry in instance network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.512 2 DEBUG nova.network.neutron [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.622 2 DEBUG oslo_concurrency.lockutils [req-d6335713-68b3-4e45-8abd-b8a85c836fec req-9b6c8952-589c-4d26-b7a7-72e21ecbadb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.732 2 DEBUG nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.733 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.733 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.733 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.733 2 DEBUG nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:31 compute-0 nova_compute[256940]: 2025-10-02 12:44:31.734 2 WARNING nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state rescued and task_state None.
Oct 02 12:44:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 02 12:44:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 02 12:44:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 02 12:44:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2114306074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:32 compute-0 ceph-mon[73668]: pgmap v2074: 305 pgs: 305 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.017 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.018 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.076 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.099 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.101 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.101 2 INFO nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating image(s)
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.101 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.102 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Ensure instance console log exists: /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.102 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.103 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.103 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.106 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start _get_guest_xml network_info=[{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '173830cb-12bb-4e1a-ba80-088da01ad107', 'attached_at': '2025-10-02T12:44:32.000000', 'detached_at': '', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'serial': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': 'eccd3a32-3cb5-4945-a572-dafcef07df2b', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.110 2 WARNING nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.116 2 DEBUG nova.virt.libvirt.host [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.117 2 DEBUG nova.virt.libvirt.host [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.121 2 DEBUG nova.virt.libvirt.host [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.121 2 DEBUG nova.virt.libvirt.host [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.122 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.122 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.123 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.123 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.123 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.123 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.124 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.124 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.124 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.124 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.125 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.125 2 DEBUG nova.virt.hardware [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.125 2 DEBUG nova.objects.instance [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:33.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.236 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:33.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:44:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191314356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.695 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.751 2 DEBUG nova.virt.libvirt.vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.751 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.752 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.756 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <uuid>173830cb-12bb-4e1a-ba80-088da01ad107</uuid>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <name>instance-00000077</name>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherA-server-716874932</nova:name>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:44:33</nova:creationTime>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <nova:port uuid="ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb">
Oct 02 12:44:33 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <system>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="serial">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="uuid">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </system>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <os>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </os>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <features>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </features>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/173830cb-12bb-4e1a-ba80-088da01ad107_disk.config">
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831">
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:44:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <serial>fdc5e1d9-2228-4ec0-a6bb-8605f6207831</serial>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d6:35:d6"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <target dev="tapea4a4acf-33"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log" append="off"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <video>
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </video>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:44:33 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:44:33 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:44:33 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:44:33 compute-0 nova_compute[256940]: </domain>
Oct 02 12:44:33 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.757 2 DEBUG nova.virt.libvirt.vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.757 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.758 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.758 2 DEBUG os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea4a4acf-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea4a4acf-33, col_values=(('external_ids', {'iface-id': 'ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:35:d6', 'vm-uuid': '173830cb-12bb-4e1a-ba80-088da01ad107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 NetworkManager[44981]: <info>  [1759409073.7680] manager: (tapea4a4acf-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-0 nova_compute[256940]: 2025-10-02 12:44:33.775 2 INFO os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')
Oct 02 12:44:34 compute-0 ceph-mon[73668]: osdmap e281: 3 total, 3 up, 3 in
Oct 02 12:44:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2114306074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2191314356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.114 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.114 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.114 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:d6:35:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.115 2 INFO nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Using config drive
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.2044] manager: (tapea4a4acf-33): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Oct 02 12:44:34 compute-0 kernel: tapea4a4acf-33: entered promiscuous mode
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 ovn_controller[148123]: 2025-10-02T12:44:34Z|00595|binding|INFO|Claiming lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for this chassis.
Oct 02 12:44:34 compute-0 ovn_controller[148123]: 2025-10-02T12:44:34Z|00596|binding|INFO|ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb: Claiming fa:16:3e:d6:35:d6 10.100.0.13
Oct 02 12:44:34 compute-0 ovn_controller[148123]: 2025-10-02T12:44:34Z|00597|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb ovn-installed in OVS
Oct 02 12:44:34 compute-0 ovn_controller[148123]: 2025-10-02T12:44:34Z|00598|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb up in Southbound
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.240 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.241 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.242 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:44:34 compute-0 systemd-machined[210927]: New machine qemu-62-instance-00000077.
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.256 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9e619cc9-99dc-43fd-ab28-00184a0a5cf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.257 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:44:34 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000077.
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.258 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.258 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c5374c11-5b83-4b88-b9c0-8c94ae039a22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.260 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[70fe39ab-6a40-4794-ad94-408445717dda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 systemd-udevd[335021]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.272 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[74352e45-6bc3-4c42-8948-08e6a19ec1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.2832] device (tapea4a4acf-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.2845] device (tapea4a4acf-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.287 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cce2b00a-84ee-45b9-8597-d5ca2b7e22af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.323 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a17e2a2d-7051-4e42-837c-1a4df82964cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.328 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a15cfe83-3722-4180-97ce-24b95e9f9a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.3308] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.374 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[48d7a8f2-2ad7-4e9e-bcba-c7016edba496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.378 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fb66ac0c-4e3d-40b8-8088-21dd1a7c6a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.4041] device (tapf3643647-70): carrier: link connected
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.417 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7f289fa2-29b5-48ce-9e0b-f865db93d010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.436 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32577944-c3af-45f2-9eb5-c0b6d18fd5a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696988, 'reachable_time': 24345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335053, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.456 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[14fd83b2-8389-41c2-88b8-980167743538]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696988, 'tstamp': 696988}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335054, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.475 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c05885c6-1dac-4f7c-a46b-2a40561cbbe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696988, 'reachable_time': 24345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335055, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.511 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba5e8b-46bf-45b4-916a-06a74da236f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.585 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6141774b-f434-4c5f-9ac8-6d126878c055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.587 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.587 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.588 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:34 compute-0 kernel: tapf3643647-70: entered promiscuous mode
Oct 02 12:44:34 compute-0 NetworkManager[44981]: <info>  [1759409074.5907] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.595 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:34 compute-0 ovn_controller[148123]: 2025-10-02T12:44:34Z|00599|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.599 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.600 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf3a6ea-77b6-494b-b4aa-003e78194905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.601 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:44:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:34.603 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:44:34 compute-0 nova_compute[256940]: 2025-10-02 12:44:34.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:35 compute-0 podman[335089]: 2025-10-02 12:44:34.939399361 +0000 UTC m=+0.025478746 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:44:35 compute-0 podman[335089]: 2025-10-02 12:44:35.139183506 +0000 UTC m=+0.225262871 container create 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:44:35 compute-0 systemd[1]: Started libpod-conmon-7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267.scope.
Oct 02 12:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5e7b6919bec675029a9dcbd7db6255e6bace2b67f7dce97013a8436dd4dc6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:44:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:35 compute-0 podman[335089]: 2025-10-02 12:44:35.230734116 +0000 UTC m=+0.316813491 container init 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:44:35 compute-0 podman[335089]: 2025-10-02 12:44:35.237402871 +0000 UTC m=+0.323482226 container start 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:44:35 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [NOTICE]   (335108) : New worker (335110) forked
Oct 02 12:44:35 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [NOTICE]   (335108) : Loading success.
Oct 02 12:44:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:35.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:35 compute-0 ceph-mon[73668]: pgmap v2076: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 12:44:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 647 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.1 MiB/s wr, 142 op/s
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.170 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.171 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.171 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.171 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.172 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.172 2 WARNING nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_finish.
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.190 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409076.1897326, 173830cb-12bb-4e1a-ba80-088da01ad107 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.190 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Resumed (Lifecycle Event)
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.192 2 DEBUG nova.compute.manager [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.196 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance running successfully.
Oct 02 12:44:36 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.201 2 DEBUG nova.virt.libvirt.guest [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.201 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.228 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.232 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.279 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.280 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409076.1925614, 173830cb-12bb-4e1a-ba80-088da01ad107 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.280 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Started (Lifecycle Event)
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.333 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.338 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.391 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:44:36 compute-0 nova_compute[256940]: 2025-10-02 12:44:36.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:36 compute-0 ceph-mon[73668]: pgmap v2077: 305 pgs: 305 active+clean; 647 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.1 MiB/s wr, 142 op/s
Oct 02 12:44:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:37.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:37.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 190 op/s
Oct 02 12:44:38 compute-0 nova_compute[256940]: 2025-10-02 12:44:38.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 02 12:44:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 12:44:39 compute-0 ceph-mon[73668]: pgmap v2078: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 190 op/s
Oct 02 12:44:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 02 12:44:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:39.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 02 12:44:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.7 MiB/s wr, 137 op/s
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.744 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.744 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.744 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.744 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.745 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:39 compute-0 nova_compute[256940]: 2025-10-02 12:44:39.745 2 WARNING nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state resized and task_state None.
Oct 02 12:44:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014020842057509772 of space, bias 1.0, pg target 4.2062526172529315 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166503815373162 of space, bias 1.0, pg target 0.6412851293504559 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004025480411315839 of space, bias 1.0, pg target 1.1915422017494883 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:44:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Oct 02 12:44:40 compute-0 ceph-mon[73668]: osdmap e282: 3 total, 3 up, 3 in
Oct 02 12:44:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 02 12:44:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 02 12:44:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:41.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:41 compute-0 nova_compute[256940]: 2025-10-02 12:44:41.652 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:41 compute-0 nova_compute[256940]: 2025-10-02 12:44:41.653 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:41 compute-0 nova_compute[256940]: 2025-10-02 12:44:41.653 2 INFO nova.compute.manager [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Shelving
Oct 02 12:44:41 compute-0 nova_compute[256940]: 2025-10-02 12:44:41.681 2 DEBUG nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:44:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 719 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 5.8 MiB/s wr, 223 op/s
Oct 02 12:44:41 compute-0 nova_compute[256940]: 2025-10-02 12:44:41.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:41 compute-0 ovn_controller[148123]: 2025-10-02T12:44:41Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:5b:10 10.100.0.4
Oct 02 12:44:41 compute-0 ceph-mon[73668]: pgmap v2080: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.7 MiB/s wr, 137 op/s
Oct 02 12:44:41 compute-0 ceph-mon[73668]: osdmap e283: 3 total, 3 up, 3 in
Oct 02 12:44:41 compute-0 sudo[335164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:41 compute-0 sudo[335164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:41 compute-0 sudo[335164]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:41 compute-0 podman[335188]: 2025-10-02 12:44:41.863132282 +0000 UTC m=+0.059379571 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:41 compute-0 sudo[335201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:41 compute-0 sudo[335201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:41 compute-0 sudo[335201]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:41 compute-0 podman[335189]: 2025-10-02 12:44:41.879335905 +0000 UTC m=+0.074032084 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 02 12:44:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:43.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:43 compute-0 ceph-mon[73668]: pgmap v2082: 305 pgs: 305 active+clean; 719 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 5.8 MiB/s wr, 223 op/s
Oct 02 12:44:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 294 op/s
Oct 02 12:44:43 compute-0 nova_compute[256940]: 2025-10-02 12:44:43.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:45 compute-0 ceph-mon[73668]: pgmap v2083: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 294 op/s
Oct 02 12:44:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1418788370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:45.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 12:44:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:45.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 12:44:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 158 KiB/s wr, 210 op/s
Oct 02 12:44:45 compute-0 kernel: tapfbc8e36a-6d (unregistering): left promiscuous mode
Oct 02 12:44:45 compute-0 NetworkManager[44981]: <info>  [1759409085.9308] device (tapfbc8e36a-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:44:45 compute-0 ovn_controller[148123]: 2025-10-02T12:44:45Z|00600|binding|INFO|Releasing lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c from this chassis (sb_readonly=0)
Oct 02 12:44:45 compute-0 ovn_controller[148123]: 2025-10-02T12:44:45Z|00601|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c down in Southbound
Oct 02 12:44:45 compute-0 nova_compute[256940]: 2025-10-02 12:44:45.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:45 compute-0 ovn_controller[148123]: 2025-10-02T12:44:45Z|00602|binding|INFO|Removing iface tapfbc8e36a-6d ovn-installed in OVS
Oct 02 12:44:45 compute-0 nova_compute[256940]: 2025-10-02 12:44:45.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:45 compute-0 nova_compute[256940]: 2025-10-02 12:44:45.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:46 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 02 12:44:46 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000079.scope: Consumed 16.239s CPU time.
Oct 02 12:44:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:46.031 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:a6:e5 10.100.0.12'], port_security=['fa:16:3e:9c:a6:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9aff2d67-195f-4081-9a1c-ba173a39af9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e5d4b7f8-4549-4722-8356-487047feb0fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:46.032 158104 INFO neutron.agent.ovn.metadata.agent [-] Port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c in datapath 385a384c-5df0-4b04-b928-517a46df04f4 unbound from our chassis
Oct 02 12:44:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:46.034 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385a384c-5df0-4b04-b928-517a46df04f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:44:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:46.036 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec50992-d133-484d-9a8d-de0ac38f7cbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:46.036 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace which is not needed anymore
Oct 02 12:44:46 compute-0 systemd-machined[210927]: Machine qemu-60-instance-00000079 terminated.
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [NOTICE]   (334260) : haproxy version is 2.8.14-c23fe91
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [NOTICE]   (334260) : path to executable is /usr/sbin/haproxy
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [WARNING]  (334260) : Exiting Master process...
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [WARNING]  (334260) : Exiting Master process...
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [ALERT]    (334260) : Current worker (334262) exited with code 143 (Terminated)
Oct 02 12:44:46 compute-0 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[334256]: [WARNING]  (334260) : All workers exited. Exiting... (0)
Oct 02 12:44:46 compute-0 systemd[1]: libpod-9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1.scope: Deactivated successfully.
Oct 02 12:44:46 compute-0 podman[335275]: 2025-10-02 12:44:46.34421507 +0000 UTC m=+0.214936104 container died 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.650 2 DEBUG nova.compute.manager [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.651 2 DEBUG oslo_concurrency.lockutils [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.651 2 DEBUG oslo_concurrency.lockutils [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.651 2 DEBUG oslo_concurrency.lockutils [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.652 2 DEBUG nova.compute.manager [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.653 2 WARNING nova.compute.manager [req-232dd4dd-c2b7-4c69-8b2c-d137ed200ab2 req-f5f41101-27a7-418c-b4c0-4b76eb269a91 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received unexpected event network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with vm_state active and task_state shelving.
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.706 2 INFO nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance shutdown successfully after 5 seconds.
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.713 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance destroyed successfully.
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.713 2 DEBUG nova.objects.instance [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'numa_topology' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:46 compute-0 nova_compute[256940]: 2025-10-02 12:44:46.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1-userdata-shm.mount: Deactivated successfully.
Oct 02 12:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e853ca4ae7fa98cdac00bf12e8eef35c97fd3efd44c644991f1aff15f871183f-merged.mount: Deactivated successfully.
Oct 02 12:44:46 compute-0 podman[335275]: 2025-10-02 12:44:46.777852633 +0000 UTC m=+0.648573667 container cleanup 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:46 compute-0 systemd[1]: libpod-conmon-9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1.scope: Deactivated successfully.
Oct 02 12:44:47 compute-0 podman[335316]: 2025-10-02 12:44:47.073849059 +0000 UTC m=+0.267283219 container remove 9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.083 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[baa318b8-c704-4f46-b36d-c0f34b7314ca]: (4, ('Thu Oct  2 12:44:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1)\n9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1\nThu Oct  2 12:44:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1)\n9aca8faea71991108e10f0e3bf4f145f7ee4c0fa59657a4abf6f3999018dc0e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.087 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4941393f-6999-4e07-8cf6-bdbe322f1f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.088 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:47 compute-0 kernel: tap385a384c-50: left promiscuous mode
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.140 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad3fc4a-c0cf-4525-864d-3b94b4680060]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.157 2 INFO nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Beginning cold snapshot process
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.178 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[77ceb0f5-da54-4531-a590-0130465437a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.180 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d02b18f1-dc8c-40a5-9992-69c90f507937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.198 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a58ebb-06e0-487f-9ed0-15f0ec05b6b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694231, 'reachable_time': 26191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335334, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d385a384c\x2d5df0\x2d4b04\x2db928\x2d517a46df04f4.mount: Deactivated successfully.
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.209 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:44:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:44:47.211 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[37ca8f3c-266e-4000-9a34-b316a2d839f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.295 2 INFO nova.compute.manager [None req-4a2d45f7-5624-4dd0-827c-799d2c3be2ce 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Get console output
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.300 2 INFO oslo.privsep.daemon [None req-4a2d45f7-5624-4dd0-827c-799d2c3be2ce 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp__db8n8k/privsep.sock']
Oct 02 12:44:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 02 12:44:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:47.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.604 2 DEBUG nova.virt.libvirt.imagebackend [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:44:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 163 KiB/s wr, 225 op/s
Oct 02 12:44:47 compute-0 ceph-mon[73668]: pgmap v2084: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 158 KiB/s wr, 210 op/s
Oct 02 12:44:47 compute-0 nova_compute[256940]: 2025-10-02 12:44:47.876 2 DEBUG nova.storage.rbd_utils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(eef84f5fcc584785b31b441aead11cb8) on rbd image(9aff2d67-195f-4081-9a1c-ba173a39af9d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.688 2 INFO oslo.privsep.daemon [None req-4a2d45f7-5624-4dd0-827c-799d2c3be2ce 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.516 21118 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.521 21118 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.524 21118 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.524 21118 INFO oslo.privsep.daemon [-] privsep daemon running as pid 21118
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.805 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:44:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 02 12:44:48 compute-0 nova_compute[256940]: 2025-10-02 12:44:48.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.059 2 DEBUG nova.compute.manager [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.059 2 DEBUG oslo_concurrency.lockutils [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.060 2 DEBUG oslo_concurrency.lockutils [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.060 2 DEBUG oslo_concurrency.lockutils [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.060 2 DEBUG nova.compute.manager [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:49 compute-0 nova_compute[256940]: 2025-10-02 12:44:49.060 2 WARNING nova.compute.manager [req-cbdea349-23ea-4535-aaf6-d26295876080 req-bf44da0b-a659-4ddc-8170-c2c052c587fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received unexpected event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 12:44:49 compute-0 ceph-mon[73668]: osdmap e284: 3 total, 3 up, 3 in
Oct 02 12:44:49 compute-0 ceph-mon[73668]: pgmap v2086: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 163 KiB/s wr, 225 op/s
Oct 02 12:44:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:44:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:49.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:44:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 138 KiB/s wr, 146 op/s
Oct 02 12:44:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 02 12:44:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 02 12:44:50 compute-0 nova_compute[256940]: 2025-10-02 12:44:50.095 2 DEBUG nova.storage.rbd_utils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] cloning vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk@eef84f5fcc584785b31b441aead11cb8 to images/3866598c-0b46-42ff-ba05-28dab62cd167 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:44:51 compute-0 ceph-mon[73668]: pgmap v2087: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 138 KiB/s wr, 146 op/s
Oct 02 12:44:51 compute-0 ceph-mon[73668]: osdmap e285: 3 total, 3 up, 3 in
Oct 02 12:44:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 52 KiB/s wr, 63 op/s
Oct 02 12:44:51 compute-0 ovn_controller[148123]: 2025-10-02T12:44:51Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:35:d6 10.100.0.13
Oct 02 12:44:51 compute-0 nova_compute[256940]: 2025-10-02 12:44:51.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:51 compute-0 nova_compute[256940]: 2025-10-02 12:44:51.951 2 DEBUG nova.storage.rbd_utils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] flattening images/3866598c-0b46-42ff-ba05-28dab62cd167 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:44:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:53 compute-0 ceph-mon[73668]: pgmap v2089: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 52 KiB/s wr, 63 op/s
Oct 02 12:44:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 202 KiB/s wr, 100 op/s
Oct 02 12:44:53 compute-0 nova_compute[256940]: 2025-10-02 12:44:53.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:54 compute-0 nova_compute[256940]: 2025-10-02 12:44:54.015 2 DEBUG nova.storage.rbd_utils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] removing snapshot(eef84f5fcc584785b31b441aead11cb8) on rbd image(9aff2d67-195f-4081-9a1c-ba173a39af9d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 02 12:44:54 compute-0 ceph-mon[73668]: pgmap v2090: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 202 KiB/s wr, 100 op/s
Oct 02 12:44:55 compute-0 sudo[335470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:55 compute-0 sudo[335470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-0 sudo[335470]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-0 sudo[335495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:55 compute-0 sudo[335495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-0 sudo[335495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 02 12:44:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 02 12:44:55 compute-0 sudo[335520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:55 compute-0 sudo[335520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-0 sudo[335520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:55 compute-0 sudo[335545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:44:55 compute-0 sudo[335545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:55 compute-0 nova_compute[256940]: 2025-10-02 12:44:55.497 2 DEBUG nova.storage.rbd_utils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(snap) on rbd image(3866598c-0b46-42ff-ba05-28dab62cd167) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:44:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 767 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.0 MiB/s wr, 148 op/s
Oct 02 12:44:55 compute-0 sudo[335545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 02 12:44:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 02 12:44:56 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 02 12:44:56 compute-0 ceph-mon[73668]: osdmap e286: 3 total, 3 up, 3 in
Oct 02 12:44:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3079862474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:56 compute-0 nova_compute[256940]: 2025-10-02 12:44:56.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:44:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:44:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.6 MiB/s wr, 235 op/s
Oct 02 12:44:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:58 compute-0 ceph-mon[73668]: pgmap v2092: 305 pgs: 305 active+clean; 767 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.0 MiB/s wr, 148 op/s
Oct 02 12:44:58 compute-0 ceph-mon[73668]: osdmap e287: 3 total, 3 up, 3 in
Oct 02 12:44:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2297640210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/865932368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:58 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:44:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:44:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:44:58 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:44:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:44:58 compute-0 nova_compute[256940]: 2025-10-02 12:44:58.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 48153380-ff11-4116-9e1d-74294ec984b6 does not exist
Oct 02 12:44:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a3acaf2c-c954-4acb-9de0-a88e1cda86f0 does not exist
Oct 02 12:44:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8f9e7d31-849e-4b76-8a05-02a62a2478c8 does not exist
Oct 02 12:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:44:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:44:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:44:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:59 compute-0 sudo[335622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:59 compute-0 sudo[335622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:59 compute-0 sudo[335622]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:59 compute-0 sudo[335659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:59 compute-0 sudo[335659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:59 compute-0 podman[335646]: 2025-10-02 12:44:59.302684543 +0000 UTC m=+0.056065921 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:44:59 compute-0 sudo[335659]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:59 compute-0 ceph-mon[73668]: pgmap v2094: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.6 MiB/s wr, 235 op/s
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:44:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:59 compute-0 podman[335647]: 2025-10-02 12:44:59.357390899 +0000 UTC m=+0.109535206 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:44:59 compute-0 sudo[335715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:59 compute-0 sudo[335715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:59 compute-0 sudo[335715]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:44:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:59.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:59 compute-0 sudo[335744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:44:59 compute-0 sudo[335744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.5 MiB/s wr, 203 op/s
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.750552152 +0000 UTC m=+0.040896682 container create 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:44:59 compute-0 systemd[1]: Started libpod-conmon-5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa.scope.
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.732005065 +0000 UTC m=+0.022349605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:44:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.844228909 +0000 UTC m=+0.134573459 container init 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.851958137 +0000 UTC m=+0.142302657 container start 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.855274062 +0000 UTC m=+0.145618612 container attach 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:44:59 compute-0 systemd[1]: libpod-5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa.scope: Deactivated successfully.
Oct 02 12:44:59 compute-0 elated_gould[335827]: 167 167
Oct 02 12:44:59 compute-0 conmon[335827]: conmon 5277c89afec64664bb0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa.scope/container/memory.events
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.85868366 +0000 UTC m=+0.149028220 container died 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9db5a30be2f2bd6e8514069377a824ec818f28c1dd897bb3cbe504e3d2592e1-merged.mount: Deactivated successfully.
Oct 02 12:44:59 compute-0 podman[335811]: 2025-10-02 12:44:59.897303122 +0000 UTC m=+0.187647642 container remove 5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:44:59 compute-0 systemd[1]: libpod-conmon-5277c89afec64664bb0c6471e1b663d5a3f964b327b10e2c3c620197b2430bfa.scope: Deactivated successfully.
Oct 02 12:45:00 compute-0 podman[335851]: 2025-10-02 12:45:00.051646998 +0000 UTC m=+0.033978404 container create e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 12:45:00 compute-0 systemd[1]: Started libpod-conmon-e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0.scope.
Oct 02 12:45:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:00 compute-0 podman[335851]: 2025-10-02 12:45:00.037197187 +0000 UTC m=+0.019528613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:00 compute-0 podman[335851]: 2025-10-02 12:45:00.137810302 +0000 UTC m=+0.120141708 container init e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:45:00 compute-0 podman[335851]: 2025-10-02 12:45:00.14473453 +0000 UTC m=+0.127065936 container start e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:45:00 compute-0 podman[335851]: 2025-10-02 12:45:00.148540338 +0000 UTC m=+0.130871744 container attach e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:45:00 compute-0 ceph-mon[73668]: pgmap v2095: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.5 MiB/s wr, 203 op/s
Oct 02 12:45:01 compute-0 kind_edison[335867]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:45:01 compute-0 kind_edison[335867]: --> relative data size: 1.0
Oct 02 12:45:01 compute-0 kind_edison[335867]: --> All data devices are unavailable
Oct 02 12:45:01 compute-0 systemd[1]: libpod-e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0.scope: Deactivated successfully.
Oct 02 12:45:01 compute-0 podman[335851]: 2025-10-02 12:45:01.036230838 +0000 UTC m=+1.018562244 container died e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-212b1ef426b5a6349ae5deb64b151dac8ac60b314c4d2dd6b11f01006be7a788-merged.mount: Deactivated successfully.
Oct 02 12:45:01 compute-0 podman[335851]: 2025-10-02 12:45:01.092389701 +0000 UTC m=+1.074721107 container remove e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:01 compute-0 systemd[1]: libpod-conmon-e90def15b001a58978cff638db8970b57f1a495078392aa1d6ed6b5dfe819cd0.scope: Deactivated successfully.
Oct 02 12:45:01 compute-0 sudo[335744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:01 compute-0 sudo[335896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:01 compute-0 sudo[335896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:01 compute-0 sudo[335896]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.216 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409086.2154481, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.217 2 INFO nova.compute.manager [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Stopped (Lifecycle Event)
Oct 02 12:45:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:01 compute-0 sudo[335921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:01 compute-0 sudo[335921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:01 compute-0 sudo[335921]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:01 compute-0 sudo[335946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:01 compute-0 sudo[335946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:01 compute-0 sudo[335946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:45:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:01.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:45:01 compute-0 sudo[335971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:45:01 compute-0 sudo[335971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.456 2 DEBUG nova.compute.manager [None req-639a7490-745f-4a5f-abe6-0e6128319a81 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.460 2 DEBUG nova.compute.manager [None req-639a7490-745f-4a5f-abe6-0e6128319a81 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.520 2 INFO nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Snapshot image upload complete
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.521 2 DEBUG nova.compute.manager [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.4 MiB/s wr, 170 op/s
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.741865629 +0000 UTC m=+0.038207493 container create 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:01 compute-0 systemd[1]: Started libpod-conmon-9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12.scope.
Oct 02 12:45:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.818764635 +0000 UTC m=+0.115106509 container init 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.725765175 +0000 UTC m=+0.022107069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.829754977 +0000 UTC m=+0.126096841 container start 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.832904938 +0000 UTC m=+0.129246822 container attach 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:45:01 compute-0 tender_shtern[336054]: 167 167
Oct 02 12:45:01 compute-0 systemd[1]: libpod-9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12.scope: Deactivated successfully.
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.835975997 +0000 UTC m=+0.132317861 container died 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:45:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2698531315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3557431853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7334afe7c99053bf6f22dbf250d83437dbe02e105bc24760c4ae1a22499cef9-merged.mount: Deactivated successfully.
Oct 02 12:45:01 compute-0 podman[336038]: 2025-10-02 12:45:01.875127283 +0000 UTC m=+0.171469147 container remove 9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:45:01 compute-0 systemd[1]: libpod-conmon-9271afc874d26373764a58afcf4144c104921838e4d868b0a6342dcdc87b1a12.scope: Deactivated successfully.
Oct 02 12:45:01 compute-0 sudo[336070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:01 compute-0 sudo[336070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:01 compute-0 sudo[336070]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:01 compute-0 nova_compute[256940]: 2025-10-02 12:45:01.995 2 INFO nova.compute.manager [None req-639a7490-745f-4a5f-abe6-0e6128319a81 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Oct 02 12:45:02 compute-0 sudo[336096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:02 compute-0 sudo[336096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:02 compute-0 sudo[336096]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:02 compute-0 podman[336125]: 2025-10-02 12:45:02.081516756 +0000 UTC m=+0.055099376 container create 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:45:02 compute-0 systemd[1]: Started libpod-conmon-58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f.scope.
Oct 02 12:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 02 12:45:02 compute-0 podman[336125]: 2025-10-02 12:45:02.054665757 +0000 UTC m=+0.028248407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d7f592b28231e064f3bb34153459c51a41882145e93c725f3317a54fb5500b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d7f592b28231e064f3bb34153459c51a41882145e93c725f3317a54fb5500b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d7f592b28231e064f3bb34153459c51a41882145e93c725f3317a54fb5500b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d7f592b28231e064f3bb34153459c51a41882145e93c725f3317a54fb5500b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:02 compute-0 podman[336125]: 2025-10-02 12:45:02.191343779 +0000 UTC m=+0.164926429 container init 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:45:02 compute-0 podman[336125]: 2025-10-02 12:45:02.20073092 +0000 UTC m=+0.174313540 container start 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:45:02 compute-0 podman[336125]: 2025-10-02 12:45:02.205320788 +0000 UTC m=+0.178903428 container attach 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 02 12:45:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.330 2 INFO nova.compute.manager [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Shelve offloading
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.337 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance destroyed successfully.
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.337 2 DEBUG nova.compute.manager [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.340 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.340 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.340 2 DEBUG nova.network.neutron [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.381 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.381 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.382 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.382 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.382 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053349566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.879 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.995 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.997 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:02 compute-0 nova_compute[256940]: 2025-10-02 12:45:02.997 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.002 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.002 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.005 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.005 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:03 compute-0 awesome_edison[336143]: {
Oct 02 12:45:03 compute-0 awesome_edison[336143]:     "1": [
Oct 02 12:45:03 compute-0 awesome_edison[336143]:         {
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "devices": [
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "/dev/loop3"
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             ],
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "lv_name": "ceph_lv0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "lv_size": "7511998464",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "name": "ceph_lv0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "tags": {
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.cluster_name": "ceph",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.crush_device_class": "",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.encrypted": "0",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.osd_id": "1",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.type": "block",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:                 "ceph.vdo": "0"
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             },
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "type": "block",
Oct 02 12:45:03 compute-0 awesome_edison[336143]:             "vg_name": "ceph_vg0"
Oct 02 12:45:03 compute-0 awesome_edison[336143]:         }
Oct 02 12:45:03 compute-0 awesome_edison[336143]:     ]
Oct 02 12:45:03 compute-0 awesome_edison[336143]: }
Oct 02 12:45:03 compute-0 systemd[1]: libpod-58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f.scope: Deactivated successfully.
Oct 02 12:45:03 compute-0 podman[336125]: 2025-10-02 12:45:03.082842106 +0000 UTC m=+1.056424726 container died 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d7f592b28231e064f3bb34153459c51a41882145e93c725f3317a54fb5500b-merged.mount: Deactivated successfully.
Oct 02 12:45:03 compute-0 podman[336125]: 2025-10-02 12:45:03.141454172 +0000 UTC m=+1.115036792 container remove 58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:03 compute-0 systemd[1]: libpod-conmon-58976aafd0f8539c940a9c458108367c3bfd2a14bb302a66697037eb9504879f.scope: Deactivated successfully.
Oct 02 12:45:03 compute-0 sudo[335971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.200 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.202 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3865MB free_disk=20.71859359741211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.202 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.203 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:03 compute-0 sudo[336190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:03 compute-0 sudo[336190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:03 compute-0 sudo[336190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:45:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:45:03 compute-0 ceph-mon[73668]: pgmap v2096: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.4 MiB/s wr, 170 op/s
Oct 02 12:45:03 compute-0 ceph-mon[73668]: osdmap e288: 3 total, 3 up, 3 in
Oct 02 12:45:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1186594829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3417613539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2053349566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1132442729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.310 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance f3cb3218-0640-497e-94de-2549ed7da8e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.310 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 173830cb-12bb-4e1a-ba80-088da01ad107 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.311 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 9aff2d67-195f-4081-9a1c-ba173a39af9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.311 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.311 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:45:03 compute-0 sudo[336215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:45:03 compute-0 sudo[336215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:03 compute-0 sudo[336215]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:03 compute-0 sudo[336240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:03 compute-0 sudo[336240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:03 compute-0 sudo[336240]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.404 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:03 compute-0 sudo[336265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:45:03 compute-0 sudo[336265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 161 op/s
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.777464325 +0000 UTC m=+0.047906712 container create 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:03 compute-0 systemd[1]: Started libpod-conmon-9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6.scope.
Oct 02 12:45:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1376223628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.759927855 +0000 UTC m=+0.030370262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.855161352 +0000 UTC m=+0.125603739 container init 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.862946852 +0000 UTC m=+0.133389239 container start 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.867124479 +0000 UTC m=+0.137566886 container attach 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 12:45:03 compute-0 gallant_buck[336366]: 167 167
Oct 02 12:45:03 compute-0 systemd[1]: libpod-9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6.scope: Deactivated successfully.
Oct 02 12:45:03 compute-0 conmon[336366]: conmon 9416dd9c152aaedb6e56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6.scope/container/memory.events
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.871329297 +0000 UTC m=+0.141771684 container died 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.889 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.896 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-59757b5541f392cb5ed67f2e5d13f7bd679f91553ba2b6f1aa29768bda570bed-merged.mount: Deactivated successfully.
Oct 02 12:45:03 compute-0 podman[336349]: 2025-10-02 12:45:03.912478685 +0000 UTC m=+0.182921072 container remove 9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.913 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:03 compute-0 systemd[1]: libpod-conmon-9416dd9c152aaedb6e56855cc575e6e7119253835c9b160dc0d318cd5cf3acd6.scope: Deactivated successfully.
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.939 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:45:03 compute-0 nova_compute[256940]: 2025-10-02 12:45:03.939 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:04 compute-0 podman[336391]: 2025-10-02 12:45:04.135886505 +0000 UTC m=+0.091707627 container create 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:04 compute-0 podman[336391]: 2025-10-02 12:45:04.07460254 +0000 UTC m=+0.030423682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:45:04 compute-0 systemd[1]: Started libpod-conmon-24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6.scope.
Oct 02 12:45:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48145f15b44a49dd0c43175f1a833ce6e57d7903672838c23a0e65edea6e1824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48145f15b44a49dd0c43175f1a833ce6e57d7903672838c23a0e65edea6e1824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48145f15b44a49dd0c43175f1a833ce6e57d7903672838c23a0e65edea6e1824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48145f15b44a49dd0c43175f1a833ce6e57d7903672838c23a0e65edea6e1824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:04 compute-0 podman[336391]: 2025-10-02 12:45:04.333480172 +0000 UTC m=+0.289301314 container init 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:45:04 compute-0 podman[336391]: 2025-10-02 12:45:04.342075073 +0000 UTC m=+0.297896205 container start 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:45:04 compute-0 podman[336391]: 2025-10-02 12:45:04.348360844 +0000 UTC m=+0.304181966 container attach 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:45:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1376223628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:04 compute-0 nova_compute[256940]: 2025-10-02 12:45:04.939 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:04 compute-0 nova_compute[256940]: 2025-10-02 12:45:04.940 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:05 compute-0 laughing_turing[336408]: {
Oct 02 12:45:05 compute-0 laughing_turing[336408]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:45:05 compute-0 laughing_turing[336408]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:45:05 compute-0 laughing_turing[336408]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:45:05 compute-0 laughing_turing[336408]:         "osd_id": 1,
Oct 02 12:45:05 compute-0 laughing_turing[336408]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:45:05 compute-0 laughing_turing[336408]:         "type": "bluestore"
Oct 02 12:45:05 compute-0 laughing_turing[336408]:     }
Oct 02 12:45:05 compute-0 laughing_turing[336408]: }
Oct 02 12:45:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:05 compute-0 systemd[1]: libpod-24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6.scope: Deactivated successfully.
Oct 02 12:45:05 compute-0 podman[336391]: 2025-10-02 12:45:05.291743535 +0000 UTC m=+1.247564657 container died 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-48145f15b44a49dd0c43175f1a833ce6e57d7903672838c23a0e65edea6e1824-merged.mount: Deactivated successfully.
Oct 02 12:45:05 compute-0 podman[336391]: 2025-10-02 12:45:05.345226999 +0000 UTC m=+1.301048121 container remove 24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:05 compute-0 systemd[1]: libpod-conmon-24fc90fdda51a632510abefcda34d64b80cb9688b3d09677a579eccfd0e480b6.scope: Deactivated successfully.
Oct 02 12:45:05 compute-0 sudo[336265]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:05.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:45:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:45:05 compute-0 ceph-mon[73668]: pgmap v2098: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 161 op/s
Oct 02 12:45:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 82bb5531-5f82-4444-9e30-03808512b8a1 does not exist
Oct 02 12:45:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6523a84b-a07a-4901-8c1a-47ddb771e420 does not exist
Oct 02 12:45:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e99cbc8d-b3f1-4f81-81ec-c577e8193925 does not exist
Oct 02 12:45:05 compute-0 sudo[336444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:05 compute-0 sudo[336444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:05 compute-0 sudo[336444]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:05 compute-0 sudo[336469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:45:05 compute-0 sudo[336469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:05 compute-0 sudo[336469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.362 2 DEBUG nova.network.neutron [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.391 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 02 12:45:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 02 12:45:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.694 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.694 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.695 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.695 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.695 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.696 2 INFO nova.compute.manager [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Terminating instance
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.697 2 DEBUG nova.compute.manager [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:06 compute-0 kernel: tapea4a4acf-33 (unregistering): left promiscuous mode
Oct 02 12:45:06 compute-0 NetworkManager[44981]: <info>  [1759409106.8456] device (tapea4a4acf-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:06 compute-0 ovn_controller[148123]: 2025-10-02T12:45:06Z|00603|binding|INFO|Releasing lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb from this chassis (sb_readonly=0)
Oct 02 12:45:06 compute-0 ovn_controller[148123]: 2025-10-02T12:45:06Z|00604|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb down in Southbound
Oct 02 12:45:06 compute-0 ovn_controller[148123]: 2025-10-02T12:45:06Z|00605|binding|INFO|Removing iface tapea4a4acf-33 ovn-installed in OVS
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:06 compute-0 nova_compute[256940]: 2025-10-02 12:45:06.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:06.889 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:06.891 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:45:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:06.893 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:45:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:06.895 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2e9a3c-ec4c-4f16-b6b2-6b26e87c03f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:06.896 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore
Oct 02 12:45:06 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 02 12:45:06 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000077.scope: Consumed 15.594s CPU time.
Oct 02 12:45:06 compute-0 systemd-machined[210927]: Machine qemu-62-instance-00000077 terminated.
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [NOTICE]   (335108) : haproxy version is 2.8.14-c23fe91
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [NOTICE]   (335108) : path to executable is /usr/sbin/haproxy
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [WARNING]  (335108) : Exiting Master process...
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [WARNING]  (335108) : Exiting Master process...
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [ALERT]    (335108) : Current worker (335110) exited with code 143 (Terminated)
Oct 02 12:45:07 compute-0 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[335104]: [WARNING]  (335108) : All workers exited. Exiting... (0)
Oct 02 12:45:07 compute-0 systemd[1]: libpod-7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267.scope: Deactivated successfully.
Oct 02 12:45:07 compute-0 podman[336520]: 2025-10-02 12:45:07.059282353 +0000 UTC m=+0.049241707 container died 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267-userdata-shm.mount: Deactivated successfully.
Oct 02 12:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a5e7b6919bec675029a9dcbd7db6255e6bace2b67f7dce97013a8436dd4dc6c-merged.mount: Deactivated successfully.
Oct 02 12:45:07 compute-0 podman[336520]: 2025-10-02 12:45:07.10002729 +0000 UTC m=+0.089986644 container cleanup 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:07 compute-0 systemd[1]: libpod-conmon-7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267.scope: Deactivated successfully.
Oct 02 12:45:07 compute-0 NetworkManager[44981]: <info>  [1759409107.1198] manager: (tapea4a4acf-33): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.137 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance destroyed successfully.
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.138 2 DEBUG nova.objects.instance [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.161 2 DEBUG nova.virt.libvirt.vif [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.163 2 DEBUG nova.network.os_vif_util [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.163 2 DEBUG nova.network.os_vif_util [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:07 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.164 2 DEBUG os_vif [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:45:07 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea4a4acf-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.177 2 INFO os_vif [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')
Oct 02 12:45:07 compute-0 podman[336552]: 2025-10-02 12:45:07.212179721 +0000 UTC m=+0.084936983 container remove 7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.220 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6df9fed5-e6f7-4ebc-ac5b-50f7c4c7c7f3]: (4, ('Thu Oct  2 12:45:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267)\n7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267\nThu Oct  2 12:45:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267)\n7b936f4647e83b35d77268f67a266d1ad0a6f38c717de4cf5df57ddf79186267\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.221 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[475823b0-87e5-4f5e-94cc-d6ba5660ac1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.222 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 kernel: tapf3643647-70: left promiscuous mode
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.247 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcff3f3-78a9-4c5f-895e-888af6962bee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.285 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7556af64-a07c-4cb4-bd92-933f63bfb073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.287 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[05b7aa72-a5b5-4e84-a106-0f66a6b77f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.305 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cf997b18-44a4-4d1a-86e4-53f92f021f9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696979, 'reachable_time': 23767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336602, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.309 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:45:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:07.309 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[cc139782-ef78-44e4-bdff-838f8c1a7d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:07 compute-0 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct 02 12:45:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:45:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.513 2 INFO nova.virt.libvirt.driver [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deleting instance files /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107_del
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.514 2 INFO nova.virt.libvirt.driver [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deletion of /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107_del complete
Oct 02 12:45:07 compute-0 ceph-mon[73668]: pgmap v2099: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:45:07 compute-0 ceph-mon[73668]: osdmap e289: 3 total, 3 up, 3 in
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.626 2 INFO nova.compute.manager [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.627 2 DEBUG oslo.service.loopingcall [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.627 2 DEBUG nova.compute.manager [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:45:07 compute-0 nova_compute[256940]: 2025-10-02 12:45:07.627 2 DEBUG nova.network.neutron [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:45:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 185 op/s
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:08.336 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:08.337 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.344 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance destroyed successfully.
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.344 2 DEBUG nova.objects.instance [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'resources' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.366 2 DEBUG nova.virt.libvirt.vif [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJqnSC5dEFNVxJNe6UAIIaljTk9QXiRqWs9XkOwP1Uo3z0m7kLVKnpN3LhUWVriRnpghb9/lFHsZ1jstgRcNV8lxwQ1W9dxXw6nQRynMb+rfh9iIE+CgS9POWn5d32lCvQ==',key_name='tempest-keypair-1498791303',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:45:01.521455',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3866598c-0b46-42ff-ba05-28dab62cd167'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": 
"385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.367 2 DEBUG nova.network.os_vif_util [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.367 2 DEBUG nova.network.os_vif_util [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.368 2 DEBUG os_vif [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbc8e36a-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-0 nova_compute[256940]: 2025-10-02 12:45:08.374 2 INFO os_vif [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d')
Oct 02 12:45:08 compute-0 ceph-mon[73668]: pgmap v2101: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 185 op/s
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.026 2 INFO nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deleting instance files /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d_del
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.026 2 INFO nova.virt.libvirt.driver [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deletion of /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d_del complete
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.139 2 DEBUG nova.network.neutron [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.171 2 INFO nova.scheduler.client.report [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Deleted allocations for instance 9aff2d67-195f-4081-9a1c-ba173a39af9d
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.228 2 INFO nova.compute.manager [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 1.60 seconds to deallocate network for instance.
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.237 2 DEBUG nova.compute.manager [req-3d8e2625-47da-462a-be71-402f70bb220d req-f3da6644-dd31-41f6-ac8b-01fb4b9ecbb1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-deleted-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.237 2 INFO nova.compute.manager [req-3d8e2625-47da-462a-be71-402f70bb220d req-f3da6644-dd31-41f6-ac8b-01fb4b9ecbb1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Neutron deleted interface ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb; detaching it from the instance and deleting it from the info cache
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.238 2 DEBUG nova.network.neutron [req-3d8e2625-47da-462a-be71-402f70bb220d req-f3da6644-dd31-41f6-ac8b-01fb4b9ecbb1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.247 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.248 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.292 2 DEBUG nova.compute.manager [req-3d8e2625-47da-462a-be71-402f70bb220d req-f3da6644-dd31-41f6-ac8b-01fb4b9ecbb1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Detach interface failed, port_id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb, reason: Instance 173830cb-12bb-4e1a-ba80-088da01ad107 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.321 2 DEBUG oslo_concurrency.processutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:09.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.515 2 INFO nova.compute.manager [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 0.29 seconds to detach 1 volumes for instance.
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.519 2 DEBUG nova.compute.manager [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deleting volume: fdc5e1d9-2228-4ec0-a6bb-8605f6207831 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:45:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 164 op/s
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.740 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99679887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.792 2 DEBUG oslo_concurrency.processutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.797 2 DEBUG nova.compute.provider_tree [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.835 2 DEBUG nova.scheduler.client.report [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.872 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.875 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.891 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.891 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing instance network info cache due to event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.891 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.892 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/99679887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:09 compute-0 nova_compute[256940]: 2025-10-02 12:45:09.892 2 DEBUG nova.network.neutron [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.018 2 DEBUG oslo_concurrency.lockutils [None req-2196c55f-0783-4e06-b999-30c15cde8a66 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 28.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.194 2 DEBUG oslo_concurrency.processutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140909361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.631 2 DEBUG oslo_concurrency.processutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.636 2 DEBUG nova.compute.provider_tree [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.650 2 DEBUG nova.scheduler.client.report [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.671 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.714 2 INFO nova.scheduler.client.report [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 173830cb-12bb-4e1a-ba80-088da01ad107
Oct 02 12:45:10 compute-0 nova_compute[256940]: 2025-10-02 12:45:10.863 2 DEBUG oslo_concurrency.lockutils [None req-e0aa4c9b-07df-4cae-9cd9-a20762c77d7c 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:10 compute-0 ceph-mon[73668]: pgmap v2102: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 164 op/s
Oct 02 12:45:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/140909361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:45:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:11.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.458 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.458 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.459 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.459 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 602 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 243 op/s
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.780 2 DEBUG nova.network.neutron [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updated VIF entry in instance network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.781 2 DEBUG nova.network.neutron [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.829 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.830 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.830 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.831 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.831 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.831 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.832 2 WARNING nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state deleted and task_state None.
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.832 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.832 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.833 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.833 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.833 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:11 compute-0 nova_compute[256940]: 2025-10-02 12:45:11.833 2 WARNING nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state deleted and task_state None.
Oct 02 12:45:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 02 12:45:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 02 12:45:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 02 12:45:12 compute-0 podman[336669]: 2025-10-02 12:45:12.389915756 +0000 UTC m=+0.052761307 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:45:12 compute-0 podman[336670]: 2025-10-02 12:45:12.406393859 +0000 UTC m=+0.069544718 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.460 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.460 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.475 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.586 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.587 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.601 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.601 2 INFO nova.compute.claims [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:45:12 compute-0 nova_compute[256940]: 2025-10-02 12:45:12.862 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.144 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.161 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.162 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:45:13 compute-0 ceph-mon[73668]: pgmap v2103: 305 pgs: 305 active+clean; 602 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 243 op/s
Oct 02 12:45:13 compute-0 ceph-mon[73668]: osdmap e290: 3 total, 3 up, 3 in
Oct 02 12:45:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672774457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.299 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.304 2 DEBUG nova.compute.provider_tree [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.322 2 DEBUG nova.scheduler.client.report [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.348 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.349 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.432 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.433 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.451 2 INFO nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.474 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.579 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.580 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.581 2 INFO nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Creating image(s)
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.608 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.641 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.671 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.675 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.4 KiB/s wr, 303 op/s
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.711 2 DEBUG nova.policy [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.750 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.750 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.751 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.751 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.779 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:13 compute-0 nova_compute[256940]: 2025-10-02 12:45:13.783 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2672774457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:14 compute-0 nova_compute[256940]: 2025-10-02 12:45:14.864 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Successfully created port: 4a4cb5e2-5484-4d93-aa83-af0ef4416230 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.092 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.168 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:45:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 249 op/s
Oct 02 12:45:15 compute-0 ceph-mon[73668]: pgmap v2105: 305 pgs: 305 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.4 KiB/s wr, 303 op/s
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.954 2 DEBUG nova.objects.instance [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.966 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.967 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Ensure instance console log exists: /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.967 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.967 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:15 compute-0 nova_compute[256940]: 2025-10-02 12:45:15.968 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.192 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Successfully updated port: 4a4cb5e2-5484-4d93-aa83-af0ef4416230 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.214 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.214 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.214 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:45:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:16.340 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.487 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:45:16 compute-0 nova_compute[256940]: 2025-10-02 12:45:16.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:16 compute-0 ceph-mon[73668]: pgmap v2106: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 249 op/s
Oct 02 12:45:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3501430886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/889393379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.664 2 DEBUG nova.compute.manager [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-changed-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.664 2 DEBUG nova.compute.manager [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Refreshing instance network info cache due to event network-changed-4a4cb5e2-5484-4d93-aa83-af0ef4416230. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.664 2 DEBUG oslo_concurrency.lockutils [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.876 2 DEBUG nova.network.neutron [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Updating instance_info_cache with network_info: [{"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.917 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.919 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance network_info: |[{"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.919 2 DEBUG oslo_concurrency.lockutils [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.919 2 DEBUG nova.network.neutron [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Refreshing network info cache for port 4a4cb5e2-5484-4d93-aa83-af0ef4416230 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.923 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Start _get_guest_xml network_info=[{"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.927 2 WARNING nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.931 2 DEBUG nova.virt.libvirt.host [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.932 2 DEBUG nova.virt.libvirt.host [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.939 2 DEBUG nova.virt.libvirt.host [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.939 2 DEBUG nova.virt.libvirt.host [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.941 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.941 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.942 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.942 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.942 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.943 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.943 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.943 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.943 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.944 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.944 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.944 2 DEBUG nova.virt.hardware [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:45:17 compute-0 nova_compute[256940]: 2025-10-02 12:45:17.947 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873613455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.409 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.445 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.450 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738593402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.971 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.974 2 DEBUG nova.virt.libvirt.vif [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-935104613',display_name='tempest-DeleteServersTestJSON-server-935104613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-935104613',id=122,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-p7hkngz0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:13Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=06fbfe0f-34c5-4c90-a9d8-89adba54ad38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.974 2 DEBUG nova.network.os_vif_util [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.975 2 DEBUG nova.network.os_vif_util [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:18 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.977 2 DEBUG nova.objects.instance [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:18.999 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <uuid>06fbfe0f-34c5-4c90-a9d8-89adba54ad38</uuid>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <name>instance-0000007a</name>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:name>tempest-DeleteServersTestJSON-server-935104613</nova:name>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:45:17</nova:creationTime>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <nova:port uuid="4a4cb5e2-5484-4d93-aa83-af0ef4416230">
Oct 02 12:45:19 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <system>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="serial">06fbfe0f-34c5-4c90-a9d8-89adba54ad38</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="uuid">06fbfe0f-34c5-4c90-a9d8-89adba54ad38</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </system>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <os>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </os>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <features>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </features>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk">
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config">
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:45:19 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:0b:57:b6"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <target dev="tap4a4cb5e2-54"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/console.log" append="off"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <video>
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </video>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:45:19 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:45:19 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:45:19 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:45:19 compute-0 nova_compute[256940]: </domain>
Oct 02 12:45:19 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.001 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Preparing to wait for external event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.002 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.002 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.002 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.003 2 DEBUG nova.virt.libvirt.vif [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-935104613',display_name='tempest-DeleteServersTestJSON-server-935104613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-935104613',id=122,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-p7hkngz0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:13Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=06fbfe0f-34c5-4c90-a9d8-89adba54ad38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.003 2 DEBUG nova.network.os_vif_util [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.004 2 DEBUG nova.network.os_vif_util [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.004 2 DEBUG os_vif [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a4cb5e2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a4cb5e2-54, col_values=(('external_ids', {'iface-id': '4a4cb5e2-5484-4d93-aa83-af0ef4416230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:57:b6', 'vm-uuid': '06fbfe0f-34c5-4c90-a9d8-89adba54ad38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 NetworkManager[44981]: <info>  [1759409119.0157] manager: (tap4a4cb5e2-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.021 2 INFO os_vif [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54')
Oct 02 12:45:19 compute-0 ceph-mon[73668]: pgmap v2107: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1873613455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.079 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.079 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.079 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:0b:57:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.080 2 INFO nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Using config drive
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.106 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.522 2 DEBUG nova.network.neutron [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Updated VIF entry in instance network info cache for port 4a4cb5e2-5484-4d93-aa83-af0ef4416230. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.523 2 DEBUG nova.network.neutron [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Updating instance_info_cache with network_info: [{"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:19 compute-0 nova_compute[256940]: 2025-10-02 12:45:19.540 2 DEBUG oslo_concurrency.lockutils [req-a3f6a68d-3c3c-4156-a949-34593e3ac7b9 req-103c4a14-5a2e-44b5-9d81-7b1f66788d22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-06fbfe0f-34c5-4c90-a9d8-89adba54ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/738593402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3012165453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:20 compute-0 nova_compute[256940]: 2025-10-02 12:45:20.352 2 INFO nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Creating config drive at /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config
Oct 02 12:45:20 compute-0 nova_compute[256940]: 2025-10-02 12:45:20.357 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1vu6n4k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:20 compute-0 nova_compute[256940]: 2025-10-02 12:45:20.498 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1vu6n4k" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:20 compute-0 nova_compute[256940]: 2025-10-02 12:45:20.565 2 DEBUG nova.storage.rbd_utils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:20 compute-0 nova_compute[256940]: 2025-10-02 12:45:20.569 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:21 compute-0 ceph-mon[73668]: pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:21 compute-0 nova_compute[256940]: 2025-10-02 12:45:21.688 2 DEBUG oslo_concurrency.processutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config 06fbfe0f-34c5-4c90-a9d8-89adba54ad38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:21 compute-0 nova_compute[256940]: 2025-10-02 12:45:21.689 2 INFO nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Deleting local config drive /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38/disk.config because it was imported into RBD.
Oct 02 12:45:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 223 op/s
Oct 02 12:45:21 compute-0 kernel: tap4a4cb5e2-54: entered promiscuous mode
Oct 02 12:45:21 compute-0 NetworkManager[44981]: <info>  [1759409121.7448] manager: (tap4a4cb5e2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Oct 02 12:45:21 compute-0 ovn_controller[148123]: 2025-10-02T12:45:21Z|00606|binding|INFO|Claiming lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 for this chassis.
Oct 02 12:45:21 compute-0 nova_compute[256940]: 2025-10-02 12:45:21.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:21 compute-0 ovn_controller[148123]: 2025-10-02T12:45:21Z|00607|binding|INFO|4a4cb5e2-5484-4d93-aa83-af0ef4416230: Claiming fa:16:3e:0b:57:b6 10.100.0.9
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.795 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:57:b6 10.100.0.9'], port_security=['fa:16:3e:0b:57:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06fbfe0f-34c5-4c90-a9d8-89adba54ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4a4cb5e2-5484-4d93-aa83-af0ef4416230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.796 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4a4cb5e2-5484-4d93-aa83-af0ef4416230 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.798 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:45:21 compute-0 ovn_controller[148123]: 2025-10-02T12:45:21Z|00608|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 ovn-installed in OVS
Oct 02 12:45:21 compute-0 ovn_controller[148123]: 2025-10-02T12:45:21Z|00609|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 up in Southbound
Oct 02 12:45:21 compute-0 nova_compute[256940]: 2025-10-02 12:45:21.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:21 compute-0 nova_compute[256940]: 2025-10-02 12:45:21.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:21 compute-0 systemd-udevd[337039]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.814 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[897a5766-0899-4b86-abfd-b1fdcda56167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.815 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.817 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.817 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e1172261-9398-4501-9caa-7f3f5f2faa7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.818 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d54ae9db-0b8b-4191-8722-8854cc28514b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 systemd-machined[210927]: New machine qemu-63-instance-0000007a.
Oct 02 12:45:21 compute-0 NetworkManager[44981]: <info>  [1759409121.8277] device (tap4a4cb5e2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:45:21 compute-0 NetworkManager[44981]: <info>  [1759409121.8290] device (tap4a4cb5e2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.829 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[49ed2756-1c5f-4fd7-bfb9-5ae888a1cc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-0000007a.
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.857 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f833a3f7-eac3-4bed-980d-0701d83a906d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.890 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f60b026e-f2b5-4867-9c2e-be1d4d552915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.895 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a765c0d-1aed-42f9-aed7-8ebb901848cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 NetworkManager[44981]: <info>  [1759409121.8965] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Oct 02 12:45:21 compute-0 systemd-udevd[337043]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.936 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9c3f55-18b8-4e92-b778-509a8f8e8335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.940 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[54619e1d-c24e-4e0a-8735-08c22c055348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 NetworkManager[44981]: <info>  [1759409121.9632] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.970 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[adf45aef-4ccd-4f9f-a7d0-f55ed325cef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:21.991 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2267a0ab-4a4a-466e-8c7e-d70f7b4bf371]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701744, 'reachable_time': 24276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337072, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.007 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8216e1-b8bb-4a0b-8c85-0295aab11178]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701744, 'tstamp': 701744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337073, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.027 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5e35c1bf-09e2-48e9-b0e1-e80d37834b91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701744, 'reachable_time': 24276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337074, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.069 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46b09380-fd29-45b1-bfbf-bbb45a311315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 sudo[337076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:22 compute-0 sudo[337076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:22 compute-0 sudo[337076]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.135 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409107.1345623, 173830cb-12bb-4e1a-ba80-088da01ad107 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.136 2 INFO nova.compute.manager [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Stopped (Lifecycle Event)
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.148 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[480e00fb-bfa9-4a06-8854-ea11be480c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.151 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.151 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.151 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:22 compute-0 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:45:22 compute-0 NetworkManager[44981]: <info>  [1759409122.1541] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 02 12:45:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.202 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.202 2 DEBUG nova.compute.manager [None req-6c93e1d9-b2f0-429a-ad10-efe8d970bf84 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:22 compute-0 ovn_controller[148123]: 2025-10-02T12:45:22Z|00610|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.224 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:45:22 compute-0 sudo[337105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.226 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7d05844f-dab7-4d52-bf1d-2bceddf0fa1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.227 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:45:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:22.228 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:45:22 compute-0 sudo[337105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:22 compute-0 sudo[337105]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2690838680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:22 compute-0 podman[337169]: 2025-10-02 12:45:22.613803805 +0000 UTC m=+0.053495266 container create c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:45:22 compute-0 systemd[1]: Started libpod-conmon-c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a.scope.
Oct 02 12:45:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:45:22 compute-0 podman[337169]: 2025-10-02 12:45:22.587038857 +0000 UTC m=+0.026730338 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc4952af7a579cb5cab2b3aaad2bbad0bdfe08a267724f68f3a638794e05e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:22 compute-0 podman[337169]: 2025-10-02 12:45:22.709298959 +0000 UTC m=+0.148990410 container init c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:45:22 compute-0 podman[337169]: 2025-10-02 12:45:22.714972784 +0000 UTC m=+0.154664235 container start c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:45:22 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [NOTICE]   (337214) : New worker (337217) forked
Oct 02 12:45:22 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [NOTICE]   (337214) : Loading success.
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.854 2 DEBUG nova.compute.manager [req-c2131938-96eb-40b2-a1ff-9321b581eb9d req-f8fac5da-4374-4f89-9778-8bf4ceb9b1d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.854 2 DEBUG oslo_concurrency.lockutils [req-c2131938-96eb-40b2-a1ff-9321b581eb9d req-f8fac5da-4374-4f89-9778-8bf4ceb9b1d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.855 2 DEBUG oslo_concurrency.lockutils [req-c2131938-96eb-40b2-a1ff-9321b581eb9d req-f8fac5da-4374-4f89-9778-8bf4ceb9b1d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.855 2 DEBUG oslo_concurrency.lockutils [req-c2131938-96eb-40b2-a1ff-9321b581eb9d req-f8fac5da-4374-4f89-9778-8bf4ceb9b1d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:22 compute-0 nova_compute[256940]: 2025-10-02 12:45:22.855 2 DEBUG nova.compute.manager [req-c2131938-96eb-40b2-a1ff-9321b581eb9d req-f8fac5da-4374-4f89-9778-8bf4ceb9b1d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Processing event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.188 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.189 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409123.1880167, 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.189 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] VM Started (Lifecycle Event)
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.192 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.195 2 INFO nova.virt.libvirt.driver [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance spawned successfully.
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.196 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.214 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.220 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.224 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.224 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.225 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.225 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.226 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.226 2 DEBUG nova.virt.libvirt.driver [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.260 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.261 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409123.1889446, 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.261 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] VM Paused (Lifecycle Event)
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.285 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.289 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409123.1914325, 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.289 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] VM Resumed (Lifecycle Event)
Oct 02 12:45:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.296 2 INFO nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Took 9.72 seconds to spawn the instance on the hypervisor.
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.297 2 DEBUG nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.308 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.312 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.338 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.368 2 INFO nova.compute.manager [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Took 10.81 seconds to build instance.
Oct 02 12:45:23 compute-0 nova_compute[256940]: 2025-10-02 12:45:23.389 2 DEBUG oslo_concurrency.lockutils [None req-1a5da2d6-b074-4b5a-98f8-fc5614e8b795 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:23 compute-0 ceph-mon[73668]: pgmap v2109: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 223 op/s
Oct 02 12:45:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/96683948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.1 MiB/s wr, 251 op/s
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.705 2 DEBUG oslo_concurrency.lockutils [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.706 2 DEBUG oslo_concurrency.lockutils [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.706 2 DEBUG nova.compute.manager [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.710 2 DEBUG nova.compute.manager [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.712 2 DEBUG nova.objects.instance [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'flavor' on Instance uuid 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.740 2 DEBUG nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:45:24 compute-0 ceph-mon[73668]: pgmap v2110: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.1 MiB/s wr, 251 op/s
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.968 2 DEBUG nova.compute.manager [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.969 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.970 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.970 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.970 2 DEBUG nova.compute.manager [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:24 compute-0 nova_compute[256940]: 2025-10-02 12:45:24.971 2 WARNING nova.compute.manager [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state active and task_state powering-off.
Oct 02 12:45:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:25.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 496 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.0 MiB/s wr, 279 op/s
Oct 02 12:45:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3855621067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:26.480 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:26.481 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:45:26.482 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:26 compute-0 nova_compute[256940]: 2025-10-02 12:45:26.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:27 compute-0 ceph-mon[73668]: pgmap v2111: 305 pgs: 305 active+clean; 496 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.0 MiB/s wr, 279 op/s
Oct 02 12:45:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3442325023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:45:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:45:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 6.2 MiB/s wr, 357 op/s
Oct 02 12:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 02 12:45:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 02 12:45:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:45:28
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images']
Oct 02 12:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-0 ceph-mon[73668]: pgmap v2112: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 6.2 MiB/s wr, 357 op/s
Oct 02 12:45:29 compute-0 ceph-mon[73668]: osdmap e291: 3 total, 3 up, 3 in
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.212 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.212 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.213 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.213 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.213 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.213 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.300 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.300 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Image id 423b8b5f-aab8-418b-8fad-d82c90818bdd yields fingerprint 472c3cad2e339908bc4a8cea12fc22c04fcd93b6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.300 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): checking
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.300 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.302 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:45:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.304 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] f3cb3218-0640-497e-94de-2549ed7da8e4 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.304 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.305 2 WARNING nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.305 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Active base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.305 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Removable base files: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.305 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.306 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.306 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 12:45:29 compute-0 nova_compute[256940]: 2025-10-02 12:45:29.306 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 12:45:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 6.8 MiB/s wr, 368 op/s
Oct 02 12:45:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3675489831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:30 compute-0 podman[337229]: 2025-10-02 12:45:30.394288047 +0000 UTC m=+0.066423088 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:45:30 compute-0 podman[337230]: 2025-10-02 12:45:30.43373335 +0000 UTC m=+0.106362644 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:45:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:31 compute-0 ceph-mon[73668]: pgmap v2114: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 6.8 MiB/s wr, 368 op/s
Oct 02 12:45:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 449 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.3 MiB/s wr, 378 op/s
Oct 02 12:45:31 compute-0 nova_compute[256940]: 2025-10-02 12:45:31.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:32 compute-0 ceph-mon[73668]: pgmap v2115: 305 pgs: 305 active+clean; 449 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.3 MiB/s wr, 378 op/s
Oct 02 12:45:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:33.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.1 MiB/s wr, 360 op/s
Oct 02 12:45:34 compute-0 nova_compute[256940]: 2025-10-02 12:45:34.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:34 compute-0 ovn_controller[148123]: 2025-10-02T12:45:34Z|00611|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:45:34 compute-0 ovn_controller[148123]: 2025-10-02T12:45:34Z|00612|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:45:34 compute-0 nova_compute[256940]: 2025-10-02 12:45:34.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:34 compute-0 nova_compute[256940]: 2025-10-02 12:45:34.787 2 DEBUG nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:45:35 compute-0 ceph-mon[73668]: pgmap v2116: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.1 MiB/s wr, 360 op/s
Oct 02 12:45:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 623 KiB/s wr, 254 op/s
Oct 02 12:45:36 compute-0 nova_compute[256940]: 2025-10-02 12:45:36.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:37 compute-0 ceph-mon[73668]: pgmap v2117: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 623 KiB/s wr, 254 op/s
Oct 02 12:45:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 02 12:45:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 02 12:45:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 02 12:45:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Oct 02 12:45:38 compute-0 ovn_controller[148123]: 2025-10-02T12:45:38Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:57:b6 10.100.0.9
Oct 02 12:45:38 compute-0 ovn_controller[148123]: 2025-10-02T12:45:38Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:57:b6 10.100.0.9
Oct 02 12:45:38 compute-0 ceph-mon[73668]: osdmap e292: 3 total, 3 up, 3 in
Oct 02 12:45:39 compute-0 nova_compute[256940]: 2025-10-02 12:45:39.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 32K writes, 132K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 10K syncs, 3.12 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6715 writes, 26K keys, 6715 commit groups, 1.0 writes per commit group, ingest: 29.45 MB, 0.05 MB/s
                                           Interval WAL: 6716 writes, 2519 syncs, 2.67 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:45:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Oct 02 12:45:39 compute-0 ceph-mon[73668]: pgmap v2119: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010197108226316769 of space, bias 1.0, pg target 3.0591324678950307 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.99929043364973e-06 of space, bias 1.0, pg target 0.0005937892587939698 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:45:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:45:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:41 compute-0 ceph-mon[73668]: pgmap v2120: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Oct 02 12:45:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 444 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 02 12:45:41 compute-0 nova_compute[256940]: 2025-10-02 12:45:41.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:42 compute-0 sudo[337281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:42 compute-0 sudo[337281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:42 compute-0 sudo[337281]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:42 compute-0 sudo[337306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:42 compute-0 sudo[337306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:42 compute-0 sudo[337306]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:42 compute-0 ceph-mon[73668]: pgmap v2121: 305 pgs: 305 active+clean; 444 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 02 12:45:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:45:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:43.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:45:43 compute-0 podman[337333]: 2025-10-02 12:45:43.391053115 +0000 UTC m=+0.056419681 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:43 compute-0 podman[337332]: 2025-10-02 12:45:43.41500023 +0000 UTC m=+0.084644246 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:45:43 compute-0 nova_compute[256940]: 2025-10-02 12:45:43.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 141 op/s
Oct 02 12:45:44 compute-0 nova_compute[256940]: 2025-10-02 12:45:44.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:45.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:45 compute-0 ceph-mon[73668]: pgmap v2122: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 141 op/s
Oct 02 12:45:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 150 op/s
Oct 02 12:45:45 compute-0 nova_compute[256940]: 2025-10-02 12:45:45.839 2 DEBUG nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:45:46 compute-0 nova_compute[256940]: 2025-10-02 12:45:46.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:47.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:47.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:47 compute-0 ceph-mon[73668]: pgmap v2123: 305 pgs: 305 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 150 op/s
Oct 02 12:45:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 173 op/s
Oct 02 12:45:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:45:48 compute-0 nova_compute[256940]: 2025-10-02 12:45:48.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:49 compute-0 nova_compute[256940]: 2025-10-02 12:45:49.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:49.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:49 compute-0 ceph-mon[73668]: pgmap v2124: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 173 op/s
Oct 02 12:45:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.0 MiB/s wr, 129 op/s
Oct 02 12:45:50 compute-0 ceph-mon[73668]: pgmap v2125: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.0 MiB/s wr, 129 op/s
Oct 02 12:45:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3365978155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/368511100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:51.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:51.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 961 KiB/s rd, 3.1 MiB/s wr, 168 op/s
Oct 02 12:45:51 compute-0 nova_compute[256940]: 2025-10-02 12:45:51.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/368511100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:52 compute-0 ceph-mon[73668]: pgmap v2126: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 961 KiB/s rd, 3.1 MiB/s wr, 168 op/s
Oct 02 12:45:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:53.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 02 12:45:54 compute-0 nova_compute[256940]: 2025-10-02 12:45:54.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:55 compute-0 ceph-mon[73668]: pgmap v2127: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 02 12:45:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 1.9 MiB/s wr, 104 op/s
Oct 02 12:45:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:45:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:45:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/328033079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:56 compute-0 nova_compute[256940]: 2025-10-02 12:45:56.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:56 compute-0 nova_compute[256940]: 2025-10-02 12:45:56.877 2 DEBUG nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:45:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.5 MiB/s wr, 108 op/s
Oct 02 12:45:57 compute-0 ceph-mon[73668]: pgmap v2128: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 1.9 MiB/s wr, 104 op/s
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:45:59 compute-0 nova_compute[256940]: 2025-10-02 12:45:59.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:59 compute-0 ceph-mon[73668]: pgmap v2129: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.5 MiB/s wr, 108 op/s
Oct 02 12:45:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:45:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:45:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:45:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:59.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 43 KiB/s wr, 72 op/s
Oct 02 12:46:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:01.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:01 compute-0 ceph-mon[73668]: pgmap v2130: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 43 KiB/s wr, 72 op/s
Oct 02 12:46:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/199098813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:01 compute-0 podman[337380]: 2025-10-02 12:46:01.377746611 +0000 UTC m=+0.046400563 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:46:01 compute-0 podman[337381]: 2025-10-02 12:46:01.432633671 +0000 UTC m=+0.092139618 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:46:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:01.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 02 12:46:01 compute-0 nova_compute[256940]: 2025-10-02 12:46:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3014393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/459826867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3153811059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/626934885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-0 sudo[337424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:02 compute-0 sudo[337424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:02 compute-0 sudo[337424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:02 compute-0 sudo[337449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:02 compute-0 sudo[337449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:02 compute-0 sudo[337449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:03 compute-0 ovn_controller[148123]: 2025-10-02T12:46:03Z|00613|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:46:03 compute-0 ovn_controller[148123]: 2025-10-02T12:46:03Z|00614|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:46:03 compute-0 nova_compute[256940]: 2025-10-02 12:46:03.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:03 compute-0 nova_compute[256940]: 2025-10-02 12:46:03.307 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:03 compute-0 ceph-mon[73668]: pgmap v2131: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 02 12:46:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2267448461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1811187702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982039176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.710 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.809 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.809 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.809 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.812 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.813 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.956 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.957 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3972MB free_disk=20.743602752685547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.958 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:04 compute-0 nova_compute[256940]: 2025-10-02 12:46:04.958 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.188 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance f3cb3218-0640-497e-94de-2549ed7da8e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.189 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.189 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.189 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:46:05 compute-0 ceph-mon[73668]: pgmap v2132: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.205 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.249 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.249 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.261 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.284 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.324 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:05.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:05.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 02 12:46:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626095779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.767 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.774 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.861 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.897 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:46:05 compute-0 nova_compute[256940]: 2025-10-02 12:46:05.898 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:06 compute-0 sudo[337521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:06 compute-0 sudo[337521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-0 sudo[337521]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-0 sudo[337546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:06 compute-0 sudo[337546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-0 sudo[337546]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-0 sudo[337571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:06 compute-0 sudo[337571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-0 sudo[337571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1982039176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1696941197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/626095779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-0 sudo[337596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:46:06 compute-0 sudo[337596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-0 sudo[337596]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-0 nova_compute[256940]: 2025-10-02 12:46:06.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:07 compute-0 sudo[337652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:07 compute-0 sudo[337652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-0 sudo[337652]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-0 sudo[337677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:07 compute-0 sudo[337677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-0 sudo[337677]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-0 sudo[337702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:07 compute-0 sudo[337702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-0 sudo[337702]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-0 sudo[337727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:46:07 compute-0 sudo[337727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:07.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:07 compute-0 sudo[337727]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:46:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:07.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:46:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:46:07 compute-0 ceph-mon[73668]: pgmap v2133: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 02 12:46:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/221783826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:46:07 compute-0 nova_compute[256940]: 2025-10-02 12:46:07.894 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:07 compute-0 nova_compute[256940]: 2025-10-02 12:46:07.894 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:07 compute-0 nova_compute[256940]: 2025-10-02 12:46:07.895 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:07 compute-0 nova_compute[256940]: 2025-10-02 12:46:07.895 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:46:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ac6bad6b-099f-47c5-ae9e-701e597d2d70 does not exist
Oct 02 12:46:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8b4fbe64-ef60-46e6-b44f-da475b98f293 does not exist
Oct 02 12:46:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a1c6b653-48d5-46a0-af66-31690c04497e does not exist
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:46:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:46:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:07 compute-0 nova_compute[256940]: 2025-10-02 12:46:07.913 2 DEBUG nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:46:07 compute-0 sudo[337772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:07 compute-0 sudo[337772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-0 sudo[337772]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:08 compute-0 sudo[337797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:08 compute-0 sudo[337797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:08 compute-0 sudo[337797]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:08 compute-0 sudo[337822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:08 compute-0 sudo[337822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:08 compute-0 sudo[337822]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:08 compute-0 sudo[337847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:46:08 compute-0 sudo[337847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.439534725 +0000 UTC m=+0.039844384 container create 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 12:46:08 compute-0 systemd[1]: Started libpod-conmon-24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c.scope.
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.41987052 +0000 UTC m=+0.020180199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.544516843 +0000 UTC m=+0.144826532 container init 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.552786906 +0000 UTC m=+0.153096565 container start 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.558205965 +0000 UTC m=+0.158515634 container attach 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 12:46:08 compute-0 frosty_franklin[337930]: 167 167
Oct 02 12:46:08 compute-0 systemd[1]: libpod-24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c.scope: Deactivated successfully.
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.561073028 +0000 UTC m=+0.161382687 container died 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-170532eb955a724b8c9dc48dd9643eb1f5823161f7a502578c4a48815f528278-merged.mount: Deactivated successfully.
Oct 02 12:46:08 compute-0 podman[337913]: 2025-10-02 12:46:08.615453836 +0000 UTC m=+0.215763495 container remove 24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_franklin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:08 compute-0 systemd[1]: libpod-conmon-24ec9b19299d39d2d016fd847d6518c6dd99b40f56a387211df50240d718807c.scope: Deactivated successfully.
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-0 ceph-mon[73668]: pgmap v2134: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:46:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:08 compute-0 podman[337955]: 2025-10-02 12:46:08.793822879 +0000 UTC m=+0.045935301 container create 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:46:08 compute-0 systemd[1]: Started libpod-conmon-86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca.scope.
Oct 02 12:46:08 compute-0 podman[337955]: 2025-10-02 12:46:08.775259412 +0000 UTC m=+0.027371834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:08 compute-0 podman[337955]: 2025-10-02 12:46:08.889633911 +0000 UTC m=+0.141746343 container init 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:08 compute-0 podman[337955]: 2025-10-02 12:46:08.895526782 +0000 UTC m=+0.147639204 container start 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:08 compute-0 podman[337955]: 2025-10-02 12:46:08.899374861 +0000 UTC m=+0.151487333 container attach 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:46:09 compute-0 nova_compute[256940]: 2025-10-02 12:46:09.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:09 compute-0 nova_compute[256940]: 2025-10-02 12:46:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:09.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Oct 02 12:46:09 compute-0 busy_albattani[337972]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:46:09 compute-0 busy_albattani[337972]: --> relative data size: 1.0
Oct 02 12:46:09 compute-0 busy_albattani[337972]: --> All data devices are unavailable
Oct 02 12:46:09 compute-0 systemd[1]: libpod-86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca.scope: Deactivated successfully.
Oct 02 12:46:09 compute-0 podman[337955]: 2025-10-02 12:46:09.769128069 +0000 UTC m=+1.021240491 container died 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:46:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38d6f9b27b7f55c4334928f8aaefe2d09a45c9b52b87274ba6071b2d9fd6197-merged.mount: Deactivated successfully.
Oct 02 12:46:09 compute-0 podman[337955]: 2025-10-02 12:46:09.99214398 +0000 UTC m=+1.244256422 container remove 86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:10 compute-0 sudo[337847]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:10 compute-0 systemd[1]: libpod-conmon-86c268341cea80464828d742574f0da60fbad474e1d97d0de8d732262e45c4ca.scope: Deactivated successfully.
Oct 02 12:46:10 compute-0 sudo[337999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:10 compute-0 sudo[337999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:10 compute-0 sudo[337999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:10 compute-0 sudo[338024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:10 compute-0 sudo[338024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:10 compute-0 sudo[338024]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:10 compute-0 sudo[338049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:10 compute-0 sudo[338049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:10 compute-0 sudo[338049]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:10 compute-0 sudo[338074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:46:10 compute-0 sudo[338074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:10.652 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:10.653 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:46:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:10.654 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:10 compute-0 podman[338140]: 2025-10-02 12:46:10.58054551 +0000 UTC m=+0.022597572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:10 compute-0 nova_compute[256940]: 2025-10-02 12:46:10.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-0 podman[338140]: 2025-10-02 12:46:10.808057695 +0000 UTC m=+0.250109737 container create 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:10 compute-0 ceph-mon[73668]: pgmap v2135: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Oct 02 12:46:10 compute-0 systemd[1]: Started libpod-conmon-7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940.scope.
Oct 02 12:46:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:10 compute-0 podman[338140]: 2025-10-02 12:46:10.976889664 +0000 UTC m=+0.418941736 container init 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:46:10 compute-0 podman[338140]: 2025-10-02 12:46:10.983561015 +0000 UTC m=+0.425613067 container start 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:46:10 compute-0 objective_meninsky[338157]: 167 167
Oct 02 12:46:10 compute-0 systemd[1]: libpod-7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940.scope: Deactivated successfully.
Oct 02 12:46:11 compute-0 podman[338140]: 2025-10-02 12:46:11.032962275 +0000 UTC m=+0.475014347 container attach 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:46:11 compute-0 podman[338140]: 2025-10-02 12:46:11.033531419 +0000 UTC m=+0.475583451 container died 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:46:11 compute-0 nova_compute[256940]: 2025-10-02 12:46:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9620be57f8a9a0bc47ffc01fe1eb57aca9e49c5fc258afe829cc8bab44593fc-merged.mount: Deactivated successfully.
Oct 02 12:46:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:11.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:11 compute-0 podman[338140]: 2025-10-02 12:46:11.521484848 +0000 UTC m=+0.963536890 container remove 7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 12:46:11 compute-0 systemd[1]: libpod-conmon-7aa70ed318bf10f963d1fe1467c815c0fb5ff4fd22242ecd90c01c9f8473e940.scope: Deactivated successfully.
Oct 02 12:46:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 02 12:46:11 compute-0 podman[338181]: 2025-10-02 12:46:11.708941975 +0000 UTC m=+0.033199484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:11 compute-0 podman[338181]: 2025-10-02 12:46:11.828075506 +0000 UTC m=+0.152332985 container create 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:11 compute-0 nova_compute[256940]: 2025-10-02 12:46:11.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:12 compute-0 systemd[1]: Started libpod-conmon-1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3.scope.
Oct 02 12:46:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1934fe700f0433ac117d90de4940d779581b1ecb98d78f7d118fda8bba6d8bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1934fe700f0433ac117d90de4940d779581b1ecb98d78f7d118fda8bba6d8bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1934fe700f0433ac117d90de4940d779581b1ecb98d78f7d118fda8bba6d8bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1934fe700f0433ac117d90de4940d779581b1ecb98d78f7d118fda8bba6d8bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:12 compute-0 podman[338181]: 2025-10-02 12:46:12.136750958 +0000 UTC m=+0.461008437 container init 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:46:12 compute-0 podman[338181]: 2025-10-02 12:46:12.145065981 +0000 UTC m=+0.469323460 container start 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:46:12 compute-0 podman[338181]: 2025-10-02 12:46:12.151850746 +0000 UTC m=+0.476108235 container attach 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:46:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:12 compute-0 awesome_payne[338199]: {
Oct 02 12:46:12 compute-0 awesome_payne[338199]:     "1": [
Oct 02 12:46:12 compute-0 awesome_payne[338199]:         {
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "devices": [
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "/dev/loop3"
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             ],
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "lv_name": "ceph_lv0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "lv_size": "7511998464",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "name": "ceph_lv0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "tags": {
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.cluster_name": "ceph",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.crush_device_class": "",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.encrypted": "0",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.osd_id": "1",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.type": "block",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:                 "ceph.vdo": "0"
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             },
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "type": "block",
Oct 02 12:46:12 compute-0 awesome_payne[338199]:             "vg_name": "ceph_vg0"
Oct 02 12:46:12 compute-0 awesome_payne[338199]:         }
Oct 02 12:46:12 compute-0 awesome_payne[338199]:     ]
Oct 02 12:46:12 compute-0 awesome_payne[338199]: }
Oct 02 12:46:12 compute-0 systemd[1]: libpod-1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3.scope: Deactivated successfully.
Oct 02 12:46:12 compute-0 podman[338181]: 2025-10-02 12:46:12.948705551 +0000 UTC m=+1.272963030 container died 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1934fe700f0433ac117d90de4940d779581b1ecb98d78f7d118fda8bba6d8bc-merged.mount: Deactivated successfully.
Oct 02 12:46:13 compute-0 podman[338181]: 2025-10-02 12:46:13.00781574 +0000 UTC m=+1.332073219 container remove 1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:13 compute-0 systemd[1]: libpod-conmon-1610ee04fb1d55cbd932c3da431fc9af8910c0b2971a470d3465a4be563c3ca3.scope: Deactivated successfully.
Oct 02 12:46:13 compute-0 sudo[338074]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 sudo[338223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:13 compute-0 sudo[338223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 sudo[338223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 sudo[338248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:13 compute-0 sudo[338248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 sudo[338248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:46:13 compute-0 sudo[338273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:13 compute-0 sudo[338273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 sudo[338273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:13 compute-0 ceph-mon[73668]: pgmap v2136: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 02 12:46:13 compute-0 sudo[338298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:46:13 compute-0 sudo[338298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.63131143 +0000 UTC m=+0.046016343 container create ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:46:13 compute-0 systemd[1]: Started libpod-conmon-ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c.scope.
Oct 02 12:46:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.61025957 +0000 UTC m=+0.024964493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.717 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.718 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.718 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:46:13 compute-0 nova_compute[256940]: 2025-10-02 12:46:13.718 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.731064754 +0000 UTC m=+0.145769717 container init ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:46:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 153 op/s
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.740197788 +0000 UTC m=+0.154902701 container start ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.747160387 +0000 UTC m=+0.161865350 container attach ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:46:13 compute-0 nice_poitras[338381]: 167 167
Oct 02 12:46:13 compute-0 systemd[1]: libpod-ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c.scope: Deactivated successfully.
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.749140778 +0000 UTC m=+0.163845711 container died ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:46:13 compute-0 podman[338380]: 2025-10-02 12:46:13.766217987 +0000 UTC m=+0.087540840 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:46:13 compute-0 podman[338377]: 2025-10-02 12:46:13.7815095 +0000 UTC m=+0.102516155 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e050fc90c7e4046eed00a174928718936df08715f67fbcc091a6b12fc6b8e37-merged.mount: Deactivated successfully.
Oct 02 12:46:13 compute-0 podman[338363]: 2025-10-02 12:46:13.836019071 +0000 UTC m=+0.250723984 container remove ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:13 compute-0 systemd[1]: libpod-conmon-ac1e11dae7407db0933985a504f0a09983f8dcb5475aef7f500caea465745e0c.scope: Deactivated successfully.
Oct 02 12:46:14 compute-0 podman[338442]: 2025-10-02 12:46:14.041340636 +0000 UTC m=+0.052640063 container create 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:46:14 compute-0 systemd[1]: Started libpod-conmon-4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504.scope.
Oct 02 12:46:14 compute-0 nova_compute[256940]: 2025-10-02 12:46:14.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:14 compute-0 podman[338442]: 2025-10-02 12:46:14.01813049 +0000 UTC m=+0.029429937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:46:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb75c8d60d444a07921c7c2709d81f7c321cb81b43176ed4e5231aa0905c2319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb75c8d60d444a07921c7c2709d81f7c321cb81b43176ed4e5231aa0905c2319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb75c8d60d444a07921c7c2709d81f7c321cb81b43176ed4e5231aa0905c2319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb75c8d60d444a07921c7c2709d81f7c321cb81b43176ed4e5231aa0905c2319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:14 compute-0 podman[338442]: 2025-10-02 12:46:14.159371749 +0000 UTC m=+0.170671196 container init 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:46:14 compute-0 podman[338442]: 2025-10-02 12:46:14.167657342 +0000 UTC m=+0.178956769 container start 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:46:14 compute-0 podman[338442]: 2025-10-02 12:46:14.175989386 +0000 UTC m=+0.187288843 container attach 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:46:15 compute-0 kind_fermat[338458]: {
Oct 02 12:46:15 compute-0 kind_fermat[338458]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:46:15 compute-0 kind_fermat[338458]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:46:15 compute-0 kind_fermat[338458]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:46:15 compute-0 kind_fermat[338458]:         "osd_id": 1,
Oct 02 12:46:15 compute-0 kind_fermat[338458]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:46:15 compute-0 kind_fermat[338458]:         "type": "bluestore"
Oct 02 12:46:15 compute-0 kind_fermat[338458]:     }
Oct 02 12:46:15 compute-0 kind_fermat[338458]: }
Oct 02 12:46:15 compute-0 systemd[1]: libpod-4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504.scope: Deactivated successfully.
Oct 02 12:46:15 compute-0 podman[338442]: 2025-10-02 12:46:15.111770642 +0000 UTC m=+1.123070129 container died 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb75c8d60d444a07921c7c2709d81f7c321cb81b43176ed4e5231aa0905c2319-merged.mount: Deactivated successfully.
Oct 02 12:46:15 compute-0 podman[338442]: 2025-10-02 12:46:15.219465069 +0000 UTC m=+1.230764496 container remove 4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:46:15 compute-0 systemd[1]: libpod-conmon-4a353dc7bdb6c4a68d661c019d15aedac375975b3ef58daa65d8cbab41e60504.scope: Deactivated successfully.
Oct 02 12:46:15 compute-0 sudo[338298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:46:15 compute-0 ceph-mon[73668]: pgmap v2137: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 153 op/s
Oct 02 12:46:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:46:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 772da12e-e9be-4f59-aef6-a6a2056b0b8b does not exist
Oct 02 12:46:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cda95e92-f50d-4cbd-b71c-e316c347ab1f does not exist
Oct 02 12:46:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cca278dc-e0ef-40f2-813d-b6b62a99e41a does not exist
Oct 02 12:46:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:15 compute-0 sudo[338494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:15 compute-0 sudo[338494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:15 compute-0 sudo[338494]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:46:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:15.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:46:15 compute-0 nova_compute[256940]: 2025-10-02 12:46:15.482 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [{"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:15 compute-0 sudo[338519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:46:15 compute-0 sudo[338519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:15 compute-0 sudo[338519]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:15 compute-0 nova_compute[256940]: 2025-10-02 12:46:15.508 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-f3cb3218-0640-497e-94de-2549ed7da8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:15 compute-0 nova_compute[256940]: 2025-10-02 12:46:15.509 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:46:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 34 KiB/s wr, 203 op/s
Oct 02 12:46:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:16 compute-0 nova_compute[256940]: 2025-10-02 12:46:16.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:17 compute-0 ceph-mon[73668]: pgmap v2138: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 34 KiB/s wr, 203 op/s
Oct 02 12:46:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:17.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:17.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 16 KiB/s wr, 276 op/s
Oct 02 12:46:18 compute-0 kernel: tap4a4cb5e2-54 (unregistering): left promiscuous mode
Oct 02 12:46:18 compute-0 NetworkManager[44981]: <info>  [1759409178.4848] device (tap4a4cb5e2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00615|binding|INFO|Releasing lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 from this chassis (sb_readonly=0)
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00616|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 down in Southbound
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00617|binding|INFO|Removing iface tap4a4cb5e2-54 ovn-installed in OVS
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.502 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:57:b6 10.100.0.9'], port_security=['fa:16:3e:0b:57:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06fbfe0f-34c5-4c90-a9d8-89adba54ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4a4cb5e2-5484-4d93-aa83-af0ef4416230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.504 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4a4cb5e2-5484-4d93-aa83-af0ef4416230 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.506 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.507 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1d6449-d5a4-4c6b-acde-99b05c4006dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.508 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 02 12:46:18 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007a.scope: Consumed 16.625s CPU time.
Oct 02 12:46:18 compute-0 systemd-machined[210927]: Machine qemu-63-instance-0000007a terminated.
Oct 02 12:46:18 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [NOTICE]   (337214) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:18 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [NOTICE]   (337214) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:18 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [WARNING]  (337214) : Exiting Master process...
Oct 02 12:46:18 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [ALERT]    (337214) : Current worker (337217) exited with code 143 (Terminated)
Oct 02 12:46:18 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[337203]: [WARNING]  (337214) : All workers exited. Exiting... (0)
Oct 02 12:46:18 compute-0 systemd[1]: libpod-c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a.scope: Deactivated successfully.
Oct 02 12:46:18 compute-0 conmon[337203]: conmon c0e218ab6e1261fdb55e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a.scope/container/memory.events
Oct 02 12:46:18 compute-0 podman[338567]: 2025-10-02 12:46:18.671044829 +0000 UTC m=+0.051084874 container died c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-35dc4952af7a579cb5cab2b3aaad2bbad0bdfe08a267724f68f3a638794e05e4-merged.mount: Deactivated successfully.
Oct 02 12:46:18 compute-0 kernel: tap4a4cb5e2-54: entered promiscuous mode
Oct 02 12:46:18 compute-0 NetworkManager[44981]: <info>  [1759409178.7174] manager: (tap4a4cb5e2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 02 12:46:18 compute-0 podman[338567]: 2025-10-02 12:46:18.718287073 +0000 UTC m=+0.098327108 container cleanup c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00618|binding|INFO|Claiming lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 for this chassis.
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00619|binding|INFO|4a4cb5e2-5484-4d93-aa83-af0ef4416230: Claiming fa:16:3e:0b:57:b6 10.100.0.9
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 kernel: tap4a4cb5e2-54 (unregistering): left promiscuous mode
Oct 02 12:46:18 compute-0 systemd-udevd[338549]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.732 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:57:b6 10.100.0.9'], port_security=['fa:16:3e:0b:57:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06fbfe0f-34c5-4c90-a9d8-89adba54ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4a4cb5e2-5484-4d93-aa83-af0ef4416230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.734 2 DEBUG nova.compute.manager [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.735 2 DEBUG oslo_concurrency.lockutils [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.735 2 DEBUG oslo_concurrency.lockutils [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.735 2 DEBUG oslo_concurrency.lockutils [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.735 2 DEBUG nova.compute.manager [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.736 2 WARNING nova.compute.manager [req-328c81b2-9085-4d36-ab36-dde44f9e6e2d req-50c61ead-a538-4df1-aee4-13c509bd2656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state active and task_state powering-off.
Oct 02 12:46:18 compute-0 systemd[1]: libpod-conmon-c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a.scope: Deactivated successfully.
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00620|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 ovn-installed in OVS
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00621|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 up in Southbound
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00622|binding|INFO|Releasing lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 from this chassis (sb_readonly=1)
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00623|if_status|INFO|Dropped 4 log messages in last 367 seconds (most recently, 367 seconds ago) due to excessive rate
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00624|if_status|INFO|Not setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 down as sb is readonly
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00625|binding|INFO|Removing iface tap4a4cb5e2-54 ovn-installed in OVS
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00626|binding|INFO|Releasing lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 from this chassis (sb_readonly=0)
Oct 02 12:46:18 compute-0 ovn_controller[148123]: 2025-10-02T12:46:18Z|00627|binding|INFO|Setting lport 4a4cb5e2-5484-4d93-aa83-af0ef4416230 down in Southbound
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.779 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:57:b6 10.100.0.9'], port_security=['fa:16:3e:0b:57:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '06fbfe0f-34c5-4c90-a9d8-89adba54ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4a4cb5e2-5484-4d93-aa83-af0ef4416230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:18 compute-0 podman[338602]: 2025-10-02 12:46:18.798435822 +0000 UTC m=+0.053679510 container remove c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.804 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8463e7a7-ce9b-4163-972e-12ac0204ebde]: (4, ('Thu Oct  2 12:46:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a)\nc0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a\nThu Oct  2 12:46:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (c0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a)\nc0e218ab6e1261fdb55e38bc11403f0c77b172ce8a9707de46956bc321d24c0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.806 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[24893270-df0a-4736-ad0f-e2e960959f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.807 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.830 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[43a03954-402c-4079-8fe4-066e8a04a4be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.862 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aed4dc8f-13e9-44d9-b3a0-3d37b57b7a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.864 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4d752dec-7f0e-4cb7-8235-07abf01be7da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.884 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[67d3ef3e-6579-4d77-b6b6-9cdf863e0fa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701736, 'reachable_time': 17727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338622, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.887 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.888 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b18796fd-1d85-42a7-a2ab-db637c0a213c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.888 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4a4cb5e2-5484-4d93-aa83-af0ef4416230 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:46:18 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.890 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.891 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[039581d9-9fbd-45c3-9ccf-0e2c878b1d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.891 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4a4cb5e2-5484-4d93-aa83-af0ef4416230 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.892 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:18.893 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c0eb8e-5fd5-49fe-8336-621fb06aa014]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.965 2 INFO nova.virt.libvirt.driver [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance shutdown successfully after 54 seconds.
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.970 2 INFO nova.virt.libvirt.driver [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance destroyed successfully.
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.970 2 DEBUG nova.objects.instance [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'numa_topology' on Instance uuid 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:18 compute-0 nova_compute[256940]: 2025-10-02 12:46:18.982 2 DEBUG nova.compute.manager [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:19 compute-0 nova_compute[256940]: 2025-10-02 12:46:19.033 2 DEBUG oslo_concurrency.lockutils [None req-beb4854b-0b27-4a67-8c76-04103ad01d9f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 54.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:19 compute-0 nova_compute[256940]: 2025-10-02 12:46:19.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:19 compute-0 nova_compute[256940]: 2025-10-02 12:46:19.503 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:19 compute-0 ceph-mon[73668]: pgmap v2139: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 16 KiB/s wr, 276 op/s
Oct 02 12:46:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 15 KiB/s wr, 243 op/s
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.575 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.575 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.576 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.576 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.576 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.577 2 INFO nova.compute.manager [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Terminating instance
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.578 2 DEBUG nova.compute.manager [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.585 2 INFO nova.virt.libvirt.driver [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Instance destroyed successfully.
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.586 2 DEBUG nova.objects.instance [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.601 2 DEBUG nova.virt.libvirt.vif [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-935104613',display_name='tempest-DeleteServersTestJSON-server-935104613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-935104613',id=122,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-p7hkngz0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:19Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=06fbfe0f-34c5-4c90-a9d8-89adba54ad38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.601 2 DEBUG nova.network.os_vif_util [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "address": "fa:16:3e:0b:57:b6", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a4cb5e2-54", "ovs_interfaceid": "4a4cb5e2-5484-4d93-aa83-af0ef4416230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.602 2 DEBUG nova.network.os_vif_util [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.602 2 DEBUG os_vif [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a4cb5e2-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.610 2 INFO os_vif [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:57:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a4cb5e2-5484-4d93-aa83-af0ef4416230,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a4cb5e2-54')
Oct 02 12:46:20 compute-0 ceph-mon[73668]: pgmap v2140: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 15 KiB/s wr, 243 op/s
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.876 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.877 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.877 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.877 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.877 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 WARNING nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state stopped and task_state deleting.
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.878 2 WARNING nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state stopped and task_state deleting.
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.879 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.880 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.880 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.881 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.881 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.881 2 WARNING nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state stopped and task_state deleting.
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.882 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.882 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.882 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.883 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-unplugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.884 2 DEBUG oslo_concurrency.lockutils [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.885 2 DEBUG nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] No waiting events found dispatching network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:20 compute-0 nova_compute[256940]: 2025-10-02 12:46:20.885 2 WARNING nova.compute.manager [req-e17c8d02-c825-483d-aa32-81ba843c7b01 req-ddf7abe8-2487-41fd-9c9c-d1ce0bb60a2b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received unexpected event network-vif-plugged-4a4cb5e2-5484-4d93-aa83-af0ef4416230 for instance with vm_state stopped and task_state deleting.
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.215 2 INFO nova.virt.libvirt.driver [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Deleting instance files /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38_del
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.215 2 INFO nova.virt.libvirt.driver [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Deletion of /var/lib/nova/instances/06fbfe0f-34c5-4c90-a9d8-89adba54ad38_del complete
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.298 2 INFO nova.compute.manager [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.299 2 DEBUG oslo.service.loopingcall [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.299 2 DEBUG nova.compute.manager [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.299 2 DEBUG nova.network.neutron [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:46:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.768 2 DEBUG nova.network.neutron [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.792 2 INFO nova.compute.manager [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Took 0.49 seconds to deallocate network for instance.
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.857 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.857 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.867 2 DEBUG nova.compute.manager [req-4609d95d-f84c-49cd-b59d-0c3464405c9b req-ca6e4c0a-8848-4bba-9b0a-3f1496a55803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Received event network-vif-deleted-4a4cb5e2-5484-4d93-aa83-af0ef4416230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:21 compute-0 nova_compute[256940]: 2025-10-02 12:46:21.929 2 DEBUG oslo_concurrency.processutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569026192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.361 2 DEBUG oslo_concurrency.processutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.367 2 DEBUG nova.compute.provider_tree [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.384 2 DEBUG nova.scheduler.client.report [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.413 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.444 2 INFO nova.scheduler.client.report [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 06fbfe0f-34c5-4c90-a9d8-89adba54ad38
Oct 02 12:46:22 compute-0 auditd[701]: Audit daemon rotating log files
Oct 02 12:46:22 compute-0 nova_compute[256940]: 2025-10-02 12:46:22.558 2 DEBUG oslo_concurrency.lockutils [None req-9d9e57b5-2a00-48fa-bd14-37d9c4ebe660 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "06fbfe0f-34c5-4c90-a9d8-89adba54ad38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:22 compute-0 sudo[338665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:22 compute-0 sudo[338665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:22 compute-0 sudo[338665]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:22 compute-0 sudo[338691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:22 compute-0 sudo[338691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:22 compute-0 sudo[338691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:23 compute-0 nova_compute[256940]: 2025-10-02 12:46:23.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:23 compute-0 nova_compute[256940]: 2025-10-02 12:46:23.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:46:23 compute-0 ceph-mon[73668]: pgmap v2141: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Oct 02 12:46:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/569026192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:23 compute-0 nova_compute[256940]: 2025-10-02 12:46:23.224 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:46:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:23.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:23.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Oct 02 12:46:25 compute-0 ceph-mon[73668]: pgmap v2142: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Oct 02 12:46:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:25.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:25.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:25 compute-0 nova_compute[256940]: 2025-10-02 12:46:25.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:26.482 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:26.483 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:26.483 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:26 compute-0 nova_compute[256940]: 2025-10-02 12:46:26.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:27 compute-0 ceph-mon[73668]: pgmap v2143: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Oct 02 12:46:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.072 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.073 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.094 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.179 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.179 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.188 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.188 2 INFO nova.compute.claims [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.304 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357741991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.792 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.798 2 DEBUG nova.compute.provider_tree [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:46:28
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 02 12:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.820 2 DEBUG nova.scheduler.client.report [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.855 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.856 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.894 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.894 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.914 2 INFO nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:46:28 compute-0 nova_compute[256940]: 2025-10-02 12:46:28.942 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.047 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.048 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.048 2 INFO nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Creating image(s)
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.082 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.111 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.137 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.140 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.171 2 DEBUG nova.policy [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.212 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.213 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.214 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.214 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.241 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.245 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4323e59b-2191-4a89-a017-3809d49eed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 02 12:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 02 12:46:29 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 02 12:46:29 compute-0 ceph-mon[73668]: pgmap v2144: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Oct 02 12:46:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/357741991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:29.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.573 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4323e59b-2191-4a89-a017-3809d49eed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.642 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:46:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.804 2 DEBUG nova.objects.instance [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 4323e59b-2191-4a89-a017-3809d49eed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.840 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.841 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Ensure instance console log exists: /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.842 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.842 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.843 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:29 compute-0 nova_compute[256940]: 2025-10-02 12:46:29.882 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Successfully created port: 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:46:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 02 12:46:30 compute-0 ceph-mon[73668]: osdmap e293: 3 total, 3 up, 3 in
Oct 02 12:46:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 02 12:46:30 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.636 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Successfully updated port: 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.653 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.654 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.654 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.850 2 DEBUG nova.compute.manager [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-changed-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.851 2 DEBUG nova.compute.manager [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Refreshing instance network info cache due to event network-changed-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.851 2 DEBUG oslo_concurrency.lockutils [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:30 compute-0 nova_compute[256940]: 2025-10-02 12:46:30.936 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:46:31 compute-0 nova_compute[256940]: 2025-10-02 12:46:31.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:31 compute-0 nova_compute[256940]: 2025-10-02 12:46:31.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:31 compute-0 nova_compute[256940]: 2025-10-02 12:46:31.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:46:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 02 12:46:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-0 ceph-mon[73668]: pgmap v2146: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 02 12:46:31 compute-0 ceph-mon[73668]: osdmap e294: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 269 op/s
Oct 02 12:46:31 compute-0 nova_compute[256940]: 2025-10-02 12:46:31.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.040 2 DEBUG nova.network.neutron [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Updating instance_info_cache with network_info: [{"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.162 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.163 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Instance network_info: |[{"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.163 2 DEBUG oslo_concurrency.lockutils [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.163 2 DEBUG nova.network.neutron [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Refreshing network info cache for port 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.165 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Start _get_guest_xml network_info=[{"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.170 2 WARNING nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.174 2 DEBUG nova.virt.libvirt.host [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.175 2 DEBUG nova.virt.libvirt.host [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.178 2 DEBUG nova.virt.libvirt.host [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.178 2 DEBUG nova.virt.libvirt.host [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.179 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.179 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.180 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.180 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.180 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.181 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.181 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.181 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.181 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.182 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.182 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.182 2 DEBUG nova.virt.hardware [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.185 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:32 compute-0 podman[338909]: 2025-10-02 12:46:32.404318183 +0000 UTC m=+0.064877578 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:46:32 compute-0 podman[338918]: 2025-10-02 12:46:32.425856767 +0000 UTC m=+0.091466802 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:32 compute-0 ceph-mon[73668]: osdmap e295: 3 total, 3 up, 3 in
Oct 02 12:46:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083121368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.649 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.691 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:32 compute-0 nova_compute[256940]: 2025-10-02 12:46:32.696 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689995221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.163 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.165 2 DEBUG nova.virt.libvirt.vif [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-978576839',display_name='tempest-DeleteServersTestJSON-server-978576839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-978576839',id=125,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-japem02d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:28Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=4323e59b-2191-4a89-a017-3809d49eed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.165 2 DEBUG nova.network.os_vif_util [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.166 2 DEBUG nova.network.os_vif_util [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.167 2 DEBUG nova.objects.instance [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4323e59b-2191-4a89-a017-3809d49eed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.196 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <uuid>4323e59b-2191-4a89-a017-3809d49eed0d</uuid>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <name>instance-0000007d</name>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:name>tempest-DeleteServersTestJSON-server-978576839</nova:name>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:46:32</nova:creationTime>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <nova:port uuid="3cf23a1f-0d94-4ea8-b0e3-d883806eafa9">
Oct 02 12:46:33 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <system>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="serial">4323e59b-2191-4a89-a017-3809d49eed0d</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="uuid">4323e59b-2191-4a89-a017-3809d49eed0d</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </system>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <os>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </os>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <features>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </features>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/4323e59b-2191-4a89-a017-3809d49eed0d_disk">
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/4323e59b-2191-4a89-a017-3809d49eed0d_disk.config">
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:46:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:49:e7:cc"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <target dev="tap3cf23a1f-0d"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/console.log" append="off"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <video>
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </video>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:46:33 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:46:33 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:46:33 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:46:33 compute-0 nova_compute[256940]: </domain>
Oct 02 12:46:33 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.197 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Preparing to wait for external event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.198 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.198 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.198 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.199 2 DEBUG nova.virt.libvirt.vif [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-978576839',display_name='tempest-DeleteServersTestJSON-server-978576839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-978576839',id=125,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-japem02d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJS
ON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:28Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=4323e59b-2191-4a89-a017-3809d49eed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.199 2 DEBUG nova.network.os_vif_util [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.200 2 DEBUG nova.network.os_vif_util [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.200 2 DEBUG os_vif [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.205 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cf23a1f-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cf23a1f-0d, col_values=(('external_ids', {'iface-id': '3cf23a1f-0d94-4ea8-b0e3-d883806eafa9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:e7:cc', 'vm-uuid': '4323e59b-2191-4a89-a017-3809d49eed0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:33 compute-0 NetworkManager[44981]: <info>  [1759409193.2085] manager: (tap3cf23a1f-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.216 2 INFO os_vif [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d')
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.338 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.338 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.338 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:49:e7:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.339 2 INFO nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Using config drive
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.368 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:33 compute-0 ceph-mon[73668]: pgmap v2149: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 269 op/s
Oct 02 12:46:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1083121368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2689995221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.741 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409178.7403166, 06fbfe0f-34c5-4c90-a9d8-89adba54ad38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.742 2 INFO nova.compute.manager [-] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] VM Stopped (Lifecycle Event)
Oct 02 12:46:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 207 op/s
Oct 02 12:46:33 compute-0 nova_compute[256940]: 2025-10-02 12:46:33.767 2 DEBUG nova.compute.manager [None req-4edca070-cc7b-4962-b463-16e660e5016b - - - - - -] [instance: 06fbfe0f-34c5-4c90-a9d8-89adba54ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:34 compute-0 nova_compute[256940]: 2025-10-02 12:46:34.308 2 INFO nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Creating config drive at /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config
Oct 02 12:46:34 compute-0 nova_compute[256940]: 2025-10-02 12:46:34.313 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xldde88 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:34 compute-0 nova_compute[256940]: 2025-10-02 12:46:34.454 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xldde88" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:34 compute-0 nova_compute[256940]: 2025-10-02 12:46:34.483 2 DEBUG nova.storage.rbd_utils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 4323e59b-2191-4a89-a017-3809d49eed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:34 compute-0 nova_compute[256940]: 2025-10-02 12:46:34.486 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config 4323e59b-2191-4a89-a017-3809d49eed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:34 compute-0 ceph-mon[73668]: pgmap v2150: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 207 op/s
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.063 2 DEBUG oslo_concurrency.processutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config 4323e59b-2191-4a89-a017-3809d49eed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.064 2 INFO nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Deleting local config drive /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d/disk.config because it was imported into RBD.
Oct 02 12:46:35 compute-0 kernel: tap3cf23a1f-0d: entered promiscuous mode
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.1154] manager: (tap3cf23a1f-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Oct 02 12:46:35 compute-0 ovn_controller[148123]: 2025-10-02T12:46:35Z|00628|binding|INFO|Claiming lport 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 for this chassis.
Oct 02 12:46:35 compute-0 ovn_controller[148123]: 2025-10-02T12:46:35Z|00629|binding|INFO|3cf23a1f-0d94-4ea8-b0e3-d883806eafa9: Claiming fa:16:3e:49:e7:cc 10.100.0.9
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.131 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e7:cc 10.100.0.9'], port_security=['fa:16:3e:49:e7:cc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4323e59b-2191-4a89-a017-3809d49eed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.132 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.133 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:46:35 compute-0 ovn_controller[148123]: 2025-10-02T12:46:35Z|00630|binding|INFO|Setting lport 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 ovn-installed in OVS
Oct 02 12:46:35 compute-0 ovn_controller[148123]: 2025-10-02T12:46:35Z|00631|binding|INFO|Setting lport 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 up in Southbound
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.145 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[af3e9011-926e-4fed-8fcb-984a9c248abb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.147 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:46:35 compute-0 systemd-udevd[339093]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.150 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.150 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[65dcaef2-8723-4e40-8df4-525852184fd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 systemd-machined[210927]: New machine qemu-64-instance-0000007d.
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.151 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9908220a-6fe1-46ef-beec-7628d6ba1fc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.1632] device (tap3cf23a1f-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.163 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a16119-bf55-4a6d-80b5-2705783f8c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.1648] device (tap3cf23a1f-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:46:35 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-0000007d.
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.187 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2115b171-acbf-43a1-9be2-b607b61c96b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.215 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2013b8-3a1b-4fba-9bcc-5d14e19dca78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.220 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0a618376-0d02-4676-ae74-b359d81b7743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.2227] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.251 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[63f6fd79-131b-496e-a669-e6d4b4865789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.254 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[addd60fa-54f5-46a9-8109-9b8ba1a288fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.2761] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.284 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3965563f-c68e-411a-8430-615640b19a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.302 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[86277737-4d8a-446d-8c61-8b1db36cab38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709076, 'reachable_time': 38567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339125, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.314 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b4953256-e099-4ffc-8aa5-d5afdc952b7f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709076, 'tstamp': 709076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339126, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.328 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f154a780-6946-4487-bd01-20438f08abaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709076, 'reachable_time': 38567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339127, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.358 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[754aab9e-682c-44da-b379-8b8cc2a928c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.420 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b23843cb-6890-44d3-be24-beb213af77db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.422 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.422 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.423 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:35 compute-0 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:46:35 compute-0 NetworkManager[44981]: <info>  [1759409195.4704] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.474 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:35 compute-0 ovn_controller[148123]: 2025-10-02T12:46:35Z|00632|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.479 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.480 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3141b464-deb7-45d7-95e2-a5b0ce30f434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.481 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:46:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:35.482 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:46:35 compute-0 nova_compute[256940]: 2025-10-02 12:46:35.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 206 op/s
Oct 02 12:46:35 compute-0 podman[339159]: 2025-10-02 12:46:35.890227876 +0000 UTC m=+0.104542477 container create 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:46:35 compute-0 podman[339159]: 2025-10-02 12:46:35.805854908 +0000 UTC m=+0.020169529 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:46:35 compute-0 systemd[1]: Started libpod-conmon-82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290.scope.
Oct 02 12:46:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05264c522599ba106c3305e5b4b4cea9220ca1fff308026dec9a6a096043ce8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:36 compute-0 podman[339159]: 2025-10-02 12:46:36.0985623 +0000 UTC m=+0.312876931 container init 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:46:36 compute-0 podman[339159]: 2025-10-02 12:46:36.105236631 +0000 UTC m=+0.319551222 container start 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:46:36 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [NOTICE]   (339179) : New worker (339181) forked
Oct 02 12:46:36 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [NOTICE]   (339179) : Loading success.
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.296 2 DEBUG nova.network.neutron [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Updated VIF entry in instance network info cache for port 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.298 2 DEBUG nova.network.neutron [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Updating instance_info_cache with network_info: [{"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.323 2 DEBUG oslo_concurrency.lockutils [req-becf712b-ba9b-4679-941f-c1920c762915 req-4dcaecc5-2777-4cb2-8589-2f2fbad825f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4323e59b-2191-4a89-a017-3809d49eed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.401 2 DEBUG nova.compute.manager [req-6532d0b3-459d-4686-9e92-e054779ae54c req-5233799a-e131-410f-878d-32d79ffb9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.401 2 DEBUG oslo_concurrency.lockutils [req-6532d0b3-459d-4686-9e92-e054779ae54c req-5233799a-e131-410f-878d-32d79ffb9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.402 2 DEBUG oslo_concurrency.lockutils [req-6532d0b3-459d-4686-9e92-e054779ae54c req-5233799a-e131-410f-878d-32d79ffb9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.402 2 DEBUG oslo_concurrency.lockutils [req-6532d0b3-459d-4686-9e92-e054779ae54c req-5233799a-e131-410f-878d-32d79ffb9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.402 2 DEBUG nova.compute.manager [req-6532d0b3-459d-4686-9e92-e054779ae54c req-5233799a-e131-410f-878d-32d79ffb9e15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Processing event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 02 12:46:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 02 12:46:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 02 12:46:36 compute-0 ceph-mon[73668]: pgmap v2151: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 206 op/s
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.919 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409196.9187665, 4323e59b-2191-4a89-a017-3809d49eed0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.919 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] VM Started (Lifecycle Event)
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.922 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.925 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.928 2 INFO nova.virt.libvirt.driver [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Instance spawned successfully.
Oct 02 12:46:36 compute-0 nova_compute[256940]: 2025-10-02 12:46:36.928 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.123 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.129 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.129 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.130 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.130 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.131 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.131 2 DEBUG nova.virt.libvirt.driver [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.135 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.179 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.180 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409196.91972, 4323e59b-2191-4a89-a017-3809d49eed0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.180 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] VM Paused (Lifecycle Event)
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.208 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.213 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409196.924268, 4323e59b-2191-4a89-a017-3809d49eed0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.213 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] VM Resumed (Lifecycle Event)
Oct 02 12:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 02 12:46:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 02 12:46:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.240 2 INFO nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Took 8.19 seconds to spawn the instance on the hypervisor.
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.241 2 DEBUG nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.254 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.257 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.303 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.327 2 INFO nova.compute.manager [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Took 9.17 seconds to build instance.
Oct 02 12:46:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:37.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:37 compute-0 nova_compute[256940]: 2025-10-02 12:46:37.518 2 DEBUG oslo_concurrency.lockutils [None req-22f5103d-64de-40e5-a8d9-e5c055304904 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 473 KiB/s wr, 47 op/s
Oct 02 12:46:37 compute-0 ceph-mon[73668]: osdmap e296: 3 total, 3 up, 3 in
Oct 02 12:46:37 compute-0 ceph-mon[73668]: osdmap e297: 3 total, 3 up, 3 in
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 02 12:46:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 02 12:46:38 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.602 2 DEBUG nova.compute.manager [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.602 2 DEBUG oslo_concurrency.lockutils [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.603 2 DEBUG oslo_concurrency.lockutils [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.603 2 DEBUG oslo_concurrency.lockutils [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.603 2 DEBUG nova.compute.manager [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] No waiting events found dispatching network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:38 compute-0 nova_compute[256940]: 2025-10-02 12:46:38.603 2 WARNING nova.compute.manager [req-4a7834a2-ea91-4767-a20e-abef699531c3 req-21062960-9628-47cf-8553-8865ca4f114a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received unexpected event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 for instance with vm_state active and task_state None.
Oct 02 12:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 02 12:46:39 compute-0 ceph-mon[73668]: pgmap v2154: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 473 KiB/s wr, 47 op/s
Oct 02 12:46:39 compute-0 ceph-mon[73668]: osdmap e298: 3 total, 3 up, 3 in
Oct 02 12:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 02 12:46:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 02 12:46:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:39.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:39.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 49 KiB/s wr, 56 op/s
Oct 02 12:46:40 compute-0 ceph-mon[73668]: osdmap e299: 3 total, 3 up, 3 in
Oct 02 12:46:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2370722550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01068111826493579 of space, bias 1.0, pg target 3.2043354794807373 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.000377865891959799 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066193234691979 of space, bias 1.0, pg target 1.2076593907035176 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:46:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.140 2 DEBUG nova.objects.instance [None req-6304afb7-e207-4a0c-808e-d06cf2b8b482 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4323e59b-2191-4a89-a017-3809d49eed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.177 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409201.1772654, 4323e59b-2191-4a89-a017-3809d49eed0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.178 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] VM Paused (Lifecycle Event)
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.229 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.234 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.274 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 12:46:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:46:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:46:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:41.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:41 compute-0 ceph-mon[73668]: pgmap v2157: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 49 KiB/s wr, 56 op/s
Oct 02 12:46:41 compute-0 kernel: tap3cf23a1f-0d (unregistering): left promiscuous mode
Oct 02 12:46:41 compute-0 NetworkManager[44981]: <info>  [1759409201.6537] device (tap3cf23a1f-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:41 compute-0 ovn_controller[148123]: 2025-10-02T12:46:41Z|00633|binding|INFO|Releasing lport 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 from this chassis (sb_readonly=0)
Oct 02 12:46:41 compute-0 ovn_controller[148123]: 2025-10-02T12:46:41Z|00634|binding|INFO|Setting lport 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 down in Southbound
Oct 02 12:46:41 compute-0 ovn_controller[148123]: 2025-10-02T12:46:41Z|00635|binding|INFO|Removing iface tap3cf23a1f-0d ovn-installed in OVS
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.679 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e7:cc 10.100.0.9'], port_security=['fa:16:3e:49:e7:cc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4323e59b-2191-4a89-a017-3809d49eed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.681 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.682 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.684 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8abbf301-9cec-4e1b-a4e8-b32a281968df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.686 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 02 12:46:41 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Consumed 6.083s CPU time.
Oct 02 12:46:41 compute-0 systemd-machined[210927]: Machine qemu-64-instance-0000007d terminated.
Oct 02 12:46:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 9.6 MiB/s wr, 432 op/s
Oct 02 12:46:41 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [NOTICE]   (339179) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:41 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [NOTICE]   (339179) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:41 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [WARNING]  (339179) : Exiting Master process...
Oct 02 12:46:41 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [ALERT]    (339179) : Current worker (339181) exited with code 143 (Terminated)
Oct 02 12:46:41 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[339175]: [WARNING]  (339179) : All workers exited. Exiting... (0)
Oct 02 12:46:41 compute-0 systemd[1]: libpod-82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290.scope: Deactivated successfully.
Oct 02 12:46:41 compute-0 podman[339261]: 2025-10-02 12:46:41.83265229 +0000 UTC m=+0.048101456 container died 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.840 2 DEBUG nova.compute.manager [None req-6304afb7-e207-4a0c-808e-d06cf2b8b482 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-05264c522599ba106c3305e5b4b4cea9220ca1fff308026dec9a6a096043ce8a-merged.mount: Deactivated successfully.
Oct 02 12:46:41 compute-0 podman[339261]: 2025-10-02 12:46:41.881000502 +0000 UTC m=+0.096449658 container cleanup 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:46:41 compute-0 systemd[1]: libpod-conmon-82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290.scope: Deactivated successfully.
Oct 02 12:46:41 compute-0 podman[339302]: 2025-10-02 12:46:41.951583966 +0000 UTC m=+0.044604707 container remove 82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.962 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa5884e-2e5e-4d52-b428-2826a3c4387d]: (4, ('Thu Oct  2 12:46:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290)\n82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290\nThu Oct  2 12:46:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290)\n82a7f69bac163059af6f2bc537ade50c8b307fc35ad9145fef8c9fdb7c22e290\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.964 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a2247a-cea6-494f-b41f-1fa7e95e10a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.966 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:41 compute-0 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 nova_compute[256940]: 2025-10-02 12:46:41.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:41.996 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[162e1677-267c-4c76-8a02-46a9f0e79b46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:42.024 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab2ba41-fe0a-4a7d-b709-161baaddee16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:42.025 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e798d01-b131-4b66-ab2d-16b8cd3cf830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:42.043 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[baf8142e-d6e4-48cd-857d-771dd3a9ae16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709069, 'reachable_time': 37321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339319, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:42.046 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:42.047 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[5060ede2-0eed-4b1d-ab3f-7e1d711cd61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:46:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.535 2 DEBUG nova.compute.manager [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-vif-unplugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.535 2 DEBUG oslo_concurrency.lockutils [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.536 2 DEBUG oslo_concurrency.lockutils [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.536 2 DEBUG oslo_concurrency.lockutils [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.536 2 DEBUG nova.compute.manager [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] No waiting events found dispatching network-vif-unplugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:42 compute-0 nova_compute[256940]: 2025-10-02 12:46:42.536 2 WARNING nova.compute.manager [req-1d41623b-1490-498d-8a4a-50a503a7ac67 req-d617898a-4014-46ac-99d8-cd874a565ec5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received unexpected event network-vif-unplugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 for instance with vm_state suspended and task_state None.
Oct 02 12:46:42 compute-0 ceph-mon[73668]: pgmap v2158: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 9.6 MiB/s wr, 432 op/s
Oct 02 12:46:42 compute-0 sudo[339321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:42 compute-0 sudo[339321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:42 compute-0 sudo[339321]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:42 compute-0 sudo[339346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:42 compute-0 sudo[339346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:42 compute-0 sudo[339346]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:43 compute-0 nova_compute[256940]: 2025-10-02 12:46:43.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:43.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 322 op/s
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:44 compute-0 podman[339371]: 2025-10-02 12:46:44.384083021 +0000 UTC m=+0.058558926 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 02 12:46:44 compute-0 podman[339372]: 2025-10-02 12:46:44.414122492 +0000 UTC m=+0.084383339 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:46:44 compute-0 ceph-mon[73668]: pgmap v2159: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 322 op/s
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.879 2 DEBUG nova.compute.manager [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.879 2 DEBUG oslo_concurrency.lockutils [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.879 2 DEBUG oslo_concurrency.lockutils [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.880 2 DEBUG oslo_concurrency.lockutils [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.880 2 DEBUG nova.compute.manager [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] No waiting events found dispatching network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:44 compute-0 nova_compute[256940]: 2025-10-02 12:46:44.880 2 WARNING nova.compute.manager [req-5b8b983d-6c10-4553-a3fc-3b6182193e68 req-eeff2427-224a-4fdb-8d7e-f5ded349060b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received unexpected event network-vif-plugged-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 for instance with vm_state suspended and task_state None.
Oct 02 12:46:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:45.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.274 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.275 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.275 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.275 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.275 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.278 2 INFO nova.compute.manager [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Terminating instance
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.280 2 DEBUG nova.compute.manager [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.290 2 INFO nova.virt.libvirt.driver [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Instance destroyed successfully.
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.290 2 DEBUG nova.objects.instance [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 4323e59b-2191-4a89-a017-3809d49eed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.335 2 DEBUG nova.virt.libvirt.vif [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-978576839',display_name='tempest-DeleteServersTestJSON-server-978576839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-978576839',id=125,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-japem02d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:41Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=4323e59b-2191-4a89-a017-3809d49eed0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.335 2 DEBUG nova.network.os_vif_util [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "address": "fa:16:3e:49:e7:cc", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf23a1f-0d", "ovs_interfaceid": "3cf23a1f-0d94-4ea8-b0e3-d883806eafa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.336 2 DEBUG nova.network.os_vif_util [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.336 2 DEBUG os_vif [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cf23a1f-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.346 2 INFO os_vif [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e7:cc,bridge_name='br-int',has_traffic_filtering=True,id=3cf23a1f-0d94-4ea8-b0e3-d883806eafa9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf23a1f-0d')
Oct 02 12:46:46 compute-0 ceph-mon[73668]: pgmap v2160: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.943 2 INFO nova.virt.libvirt.driver [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Deleting instance files /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d_del
Oct 02 12:46:46 compute-0 nova_compute[256940]: 2025-10-02 12:46:46.944 2 INFO nova.virt.libvirt.driver [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Deletion of /var/lib/nova/instances/4323e59b-2191-4a89-a017-3809d49eed0d_del complete
Oct 02 12:46:47 compute-0 nova_compute[256940]: 2025-10-02 12:46:47.106 2 INFO nova.compute.manager [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 02 12:46:47 compute-0 nova_compute[256940]: 2025-10-02 12:46:47.107 2 DEBUG oslo.service.loopingcall [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:46:47 compute-0 nova_compute[256940]: 2025-10-02 12:46:47.108 2 DEBUG nova.compute.manager [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:46:47 compute-0 nova_compute[256940]: 2025-10-02 12:46:47.108 2 DEBUG nova.network.neutron [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:46:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 02 12:46:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 02 12:46:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 02 12:46:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:47.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:47.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.5 MiB/s wr, 263 op/s
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.014 2 DEBUG nova.network.neutron [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.061 2 INFO nova.compute.manager [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Took 0.95 seconds to deallocate network for instance.
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.151 2 DEBUG nova.compute.manager [req-898480e7-de70-4d5c-b1a1-b50de0a57ccf req-2d6f48ac-a8c6-431c-9a61-1b5f6deee196 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Received event network-vif-deleted-3cf23a1f-0d94-4ea8-b0e3-d883806eafa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.165 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.166 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.236 2 DEBUG oslo_concurrency.processutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 02 12:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 02 12:46:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 02 12:46:48 compute-0 ceph-mon[73668]: osdmap e300: 3 total, 3 up, 3 in
Oct 02 12:46:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468040698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.717 2 DEBUG oslo_concurrency.processutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.724 2 DEBUG nova.compute.provider_tree [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.752 2 DEBUG nova.scheduler.client.report [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.787 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.850 2 INFO nova.scheduler.client.report [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 4323e59b-2191-4a89-a017-3809d49eed0d
Oct 02 12:46:48 compute-0 nova_compute[256940]: 2025-10-02 12:46:48.979 2 DEBUG oslo_concurrency.lockutils [None req-aedd58a9-64f8-46e4-9987-9960f12996a0 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "4323e59b-2191-4a89-a017-3809d49eed0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:49.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:49.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:49 compute-0 ceph-mon[73668]: pgmap v2162: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.5 MiB/s wr, 263 op/s
Oct 02 12:46:49 compute-0 ceph-mon[73668]: osdmap e301: 3 total, 3 up, 3 in
Oct 02 12:46:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1468040698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 425 KiB/s wr, 48 op/s
Oct 02 12:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 02 12:46:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 02 12:46:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 ceph-mon[73668]: pgmap v2164: 305 pgs: 305 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 425 KiB/s wr, 48 op/s
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.195 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.196 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.196 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.196 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.197 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.198 2 INFO nova.compute.manager [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Terminating instance
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.199 2 DEBUG nova.compute.manager [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:46:51 compute-0 kernel: tap94d27ec1-61 (unregistering): left promiscuous mode
Oct 02 12:46:51 compute-0 NetworkManager[44981]: <info>  [1759409211.3381] device (tap94d27ec1-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 ovn_controller[148123]: 2025-10-02T12:46:51Z|00636|binding|INFO|Releasing lport 94d27ec1-617c-4911-bfe7-441d064785a1 from this chassis (sb_readonly=0)
Oct 02 12:46:51 compute-0 ovn_controller[148123]: 2025-10-02T12:46:51Z|00637|binding|INFO|Setting lport 94d27ec1-617c-4911-bfe7-441d064785a1 down in Southbound
Oct 02 12:46:51 compute-0 ovn_controller[148123]: 2025-10-02T12:46:51Z|00638|binding|INFO|Removing iface tap94d27ec1-61 ovn-installed in OVS
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.351 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5b:10 10.100.0.4'], port_security=['fa:16:3e:68:5b:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3cb3218-0640-497e-94de-2549ed7da8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=94d27ec1-617c-4911-bfe7-441d064785a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.353 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 94d27ec1-617c-4911-bfe7-441d064785a1 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.354 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.356 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[24472941-d8ff-421f-9105-fa375fd29cb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.356 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 02 12:46:51 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000076.scope: Consumed 19.763s CPU time.
Oct 02 12:46:51 compute-0 systemd-machined[210927]: Machine qemu-61-instance-00000076 terminated.
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.402 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:51.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.439 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid f3cb3218-0640-497e-94de-2549ed7da8e4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.440 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.443 2 INFO nova.virt.libvirt.driver [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Instance destroyed successfully.
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.444 2 DEBUG nova.objects.instance [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid f3cb3218-0640-497e-94de-2549ed7da8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.459 2 DEBUG nova.virt.libvirt.vif [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-604679211',display_name='tempest-ServerRescueNegativeTestJSON-server-604679211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-604679211',id=118,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-h4w9h3c8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:29Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=f3cb3218-0640-497e-94de-2549ed7da8e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.460 2 DEBUG nova.network.os_vif_util [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "94d27ec1-617c-4911-bfe7-441d064785a1", "address": "fa:16:3e:68:5b:10", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d27ec1-61", "ovs_interfaceid": "94d27ec1-617c-4911-bfe7-441d064785a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.461 2 DEBUG nova.network.os_vif_util [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.461 2 DEBUG os_vif [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d27ec1-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.470 2 INFO os_vif [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:5b:10,bridge_name='br-int',has_traffic_filtering=True,id=94d27ec1-617c-4911-bfe7-441d064785a1,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d27ec1-61')
Oct 02 12:46:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:51.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:51 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [NOTICE]   (334834) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:51 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [NOTICE]   (334834) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:51 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [WARNING]  (334834) : Exiting Master process...
Oct 02 12:46:51 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [ALERT]    (334834) : Current worker (334839) exited with code 143 (Terminated)
Oct 02 12:46:51 compute-0 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[334815]: [WARNING]  (334834) : All workers exited. Exiting... (0)
Oct 02 12:46:51 compute-0 systemd[1]: libpod-5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20.scope: Deactivated successfully.
Oct 02 12:46:51 compute-0 podman[339486]: 2025-10-02 12:46:51.529510286 +0000 UTC m=+0.047959993 container died 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b00f3c1346ce8398951db5dc4297454dc63763e8a0cdf455a258e0194c797935-merged.mount: Deactivated successfully.
Oct 02 12:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:51 compute-0 podman[339486]: 2025-10-02 12:46:51.570323215 +0000 UTC m=+0.088772922 container cleanup 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:46:51 compute-0 systemd[1]: libpod-conmon-5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20.scope: Deactivated successfully.
Oct 02 12:46:51 compute-0 podman[339535]: 2025-10-02 12:46:51.634890984 +0000 UTC m=+0.043476458 container remove 5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[db950b0d-5912-4167-8bea-37840ca7f082]: (4, ('Thu Oct  2 12:46:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20)\n5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20\nThu Oct  2 12:46:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20)\n5fa27b842acdf4f961d9f70e51abc411c4ef59f2998c20ef7cdcb1853a098c20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.644 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[031b248c-c7a5-4fef-bf77-593f5176b5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.645 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:51 compute-0 kernel: tapf3934261-b0: left promiscuous mode
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.667 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[840e55d5-57b8-43a5-a667-f958bb1db3b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.698 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7181d164-1daa-438f-b366-6c5686a00b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.699 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2211aa39-5224-4f71-af8d-5aed50ccf073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.720 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[66af8729-5120-480d-9a62-84f777eaa4a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695991, 'reachable_time': 26729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339550, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.726 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:46:51.727 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[32961e0c-8390-4285-a74d-3980d5f722df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.7 MiB/s wr, 260 op/s
Oct 02 12:46:51 compute-0 nova_compute[256940]: 2025-10-02 12:46:51.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 02 12:46:52 compute-0 ceph-mon[73668]: osdmap e302: 3 total, 3 up, 3 in
Oct 02 12:46:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 02 12:46:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 02 12:46:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.035 2 DEBUG nova.compute.manager [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.035 2 DEBUG oslo_concurrency.lockutils [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.036 2 DEBUG oslo_concurrency.lockutils [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.036 2 DEBUG oslo_concurrency.lockutils [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.036 2 DEBUG nova.compute.manager [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:53 compute-0 nova_compute[256940]: 2025-10-02 12:46:53.036 2 DEBUG nova.compute.manager [req-62f65b76-80cc-4985-907a-8d5ec32560cf req-3a192b37-27b9-4463-99aa-a19e50f26518 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-unplugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:46:53 compute-0 ceph-mon[73668]: pgmap v2166: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.7 MiB/s wr, 260 op/s
Oct 02 12:46:53 compute-0 ceph-mon[73668]: osdmap e303: 3 total, 3 up, 3 in
Oct 02 12:46:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 239 op/s
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.163 2 INFO nova.virt.libvirt.driver [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Deleting instance files /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4_del
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.164 2 INFO nova.virt.libvirt.driver [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Deletion of /var/lib/nova/instances/f3cb3218-0640-497e-94de-2549ed7da8e4_del complete
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.267 2 INFO nova.compute.manager [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Took 3.07 seconds to destroy the instance on the hypervisor.
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.268 2 DEBUG oslo.service.loopingcall [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.269 2 DEBUG nova.compute.manager [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:46:54 compute-0 nova_compute[256940]: 2025-10-02 12:46:54.269 2 DEBUG nova.network.neutron [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:46:55 compute-0 ceph-mon[73668]: pgmap v2168: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 239 op/s
Oct 02 12:46:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.470 2 DEBUG nova.compute.manager [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.470 2 DEBUG oslo_concurrency.lockutils [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.470 2 DEBUG oslo_concurrency.lockutils [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.470 2 DEBUG oslo_concurrency.lockutils [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.471 2 DEBUG nova.compute.manager [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] No waiting events found dispatching network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:55 compute-0 nova_compute[256940]: 2025-10-02 12:46:55.471 2 WARNING nova.compute.manager [req-0659c3aa-8edd-4ee2-bf5c-9fa7ab668d3c req-13a43f1d-50fc-4df5-bac5-278009f965ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received unexpected event network-vif-plugged-94d27ec1-617c-4911-bfe7-441d064785a1 for instance with vm_state rescued and task_state deleting.
Oct 02 12:46:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:46:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:46:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 518 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.3 MiB/s wr, 246 op/s
Oct 02 12:46:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.457 2 DEBUG nova.network.neutron [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.476 2 INFO nova.compute.manager [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Took 2.21 seconds to deallocate network for instance.
Oct 02 12:46:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2657385976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.527 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.527 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.598 2 DEBUG oslo_concurrency.processutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 02 12:46:56 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.841 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409201.8398342, 4323e59b-2191-4a89-a017-3809d49eed0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.842 2 INFO nova.compute.manager [-] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] VM Stopped (Lifecycle Event)
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.870 2 DEBUG nova.compute.manager [None req-8a21ee89-a15f-4df0-a9a5-f87fc77cbdfb - - - - - -] [instance: 4323e59b-2191-4a89-a017-3809d49eed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:56 compute-0 nova_compute[256940]: 2025-10-02 12:46:56.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892251147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.102 2 DEBUG oslo_concurrency.processutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.108 2 DEBUG nova.compute.provider_tree [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.129 2 DEBUG nova.scheduler.client.report [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.180 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.209 2 INFO nova.scheduler.client.report [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Deleted allocations for instance f3cb3218-0640-497e-94de-2549ed7da8e4
Oct 02 12:46:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.298 2 DEBUG oslo_concurrency.lockutils [None req-5124a718-8bd9-410b-999c-1260712b506f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.299 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.300 2 INFO nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.300 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "f3cb3218-0640-497e-94de-2549ed7da8e4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:57.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:57.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:57 compute-0 nova_compute[256940]: 2025-10-02 12:46:57.656 2 DEBUG nova.compute.manager [req-0b0890b8-834a-4c84-81b1-aa66ec9cfa00 req-b00b86d5-9739-4a19-8219-651343798f8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Received event network-vif-deleted-94d27ec1-617c-4911-bfe7-441d064785a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:57 compute-0 ceph-mon[73668]: pgmap v2169: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 518 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.3 MiB/s wr, 246 op/s
Oct 02 12:46:57 compute-0 ceph-mon[73668]: osdmap e304: 3 total, 3 up, 3 in
Oct 02 12:46:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3892251147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 439 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 300 op/s
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:46:58 compute-0 ceph-mon[73668]: pgmap v2171: 305 pgs: 305 active+clean; 439 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 300 op/s
Oct 02 12:46:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:46:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:46:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:46:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:59 compute-0 nova_compute[256940]: 2025-10-02 12:46:59.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 128 op/s
Oct 02 12:47:01 compute-0 ceph-mon[73668]: pgmap v2172: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 128 op/s
Oct 02 12:47:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:01 compute-0 nova_compute[256940]: 2025-10-02 12:47:01.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:01.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 182 op/s
Oct 02 12:47:01 compute-0 nova_compute[256940]: 2025-10-02 12:47:01.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 02 12:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 02 12:47:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 02 12:47:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1797470337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:02 compute-0 sudo[339581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:02 compute-0 sudo[339581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:02 compute-0 sudo[339581]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:02 compute-0 podman[339605]: 2025-10-02 12:47:02.969516921 +0000 UTC m=+0.056179944 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 12:47:02 compute-0 sudo[339618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:02 compute-0 sudo[339618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:02 compute-0 sudo[339618]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:03 compute-0 podman[339606]: 2025-10-02 12:47:03.004579242 +0000 UTC m=+0.088754431 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:47:03 compute-0 nova_compute[256940]: 2025-10-02 12:47:03.249 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:47:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:47:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 02 12:47:03 compute-0 ceph-mon[73668]: pgmap v2173: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 182 op/s
Oct 02 12:47:03 compute-0 ceph-mon[73668]: osdmap e305: 3 total, 3 up, 3 in
Oct 02 12:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/773088728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/604551742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 02 12:47:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 02 12:47:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Oct 02 12:47:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/887990936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:04 compute-0 ceph-mon[73668]: osdmap e306: 3 total, 3 up, 3 in
Oct 02 12:47:04 compute-0 ceph-mon[73668]: pgmap v2176: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Oct 02 12:47:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1011230374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:05.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 02 12:47:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 02 12:47:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 02 12:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 3.1 MiB/s wr, 134 op/s
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.236 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.236 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.442 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409211.4398408, f3cb3218-0640-497e-94de-2549ed7da8e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.443 2 INFO nova.compute.manager [-] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] VM Stopped (Lifecycle Event)
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.480 2 DEBUG nova.compute.manager [None req-f55ea48c-e10f-4c58-8313-e0bc247cc172 - - - - - -] [instance: f3cb3218-0640-497e-94de-2549ed7da8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084135172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.692 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:06 compute-0 ceph-mon[73668]: osdmap e307: 3 total, 3 up, 3 in
Oct 02 12:47:06 compute-0 ceph-mon[73668]: pgmap v2178: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 3.1 MiB/s wr, 134 op/s
Oct 02 12:47:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2285883654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4084135172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.856 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.858 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4345MB free_disk=20.909137725830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.858 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.858 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.977 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:47:06 compute-0 nova_compute[256940]: 2025-10-02 12:47:06.978 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.002 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 02 12:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 02 12:47:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 02 12:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847758872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.464 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.472 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.497 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.545 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:47:07 compute-0 nova_compute[256940]: 2025-10-02 12:47:07.546 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 240 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 02 12:47:08 compute-0 ceph-mon[73668]: osdmap e308: 3 total, 3 up, 3 in
Oct 02 12:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3847758872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2976661906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.542 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.543 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.543 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.543 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:47:08 compute-0 nova_compute[256940]: 2025-10-02 12:47:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:09 compute-0 nova_compute[256940]: 2025-10-02 12:47:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:09 compute-0 ceph-mon[73668]: pgmap v2180: 305 pgs: 305 active+clean; 240 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 02 12:47:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 3.4 MiB/s wr, 171 op/s
Oct 02 12:47:11 compute-0 ceph-mon[73668]: pgmap v2181: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 3.4 MiB/s wr, 171 op/s
Oct 02 12:47:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:11 compute-0 nova_compute[256940]: 2025-10-02 12:47:11.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Oct 02 12:47:11 compute-0 nova_compute[256940]: 2025-10-02 12:47:11.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 02 12:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 02 12:47:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:47:13 compute-0 ceph-mon[73668]: pgmap v2182: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Oct 02 12:47:13 compute-0 ceph-mon[73668]: osdmap e309: 3 total, 3 up, 3 in
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.330 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.331 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.396 2 DEBUG nova.compute.manager [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:47:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:13.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.682 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.682 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.781 2 DEBUG nova.objects.instance [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_requests' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.817 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.817 2 INFO nova.compute.claims [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:47:13 compute-0 nova_compute[256940]: 2025-10-02 12:47:13.817 2 DEBUG nova.objects.instance [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:14 compute-0 nova_compute[256940]: 2025-10-02 12:47:14.527 2 DEBUG nova.objects.instance [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:15 compute-0 ceph-mon[73668]: pgmap v2184: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Oct 02 12:47:15 compute-0 podman[339727]: 2025-10-02 12:47:15.389821328 +0000 UTC m=+0.061430869 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 02 12:47:15 compute-0 nova_compute[256940]: 2025-10-02 12:47:15.400 2 INFO nova.compute.resource_tracker [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating resource usage from migration 31253cf4-d4b6-4114-ba91-912bf75a32d5
Oct 02 12:47:15 compute-0 nova_compute[256940]: 2025-10-02 12:47:15.401 2 DEBUG nova.compute.resource_tracker [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting to track incoming migration 31253cf4-d4b6-4114-ba91-912bf75a32d5 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:47:15 compute-0 podman[339728]: 2025-10-02 12:47:15.415368365 +0000 UTC m=+0.085431066 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 135 op/s
Oct 02 12:47:15 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:47:15 compute-0 sudo[339766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:15 compute-0 sudo[339766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:15 compute-0 sudo[339766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:15 compute-0 sudo[339791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:15 compute-0 sudo[339791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:15 compute-0 sudo[339791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:15 compute-0 nova_compute[256940]: 2025-10-02 12:47:15.917 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:15 compute-0 sudo[339816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:15 compute-0 sudo[339816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:15 compute-0 sudo[339816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-0 sudo[339842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:47:16 compute-0 sudo[339842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 12:47:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358379371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.376 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.383 2 DEBUG nova.compute.provider_tree [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:16 compute-0 sudo[339842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-0 sudo[339918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:16 compute-0 sudo[339918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-0 sudo[339918]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.527 2 DEBUG nova.scheduler.client.report [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:16 compute-0 sudo[339943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:16 compute-0 sudo[339943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-0 sudo[339943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-0 sudo[339969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:16 compute-0 sudo[339969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-0 sudo[339969]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.696 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 3.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.696 2 INFO nova.compute.manager [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Migrating
Oct 02 12:47:16 compute-0 sudo[339994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:47:16 compute-0 sudo[339994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-0 nova_compute[256940]: 2025-10-02 12:47:16.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.024271437 +0000 UTC m=+0.038193722 container create ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:47:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:47:17 compute-0 systemd[1]: Started libpod-conmon-ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8.scope.
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.006722317 +0000 UTC m=+0.020644622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.123935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237123974, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2312, "num_deletes": 262, "total_data_size": 3918246, "memory_usage": 3987600, "flush_reason": "Manual Compaction"}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.13098808 +0000 UTC m=+0.144910385 container init ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.140636338 +0000 UTC m=+0.154558623 container start ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:17 compute-0 amazing_leavitt[340075]: 167 167
Oct 02 12:47:17 compute-0 systemd[1]: libpod-ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8.scope: Deactivated successfully.
Oct 02 12:47:17 compute-0 conmon[340075]: conmon ead189435d4f27d44ad9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8.scope/container/memory.events
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237151349, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3845464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46486, "largest_seqno": 48797, "table_properties": {"data_size": 3834771, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3813334, "raw_average_value_size": 3573, "num_data_blocks": 297, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409047, "oldest_key_time": 1759409047, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 27459 microseconds, and 8223 cpu microseconds.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.15161425 +0000 UTC m=+0.165536565 container attach ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.152379209 +0000 UTC m=+0.166301494 container died ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.151388) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3845464 bytes OK
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.151421) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153050) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153073) EVENT_LOG_v1 {"time_micros": 1759409237153069, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153088) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3908628, prev total WAL file size 3929687, number of live WAL files 2.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3755KB)], [104(9043KB)]
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237153994, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 13105770, "oldest_snapshot_seqno": -1}
Oct 02 12:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e481b39c44f79df4f68a9b01b01af405a6f251f1fb7629652969bec332ab4f-merged.mount: Deactivated successfully.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7668 keys, 11159173 bytes, temperature: kUnknown
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237210831, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11159173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11107905, "index_size": 30996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198295, "raw_average_key_size": 25, "raw_value_size": 10971163, "raw_average_value_size": 1430, "num_data_blocks": 1219, "num_entries": 7668, "num_filter_entries": 7668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.211088) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11159173 bytes
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.212662) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.1 rd, 195.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8204, records dropped: 536 output_compression: NoCompression
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.212678) EVENT_LOG_v1 {"time_micros": 1759409237212670, "job": 62, "event": "compaction_finished", "compaction_time_micros": 56964, "compaction_time_cpu_micros": 26185, "output_level": 6, "num_output_files": 1, "total_output_size": 11159173, "num_input_records": 8204, "num_output_records": 7668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237213501, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237215133, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.215195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.215199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.215200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.215201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:47:17.215203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-0 podman[340059]: 2025-10-02 12:47:17.2224514 +0000 UTC m=+0.236373685 container remove ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:47:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:17 compute-0 systemd[1]: libpod-conmon-ead189435d4f27d44ad95113563aec81c8f89f969b6db3dd3c5f63444f3fa5f8.scope: Deactivated successfully.
Oct 02 12:47:17 compute-0 ceph-mon[73668]: pgmap v2185: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 135 op/s
Oct 02 12:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3358379371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-0 podman[340098]: 2025-10-02 12:47:17.393926306 +0000 UTC m=+0.048268631 container create 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:47:17 compute-0 systemd[1]: Started libpod-conmon-731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1.scope.
Oct 02 12:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:17 compute-0 podman[340098]: 2025-10-02 12:47:17.374932908 +0000 UTC m=+0.029275273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000077s ======
Oct 02 12:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Oct 02 12:47:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded161d0b15796bf817fb4009801959f6d641fcca1647b2e2cd2eed8da5fdc0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded161d0b15796bf817fb4009801959f6d641fcca1647b2e2cd2eed8da5fdc0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded161d0b15796bf817fb4009801959f6d641fcca1647b2e2cd2eed8da5fdc0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded161d0b15796bf817fb4009801959f6d641fcca1647b2e2cd2eed8da5fdc0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:17 compute-0 podman[340098]: 2025-10-02 12:47:17.501197273 +0000 UTC m=+0.155539608 container init 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:47:17 compute-0 podman[340098]: 2025-10-02 12:47:17.51628253 +0000 UTC m=+0.170624855 container start 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:47:17 compute-0 podman[340098]: 2025-10-02 12:47:17.521026072 +0000 UTC m=+0.175368427 container attach 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:17.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:47:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 12 KiB/s wr, 187 op/s
Oct 02 12:47:18 compute-0 zen_wing[340114]: [
Oct 02 12:47:18 compute-0 zen_wing[340114]:     {
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "available": false,
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "ceph_device": false,
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "lsm_data": {},
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "lvs": [],
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "path": "/dev/sr0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "rejected_reasons": [
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "Has a FileSystem",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "Insufficient space (<5GB)"
Oct 02 12:47:18 compute-0 zen_wing[340114]:         ],
Oct 02 12:47:18 compute-0 zen_wing[340114]:         "sys_api": {
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "actuators": null,
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "device_nodes": "sr0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "devname": "sr0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "human_readable_size": "482.00 KB",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "id_bus": "ata",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "model": "QEMU DVD-ROM",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "nr_requests": "2",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "parent": "/dev/sr0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "partitions": {},
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "path": "/dev/sr0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "removable": "1",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "rev": "2.5+",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "ro": "0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "rotational": "0",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "sas_address": "",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "sas_device_handle": "",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "scheduler_mode": "mq-deadline",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "sectors": 0,
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "sectorsize": "2048",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "size": 493568.0,
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "support_discard": "2048",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "type": "disk",
Oct 02 12:47:18 compute-0 zen_wing[340114]:             "vendor": "QEMU"
Oct 02 12:47:18 compute-0 zen_wing[340114]:         }
Oct 02 12:47:18 compute-0 zen_wing[340114]:     }
Oct 02 12:47:18 compute-0 zen_wing[340114]: ]
Oct 02 12:47:18 compute-0 systemd[1]: libpod-731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1.scope: Deactivated successfully.
Oct 02 12:47:18 compute-0 systemd[1]: libpod-731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1.scope: Consumed 1.223s CPU time.
Oct 02 12:47:18 compute-0 podman[341361]: 2025-10-02 12:47:18.778140504 +0000 UTC m=+0.022841198 container died 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ded161d0b15796bf817fb4009801959f6d641fcca1647b2e2cd2eed8da5fdc0e-merged.mount: Deactivated successfully.
Oct 02 12:47:18 compute-0 podman[341361]: 2025-10-02 12:47:18.833459105 +0000 UTC m=+0.078159779 container remove 731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:47:18 compute-0 systemd[1]: libpod-conmon-731f8e492c92c97acce3f57a580a78eab2b8bca7cb177a9f3f2719566ff86ea1.scope: Deactivated successfully.
Oct 02 12:47:18 compute-0 sudo[339994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:19.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:19.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:47:19 compute-0 ceph-mon[73668]: pgmap v2186: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 12 KiB/s wr, 187 op/s
Oct 02 12:47:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 202 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 12c345e6-c4f5-40e2-8bbb-54110464b6b5 does not exist
Oct 02 12:47:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 623e25a1-0e2e-4ca7-b6c3-0f4df4cea46f does not exist
Oct 02 12:47:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f686c08a-3f7a-4196-8602-5e53653f21c0 does not exist
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:20 compute-0 sudo[341376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:20 compute-0 sudo[341376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:20 compute-0 sudo[341376]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:20 compute-0 sudo[341401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:20 compute-0 sudo[341401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:20 compute-0 sudo[341401]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:20 compute-0 sudo[341427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:20 compute-0 sudo[341427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:20 compute-0 sudo[341427]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:20 compute-0 sudo[341452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:47:20 compute-0 sudo[341452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.256983368 +0000 UTC m=+0.058623957 container create 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:21 compute-0 systemd[1]: Started libpod-conmon-51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379.scope.
Oct 02 12:47:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.330835496 +0000 UTC m=+0.132476075 container init 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.236839751 +0000 UTC m=+0.038480340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.338885573 +0000 UTC m=+0.140526122 container start 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.341993923 +0000 UTC m=+0.143634472 container attach 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:47:21 compute-0 dreamy_galileo[341535]: 167 167
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.349251949 +0000 UTC m=+0.150892508 container died 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:21 compute-0 systemd[1]: libpod-51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379.scope: Deactivated successfully.
Oct 02 12:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-866815a0e207ed33962e42b201506a24dceb272dad7d85e3292833bd1915553b-merged.mount: Deactivated successfully.
Oct 02 12:47:21 compute-0 podman[341519]: 2025-10-02 12:47:21.396998196 +0000 UTC m=+0.198638745 container remove 51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:47:21 compute-0 systemd[1]: libpod-conmon-51e3c5b544d3a67eb5c8c2f8b647375feddb12fe28a64ee2e764745c56b08379.scope: Deactivated successfully.
Oct 02 12:47:21 compute-0 sshd-session[341540]: Accepted publickey for nova from 192.168.122.101 port 37940 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:47:21 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:21 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:47:21 compute-0 systemd-logind[820]: New session 66 of user nova.
Oct 02 12:47:21 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:47:21 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:47:21 compute-0 ceph-mon[73668]: pgmap v2187: 305 pgs: 305 active+clean; 202 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:47:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:21 compute-0 systemd[341557]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:21.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:21 compute-0 podman[341564]: 2025-10-02 12:47:21.582712028 +0000 UTC m=+0.039045604 container create 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:47:21 compute-0 nova_compute[256940]: 2025-10-02 12:47:21.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:21 compute-0 systemd[1]: Started libpod-conmon-6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958.scope.
Oct 02 12:47:21 compute-0 podman[341564]: 2025-10-02 12:47:21.566247615 +0000 UTC m=+0.022581201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:21 compute-0 systemd[341557]: Queued start job for default target Main User Target.
Oct 02 12:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:21 compute-0 systemd[341557]: Created slice User Application Slice.
Oct 02 12:47:21 compute-0 systemd[341557]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:47:21 compute-0 systemd[341557]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:47:21 compute-0 systemd[341557]: Reached target Paths.
Oct 02 12:47:21 compute-0 systemd[341557]: Reached target Timers.
Oct 02 12:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:21 compute-0 systemd[341557]: Starting D-Bus User Message Bus Socket...
Oct 02 12:47:21 compute-0 systemd[341557]: Starting Create User's Volatile Files and Directories...
Oct 02 12:47:21 compute-0 podman[341564]: 2025-10-02 12:47:21.687996463 +0000 UTC m=+0.144330049 container init 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:21 compute-0 systemd[341557]: Finished Create User's Volatile Files and Directories.
Oct 02 12:47:21 compute-0 systemd[341557]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:47:21 compute-0 systemd[341557]: Reached target Sockets.
Oct 02 12:47:21 compute-0 systemd[341557]: Reached target Basic System.
Oct 02 12:47:21 compute-0 systemd[341557]: Reached target Main User Target.
Oct 02 12:47:21 compute-0 systemd[341557]: Startup finished in 153ms.
Oct 02 12:47:21 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:47:21 compute-0 systemd[1]: Started Session 66 of User nova.
Oct 02 12:47:21 compute-0 podman[341564]: 2025-10-02 12:47:21.698736669 +0000 UTC m=+0.155070235 container start 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 12:47:21 compute-0 podman[341564]: 2025-10-02 12:47:21.702335722 +0000 UTC m=+0.158669298 container attach 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:21 compute-0 sshd-session[341540]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:47:21 compute-0 sshd-session[341598]: Received disconnect from 192.168.122.101 port 37940:11: disconnected by user
Oct 02 12:47:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.4 MiB/s wr, 295 op/s
Oct 02 12:47:21 compute-0 sshd-session[341598]: Disconnected from user nova 192.168.122.101 port 37940
Oct 02 12:47:21 compute-0 sshd-session[341540]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:47:21 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Oct 02 12:47:21 compute-0 systemd-logind[820]: Session 66 logged out. Waiting for processes to exit.
Oct 02 12:47:21 compute-0 systemd-logind[820]: Removed session 66.
Oct 02 12:47:21 compute-0 sshd-session[341600]: Accepted publickey for nova from 192.168.122.101 port 37944 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:47:21 compute-0 systemd-logind[820]: New session 68 of user nova.
Oct 02 12:47:21 compute-0 systemd[1]: Started Session 68 of User nova.
Oct 02 12:47:21 compute-0 sshd-session[341600]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:47:21 compute-0 nova_compute[256940]: 2025-10-02 12:47:21.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:21 compute-0 sshd-session[341603]: Received disconnect from 192.168.122.101 port 37944:11: disconnected by user
Oct 02 12:47:21 compute-0 sshd-session[341603]: Disconnected from user nova 192.168.122.101 port 37944
Oct 02 12:47:21 compute-0 sshd-session[341600]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:47:21 compute-0 systemd[1]: session-68.scope: Deactivated successfully.
Oct 02 12:47:21 compute-0 systemd-logind[820]: Session 68 logged out. Waiting for processes to exit.
Oct 02 12:47:21 compute-0 systemd-logind[820]: Removed session 68.
Oct 02 12:47:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:22 compute-0 determined_noyce[341591]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:47:22 compute-0 determined_noyce[341591]: --> relative data size: 1.0
Oct 02 12:47:22 compute-0 determined_noyce[341591]: --> All data devices are unavailable
Oct 02 12:47:22 compute-0 systemd[1]: libpod-6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958.scope: Deactivated successfully.
Oct 02 12:47:22 compute-0 podman[341564]: 2025-10-02 12:47:22.552674992 +0000 UTC m=+1.009008568 container died 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa3551a7d95033fa057148f1636f3980dbb958f4fd30649597a98d2cb46fa2d-merged.mount: Deactivated successfully.
Oct 02 12:47:22 compute-0 podman[341564]: 2025-10-02 12:47:22.602278317 +0000 UTC m=+1.058611883 container remove 6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:47:22 compute-0 systemd[1]: libpod-conmon-6d1c560939d561dadfc70ae0537691c5553c08fb68a49aea94f3ab0cda42e958.scope: Deactivated successfully.
Oct 02 12:47:22 compute-0 sudo[341452]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:22 compute-0 sudo[341631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:22 compute-0 sudo[341631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:22 compute-0 sudo[341631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:22 compute-0 sudo[341656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:22 compute-0 sudo[341656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:22 compute-0 sudo[341656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:22 compute-0 sudo[341681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:22 compute-0 sudo[341681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:22 compute-0 sudo[341681]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:22 compute-0 sudo[341706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:47:22 compute-0 sudo[341706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:23 compute-0 sudo[341731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:23 compute-0 sudo[341731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:23 compute-0 sudo[341731]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:23 compute-0 sudo[341770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:23 compute-0 sudo[341770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:23 compute-0 sudo[341770]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.195469009 +0000 UTC m=+0.034565469 container create 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:47:23 compute-0 systemd[1]: Started libpod-conmon-63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1.scope.
Oct 02 12:47:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.26749611 +0000 UTC m=+0.106592570 container init 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.274092869 +0000 UTC m=+0.113189329 container start 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.180040433 +0000 UTC m=+0.019136923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.276781288 +0000 UTC m=+0.115877778 container attach 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:23 compute-0 dreamy_neumann[341837]: 167 167
Oct 02 12:47:23 compute-0 systemd[1]: libpod-63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1.scope: Deactivated successfully.
Oct 02 12:47:23 compute-0 conmon[341837]: conmon 63bcaf2c3d367bab1ae4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1.scope/container/memory.events
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.280092084 +0000 UTC m=+0.119188544 container died 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-91502d45eebfad94751caca66c6181d2fbede05a79d2bb8268786b6d32ad7c51-merged.mount: Deactivated successfully.
Oct 02 12:47:23 compute-0 podman[341821]: 2025-10-02 12:47:23.319136167 +0000 UTC m=+0.158232637 container remove 63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_neumann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:47:23 compute-0 systemd[1]: libpod-conmon-63bcaf2c3d367bab1ae4ad63251e2e532398de3879414a8950e47315fdc9f2c1.scope: Deactivated successfully.
Oct 02 12:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:23.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:23 compute-0 podman[341861]: 2025-10-02 12:47:23.495336104 +0000 UTC m=+0.049779170 container create df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:47:23 compute-0 systemd[1]: Started libpod-conmon-df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3.scope.
Oct 02 12:47:23 compute-0 ceph-mon[73668]: pgmap v2188: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.4 MiB/s wr, 295 op/s
Oct 02 12:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3250697552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:23.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a59ef4708dab098e0e4b496b351bddbf8dbe0b6f3675ef0ce9f2e7883c024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a59ef4708dab098e0e4b496b351bddbf8dbe0b6f3675ef0ce9f2e7883c024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a59ef4708dab098e0e4b496b351bddbf8dbe0b6f3675ef0ce9f2e7883c024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a59ef4708dab098e0e4b496b351bddbf8dbe0b6f3675ef0ce9f2e7883c024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:23 compute-0 podman[341861]: 2025-10-02 12:47:23.473448502 +0000 UTC m=+0.027891598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:23 compute-0 podman[341861]: 2025-10-02 12:47:23.588661662 +0000 UTC m=+0.143104728 container init df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:47:23 compute-0 podman[341861]: 2025-10-02 12:47:23.5959582 +0000 UTC m=+0.150401276 container start df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:47:23 compute-0 podman[341861]: 2025-10-02 12:47:23.59945239 +0000 UTC m=+0.153895446 container attach df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 02 12:47:24 compute-0 angry_carver[341878]: {
Oct 02 12:47:24 compute-0 angry_carver[341878]:     "1": [
Oct 02 12:47:24 compute-0 angry_carver[341878]:         {
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "devices": [
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "/dev/loop3"
Oct 02 12:47:24 compute-0 angry_carver[341878]:             ],
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "lv_name": "ceph_lv0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "lv_size": "7511998464",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "name": "ceph_lv0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "tags": {
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.cluster_name": "ceph",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.crush_device_class": "",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.encrypted": "0",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.osd_id": "1",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.type": "block",
Oct 02 12:47:24 compute-0 angry_carver[341878]:                 "ceph.vdo": "0"
Oct 02 12:47:24 compute-0 angry_carver[341878]:             },
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "type": "block",
Oct 02 12:47:24 compute-0 angry_carver[341878]:             "vg_name": "ceph_vg0"
Oct 02 12:47:24 compute-0 angry_carver[341878]:         }
Oct 02 12:47:24 compute-0 angry_carver[341878]:     ]
Oct 02 12:47:24 compute-0 angry_carver[341878]: }
Oct 02 12:47:24 compute-0 systemd[1]: libpod-df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3.scope: Deactivated successfully.
Oct 02 12:47:24 compute-0 podman[341887]: 2025-10-02 12:47:24.426358848 +0000 UTC m=+0.026184664 container died df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 12:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-915a59ef4708dab098e0e4b496b351bddbf8dbe0b6f3675ef0ce9f2e7883c024-merged.mount: Deactivated successfully.
Oct 02 12:47:24 compute-0 podman[341887]: 2025-10-02 12:47:24.481929006 +0000 UTC m=+0.081754742 container remove df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:24 compute-0 systemd[1]: libpod-conmon-df2bdad25faa672742015ef2578aa4f3e01a58473b6c1d77ce0f602b8718a8c3.scope: Deactivated successfully.
Oct 02 12:47:24 compute-0 sudo[341706]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:24 compute-0 sudo[341902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:24 compute-0 sudo[341902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:24 compute-0 sudo[341902]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:24 compute-0 ceph-mon[73668]: pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 02 12:47:24 compute-0 sudo[341927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:24 compute-0 sudo[341927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:24 compute-0 sudo[341927]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:24 compute-0 sudo[341953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:24 compute-0 sudo[341953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:24 compute-0 sudo[341953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:24 compute-0 sudo[341978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:47:24 compute-0 sudo[341978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.123562721 +0000 UTC m=+0.045403007 container create 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:25 compute-0 systemd[1]: Started libpod-conmon-130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728.scope.
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.106428081 +0000 UTC m=+0.028268397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.240210768 +0000 UTC m=+0.162051064 container init 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.248963213 +0000 UTC m=+0.170803499 container start 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:25 compute-0 adoring_payne[342060]: 167 167
Oct 02 12:47:25 compute-0 systemd[1]: libpod-130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728.scope: Deactivated successfully.
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.295468068 +0000 UTC m=+0.217308424 container attach 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.296578717 +0000 UTC m=+0.218419003 container died 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d3d8b58bb7f3fcf74733b0aa1adb66b2f4ab3967ea5de05765c618c33feeeee-merged.mount: Deactivated successfully.
Oct 02 12:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:25.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:25.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:25 compute-0 podman[342044]: 2025-10-02 12:47:25.680746748 +0000 UTC m=+0.602587034 container remove 130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:47:25 compute-0 systemd[1]: libpod-conmon-130926864e22d12049edf2801827caa617429f763e315ec357ab82a57ac30728.scope: Deactivated successfully.
Oct 02 12:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2809155184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:47:25 compute-0 podman[342086]: 2025-10-02 12:47:25.841071838 +0000 UTC m=+0.043180101 container create 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:47:25 compute-0 systemd[1]: Started libpod-conmon-50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab.scope.
Oct 02 12:47:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46015656102cb5ae2684744f6989923bf0ad1899d627e7234ccc7e4745f12136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46015656102cb5ae2684744f6989923bf0ad1899d627e7234ccc7e4745f12136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46015656102cb5ae2684744f6989923bf0ad1899d627e7234ccc7e4745f12136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46015656102cb5ae2684744f6989923bf0ad1899d627e7234ccc7e4745f12136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:25 compute-0 podman[342086]: 2025-10-02 12:47:25.823495826 +0000 UTC m=+0.025604119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:25 compute-0 podman[342086]: 2025-10-02 12:47:25.924648495 +0000 UTC m=+0.126756768 container init 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:25 compute-0 podman[342086]: 2025-10-02 12:47:25.933248416 +0000 UTC m=+0.135356679 container start 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:47:25 compute-0 podman[342086]: 2025-10-02 12:47:25.936997242 +0000 UTC m=+0.139105535 container attach 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.343 2 INFO nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.453 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.454 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.454 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.455 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.455 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.455 2 WARNING nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:26.484 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:26.484 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:26.485 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:26 compute-0 strange_moore[342102]: {
Oct 02 12:47:26 compute-0 strange_moore[342102]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:47:26 compute-0 strange_moore[342102]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:47:26 compute-0 strange_moore[342102]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:47:26 compute-0 strange_moore[342102]:         "osd_id": 1,
Oct 02 12:47:26 compute-0 strange_moore[342102]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:47:26 compute-0 strange_moore[342102]:         "type": "bluestore"
Oct 02 12:47:26 compute-0 strange_moore[342102]:     }
Oct 02 12:47:26 compute-0 strange_moore[342102]: }
Oct 02 12:47:26 compute-0 systemd[1]: libpod-50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab.scope: Deactivated successfully.
Oct 02 12:47:26 compute-0 podman[342124]: 2025-10-02 12:47:26.806047723 +0000 UTC m=+0.020812476 container died 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-46015656102cb5ae2684744f6989923bf0ad1899d627e7234ccc7e4745f12136-merged.mount: Deactivated successfully.
Oct 02 12:47:26 compute-0 podman[342124]: 2025-10-02 12:47:26.8577258 +0000 UTC m=+0.072490533 container remove 50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:47:26 compute-0 ceph-mon[73668]: pgmap v2190: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:47:26 compute-0 systemd[1]: libpod-conmon-50580bae2fbf285c559576d2279e1647f6af6478e6d57cf27d41924c40749dab.scope: Deactivated successfully.
Oct 02 12:47:26 compute-0 sudo[341978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:47:26 compute-0 nova_compute[256940]: 2025-10-02 12:47:26.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 12c58bbf-ecdb-4a13-a823-eed85bbbb012 does not exist
Oct 02 12:47:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 09ae1677-8e46-4798-9f17-65bcdd4a8e13 does not exist
Oct 02 12:47:27 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d61169bc-5d19-4f5f-a184-0dd62a7fc6da does not exist
Oct 02 12:47:27 compute-0 sudo[342138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:27 compute-0 sudo[342138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:27 compute-0 sudo[342138]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:27 compute-0 sudo[342163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:47:27 compute-0 sudo[342163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:27 compute-0 sudo[342163]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:27.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.580 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "de49d646-d8aa-4a19-a2c7-477038c243c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.580 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.643 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.754 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.755 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 239 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.766 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.766 2 INFO nova.compute.claims [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:47:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:27.868 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:27.869 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:47:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:27.870 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:27 compute-0 nova_compute[256940]: 2025-10-02 12:47:27.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.195 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1989105194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.520 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.521 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.521 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889467188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.621 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.627 2 DEBUG nova.compute.provider_tree [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.744 2 DEBUG nova.scheduler.client.report [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:47:28
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root']
Oct 02 12:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.831 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.832 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing instance network info cache due to event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.832 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.836 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.836 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.836 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.837 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.837 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.837 2 WARNING nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.890 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.891 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:28 compute-0 nova_compute[256940]: 2025-10-02 12:47:28.961 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.035 2 INFO nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.087 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.344 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.345 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.346 2 INFO nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating image(s)
Oct 02 12:47:29 compute-0 ceph-mon[73668]: pgmap v2191: 305 pgs: 305 active+clean; 239 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Oct 02 12:47:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2696433292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3889467188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.378 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.401 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.426 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.429 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.504 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.505 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.506 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.506 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.534 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.537 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de49d646-d8aa-4a19-a2c7-477038c243c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:29.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 309 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.916 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de49d646-d8aa-4a19-a2c7-477038c243c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:29 compute-0 nova_compute[256940]: 2025-10-02 12:47:29.991 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] resizing rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:47:30 compute-0 nova_compute[256940]: 2025-10-02 12:47:30.471 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:30 compute-0 nova_compute[256940]: 2025-10-02 12:47:30.713 2 DEBUG nova.objects.instance [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'migration_context' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.387 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.388 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Ensure instance console log exists: /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.388 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.389 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.389 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.391 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.395 2 WARNING nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.401 2 DEBUG nova.virt.libvirt.host [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.401 2 DEBUG nova.virt.libvirt.host [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.406 2 DEBUG nova.virt.libvirt.host [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.407 2 DEBUG nova.virt.libvirt.host [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.408 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.408 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.409 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.409 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.409 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.410 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.410 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.410 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.411 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.411 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.411 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.411 2 DEBUG nova.virt.hardware [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.415 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:31 compute-0 ceph-mon[73668]: pgmap v2192: 305 pgs: 305 active+clean; 309 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.449 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.454 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.454 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:31.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 317 op/s
Oct 02 12:47:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205247774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.864 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.890 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.893 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:31 compute-0 nova_compute[256940]: 2025-10-02 12:47:31.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:32 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:47:32 compute-0 systemd[341557]: Activating special unit Exit the Session...
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped target Main User Target.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped target Basic System.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped target Paths.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped target Sockets.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped target Timers.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:47:32 compute-0 systemd[341557]: Closed D-Bus User Message Bus Socket.
Oct 02 12:47:32 compute-0 systemd[341557]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:47:32 compute-0 systemd[341557]: Removed slice User Application Slice.
Oct 02 12:47:32 compute-0 systemd[341557]: Reached target Shutdown.
Oct 02 12:47:32 compute-0 systemd[341557]: Finished Exit the Session.
Oct 02 12:47:32 compute-0 systemd[341557]: Reached target Exit the Session.
Oct 02 12:47:32 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:47:32 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:47:32 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:47:32 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:47:32 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:47:32 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:47:32 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991247951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.340 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.344 2 DEBUG nova.objects.instance [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'pci_devices' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4205247774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/991247951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.544 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <uuid>de49d646-d8aa-4a19-a2c7-477038c243c9</uuid>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <name>instance-00000081</name>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerShowV247Test-server-866270074</nova:name>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:47:31</nova:creationTime>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:user uuid="537a49488e284c9ab1330c64e8072747">tempest-ServerShowV247Test-568202848-project-member</nova:user>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <nova:project uuid="9768ac969bcb49a08f0cf2563ecd3980">tempest-ServerShowV247Test-568202848</nova:project>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <system>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="serial">de49d646-d8aa-4a19-a2c7-477038c243c9</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="uuid">de49d646-d8aa-4a19-a2c7-477038c243c9</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </system>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <os>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </os>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <features>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </features>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de49d646-d8aa-4a19-a2c7-477038c243c9_disk">
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config">
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/console.log" append="off"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <video>
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </video>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:47:32 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:47:32 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:47:32 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:47:32 compute-0 nova_compute[256940]: </domain>
Oct 02 12:47:32 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.622 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.624 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.624 2 INFO nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating image(s)
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.656 2 DEBUG nova.storage.rbd_utils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] creating snapshot(nova-resize) on rbd image(1c4025f8-834f-474c-87ee-59600e6ffb96_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.742 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.742 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.743 2 INFO nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Using config drive
Oct 02 12:47:32 compute-0 nova_compute[256940]: 2025-10-02 12:47:32.767 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.147 2 INFO nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating config drive at /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.151 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rzmxqpk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.292 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rzmxqpk" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.323 2 DEBUG nova.storage.rbd_utils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.326 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:33 compute-0 podman[342506]: 2025-10-02 12:47:33.383823611 +0000 UTC m=+0.052956462 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:47:33 compute-0 podman[342516]: 2025-10-02 12:47:33.421076818 +0000 UTC m=+0.088059943 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:47:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:33 compute-0 ceph-mon[73668]: pgmap v2193: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 317 op/s
Oct 02 12:47:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2906456310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.506 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated VIF entry in instance network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.507 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:33 compute-0 nova_compute[256940]: 2025-10-02 12:47:33.553 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:33.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 02 12:47:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 02 12:47:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 02 12:47:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 MiB/s wr, 231 op/s
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.236 2 DEBUG nova.objects.instance [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.237 2 DEBUG oslo_concurrency.processutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.911s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.238 2 INFO nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deleting local config drive /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config because it was imported into RBD.
Oct 02 12:47:34 compute-0 systemd-machined[210927]: New machine qemu-65-instance-00000081.
Oct 02 12:47:34 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000081.
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.463 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.463 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Ensure instance console log exists: /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.464 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.464 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.464 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.467 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start _get_guest_xml network_info=[{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.470 2 WARNING nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.477 2 DEBUG nova.virt.libvirt.host [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.477 2 DEBUG nova.virt.libvirt.host [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.481 2 DEBUG nova.virt.libvirt.host [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.481 2 DEBUG nova.virt.libvirt.host [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.482 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.483 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.483 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.484 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.484 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.484 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.484 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.484 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.485 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.485 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.485 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.485 2 DEBUG nova.virt.hardware [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.486 2 DEBUG nova.objects.instance [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1188429841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:34 compute-0 ceph-mon[73668]: osdmap e310: 3 total, 3 up, 3 in
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.517 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232072933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:34 compute-0 nova_compute[256940]: 2025-10-02 12:47:34.963 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.001 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.050 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409255.049971, de49d646-d8aa-4a19-a2c7-477038c243c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.051 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] VM Resumed (Lifecycle Event)
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.054 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.055 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.058 2 INFO nova.virt.libvirt.driver [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance spawned successfully.
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.058 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088756800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.430 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.431 2 DEBUG nova.virt.libvirt.vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video
_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:25Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.432 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.433 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.435 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <uuid>1c4025f8-834f-474c-87ee-59600e6ffb96</uuid>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <name>instance-0000007e</name>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:name>tempest-DeleteServersTestJSON-server-1232591881</nova:name>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:47:34</nova:creationTime>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <nova:port uuid="8001e1a0-d1c2-49ac-8630-690ed8ac9801">
Oct 02 12:47:35 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <system>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="serial">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="uuid">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </system>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <os>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </os>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <features>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </features>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk">
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config">
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:48:3d:b5"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <target dev="tap8001e1a0-d1"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log" append="off"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <video>
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </video>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:47:35 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:47:35 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:47:35 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:47:35 compute-0 nova_compute[256940]: </domain>
Oct 02 12:47:35 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.440 2 DEBUG nova.virt.libvirt.vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:25Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.440 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.441 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.442 2 DEBUG os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8001e1a0-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8001e1a0-d1, col_values=(('external_ids', {'iface-id': '8001e1a0-d1c2-49ac-8630-690ed8ac9801', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:3d:b5', 'vm-uuid': '1c4025f8-834f-474c-87ee-59600e6ffb96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:35 compute-0 NetworkManager[44981]: <info>  [1759409255.4490] manager: (tap8001e1a0-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.459 2 INFO os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')
Oct 02 12:47:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:35 compute-0 ceph-mon[73668]: pgmap v2195: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 MiB/s wr, 231 op/s
Oct 02 12:47:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/232072933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3088756800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 251 op/s
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.904 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.908 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.909 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.909 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.909 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.910 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.910 2 DEBUG nova.virt.libvirt.driver [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:35 compute-0 nova_compute[256940]: 2025-10-02 12:47:35.914 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:36 compute-0 nova_compute[256940]: 2025-10-02 12:47:36.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:47:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:47:37 compute-0 ceph-mon[73668]: pgmap v2196: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 251 op/s
Oct 02 12:47:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:37.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.8 MiB/s wr, 340 op/s
Oct 02 12:47:38 compute-0 nova_compute[256940]: 2025-10-02 12:47:38.379 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:38 compute-0 nova_compute[256940]: 2025-10-02 12:47:38.379 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409255.0503387, de49d646-d8aa-4a19-a2c7-477038c243c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:38 compute-0 nova_compute[256940]: 2025-10-02 12:47:38.379 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] VM Started (Lifecycle Event)
Oct 02 12:47:38 compute-0 ceph-mon[73668]: pgmap v2197: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.8 MiB/s wr, 340 op/s
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.120 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.121 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.121 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:48:3d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.121 2 INFO nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Using config drive
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.2180] manager: (tap8001e1a0-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 02 12:47:39 compute-0 kernel: tap8001e1a0-d1: entered promiscuous mode
Oct 02 12:47:39 compute-0 ovn_controller[148123]: 2025-10-02T12:47:39Z|00639|binding|INFO|Claiming lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for this chassis.
Oct 02 12:47:39 compute-0 ovn_controller[148123]: 2025-10-02T12:47:39Z|00640|binding|INFO|8001e1a0-d1c2-49ac-8630-690ed8ac9801: Claiming fa:16:3e:48:3d:b5 10.100.0.5
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 systemd-udevd[342767]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.2472] device (tap8001e1a0-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.2478] device (tap8001e1a0-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:39 compute-0 systemd-machined[210927]: New machine qemu-66-instance-0000007e.
Oct 02 12:47:39 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000007e.
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 ovn_controller[148123]: 2025-10-02T12:47:39Z|00641|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 ovn-installed in OVS
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:39.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:39 compute-0 ovn_controller[148123]: 2025-10-02T12:47:39Z|00642|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 up in Southbound
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.567 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.568 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.570 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:39.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.583 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[18dbdb79-bdc8-4135-b462-56c3d1f9dd99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.584 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.585 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.586 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9493bff-b032-4623-a068-a2031b712be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.586 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5347d427-6200-46a7-a039-e6cf7e81cc4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.600 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[da1e6752-52c7-4371-bd5d-21cddb94cf1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.622 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c28234-bdab-45a0-9924-a5d7540957d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.665 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[72db4597-4070-45bd-acfc-d9cdb319374f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.671 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[17c99e48-b34c-453f-a46e-177d3d22db54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.6729] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Oct 02 12:47:39 compute-0 systemd-udevd[342772]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.704 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b8443b69-c579-48ee-a2ec-c191a5288dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.707 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f06243-ff47-408f-ab20-308fe849be82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.7290] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.736 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f47bd42b-49f3-4c56-9906-76cca98c33b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.754 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d98e0312-3934-41b8-a559-27177f1d73f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715521, 'reachable_time': 30700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342840, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 464 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.9 MiB/s wr, 339 op/s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.774 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93604762-c83a-466e-895a-6515f78ab2af]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715521, 'tstamp': 715521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342841, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.799 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[323499bc-6ba9-4eac-8ead-8f132e72e52e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715521, 'reachable_time': 30700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342842, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.840 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eb250383-df5b-4bc1-af75-0d6e04cf0a60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.906 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4166aaa9-7f01-4cbe-99f0-844dc4e13345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.908 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.908 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.908 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:39 compute-0 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 NetworkManager[44981]: <info>  [1759409259.9123] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.913 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:39 compute-0 ovn_controller[148123]: 2025-10-02T12:47:39Z|00643|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=1)
Oct 02 12:47:39 compute-0 nova_compute[256940]: 2025-10-02 12:47:39.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.931 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.932 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61c3c2bc-3cac-42e9-afc9-309ebf9e6ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.933 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:39.933 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.096 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.101 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.174 2 INFO nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Took 10.83 seconds to spawn the instance on the hypervisor.
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.174 2 DEBUG nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.231 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:40 compute-0 podman[342878]: 2025-10-02 12:47:40.312818525 +0000 UTC m=+0.058498064 container create 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:47:40 compute-0 systemd[1]: Started libpod-conmon-7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e.scope.
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.355 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409260.3553696, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.356 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Resumed (Lifecycle Event)
Oct 02 12:47:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18dd0131f83e65b869865e15e750d47724605bbbf87d03065eb0a66f53e82b53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.371 2 DEBUG nova.compute.manager [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:40 compute-0 podman[342878]: 2025-10-02 12:47:40.279002586 +0000 UTC m=+0.024682135 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.374 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance running successfully.
Oct 02 12:47:40 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.376 2 DEBUG nova.virt.libvirt.guest [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.376 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:47:40 compute-0 podman[342878]: 2025-10-02 12:47:40.388567451 +0000 UTC m=+0.134246990 container init 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.388 2 INFO nova.compute.manager [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Took 12.68 seconds to build instance.
Oct 02 12:47:40 compute-0 podman[342878]: 2025-10-02 12:47:40.395500869 +0000 UTC m=+0.141180408 container start 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.404 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.407 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:40 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [NOTICE]   (342897) : New worker (342899) forked
Oct 02 12:47:40 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [NOTICE]   (342897) : Loading success.
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00732794470035362 of space, bias 1.0, pg target 2.198383410106086 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.6449681858365903 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:47:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.553 2 DEBUG oslo_concurrency.lockutils [None req-2f6add3a-2932-41a3-a845-e31da7743b07 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.574 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.575 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409260.3715768, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.575 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Started (Lifecycle Event)
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.762 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:40 compute-0 nova_compute[256940]: 2025-10-02 12:47:40.772 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:40 compute-0 ceph-mon[73668]: pgmap v2198: 305 pgs: 305 active+clean; 464 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.9 MiB/s wr, 339 op/s
Oct 02 12:47:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:41.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:41.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Oct 02 12:47:41 compute-0 nova_compute[256940]: 2025-10-02 12:47:41.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:43 compute-0 sudo[342910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:43 compute-0 sudo[342910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:43 compute-0 sudo[342910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:43 compute-0 ceph-mon[73668]: pgmap v2199: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Oct 02 12:47:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2583393535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:43 compute-0 sudo[342935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:43 compute-0 sudo[342935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:43 compute-0 sudo[342935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:43.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.5 MiB/s wr, 275 op/s
Oct 02 12:47:45 compute-0 ceph-mon[73668]: pgmap v2200: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.5 MiB/s wr, 275 op/s
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.337 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.337 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.370 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.486 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.486 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.498 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.499 2 INFO nova.compute.claims [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:47:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:45.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 304 op/s
Oct 02 12:47:45 compute-0 nova_compute[256940]: 2025-10-02 12:47:45.854 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446061113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155259324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.360 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.369 2 DEBUG nova.compute.provider_tree [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2446061113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.397 2 DEBUG nova.scheduler.client.report [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:46 compute-0 podman[342983]: 2025-10-02 12:47:46.400911261 +0000 UTC m=+0.070566844 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:47:46 compute-0 podman[342981]: 2025-10-02 12:47:46.407245064 +0000 UTC m=+0.068195804 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.436 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.436 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.597 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.597 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.674 2 INFO nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.704 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.805 2 INFO nova.virt.block_device [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Booting with volume 4ce2a695-6966-417a-847c-61fce9c9533c at /dev/vda
Oct 02 12:47:46 compute-0 nova_compute[256940]: 2025-10-02 12:47:46.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.017 2 DEBUG nova.policy [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3cd62a3208649c183d3fc2edc1c0f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.128 2 DEBUG os_brick.utils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.130 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.159 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.159 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd01920-3b2a-4c6a-b0c7-13b02e9f8256]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.160 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.168 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.168 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[ad94699b-ba2f-4dc5-a153-c477e0a1cff2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.169 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.177 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.178 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[536a31db-3d35-4e1d-83a3-5bbdb5964614]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.179 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[c37709bf-e656-4f87-ab2d-1c3567a46d43]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.180 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.224 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.234 2 DEBUG os_brick.initiator.connectors.lightos [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.234 2 DEBUG os_brick.initiator.connectors.lightos [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.234 2 DEBUG os_brick.initiator.connectors.lightos [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.235 2 DEBUG os_brick.utils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (105ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:47:47 compute-0 nova_compute[256940]: 2025-10-02 12:47:47.236 2 DEBUG nova.virt.block_device [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating existing volume attachment record: 92c19cce-94cc-4877-9308-c5354c63f368 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:47:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:47 compute-0 ceph-mon[73668]: pgmap v2201: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 304 op/s
Oct 02 12:47:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3155259324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:47.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.8 MiB/s wr, 355 op/s
Oct 02 12:47:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476830693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:49 compute-0 ceph-mon[73668]: pgmap v2202: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.8 MiB/s wr, 355 op/s
Oct 02 12:47:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2476830693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:49.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 304 op/s
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.049 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Successfully created port: e45b089f-5ee7-489a-8871-2386b27282c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.102 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.104 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.104 2 INFO nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Creating image(s)
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.105 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.105 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Ensure instance console log exists: /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.105 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.105 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.106 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.370 2 INFO nova.compute.manager [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Rebuilding instance
Oct 02 12:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 02 12:47:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 02 12:47:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.779 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'trusted_certs' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.838 2 DEBUG nova.compute.manager [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.904 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'pci_requests' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.920 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'pci_devices' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.940 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'resources' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.970 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'migration_context' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.988 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:47:50 compute-0 nova_compute[256940]: 2025-10-02 12:47:50.992 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:47:51 compute-0 ceph-mon[73668]: pgmap v2203: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 304 op/s
Oct 02 12:47:51 compute-0 ceph-mon[73668]: osdmap e311: 3 total, 3 up, 3 in
Oct 02 12:47:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2675420944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.474 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Successfully updated port: e45b089f-5ee7-489a-8871-2386b27282c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:51.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.550 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.550 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.550 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.638 2 DEBUG nova.compute.manager [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.639 2 DEBUG nova.compute.manager [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.639 2 DEBUG oslo_concurrency.lockutils [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:51 compute-0 nova_compute[256940]: 2025-10-02 12:47:51.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.089 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.090 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.090 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.090 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.090 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.091 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.091 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.091 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.091 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.091 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.092 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.092 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.
Oct 02 12:47:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.273 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.451 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.451 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.451 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.452 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.452 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.453 2 INFO nova.compute.manager [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Terminating instance
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.454 2 DEBUG nova.compute.manager [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:47:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/560336633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:52 compute-0 kernel: tap8001e1a0-d1 (unregistering): left promiscuous mode
Oct 02 12:47:52 compute-0 NetworkManager[44981]: <info>  [1759409272.5165] device (tap8001e1a0-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 ovn_controller[148123]: 2025-10-02T12:47:52Z|00644|binding|INFO|Releasing lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 from this chassis (sb_readonly=0)
Oct 02 12:47:52 compute-0 ovn_controller[148123]: 2025-10-02T12:47:52Z|00645|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 down in Southbound
Oct 02 12:47:52 compute-0 ovn_controller[148123]: 2025-10-02T12:47:52Z|00646|binding|INFO|Removing iface tap8001e1a0-d1 ovn-installed in OVS
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 02 12:47:52 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007e.scope: Consumed 12.563s CPU time.
Oct 02 12:47:52 compute-0 systemd-machined[210927]: Machine qemu-66-instance-0000007e terminated.
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.690 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.692 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.695 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.695 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance destroyed successfully.
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ea54995c-5400-4bc5-85ec-a7583ae633fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.695 2 DEBUG nova.objects.instance [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.696 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.729 2 DEBUG nova.virt.libvirt.vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:40Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.729 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.730 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.730 2 DEBUG os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8001e1a0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.738 2 INFO os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [NOTICE]   (342897) : haproxy version is 2.8.14-c23fe91
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [NOTICE]   (342897) : path to executable is /usr/sbin/haproxy
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [WARNING]  (342897) : Exiting Master process...
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [WARNING]  (342897) : Exiting Master process...
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [ALERT]    (342897) : Current worker (342899) exited with code 143 (Terminated)
Oct 02 12:47:52 compute-0 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[342893]: [WARNING]  (342897) : All workers exited. Exiting... (0)
Oct 02 12:47:52 compute-0 systemd[1]: libpod-7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e.scope: Deactivated successfully.
Oct 02 12:47:52 compute-0 podman[343082]: 2025-10-02 12:47:52.834124766 +0000 UTC m=+0.046738312 container died 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-18dd0131f83e65b869865e15e750d47724605bbbf87d03065eb0a66f53e82b53-merged.mount: Deactivated successfully.
Oct 02 12:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e-userdata-shm.mount: Deactivated successfully.
Oct 02 12:47:52 compute-0 podman[343082]: 2025-10-02 12:47:52.879232605 +0000 UTC m=+0.091846121 container cleanup 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:47:52 compute-0 systemd[1]: libpod-conmon-7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e.scope: Deactivated successfully.
Oct 02 12:47:52 compute-0 podman[343116]: 2025-10-02 12:47:52.939277078 +0000 UTC m=+0.039300571 container remove 7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.945 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d089dc-2808-448b-9c1f-e2c9c132bb4d]: (4, ('Thu Oct  2 12:47:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e)\n7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e\nThu Oct  2 12:47:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e)\n7a98bee52933a65026879533c33cb05289f1971e0674f2a2a43620d7c15bf84e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.948 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc69e31-0582-4b27-8e06-5c3df27a2c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.949 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:52 compute-0 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 nova_compute[256940]: 2025-10-02 12:47:52.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:52.978 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d950d49-6454-4372-be55-4e6ce675625f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:53.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c8e6db-6b56-4ff7-98f5-1d805a05d771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:53.017 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c31f8389-7437-44d6-8cc4-86ff0dccacc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:53.034 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[40d9657a-19de-49dc-bde7-c7aafc10fda4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715513, 'reachable_time': 20851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343132, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:53.037 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:47:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:53.037 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[af2293c7-06a3-4fd4-8d39-8c2cab2649e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:47:53 compute-0 nova_compute[256940]: 2025-10-02 12:47:53.234 2 INFO nova.virt.libvirt.driver [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Deleting instance files /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96_del
Oct 02 12:47:53 compute-0 nova_compute[256940]: 2025-10-02 12:47:53.235 2 INFO nova.virt.libvirt.driver [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Deletion of /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96_del complete
Oct 02 12:47:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 02 12:47:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000081.scope: Consumed 13.922s CPU time.
Oct 02 12:47:53 compute-0 systemd-machined[210927]: Machine qemu-65-instance-00000081 terminated.
Oct 02 12:47:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:53.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:53 compute-0 ceph-mon[73668]: pgmap v2205: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2696604845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1678523138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.013 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance shutdown successfully after 3 seconds.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.018 2 INFO nova.virt.libvirt.driver [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance destroyed successfully.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.023 2 INFO nova.virt.libvirt.driver [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance destroyed successfully.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.324 2 INFO nova.compute.manager [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 1.87 seconds to destroy the instance on the hypervisor.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.325 2 DEBUG oslo.service.loopingcall [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.326 2 DEBUG nova.compute.manager [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.326 2 DEBUG nova.network.neutron [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.371 2 DEBUG nova.compute.manager [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.371 2 DEBUG oslo_concurrency.lockutils [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.371 2 DEBUG oslo_concurrency.lockutils [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.372 2 DEBUG oslo_concurrency.lockutils [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.372 2 DEBUG nova.compute.manager [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.372 2 WARNING nova.compute.manager [req-ce9cb02e-5f48-40ac-94e2-afcf85130a8a req-bb922c39-2fc7-459b-8f0b-a50f4a04bb63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state None.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.574 2 DEBUG nova.network.neutron [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.880 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deleting instance files /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9_del
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.881 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deletion of /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9_del complete
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.934 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.934 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Instance network_info: |[{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.935 2 DEBUG oslo_concurrency.lockutils [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.935 2 DEBUG nova.network.neutron [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.938 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Start _get_guest_xml network_info=[{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4ce2a695-6966-417a-847c-61fce9c9533c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4ce2a695-6966-417a-847c-61fce9c9533c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2755e56a-ea2b-43ad-8bae-29eb7fc0ed60', 'attached_at': '', 'detached_at': '', 'volume_id': '4ce2a695-6966-417a-847c-61fce9c9533c', 'serial': '4ce2a695-6966-417a-847c-61fce9c9533c'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '92c19cce-94cc-4877-9308-c5354c63f368', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.941 2 WARNING nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.946 2 DEBUG nova.virt.libvirt.host [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.946 2 DEBUG nova.virt.libvirt.host [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.949 2 DEBUG nova.virt.libvirt.host [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.949 2 DEBUG nova.virt.libvirt.host [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.950 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.950 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.951 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.951 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.951 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.951 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.952 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.952 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.952 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.952 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.953 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.953 2 DEBUG nova.virt.hardware [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.976 2 DEBUG nova.storage.rbd_utils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:54 compute-0 nova_compute[256940]: 2025-10-02 12:47:54.979 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.224 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.225 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating image(s)
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.253 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.275 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.303 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.307 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.377 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.379 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.379 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.380 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.406 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330067773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.413 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 de49d646-d8aa-4a19-a2c7-477038c243c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.445 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:47:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.541 2 DEBUG nova.virt.libvirt.vif [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1310446020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1310446020',id=131,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-dckwdwmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:46Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=2755e56a-ea2b-43ad-8bae-29eb7fc0ed60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.542 2 DEBUG nova.network.os_vif_util [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.543 2 DEBUG nova.network.os_vif_util [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.544 2 DEBUG nova.objects.instance [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:55 compute-0 ceph-mon[73668]: pgmap v2206: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2330067773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.612 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <uuid>2755e56a-ea2b-43ad-8bae-29eb7fc0ed60</uuid>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <name>instance-00000083</name>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1310446020</nova:name>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:47:54</nova:creationTime>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:user uuid="e3cd62a3208649c183d3fc2edc1c0f18">tempest-TestInstancesWithCinderVolumes-621751307-project-member</nova:user>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:project uuid="d3e0300f3cf5493d8a9e62e2c4a95767">tempest-TestInstancesWithCinderVolumes-621751307</nova:project>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <nova:port uuid="e45b089f-5ee7-489a-8871-2386b27282c1">
Oct 02 12:47:55 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <system>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="serial">2755e56a-ea2b-43ad-8bae-29eb7fc0ed60</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="uuid">2755e56a-ea2b-43ad-8bae-29eb7fc0ed60</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </system>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <os>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </os>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <features>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </features>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config">
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-4ce2a695-6966-417a-847c-61fce9c9533c">
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:55 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <serial>4ce2a695-6966-417a-847c-61fce9c9533c</serial>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:ec:6e:9c"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <target dev="tape45b089f-5e"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/console.log" append="off"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <video>
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </video>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:47:55 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:47:55 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:47:55 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:47:55 compute-0 nova_compute[256940]: </domain>
Oct 02 12:47:55 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.614 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Preparing to wait for external event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.614 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.614 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.615 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.616 2 DEBUG nova.virt.libvirt.vif [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1310446020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1310446020',id=131,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-dckwdwmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:46Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=2755e56a-ea2b-43ad-8bae-29eb7fc0ed60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.616 2 DEBUG nova.network.os_vif_util [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.617 2 DEBUG nova.network.os_vif_util [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.617 2 DEBUG os_vif [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.619 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape45b089f-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape45b089f-5e, col_values=(('external_ids', {'iface-id': 'e45b089f-5ee7-489a-8871-2386b27282c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:6e:9c', 'vm-uuid': '2755e56a-ea2b-43ad-8bae-29eb7fc0ed60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:55 compute-0 NetworkManager[44981]: <info>  [1759409275.6248] manager: (tape45b089f-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.630 2 INFO os_vif [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e')
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.685 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.685 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.686 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:ec:6e:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.686 2 INFO nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Using config drive
Oct 02 12:47:55 compute-0 nova_compute[256940]: 2025-10-02 12:47:55.713 2 DEBUG nova.storage.rbd_utils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.9 MiB/s wr, 218 op/s
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.256 2 DEBUG nova.network.neutron [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.373 2 INFO nova.compute.manager [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 2.05 seconds to deallocate network for instance.
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.405 2 DEBUG nova.compute.manager [req-bf8d96af-8aad-4e92-8333-060b22452b85 req-f360a060-320e-400b-b45b-1b2e7e1c44be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-deleted-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.459 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.459 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.464 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.523 2 INFO nova.scheduler.client.report [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 1c4025f8-834f-474c-87ee-59600e6ffb96
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.543 2 DEBUG nova.compute.manager [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.543 2 DEBUG oslo_concurrency.lockutils [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.543 2 DEBUG oslo_concurrency.lockutils [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.543 2 DEBUG oslo_concurrency.lockutils [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.544 2 DEBUG nova.compute.manager [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.544 2 WARNING nova.compute.manager [req-31186c34-c263-451e-b394-fc2c2b8c9b1a req-2ac3cbe2-ebff-4461-b277-73779e568b34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state deleted and task_state None.
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.550 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 de49d646-d8aa-4a19-a2c7-477038c243c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.620 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.629 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] resizing rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:47:56 compute-0 ceph-mon[73668]: pgmap v2207: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.9 MiB/s wr, 218 op/s
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.666 2 INFO nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Creating config drive at /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.671 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl077q1jt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.808 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl077q1jt" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.848 2 DEBUG nova.storage.rbd_utils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.853 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:56 compute-0 nova_compute[256940]: 2025-10-02 12:47:56.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.017 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.018 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Ensure instance console log exists: /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.018 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.019 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.019 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.020 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.024 2 WARNING nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.028 2 DEBUG nova.virt.libvirt.host [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.029 2 DEBUG nova.virt.libvirt.host [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.032 2 DEBUG nova.virt.libvirt.host [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.032 2 DEBUG nova.virt.libvirt.host [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.033 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.033 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.034 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.034 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.034 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.034 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.035 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.035 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.035 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.035 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.036 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.036 2 DEBUG nova.virt.hardware [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.036 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.055 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.149 2 DEBUG oslo_concurrency.processutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.150 2 INFO nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Deleting local config drive /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60/disk.config because it was imported into RBD.
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.2177] manager: (tape45b089f-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Oct 02 12:47:57 compute-0 kernel: tape45b089f-5e: entered promiscuous mode
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 ovn_controller[148123]: 2025-10-02T12:47:57Z|00647|binding|INFO|Claiming lport e45b089f-5ee7-489a-8871-2386b27282c1 for this chassis.
Oct 02 12:47:57 compute-0 ovn_controller[148123]: 2025-10-02T12:47:57Z|00648|binding|INFO|e45b089f-5ee7-489a-8871-2386b27282c1: Claiming fa:16:3e:ec:6e:9c 10.100.0.6
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.237 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:6e:9c 10.100.0.6'], port_security=['fa:16:3e:ec:6e:9c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2755e56a-ea2b-43ad-8bae-29eb7fc0ed60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e45b089f-5ee7-489a-8871-2386b27282c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.240 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e45b089f-5ee7-489a-8871-2386b27282c1 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa bound to our chassis
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.242 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.258 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf51aceb-f3ad-4d59-b40c-612bfe0d14a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.259 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa3b4df3-61 in ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.261 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa3b4df3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.261 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[877d97f3-7435-4a8b-9f63-bfbc66e7b214]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 systemd-udevd[343456]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.263 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[429a81bc-4096-4bee-b238-d25ee791c58f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.2748] device (tape45b089f-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.2757] device (tape45b089f-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 02 12:47:57 compute-0 systemd-machined[210927]: New machine qemu-67-instance-00000083.
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.280 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[cec0e545-b9d4-475d-9774-f27dffa524a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 02 12:47:57 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-00000083.
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.308 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[92435634-51d3-4dbe-8489-df268800fa78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 ovn_controller[148123]: 2025-10-02T12:47:57Z|00649|binding|INFO|Setting lport e45b089f-5ee7-489a-8871-2386b27282c1 ovn-installed in OVS
Oct 02 12:47:57 compute-0 ovn_controller[148123]: 2025-10-02T12:47:57Z|00650|binding|INFO|Setting lport e45b089f-5ee7-489a-8871-2386b27282c1 up in Southbound
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.343 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4b23d21d-eb6f-44c5-974a-769a2792eb38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.348 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9eede-7811-4bfd-8acc-b97d4274a3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.3499] manager: (tapaa3b4df3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.385 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba56d9e-4c28-4ee0-9161-9d02980c712e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.390 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[06b5ba2f-17f0-4157-9184-12e1ec0f3884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.4151] device (tapaa3b4df3-60): carrier: link connected
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.424 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d57c30-c051-4eea-87db-28204deb0ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.443 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fa3414-bc27-463d-b76d-4b4a95c0113d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717289, 'reachable_time': 44341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343489, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.462 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e45814-58a3-48c2-a4c0-331532db061f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:817f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717289, 'tstamp': 717289}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343490, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.479 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78b20f82-a174-4b86-a9ef-e3467f95fbe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717289, 'reachable_time': 44341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343498, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389259163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.514 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c938a3f-60b3-43e3-a2a0-4eb077746db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.517 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.547 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.553 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.587 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2dc092-5f7c-449f-8516-82ece0aacc5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.588 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.589 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.589 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-0 kernel: tapaa3b4df3-60: entered promiscuous mode
Oct 02 12:47:57 compute-0 NetworkManager[44981]: <info>  [1759409277.5921] manager: (tapaa3b4df3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.594 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:57 compute-0 ovn_controller[148123]: 2025-10-02T12:47:57Z|00651|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:47:57 compute-0 nova_compute[256940]: 2025-10-02 12:47:57.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.612 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.613 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0eb242-5ef5-45fc-b812-3be39b582400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.615 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:47:57.615 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'env', 'PROCESS_TAG=haproxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa3b4df3-6044-4a53-8039-c9a5c05725aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 808 KiB/s rd, 5.0 MiB/s wr, 222 op/s
Oct 02 12:47:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573455435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:57 compute-0 podman[343603]: 2025-10-02 12:47:57.996000233 +0000 UTC m=+0.045987943 container create 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.018 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.020 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <uuid>de49d646-d8aa-4a19-a2c7-477038c243c9</uuid>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <name>instance-00000081</name>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerShowV247Test-server-866270074</nova:name>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:47:57</nova:creationTime>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:user uuid="537a49488e284c9ab1330c64e8072747">tempest-ServerShowV247Test-568202848-project-member</nova:user>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <nova:project uuid="9768ac969bcb49a08f0cf2563ecd3980">tempest-ServerShowV247Test-568202848</nova:project>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <nova:ports/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <system>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="serial">de49d646-d8aa-4a19-a2c7-477038c243c9</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="uuid">de49d646-d8aa-4a19-a2c7-477038c243c9</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </system>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <os>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </os>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <features>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </features>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de49d646-d8aa-4a19-a2c7-477038c243c9_disk">
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config">
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </source>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:47:58 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/console.log" append="off"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <video>
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </video>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:47:58 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:47:58 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:47:58 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:47:58 compute-0 nova_compute[256940]: </domain>
Oct 02 12:47:58 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:58 compute-0 systemd[1]: Started libpod-conmon-2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb.scope.
Oct 02 12:47:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:47:58 compute-0 podman[343603]: 2025-10-02 12:47:57.973493564 +0000 UTC m=+0.023481294 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402927eae0ce9b9ac99512f564211e8b49d50c16bb139c4cd43e5f5e1b133404/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:58 compute-0 podman[343603]: 2025-10-02 12:47:58.082054924 +0000 UTC m=+0.132042654 container init 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:47:58 compute-0 podman[343603]: 2025-10-02 12:47:58.088749276 +0000 UTC m=+0.138736986 container start 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:47:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [NOTICE]   (343626) : New worker (343628) forked
Oct 02 12:47:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [NOTICE]   (343626) : Loading success.
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.123 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.124 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.124 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Using config drive
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.146 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.151 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409278.1253998, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.151 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] VM Started (Lifecycle Event)
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.187 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'ec2_ids' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.190 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.193 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409278.1255007, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.193 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] VM Paused (Lifecycle Event)
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.237 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.239 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.244 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'keypairs' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.269 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:58 compute-0 ceph-mon[73668]: osdmap e312: 3 total, 3 up, 3 in
Oct 02 12:47:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2389259163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/573455435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.443 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Creating config drive at /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.448 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj3rr72z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:47:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.508 2 DEBUG nova.network.neutron [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.509 2 DEBUG nova.network.neutron [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.541 2 DEBUG oslo_concurrency.lockutils [req-db4e88a3-9d40-43e5-a56c-82884239b67b req-350bcd70-7e92-4d14-ac66-a7d1fbc11b04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.596 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj3rr72z" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.624 2 DEBUG nova.storage.rbd_utils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:58 compute-0 nova_compute[256940]: 2025-10-02 12:47:58.628 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.009 2 DEBUG nova.compute.manager [req-84826195-967a-4d95-a3ee-ccf76f936b2e req-7a03b81f-e14e-4832-9543-0572ea36b16f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.010 2 DEBUG oslo_concurrency.lockutils [req-84826195-967a-4d95-a3ee-ccf76f936b2e req-7a03b81f-e14e-4832-9543-0572ea36b16f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.010 2 DEBUG oslo_concurrency.lockutils [req-84826195-967a-4d95-a3ee-ccf76f936b2e req-7a03b81f-e14e-4832-9543-0572ea36b16f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.011 2 DEBUG oslo_concurrency.lockutils [req-84826195-967a-4d95-a3ee-ccf76f936b2e req-7a03b81f-e14e-4832-9543-0572ea36b16f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.011 2 DEBUG nova.compute.manager [req-84826195-967a-4d95-a3ee-ccf76f936b2e req-7a03b81f-e14e-4832-9543-0572ea36b16f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Processing event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.012 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.016 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409279.0148907, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.017 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] VM Resumed (Lifecycle Event)
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.020 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.025 2 INFO nova.virt.libvirt.driver [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Instance spawned successfully.
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.026 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.057 2 DEBUG oslo_concurrency.processutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config de49d646-d8aa-4a19-a2c7-477038c243c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:59 compute-0 nova_compute[256940]: 2025-10-02 12:47:59.058 2 INFO nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deleting local config drive /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9/disk.config because it was imported into RBD.
Oct 02 12:47:59 compute-0 systemd-machined[210927]: New machine qemu-68-instance-00000081.
Oct 02 12:47:59 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-00000081.
Oct 02 12:47:59 compute-0 ceph-mon[73668]: pgmap v2209: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 808 KiB/s rd, 5.0 MiB/s wr, 222 op/s
Oct 02 12:47:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:59.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:47:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 6.7 MiB/s wr, 256 op/s
Oct 02 12:48:00 compute-0 nova_compute[256940]: 2025-10-02 12:48:00.145 2 DEBUG nova.compute.manager [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:48:00 compute-0 nova_compute[256940]: 2025-10-02 12:48:00.147 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:48:00 compute-0 nova_compute[256940]: 2025-10-02 12:48:00.150 2 INFO nova.virt.libvirt.driver [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance spawned successfully.
Oct 02 12:48:00 compute-0 nova_compute[256940]: 2025-10-02 12:48:00.151 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:48:00 compute-0 nova_compute[256940]: 2025-10-02 12:48:00.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:01 compute-0 ceph-mon[73668]: pgmap v2210: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 6.7 MiB/s wr, 256 op/s
Oct 02 12:48:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.637 2 DEBUG nova.compute.manager [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.638 2 DEBUG oslo_concurrency.lockutils [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.638 2 DEBUG oslo_concurrency.lockutils [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.638 2 DEBUG oslo_concurrency.lockutils [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.638 2 DEBUG nova.compute.manager [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] No waiting events found dispatching network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.639 2 WARNING nova.compute.manager [req-81a08a98-b486-4f7a-ae0f-304250cbe807 req-f4b6c52a-5420-4c78-a511-6a1a389735bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received unexpected event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 for instance with vm_state building and task_state spawning.
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.674 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.678 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.678 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.679 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.679 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.679 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.680 2 DEBUG nova.virt.libvirt.driver [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.686 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.733 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.733 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.734 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.734 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.734 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.735 2 DEBUG nova.virt.libvirt.driver [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.764 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.765 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for de49d646-d8aa-4a19-a2c7-477038c243c9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.765 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409280.1465287, de49d646-d8aa-4a19-a2c7-477038c243c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.765 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] VM Resumed (Lifecycle Event)
Oct 02 12:48:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.821 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.825 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.970 2 INFO nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Took 11.87 seconds to spawn the instance on the hypervisor.
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.971 2 DEBUG nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.972 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.973 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409280.1465726, de49d646-d8aa-4a19-a2c7-477038c243c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.973 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] VM Started (Lifecycle Event)
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.995 2 DEBUG nova.compute.manager [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:01 compute-0 nova_compute[256940]: 2025-10-02 12:48:01.997 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.003 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.063 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.152 2 INFO nova.compute.manager [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Took 16.71 seconds to build instance.
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.158 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.159 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.159 2 DEBUG nova.objects.instance [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:48:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.340 2 DEBUG oslo_concurrency.lockutils [None req-f7c3c465-3152-4d7c-9f4e-97e588148b6a e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:02 compute-0 nova_compute[256940]: 2025-10-02 12:48:02.444 2 DEBUG oslo_concurrency.lockutils [None req-9bb1e138-3af3-41a3-b4e0-1047f8f7a2d0 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3094106259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:03 compute-0 sudo[343754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:03 compute-0 sudo[343754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:03 compute-0 sudo[343754]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:03 compute-0 sudo[343779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:03 compute-0 sudo[343779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:03 compute-0 sudo[343779]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:03 compute-0 podman[343803]: 2025-10-02 12:48:03.496713577 +0000 UTC m=+0.062108397 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:48:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:03.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:03 compute-0 podman[343821]: 2025-10-02 12:48:03.63771487 +0000 UTC m=+0.111641569 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:48:03 compute-0 ceph-mon[73668]: pgmap v2211: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:04 compute-0 ceph-mon[73668]: pgmap v2212: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1389528664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1078617232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:05 compute-0 nova_compute[256940]: 2025-10-02 12:48:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:05 compute-0 nova_compute[256940]: 2025-10-02 12:48:05.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 4.5 MiB/s wr, 445 op/s
Oct 02 12:48:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.526 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "de49d646-d8aa-4a19-a2c7-477038c243c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.526 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.526 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "de49d646-d8aa-4a19-a2c7-477038c243c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.527 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.527 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.528 2 INFO nova.compute.manager [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Terminating instance
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.529 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "refresh_cache-de49d646-d8aa-4a19-a2c7-477038c243c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.529 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquired lock "refresh_cache-de49d646-d8aa-4a19-a2c7-477038c243c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.529 2 DEBUG nova.network.neutron [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.877 2 DEBUG nova.network.neutron [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:48:06 compute-0 nova_compute[256940]: 2025-10-02 12:48:06.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.443 2 DEBUG nova.network.neutron [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.485 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Releasing lock "refresh_cache-de49d646-d8aa-4a19-a2c7-477038c243c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.486 2 DEBUG nova.compute.manager [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:48:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:07.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:07.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.694 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409272.6927073, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.694 2 INFO nova.compute.manager [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Stopped (Lifecycle Event)
Oct 02 12:48:07 compute-0 ceph-mon[73668]: pgmap v2213: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 4.5 MiB/s wr, 445 op/s
Oct 02 12:48:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/659319244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1625439027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:07 compute-0 nova_compute[256940]: 2025-10-02 12:48:07.742 2 DEBUG nova.compute.manager [None req-3e205891-2577-45ee-ac49-f7751b92c1d1 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 2.3 MiB/s wr, 341 op/s
Oct 02 12:48:07 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 02 12:48:07 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000081.scope: Consumed 8.381s CPU time.
Oct 02 12:48:07 compute-0 systemd-machined[210927]: Machine qemu-68-instance-00000081 terminated.
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.108 2 INFO nova.virt.libvirt.driver [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance destroyed successfully.
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.108 2 DEBUG nova.objects.instance [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'resources' on Instance uuid de49d646-d8aa-4a19-a2c7-477038c243c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.263 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.263 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.263 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154427461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.704 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.885 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.886 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.890 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:08 compute-0 nova_compute[256940]: 2025-10-02 12:48:08.890 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.078 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.079 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4133MB free_disk=20.830432891845703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.080 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.080 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.255 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance de49d646-d8aa-4a19-a2c7-477038c243c9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.256 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.256 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.257 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.352 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:09.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:09 compute-0 ceph-mon[73668]: pgmap v2214: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 2.3 MiB/s wr, 341 op/s
Oct 02 12:48:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2255234982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4154427461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:09.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.0 MiB/s wr, 314 op/s
Oct 02 12:48:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974617506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.836 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:09 compute-0 nova_compute[256940]: 2025-10-02 12:48:09.841 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:10 compute-0 nova_compute[256940]: 2025-10-02 12:48:10.044 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:10 compute-0 nova_compute[256940]: 2025-10-02 12:48:10.087 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:48:10 compute-0 nova_compute[256940]: 2025-10-02 12:48:10.088 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:10 compute-0 ovn_controller[148123]: 2025-10-02T12:48:10Z|00652|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:48:10 compute-0 nova_compute[256940]: 2025-10-02 12:48:10.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:10 compute-0 nova_compute[256940]: 2025-10-02 12:48:10.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:10 compute-0 ceph-mon[73668]: pgmap v2215: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.0 MiB/s wr, 314 op/s
Oct 02 12:48:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1974617506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:11 compute-0 nova_compute[256940]: 2025-10-02 12:48:11.084 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:11 compute-0 nova_compute[256940]: 2025-10-02 12:48:11.085 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:11 compute-0 nova_compute[256940]: 2025-10-02 12:48:11.085 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:11.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:11.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 484 KiB/s wr, 336 op/s
Oct 02 12:48:11 compute-0 nova_compute[256940]: 2025-10-02 12:48:11.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/867284691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:13 compute-0 ceph-mon[73668]: pgmap v2216: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 484 KiB/s wr, 336 op/s
Oct 02 12:48:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:13.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.702 2 DEBUG nova.compute.manager [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:48:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 418 KiB/s wr, 200 op/s
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.909 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.910 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.943 2 DEBUG nova.objects.instance [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.965 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.965 2 INFO nova.compute.claims [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:48:13 compute-0 nova_compute[256940]: 2025-10-02 12:48:13.966 2 DEBUG nova.objects.instance [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.000 2 DEBUG nova.objects.instance [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.052 2 INFO nova.compute.resource_tracker [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating resource usage from migration 2484406b-c08a-4a1e-845a-5539b8eb7a58
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.053 2 DEBUG nova.compute.resource_tracker [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Starting to track incoming migration 2484406b-c08a-4a1e-845a-5539b8eb7a58 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.144 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.322 2 INFO nova.virt.libvirt.driver [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deleting instance files /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9_del
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.322 2 INFO nova.virt.libvirt.driver [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deletion of /var/lib/nova/instances/de49d646-d8aa-4a19-a2c7-477038c243c9_del complete
Oct 02 12:48:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151108426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.591 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.596 2 DEBUG nova.compute.provider_tree [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.908 2 DEBUG nova.scheduler.client.report [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.989 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.989 2 INFO nova.compute.manager [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Migrating
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.997 2 INFO nova.compute.manager [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Took 7.51 seconds to destroy the instance on the hypervisor.
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.997 2 DEBUG oslo.service.loopingcall [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.998 2 DEBUG nova.compute.manager [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:48:14 compute-0 nova_compute[256940]: 2025-10-02 12:48:14.998 2 DEBUG nova.network.neutron [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.248 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.283 2 DEBUG nova.network.neutron [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.299 2 DEBUG nova.network.neutron [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:15 compute-0 ceph-mon[73668]: pgmap v2217: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 418 KiB/s wr, 200 op/s
Oct 02 12:48:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3151108426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.314 2 INFO nova.compute.manager [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Took 0.32 seconds to deallocate network for instance.
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.375 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.376 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.443 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.443 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.444 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.444 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.467 2 DEBUG oslo_concurrency.processutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:15.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 481 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Oct 02 12:48:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837585474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:15 compute-0 ovn_controller[148123]: 2025-10-02T12:48:15Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:6e:9c 10.100.0.6
Oct 02 12:48:15 compute-0 ovn_controller[148123]: 2025-10-02T12:48:15Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:6e:9c 10.100.0.6
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.917 2 DEBUG oslo_concurrency.processutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.925 2 DEBUG nova.compute.provider_tree [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:15 compute-0 NetworkManager[44981]: <info>  [1759409295.9319] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct 02 12:48:15 compute-0 NetworkManager[44981]: <info>  [1759409295.9333] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Oct 02 12:48:15 compute-0 nova_compute[256940]: 2025-10-02 12:48:15.962 2 DEBUG nova.scheduler.client.report [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.021 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.067 2 INFO nova.scheduler.client.report [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Deleted allocations for instance de49d646-d8aa-4a19-a2c7-477038c243c9
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:16 compute-0 ovn_controller[148123]: 2025-10-02T12:48:16Z|00653|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.154 2 DEBUG oslo_concurrency.lockutils [None req-b5ae980e-ad4e-4f86-8136-23e2f1cae47d 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "de49d646-d8aa-4a19-a2c7-477038c243c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1837585474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:16 compute-0 nova_compute[256940]: 2025-10-02 12:48:16.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:17 compute-0 podman[343967]: 2025-10-02 12:48:17.397322189 +0000 UTC m=+0.064523829 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:48:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:17 compute-0 podman[343966]: 2025-10-02 12:48:17.420069283 +0000 UTC m=+0.086976856 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:48:17 compute-0 ceph-mon[73668]: pgmap v2218: 305 pgs: 305 active+clean; 481 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Oct 02 12:48:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:17 compute-0 nova_compute[256940]: 2025-10-02 12:48:17.625 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:17.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 237 op/s
Oct 02 12:48:18 compute-0 nova_compute[256940]: 2025-10-02 12:48:18.455 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:18 compute-0 nova_compute[256940]: 2025-10-02 12:48:18.455 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:48:18 compute-0 sshd-session[344004]: Accepted publickey for nova from 192.168.122.102 port 57256 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:48:18 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:48:18 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:48:18 compute-0 systemd-logind[820]: New session 69 of user nova.
Oct 02 12:48:18 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:48:18 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:48:18 compute-0 systemd[344009]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:48:18 compute-0 systemd[344009]: Queued start job for default target Main User Target.
Oct 02 12:48:18 compute-0 systemd[344009]: Created slice User Application Slice.
Oct 02 12:48:18 compute-0 systemd[344009]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:48:18 compute-0 systemd[344009]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:48:18 compute-0 systemd[344009]: Reached target Paths.
Oct 02 12:48:18 compute-0 systemd[344009]: Reached target Timers.
Oct 02 12:48:18 compute-0 systemd[344009]: Starting D-Bus User Message Bus Socket...
Oct 02 12:48:18 compute-0 systemd[344009]: Starting Create User's Volatile Files and Directories...
Oct 02 12:48:18 compute-0 systemd[344009]: Finished Create User's Volatile Files and Directories.
Oct 02 12:48:18 compute-0 systemd[344009]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:48:18 compute-0 systemd[344009]: Reached target Sockets.
Oct 02 12:48:18 compute-0 systemd[344009]: Reached target Basic System.
Oct 02 12:48:18 compute-0 systemd[344009]: Reached target Main User Target.
Oct 02 12:48:18 compute-0 systemd[344009]: Startup finished in 156ms.
Oct 02 12:48:18 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:48:18 compute-0 systemd[1]: Started Session 69 of User nova.
Oct 02 12:48:18 compute-0 sshd-session[344004]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:48:18 compute-0 sshd-session[344024]: Received disconnect from 192.168.122.102 port 57256:11: disconnected by user
Oct 02 12:48:18 compute-0 sshd-session[344024]: Disconnected from user nova 192.168.122.102 port 57256
Oct 02 12:48:18 compute-0 sshd-session[344004]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:48:18 compute-0 systemd[1]: session-69.scope: Deactivated successfully.
Oct 02 12:48:18 compute-0 systemd-logind[820]: Session 69 logged out. Waiting for processes to exit.
Oct 02 12:48:18 compute-0 systemd-logind[820]: Removed session 69.
Oct 02 12:48:19 compute-0 sshd-session[344026]: Accepted publickey for nova from 192.168.122.102 port 57258 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:48:19 compute-0 systemd-logind[820]: New session 71 of user nova.
Oct 02 12:48:19 compute-0 systemd[1]: Started Session 71 of User nova.
Oct 02 12:48:19 compute-0 sshd-session[344026]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:48:19 compute-0 sshd-session[344029]: Received disconnect from 192.168.122.102 port 57258:11: disconnected by user
Oct 02 12:48:19 compute-0 sshd-session[344029]: Disconnected from user nova 192.168.122.102 port 57258
Oct 02 12:48:19 compute-0 sshd-session[344026]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:48:19 compute-0 systemd[1]: session-71.scope: Deactivated successfully.
Oct 02 12:48:19 compute-0 systemd-logind[820]: Session 71 logged out. Waiting for processes to exit.
Oct 02 12:48:19 compute-0 systemd-logind[820]: Removed session 71.
Oct 02 12:48:19 compute-0 ceph-mon[73668]: pgmap v2219: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 237 op/s
Oct 02 12:48:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:19.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:19.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 512 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 242 op/s
Oct 02 12:48:20 compute-0 nova_compute[256940]: 2025-10-02 12:48:20.047 2 INFO nova.network.neutron [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating port ca09038c-def5-41c9-a98a-c7837558526f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:48:20 compute-0 nova_compute[256940]: 2025-10-02 12:48:20.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.126 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.127 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.127 2 DEBUG nova.network.neutron [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.281 2 DEBUG nova.compute.manager [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-changed-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.282 2 DEBUG nova.compute.manager [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing instance network info cache due to event network-changed-ca09038c-def5-41c9-a98a-c7837558526f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.282 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.451 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:21.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:21 compute-0 ceph-mon[73668]: pgmap v2220: 305 pgs: 305 active+clean; 512 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 242 op/s
Oct 02 12:48:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Oct 02 12:48:21 compute-0 nova_compute[256940]: 2025-10-02 12:48:21.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:23 compute-0 nova_compute[256940]: 2025-10-02 12:48:23.107 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409288.10621, de49d646-d8aa-4a19-a2c7-477038c243c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:23 compute-0 nova_compute[256940]: 2025-10-02 12:48:23.107 2 INFO nova.compute.manager [-] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] VM Stopped (Lifecycle Event)
Oct 02 12:48:23 compute-0 ceph-mon[73668]: pgmap v2221: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Oct 02 12:48:23 compute-0 sudo[344033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:23 compute-0 sudo[344033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:23 compute-0 sudo[344033]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:23 compute-0 sudo[344058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:23 compute-0 sudo[344058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:23 compute-0 sudo[344058]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:23.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 833 KiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct 02 12:48:24 compute-0 nova_compute[256940]: 2025-10-02 12:48:24.237 2 DEBUG nova.compute.manager [None req-3b81dce7-5578-4111-8b79-eadad616cf90 - - - - - -] [instance: de49d646-d8aa-4a19-a2c7-477038c243c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:24 compute-0 nova_compute[256940]: 2025-10-02 12:48:24.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:25 compute-0 ceph-mon[73668]: pgmap v2222: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 833 KiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct 02 12:48:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4231190045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:25.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:25.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:25 compute-0 nova_compute[256940]: 2025-10-02 12:48:25.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 454 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.0 MiB/s wr, 227 op/s
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.138 2 DEBUG nova.network.neutron [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.396 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.399 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.399 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing network info cache for port ca09038c-def5-41c9-a98a-c7837558526f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:26.486 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:26.487 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:26.487 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.907 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.909 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.910 2 INFO nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Creating image(s)
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.947 2 DEBUG nova.storage.rbd_utils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(nova-resize) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:48:26 compute-0 nova_compute[256940]: 2025-10-02 12:48:26.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 02 12:48:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 02 12:48:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 02 12:48:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:27.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:27 compute-0 ceph-mon[73668]: pgmap v2223: 305 pgs: 305 active+clean; 454 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.0 MiB/s wr, 227 op/s
Oct 02 12:48:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 577 KiB/s rd, 2.7 MiB/s wr, 134 op/s
Oct 02 12:48:27 compute-0 sudo[344121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:27 compute-0 sudo[344121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-0 sudo[344121]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-0 sudo[344146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:27 compute-0 sudo[344146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-0 sudo[344146]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-0 sudo[344171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:27 compute-0 sudo[344171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-0 sudo[344171]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-0 sudo[344196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:48:27 compute-0 sudo[344196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-0 nova_compute[256940]: 2025-10-02 12:48:27.986 2 DEBUG nova.objects.instance [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:28 compute-0 sudo[344196]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:48:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 sudo[344276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:28 compute-0 sudo[344276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-0 sudo[344276]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-0 sudo[344301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:28 compute-0 sudo[344301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-0 sudo[344301]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.420 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.421 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Ensure instance console log exists: /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.421 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.421 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.422 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.424 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start _get_guest_xml network_info=[{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.428 2 WARNING nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.434 2 DEBUG nova.virt.libvirt.host [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.434 2 DEBUG nova.virt.libvirt.host [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.449 2 DEBUG nova.virt.libvirt.host [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.450 2 DEBUG nova.virt.libvirt.host [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.451 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.451 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.451 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.451 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.452 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.453 2 DEBUG nova.virt.hardware [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.453 2 DEBUG nova.objects.instance [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:28 compute-0 sudo[344326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:28 compute-0 sudo[344326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-0 sudo[344326]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-0 nova_compute[256940]: 2025-10-02 12:48:28.484 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:28 compute-0 sudo[344352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:48:28 compute-0 sudo[344352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:48:28
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Oct 02 12:48:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:48:28 compute-0 ceph-mon[73668]: osdmap e313: 3 total, 3 up, 3 in
Oct 02 12:48:28 compute-0 ceph-mon[73668]: pgmap v2225: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 577 KiB/s rd, 2.7 MiB/s wr, 134 op/s
Oct 02 12:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:29 compute-0 sudo[344352]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:48:29 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:48:29 compute-0 systemd[344009]: Activating special unit Exit the Session...
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped target Main User Target.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped target Basic System.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped target Paths.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped target Sockets.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped target Timers.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:48:29 compute-0 systemd[344009]: Closed D-Bus User Message Bus Socket.
Oct 02 12:48:29 compute-0 systemd[344009]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:48:29 compute-0 systemd[344009]: Removed slice User Application Slice.
Oct 02 12:48:29 compute-0 systemd[344009]: Reached target Shutdown.
Oct 02 12:48:29 compute-0 systemd[344009]: Finished Exit the Session.
Oct 02 12:48:29 compute-0 systemd[344009]: Reached target Exit the Session.
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.308 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:29 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:48:29 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:48:29 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:48:29 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:48:29 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:48:29 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:48:29 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.349 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:29.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:29.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 12:48:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589268743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.813 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.814 2 DEBUG nova.virt.libvirt.vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.815 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.815 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.818 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <uuid>2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</uuid>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <name>instance-0000007c</name>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherB-server-2011106745</nova:name>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:48:28</nova:creationTime>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <nova:port uuid="ca09038c-def5-41c9-a98a-c7837558526f">
Oct 02 12:48:29 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <system>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="serial">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="uuid">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </system>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <os>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </os>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <features>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </features>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk">
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </source>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config">
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </source>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:48:29 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:21:32:15"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <target dev="tapca09038c-de"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/console.log" append="off"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <video>
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </video>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:48:29 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:48:29 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:48:29 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:48:29 compute-0 nova_compute[256940]: </domain>
Oct 02 12:48:29 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.819 2 DEBUG nova.virt.libvirt.vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.819 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.819 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.820 2 DEBUG os_vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca09038c-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca09038c-de, col_values=(('external_ids', {'iface-id': 'ca09038c-def5-41c9-a98a-c7837558526f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:32:15', 'vm-uuid': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:29 compute-0 NetworkManager[44981]: <info>  [1759409309.8507] manager: (tapca09038c-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:29 compute-0 nova_compute[256940]: 2025-10-02 12:48:29.858 2 INFO os_vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')
Oct 02 12:48:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2481788006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/589268743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:31.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:31 compute-0 ceph-mon[73668]: pgmap v2226: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 12:48:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:31.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:48:31 compute-0 nova_compute[256940]: 2025-10-02 12:48:31.969 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:31 compute-0 nova_compute[256940]: 2025-10-02 12:48:31.969 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:31 compute-0 nova_compute[256940]: 2025-10-02 12:48:31.969 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:21:32:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:48:31 compute-0 nova_compute[256940]: 2025-10-02 12:48:31.970 2 INFO nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Using config drive
Oct 02 12:48:31 compute-0 nova_compute[256940]: 2025-10-02 12:48:31.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:32 compute-0 nova_compute[256940]: 2025-10-02 12:48:32.005 2 DEBUG nova.compute.manager [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:48:32 compute-0 nova_compute[256940]: 2025-10-02 12:48:32.005 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e6ac67ab-47bc-44f6-ace6-e5048eeeb753 does not exist
Oct 02 12:48:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0128ac4d-d90f-4571-afb8-c3211360dde3 does not exist
Oct 02 12:48:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6a2aa8de-958f-4553-a289-5bb3c7702442 does not exist
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:48:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:32 compute-0 sudo[344491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:32 compute-0 sudo[344491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:32 compute-0 sudo[344491]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:32 compute-0 sudo[344516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:32 compute-0 sudo[344516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:32 compute-0 sudo[344516]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:32 compute-0 sudo[344541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:32 compute-0 sudo[344541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:32 compute-0 sudo[344541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:32 compute-0 sudo[344566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:48:32 compute-0 sudo[344566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:32.685 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:32 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:32.686 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:48:32 compute-0 nova_compute[256940]: 2025-10-02 12:48:32.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.787824934 +0000 UTC m=+0.046326990 container create 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:48:32 compute-0 systemd[1]: Started libpod-conmon-8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda.scope.
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.766675131 +0000 UTC m=+0.025177227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.884763065 +0000 UTC m=+0.143265161 container init 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.893778727 +0000 UTC m=+0.152280793 container start 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.898377155 +0000 UTC m=+0.156879241 container attach 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:48:32 compute-0 vigilant_noyce[344650]: 167 167
Oct 02 12:48:32 compute-0 systemd[1]: libpod-8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda.scope: Deactivated successfully.
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.900451218 +0000 UTC m=+0.158953294 container died 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab6037aa6db0a3b454dbe95d1118b194e25c922d94a09375ad4d28ffe69754f-merged.mount: Deactivated successfully.
Oct 02 12:48:32 compute-0 podman[344634]: 2025-10-02 12:48:32.971659478 +0000 UTC m=+0.230161544 container remove 8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:48:32 compute-0 systemd[1]: libpod-conmon-8fad455153e608cd591fb68448d32443f15a912774521d01d87c19f9469deeda.scope: Deactivated successfully.
Oct 02 12:48:33 compute-0 ceph-mon[73668]: pgmap v2227: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:48:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:33 compute-0 podman[344675]: 2025-10-02 12:48:33.188940661 +0000 UTC m=+0.074041613 container create fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:48:33 compute-0 podman[344675]: 2025-10-02 12:48:33.136105954 +0000 UTC m=+0.021206936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:33 compute-0 systemd[1]: Started libpod-conmon-fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8.scope.
Oct 02 12:48:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:33 compute-0 podman[344675]: 2025-10-02 12:48:33.331306829 +0000 UTC m=+0.216407801 container init fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:48:33 compute-0 podman[344675]: 2025-10-02 12:48:33.338453883 +0000 UTC m=+0.223554835 container start fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:48:33 compute-0 podman[344675]: 2025-10-02 12:48:33.343255276 +0000 UTC m=+0.228356228 container attach fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:48:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:33.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:34 compute-0 podman[344702]: 2025-10-02 12:48:34.420406865 +0000 UTC m=+0.088919686 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:48:34 compute-0 podman[344703]: 2025-10-02 12:48:34.425965698 +0000 UTC m=+0.094546151 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:48:34 compute-0 kind_taussig[344691]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:48:34 compute-0 kind_taussig[344691]: --> relative data size: 1.0
Oct 02 12:48:34 compute-0 kind_taussig[344691]: --> All data devices are unavailable
Oct 02 12:48:34 compute-0 systemd[1]: libpod-fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8.scope: Deactivated successfully.
Oct 02 12:48:34 compute-0 podman[344675]: 2025-10-02 12:48:34.595274438 +0000 UTC m=+1.480375420 container died fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:34.689 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:34 compute-0 nova_compute[256940]: 2025-10-02 12:48:34.794 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated VIF entry in instance network info cache for port ca09038c-def5-41c9-a98a-c7837558526f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:34 compute-0 nova_compute[256940]: 2025-10-02 12:48:34.794 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:34 compute-0 nova_compute[256940]: 2025-10-02 12:48:34.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:34 compute-0 nova_compute[256940]: 2025-10-02 12:48:34.857 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e09d50ba7429a03ef7b5ee60e80c8cf8a78bb24858de7f24a87cec91e1dc31-merged.mount: Deactivated successfully.
Oct 02 12:48:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:35.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 81 KiB/s wr, 25 op/s
Oct 02 12:48:36 compute-0 ceph-mon[73668]: pgmap v2228: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:36 compute-0 podman[344675]: 2025-10-02 12:48:36.667996267 +0000 UTC m=+3.553097209 container remove fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:48:36 compute-0 sudo[344566]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:36 compute-0 sudo[344762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:36 compute-0 sudo[344762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:36 compute-0 systemd[1]: libpod-conmon-fb665daa77399ed1cf552e0fce434ea91d2dce1f8390ecd4d08a37a116f576e8.scope: Deactivated successfully.
Oct 02 12:48:36 compute-0 sudo[344762]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:36 compute-0 nova_compute[256940]: 2025-10-02 12:48:36.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:36 compute-0 sudo[344787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:36 compute-0 sudo[344787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:36 compute-0 sudo[344787]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:36 compute-0 sudo[344812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:36 compute-0 sudo[344812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:36 compute-0 sudo[344812]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:36 compute-0 sudo[344837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:48:36 compute-0 sudo[344837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:36 compute-0 nova_compute[256940]: 2025-10-02 12:48:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.282072626 +0000 UTC m=+0.023034883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:37 compute-0 ceph-mon[73668]: pgmap v2229: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 81 KiB/s wr, 25 op/s
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.43326402 +0000 UTC m=+0.174226227 container create f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:48:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:37 compute-0 systemd[1]: Started libpod-conmon-f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645.scope.
Oct 02 12:48:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.552978197 +0000 UTC m=+0.293940414 container init f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.5655606 +0000 UTC m=+0.306522807 container start f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:48:37 compute-0 hopeful_khorana[344917]: 167 167
Oct 02 12:48:37 compute-0 systemd[1]: libpod-f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645.scope: Deactivated successfully.
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.580812022 +0000 UTC m=+0.321774259 container attach f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.582087695 +0000 UTC m=+0.323049902 container died f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:48:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f41219e0731e42f5e0cea9d842b5f09f6083926e9ec7e60328a2f3718637f2cc-merged.mount: Deactivated successfully.
Oct 02 12:48:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 219 KiB/s wr, 24 op/s
Oct 02 12:48:37 compute-0 podman[344900]: 2025-10-02 12:48:37.801590594 +0000 UTC m=+0.542552801 container remove f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:48:37 compute-0 systemd[1]: libpod-conmon-f06333464f8e34e1cccd043c2650557adf5484aa1f4d63866877a86b5b716645.scope: Deactivated successfully.
Oct 02 12:48:37 compute-0 podman[344943]: 2025-10-02 12:48:37.998900574 +0000 UTC m=+0.052941231 container create 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:48:38 compute-0 systemd[1]: Started libpod-conmon-17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36.scope.
Oct 02 12:48:38 compute-0 podman[344943]: 2025-10-02 12:48:37.973683887 +0000 UTC m=+0.027724564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87833cf3bdc12dc66d3bbabdff5728eacd44cf206e83e277b4476c3e4809b03c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87833cf3bdc12dc66d3bbabdff5728eacd44cf206e83e277b4476c3e4809b03c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87833cf3bdc12dc66d3bbabdff5728eacd44cf206e83e277b4476c3e4809b03c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87833cf3bdc12dc66d3bbabdff5728eacd44cf206e83e277b4476c3e4809b03c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:38 compute-0 podman[344943]: 2025-10-02 12:48:38.092060168 +0000 UTC m=+0.146100825 container init 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:48:38 compute-0 podman[344943]: 2025-10-02 12:48:38.099680144 +0000 UTC m=+0.153720801 container start 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:48:38 compute-0 podman[344943]: 2025-10-02 12:48:38.103365579 +0000 UTC m=+0.157406266 container attach 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:48:38 compute-0 eloquent_bell[344960]: {
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:     "1": [
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:         {
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "devices": [
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "/dev/loop3"
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             ],
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "lv_name": "ceph_lv0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "lv_size": "7511998464",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "name": "ceph_lv0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "tags": {
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.cluster_name": "ceph",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.crush_device_class": "",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.encrypted": "0",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.osd_id": "1",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.type": "block",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:                 "ceph.vdo": "0"
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             },
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "type": "block",
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:             "vg_name": "ceph_vg0"
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:         }
Oct 02 12:48:38 compute-0 eloquent_bell[344960]:     ]
Oct 02 12:48:38 compute-0 eloquent_bell[344960]: }
Oct 02 12:48:38 compute-0 systemd[1]: libpod-17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36.scope: Deactivated successfully.
Oct 02 12:48:38 compute-0 conmon[344960]: conmon 17832b79a6831f30118c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36.scope/container/memory.events
Oct 02 12:48:38 compute-0 podman[344943]: 2025-10-02 12:48:38.982893909 +0000 UTC m=+1.036934566 container died 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 12:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-87833cf3bdc12dc66d3bbabdff5728eacd44cf206e83e277b4476c3e4809b03c-merged.mount: Deactivated successfully.
Oct 02 12:48:39 compute-0 podman[344943]: 2025-10-02 12:48:39.087325692 +0000 UTC m=+1.141366339 container remove 17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bell, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:39 compute-0 systemd[1]: libpod-conmon-17832b79a6831f30118c5705066bb943ed951b9917cc8cf1191dda5359e4ef36.scope: Deactivated successfully.
Oct 02 12:48:39 compute-0 sudo[344837]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:39 compute-0 sudo[344982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:39 compute-0 sudo[344982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:39 compute-0 sudo[344982]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:39 compute-0 sudo[345007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:39 compute-0 sudo[345007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:39 compute-0 sudo[345007]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:39 compute-0 sudo[345032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:39 compute-0 sudo[345032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:39 compute-0 sudo[345032]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:39 compute-0 sudo[345057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:48:39 compute-0 sudo[345057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:39 compute-0 ceph-mon[73668]: pgmap v2230: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 219 KiB/s wr, 24 op/s
Oct 02 12:48:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:39.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.739251724 +0000 UTC m=+0.041384074 container create 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:48:39 compute-0 systemd[1]: Started libpod-conmon-61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc.scope.
Oct 02 12:48:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 201 KiB/s wr, 24 op/s
Oct 02 12:48:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.719493607 +0000 UTC m=+0.021625977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.842493477 +0000 UTC m=+0.144625887 container init 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.849951469 +0000 UTC m=+0.152083829 container start 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:48:39 compute-0 nova_compute[256940]: 2025-10-02 12:48:39.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:39 compute-0 naughty_fermat[345142]: 167 167
Oct 02 12:48:39 compute-0 systemd[1]: libpod-61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc.scope: Deactivated successfully.
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.884141307 +0000 UTC m=+0.186273677 container attach 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.885217785 +0000 UTC m=+0.187350135 container died 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ca0cc192037fcca3d0b21b3d64e360b20d28a9661eeb749e6caa9b36766078-merged.mount: Deactivated successfully.
Oct 02 12:48:39 compute-0 podman[345125]: 2025-10-02 12:48:39.943370478 +0000 UTC m=+0.245502828 container remove 61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermat, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:48:39 compute-0 systemd[1]: libpod-conmon-61e34bef19272dc8d053d4b0764ac937ad7b77ef279221e02b2870e463f867fc.scope: Deactivated successfully.
Oct 02 12:48:40 compute-0 podman[345169]: 2025-10-02 12:48:40.111252062 +0000 UTC m=+0.041054206 container create 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:48:40 compute-0 systemd[1]: Started libpod-conmon-359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2.scope.
Oct 02 12:48:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fded6fa67104c642cb38ff15650ea1567d468e2a0af7090f59d15d7405b9390a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fded6fa67104c642cb38ff15650ea1567d468e2a0af7090f59d15d7405b9390a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fded6fa67104c642cb38ff15650ea1567d468e2a0af7090f59d15d7405b9390a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fded6fa67104c642cb38ff15650ea1567d468e2a0af7090f59d15d7405b9390a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:40 compute-0 podman[345169]: 2025-10-02 12:48:40.185585892 +0000 UTC m=+0.115388056 container init 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:48:40 compute-0 podman[345169]: 2025-10-02 12:48:40.093914277 +0000 UTC m=+0.023716441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:48:40 compute-0 podman[345169]: 2025-10-02 12:48:40.192820688 +0000 UTC m=+0.122622822 container start 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:48:40 compute-0 podman[345169]: 2025-10-02 12:48:40.195600839 +0000 UTC m=+0.125402983 container attach 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00436299698725107 of space, bias 1.0, pg target 1.308899096175321 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006584026905360134 of space, bias 1.0, pg target 1.96862404470268 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:48:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:48:40 compute-0 ceph-mon[73668]: pgmap v2231: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 201 KiB/s wr, 24 op/s
Oct 02 12:48:41 compute-0 festive_ride[345185]: {
Oct 02 12:48:41 compute-0 festive_ride[345185]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:48:41 compute-0 festive_ride[345185]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:48:41 compute-0 festive_ride[345185]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:48:41 compute-0 festive_ride[345185]:         "osd_id": 1,
Oct 02 12:48:41 compute-0 festive_ride[345185]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:48:41 compute-0 festive_ride[345185]:         "type": "bluestore"
Oct 02 12:48:41 compute-0 festive_ride[345185]:     }
Oct 02 12:48:41 compute-0 festive_ride[345185]: }
Oct 02 12:48:41 compute-0 systemd[1]: libpod-359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2.scope: Deactivated successfully.
Oct 02 12:48:41 compute-0 podman[345207]: 2025-10-02 12:48:41.097446553 +0000 UTC m=+0.023596647 container died 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fded6fa67104c642cb38ff15650ea1567d468e2a0af7090f59d15d7405b9390a-merged.mount: Deactivated successfully.
Oct 02 12:48:41 compute-0 podman[345207]: 2025-10-02 12:48:41.150734002 +0000 UTC m=+0.076884066 container remove 359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:48:41 compute-0 systemd[1]: libpod-conmon-359db171036bc010cd591386a62e5f300253d3b651acff069a97715bf9bbe4d2.scope: Deactivated successfully.
Oct 02 12:48:41 compute-0 sudo[345057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:48:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:48:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 32eb8303-0f16-4150-9c99-ac65dc4922d2 does not exist
Oct 02 12:48:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 531908f7-fe0c-47f7-84d0-cc01df262ebd does not exist
Oct 02 12:48:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 03e4699e-38c1-487d-b9e1-fa4a531e4a3a does not exist
Oct 02 12:48:41 compute-0 sudo[345222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:41 compute-0 sudo[345222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:41 compute-0 sudo[345222]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:41 compute-0 sudo[345247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:48:41 compute-0 sudo[345247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:41 compute-0 sudo[345247]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:41.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:48:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:48:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 217 KiB/s wr, 26 op/s
Oct 02 12:48:41 compute-0 nova_compute[256940]: 2025-10-02 12:48:41.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 02 12:48:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 02 12:48:42 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 02 12:48:43 compute-0 ceph-mon[73668]: pgmap v2232: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 217 KiB/s wr, 26 op/s
Oct 02 12:48:43 compute-0 ceph-mon[73668]: osdmap e314: 3 total, 3 up, 3 in
Oct 02 12:48:43 compute-0 sudo[345273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:43 compute-0 sudo[345273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:43 compute-0 sudo[345273]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:43 compute-0 sudo[345298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:43 compute-0 sudo[345298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:43 compute-0 sudo[345298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 245 KiB/s wr, 15 op/s
Oct 02 12:48:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4245125267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:44 compute-0 nova_compute[256940]: 2025-10-02 12:48:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:45.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:45.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 245 KiB/s wr, 23 op/s
Oct 02 12:48:45 compute-0 ceph-mon[73668]: pgmap v2234: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 245 KiB/s wr, 15 op/s
Oct 02 12:48:46 compute-0 nova_compute[256940]: 2025-10-02 12:48:46.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:47 compute-0 ceph-mon[73668]: pgmap v2235: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 245 KiB/s wr, 23 op/s
Oct 02 12:48:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3566954741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2530885159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:47.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 46 KiB/s wr, 30 op/s
Oct 02 12:48:48 compute-0 podman[345326]: 2025-10-02 12:48:48.385216604 +0000 UTC m=+0.058529275 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:48:48 compute-0 podman[345325]: 2025-10-02 12:48:48.385373378 +0000 UTC m=+0.058685729 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:49 compute-0 ceph-mon[73668]: pgmap v2236: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 46 KiB/s wr, 30 op/s
Oct 02 12:48:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.709 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.747 2 DEBUG oslo_concurrency.lockutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.748 2 DEBUG oslo_concurrency.lockutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.748 2 DEBUG nova.network.neutron [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.748 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'info_cache' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 506 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct 02 12:48:49 compute-0 nova_compute[256940]: 2025-10-02 12:48:49.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 ceph-mon[73668]: pgmap v2237: 305 pgs: 305 active+clean; 506 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct 02 12:48:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1118640825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2709375719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.578 2 DEBUG nova.network.neutron [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:51.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.642 2 DEBUG oslo_concurrency.lockutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.671 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance destroyed successfully.
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.672 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:51.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.708 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.741 2 DEBUG nova.virt.libvirt.vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.742 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.742 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.743 2 DEBUG os_vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca09038c-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.751 2 INFO os_vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.756 2 DEBUG nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start _get_guest_xml network_info=[{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.760 2 WARNING nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.765 2 DEBUG nova.virt.libvirt.host [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.765 2 DEBUG nova.virt.libvirt.host [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.769 2 DEBUG nova.virt.libvirt.host [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.769 2 DEBUG nova.virt.libvirt.host [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.770 2 DEBUG nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.770 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.771 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.771 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.771 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.771 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.771 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.772 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.772 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.772 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.772 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.772 2 DEBUG nova.virt.hardware [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.773 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.833 2 DEBUG oslo_concurrency.processutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:51 compute-0 nova_compute[256940]: 2025-10-02 12:48:51.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128083608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.268 2 DEBUG oslo_concurrency.processutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.300 2 DEBUG oslo_concurrency.processutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 02 12:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 02 12:48:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/67991890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1656398606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3128083608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 02 12:48:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908904066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.749 2 DEBUG oslo_concurrency.processutils [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.750 2 DEBUG nova.virt.libvirt.vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.751 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.752 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.753 2 DEBUG nova.objects.instance [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.782 2 DEBUG nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <uuid>2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</uuid>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <name>instance-0000007c</name>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherB-server-2011106745</nova:name>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:48:51</nova:creationTime>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <nova:port uuid="ca09038c-def5-41c9-a98a-c7837558526f">
Oct 02 12:48:52 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <system>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="serial">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="uuid">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </system>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <os>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </os>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <features>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </features>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk">
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </source>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config">
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </source>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:48:52 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:21:32:15"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <target dev="tapca09038c-de"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/console.log" append="off"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <video>
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </video>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:48:52 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:48:52 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:48:52 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:48:52 compute-0 nova_compute[256940]: </domain>
Oct 02 12:48:52 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.785 2 DEBUG nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.785 2 DEBUG nova.virt.libvirt.driver [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.786 2 DEBUG nova.virt.libvirt.vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.787 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.788 2 DEBUG nova.network.os_vif_util [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.788 2 DEBUG os_vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca09038c-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca09038c-de, col_values=(('external_ids', {'iface-id': 'ca09038c-def5-41c9-a98a-c7837558526f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:32:15', 'vm-uuid': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 NetworkManager[44981]: <info>  [1759409332.7958] manager: (tapca09038c-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.803 2 INFO os_vif [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')
Oct 02 12:48:52 compute-0 kernel: tapca09038c-de: entered promiscuous mode
Oct 02 12:48:52 compute-0 ovn_controller[148123]: 2025-10-02T12:48:52Z|00654|binding|INFO|Claiming lport ca09038c-def5-41c9-a98a-c7837558526f for this chassis.
Oct 02 12:48:52 compute-0 ovn_controller[148123]: 2025-10-02T12:48:52Z|00655|binding|INFO|ca09038c-def5-41c9-a98a-c7837558526f: Claiming fa:16:3e:21:32:15 10.100.0.13
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 NetworkManager[44981]: <info>  [1759409332.8716] manager: (tapca09038c-de): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.880 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:32:15 10.100.0.13'], port_security=['fa:16:3e:21:32:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '6', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ca09038c-def5-41c9-a98a-c7837558526f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.881 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ca09038c-def5-41c9-a98a-c7837558526f in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.883 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:48:52 compute-0 ovn_controller[148123]: 2025-10-02T12:48:52Z|00656|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f ovn-installed in OVS
Oct 02 12:48:52 compute-0 ovn_controller[148123]: 2025-10-02T12:48:52Z|00657|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f up in Southbound
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 nova_compute[256940]: 2025-10-02 12:48:52.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.900 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[722403f0-656f-4b7c-9529-d77c4d376456]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.901 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.902 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[be47dd04-f431-4136-99ee-fd1909bc50f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.904 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6b618553-739d-4bad-9639-34f2dcb37511]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 systemd-udevd[345441]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:48:52 compute-0 systemd-machined[210927]: New machine qemu-69-instance-0000007c.
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.920 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[74a26751-801a-4ecb-b2f2-073c1a0ba849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 NetworkManager[44981]: <info>  [1759409332.9249] device (tapca09038c-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:48:52 compute-0 NetworkManager[44981]: <info>  [1759409332.9258] device (tapca09038c-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:48:52 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000007c.
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.935 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6d639d-b785-4647-9f44-085270477b49]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.966 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9859582a-3793-4830-a541-418574683cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:52 compute-0 NetworkManager[44981]: <info>  [1759409332.9745] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Oct 02 12:48:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:52.975 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3e133efc-b3d8-44df-9652-167d2e00cc36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.013 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dc42f714-e095-4443-920d-b59b4bfb5f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.016 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c416670e-c910-4cee-ba86-75da27a0aa0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 NetworkManager[44981]: <info>  [1759409333.0425] device (tap9266ebd7-30): carrier: link connected
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.048 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6062cc-00d1-433a-91c1-02954d2d9dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.065 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a58f7d7-4de1-4441-ad76-a2769bc337e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722852, 'reachable_time': 21821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345473, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.085 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0db465be-8af9-4890-9325-eb64b31b5ecc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722852, 'tstamp': 722852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345474, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.104 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[db10b095-e92a-455c-aa04-851e049908d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722852, 'reachable_time': 21821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345475, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.145 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[56c29845-7425-402b-8e2b-0d6d5d55b122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.206 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e9a36f-4c63-45cb-80ac-658af3a8ef9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.208 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.208 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.208 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:53 compute-0 NetworkManager[44981]: <info>  [1759409333.2106] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct 02 12:48:53 compute-0 kernel: tap9266ebd7-30: entered promiscuous mode
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.214 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:53 compute-0 ovn_controller[148123]: 2025-10-02T12:48:53Z|00658|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.233 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.234 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0a76e5-2212-428e-90fc-e53dffadda87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.235 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:48:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:53.235 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:48:53 compute-0 podman[345549]: 2025-10-02 12:48:53.625818795 +0000 UTC m=+0.057878168 container create c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:48:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:53 compute-0 systemd[1]: Started libpod-conmon-c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280.scope.
Oct 02 12:48:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:53 compute-0 podman[345549]: 2025-10-02 12:48:53.591609686 +0000 UTC m=+0.023669079 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:48:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c65e0bcac8c2da58eecfb33061ca94475888d6f5d6a81f658f379233d5ca40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:48:53 compute-0 podman[345549]: 2025-10-02 12:48:53.716596337 +0000 UTC m=+0.148655730 container init c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:48:53 compute-0 podman[345549]: 2025-10-02 12:48:53.722447018 +0000 UTC m=+0.154506391 container start c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:48:53 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [NOTICE]   (345568) : New worker (345570) forked
Oct 02 12:48:53 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [NOTICE]   (345568) : Loading success.
Oct 02 12:48:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:53 compute-0 ceph-mon[73668]: pgmap v2238: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:53 compute-0 ceph-mon[73668]: osdmap e315: 3 total, 3 up, 3 in
Oct 02 12:48:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3908904066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.966 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409333.9656215, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.966 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Resumed (Lifecycle Event)
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.968 2 DEBUG nova.compute.manager [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.972 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance rebooted successfully.
Oct 02 12:48:53 compute-0 nova_compute[256940]: 2025-10-02 12:48:53.973 2 DEBUG nova.compute.manager [None req-e2371972-576c-4729-b7ee-1d4368d2efab b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.009 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.012 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.042 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.043 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409333.9658422, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.043 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Started (Lifecycle Event)
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.082 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.086 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.102 2 DEBUG nova.compute.manager [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.102 2 DEBUG oslo_concurrency.lockutils [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.102 2 DEBUG oslo_concurrency.lockutils [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.103 2 DEBUG oslo_concurrency.lockutils [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.103 2 DEBUG nova.compute.manager [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:54 compute-0 nova_compute[256940]: 2025-10-02 12:48:54.103 2 WARNING nova.compute.manager [req-bb5a4b75-05e0-40b6-aee4-41039c539f52 req-83c8a780-d955-4f92-9c43-855a7548a047 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state active and task_state None.
Oct 02 12:48:55 compute-0 ceph-mon[73668]: pgmap v2240: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:55.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:55.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.983 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.983 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.983 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.984 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.984 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.985 2 INFO nova.compute.manager [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Terminating instance
Oct 02 12:48:55 compute-0 nova_compute[256940]: 2025-10-02 12:48:55.986 2 DEBUG nova.compute.manager [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:48:56 compute-0 kernel: tapca09038c-de (unregistering): left promiscuous mode
Oct 02 12:48:56 compute-0 NetworkManager[44981]: <info>  [1759409336.1084] device (tapca09038c-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:48:56 compute-0 ovn_controller[148123]: 2025-10-02T12:48:56Z|00659|binding|INFO|Releasing lport ca09038c-def5-41c9-a98a-c7837558526f from this chassis (sb_readonly=0)
Oct 02 12:48:56 compute-0 ovn_controller[148123]: 2025-10-02T12:48:56Z|00660|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f down in Southbound
Oct 02 12:48:56 compute-0 ovn_controller[148123]: 2025-10-02T12:48:56Z|00661|binding|INFO|Removing iface tapca09038c-de ovn-installed in OVS
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.129 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:32:15 10.100.0.13'], port_security=['fa:16:3e:21:32:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '8', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ca09038c-def5-41c9-a98a-c7837558526f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.130 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ca09038c-def5-41c9-a98a-c7837558526f in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.132 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.134 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37422820-de50-40c1-bd42-8ac6aa2f782c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.134 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 02 12:48:56 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000007c.scope: Consumed 2.949s CPU time.
Oct 02 12:48:56 compute-0 systemd-machined[210927]: Machine qemu-69-instance-0000007c terminated.
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.211 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.213 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.213 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.213 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.214 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.214 2 WARNING nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state active and task_state deleting.
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.225 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance destroyed successfully.
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.226 2 DEBUG nova.objects.instance [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.249 2 DEBUG nova.virt.libvirt.vif [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.250 2 DEBUG nova.network.os_vif_util [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.251 2 DEBUG nova.network.os_vif_util [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.251 2 DEBUG os_vif [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.253 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca09038c-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.258 2 INFO os_vif [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [NOTICE]   (345568) : haproxy version is 2.8.14-c23fe91
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [NOTICE]   (345568) : path to executable is /usr/sbin/haproxy
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [WARNING]  (345568) : Exiting Master process...
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [WARNING]  (345568) : Exiting Master process...
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [ALERT]    (345568) : Current worker (345570) exited with code 143 (Terminated)
Oct 02 12:48:56 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[345564]: [WARNING]  (345568) : All workers exited. Exiting... (0)
Oct 02 12:48:56 compute-0 systemd[1]: libpod-c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280.scope: Deactivated successfully.
Oct 02 12:48:56 compute-0 conmon[345564]: conmon c7785f0d1bc067c250ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280.scope/container/memory.events
Oct 02 12:48:56 compute-0 podman[345612]: 2025-10-02 12:48:56.301060836 +0000 UTC m=+0.057312043 container died c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280-userdata-shm.mount: Deactivated successfully.
Oct 02 12:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-54c65e0bcac8c2da58eecfb33061ca94475888d6f5d6a81f658f379233d5ca40-merged.mount: Deactivated successfully.
Oct 02 12:48:56 compute-0 podman[345612]: 2025-10-02 12:48:56.341196138 +0000 UTC m=+0.097447345 container cleanup c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:48:56 compute-0 systemd[1]: libpod-conmon-c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280.scope: Deactivated successfully.
Oct 02 12:48:56 compute-0 podman[345662]: 2025-10-02 12:48:56.415518517 +0000 UTC m=+0.054690036 container remove c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.426 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[14b3c198-1fd8-448f-94cb-39a3a32d07c1]: (4, ('Thu Oct  2 12:48:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280)\nc7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280\nThu Oct  2 12:48:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (c7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280)\nc7785f0d1bc067c250ce5df3c1097ed82b8d03303488e2df734aa5ba475b6280\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.428 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bf02fb-395d-48de-8948-9c5b4a917a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.429 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 kernel: tap9266ebd7-30: left promiscuous mode
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.453 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d2f381-0595-4c0b-b12a-425ef85917ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.479 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4f153747-74b6-4780-9641-081d8bdc772c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.481 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[167d2653-a7ca-4505-bf46-e5b57d938369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.501 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcd9657-0ac4-4e9d-b92c-f699a92a7abd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722844, 'reachable_time': 29907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345678, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.504 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:48:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:48:56.505 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd914be-8dde-446f-853b-4a1d2d7415f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct 02 12:48:56 compute-0 ceph-mon[73668]: pgmap v2241: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Oct 02 12:48:56 compute-0 nova_compute[256940]: 2025-10-02 12:48:56.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:48:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:57.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:48:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:57.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.5 MiB/s wr, 220 op/s
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.274 2 INFO nova.virt.libvirt.driver [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Deleting instance files /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_del
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.275 2 INFO nova.virt.libvirt.driver [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Deletion of /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_del complete
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.418 2 INFO nova.compute.manager [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 2.43 seconds to destroy the instance on the hypervisor.
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.418 2 DEBUG oslo.service.loopingcall [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.419 2 DEBUG nova.compute.manager [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:48:58 compute-0 nova_compute[256940]: 2025-10-02 12:48:58.419 2 DEBUG nova.network.neutron [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:48:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:48:58 compute-0 ceph-mon[73668]: pgmap v2242: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.5 MiB/s wr, 220 op/s
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.043 2 DEBUG nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.044 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.044 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.044 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.045 2 DEBUG nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.045 2 DEBUG nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.046 2 DEBUG nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.046 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.046 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.047 2 DEBUG oslo_concurrency.lockutils [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.047 2 DEBUG nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.047 2 WARNING nova.compute.manager [req-a11224d1-174c-48b9-8f13-03ba24ec3d3f req-e9b7bcee-aef9-473e-9467-936aa6d3a48e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state active and task_state deleting.
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.539 2 DEBUG nova.network.neutron [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.571 2 INFO nova.compute.manager [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 1.15 seconds to deallocate network for instance.
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.635 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.636 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.641 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:59.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.654 2 DEBUG nova.compute.manager [req-409417dd-50fa-4aec-bbc7-e8e798711447 req-e616bc3a-3553-4a48-8dec-bfbf784dc869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-deleted-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.684 2 INFO nova.scheduler.client.report [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4
Oct 02 12:48:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:48:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:59 compute-0 nova_compute[256940]: 2025-10-02 12:48:59.783 2 DEBUG oslo_concurrency.lockutils [None req-b8809e2b-dee7-408a-8a32-9871151cd0c7 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.140 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.140 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.156 2 DEBUG nova.objects.instance [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.207 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.544 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.545 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.546 2 INFO nova.compute.manager [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attaching volume b6b70d12-1f08-4fe5-adac-6ced9b30e04f to /dev/vdb
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.824 2 DEBUG os_brick.utils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.825 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.835 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.835 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[108ac34a-ee19-402e-a32a-ee50ef1ff98a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.837 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.844 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.845 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[61fc071e-c866-4c18-a980-75cc3f6ece7d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.846 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.855 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.855 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d271ec-d7ca-4160-acb2-b7ff00e1688b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.857 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[76d57d6b-4c24-4c31-960c-6cef09d600cc]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.857 2 DEBUG oslo_concurrency.processutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.892 2 DEBUG oslo_concurrency.processutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.895 2 DEBUG os_brick.initiator.connectors.lightos [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.895 2 DEBUG os_brick.initiator.connectors.lightos [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.895 2 DEBUG os_brick.initiator.connectors.lightos [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.896 2 DEBUG os_brick.utils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:49:00 compute-0 nova_compute[256940]: 2025-10-02 12:49:00.896 2 DEBUG nova.virt.block_device [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating existing volume attachment record: 57eccb80-4840-4673-8a59-e009e44cd558 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:49:01 compute-0 ceph-mon[73668]: pgmap v2243: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Oct 02 12:49:01 compute-0 nova_compute[256940]: 2025-10-02 12:49:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:01.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 35 KiB/s wr, 317 op/s
Oct 02 12:49:01 compute-0 nova_compute[256940]: 2025-10-02 12:49:01.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.040 2 DEBUG nova.objects.instance [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.077 2 DEBUG nova.virt.libvirt.driver [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attempting to attach volume b6b70d12-1f08-4fe5-adac-6ced9b30e04f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.081 2 DEBUG nova.virt.libvirt.guest [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:49:02 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b6b70d12-1f08-4fe5-adac-6ced9b30e04f">
Oct 02 12:49:02 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:49:02 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:49:02 compute-0 nova_compute[256940]:   <serial>b6b70d12-1f08-4fe5-adac-6ced9b30e04f</serial>
Oct 02 12:49:02 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:02 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/832610661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.287 2 DEBUG nova.virt.libvirt.driver [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.287 2 DEBUG nova.virt.libvirt.driver [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.288 2 DEBUG nova.virt.libvirt.driver [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.288 2 DEBUG nova.virt.libvirt.driver [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:ec:6e:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:02 compute-0 nova_compute[256940]: 2025-10-02 12:49:02.637 2 DEBUG oslo_concurrency.lockutils [None req-f80f1bc9-f601-411f-be19-73ea0aa71e83 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:03 compute-0 ceph-mon[73668]: pgmap v2244: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 35 KiB/s wr, 317 op/s
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.638 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.638 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.651 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.652 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:03.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.665 2 DEBUG nova.objects.instance [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.689 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:49:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:03.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:03 compute-0 sudo[345712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:03 compute-0 sudo[345712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:03 compute-0 sudo[345712]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.783 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 32 KiB/s wr, 286 op/s
Oct 02 12:49:03 compute-0 sudo[345737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:03 compute-0 sudo[345737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.860 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.861 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:03 compute-0 sudo[345737]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.867 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:49:03 compute-0 nova_compute[256940]: 2025-10-02 12:49:03.868 2 INFO nova.compute.claims [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.086 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.175 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.176 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.177 2 INFO nova.compute.manager [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attaching volume 9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2 to /dev/vdc
Oct 02 12:49:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1560321428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.419 2 DEBUG os_brick.utils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.420 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.431 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.431 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[de1b8dae-1543-4065-a085-30a6193a470e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.433 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.442 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.442 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[69a2e8fa-a5f4-4b83-be53-2c7e4f317341]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.444 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.453 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.454 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1a722d93-501f-488f-be39-b8ad87628b65]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.456 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[6a880e3b-1fb6-4afd-9cfa-425b161157fe]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.458 2 DEBUG oslo_concurrency.processutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.492 2 DEBUG oslo_concurrency.processutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.495 2 DEBUG os_brick.initiator.connectors.lightos [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.495 2 DEBUG os_brick.initiator.connectors.lightos [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.496 2 DEBUG os_brick.initiator.connectors.lightos [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.496 2 DEBUG os_brick.utils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.497 2 DEBUG nova.virt.block_device [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating existing volume attachment record: 0a7a6ccd-e966-42ab-95a4-d4a03bcdd5e3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:49:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003549039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.537 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.542 2 DEBUG nova.compute.provider_tree [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.563 2 DEBUG nova.scheduler.client.report [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.590 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.590 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.644 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.645 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.671 2 INFO nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.703 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.851 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.853 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.853 2 INFO nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Creating image(s)
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.882 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.911 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.940 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.946 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:04 compute-0 nova_compute[256940]: 2025-10-02 12:49:04.977 2 DEBUG nova.policy [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.020 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.021 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.022 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.022 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.081 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.084 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 74af08e5-d1ea-478b-ace8-00363679ec4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:49:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:49:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.290 2 DEBUG nova.objects.instance [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.320 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attempting to attach volume 9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.322 2 DEBUG nova.virt.libvirt.guest [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:49:05 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2">
Oct 02 12:49:05 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:49:05 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:49:05 compute-0 nova_compute[256940]:   <serial>9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2</serial>
Oct 02 12:49:05 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:05 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.374 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 74af08e5-d1ea-478b-ace8-00363679ec4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:05 compute-0 podman[345886]: 2025-10-02 12:49:05.400475649 +0000 UTC m=+0.062027665 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:49:05 compute-0 podman[345887]: 2025-10-02 12:49:05.444579752 +0000 UTC m=+0.105411479 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.467 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] resizing rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:49:05 compute-0 ceph-mon[73668]: pgmap v2245: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 32 KiB/s wr, 286 op/s
Oct 02 12:49:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1003549039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1519361877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1372453133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.531 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.531 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.532 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.532 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.532 2 DEBUG nova.virt.libvirt.driver [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:ec:6e:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.595 2 DEBUG nova.objects.instance [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.634 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.635 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Ensure instance console log exists: /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.635 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.636 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.636 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:05.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:05.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 31 KiB/s wr, 285 op/s
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.930 2 DEBUG oslo_concurrency.lockutils [None req-2548ebc5-069d-4670-9a46-6c616a3867f0 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:05 compute-0 nova_compute[256940]: 2025-10-02 12:49:05.953 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Successfully created port: 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3544233615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1743317268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2052356705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.855 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Successfully updated port: 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.883 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.884 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.884 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:49:06 compute-0 nova_compute[256940]: 2025-10-02 12:49:06.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:07 compute-0 nova_compute[256940]: 2025-10-02 12:49:07.154 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:49:07 compute-0 nova_compute[256940]: 2025-10-02 12:49:07.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:07 compute-0 ceph-mon[73668]: pgmap v2246: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 31 KiB/s wr, 285 op/s
Oct 02 12:49:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:07.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:07.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct 02 12:49:07 compute-0 nova_compute[256940]: 2025-10-02 12:49:07.905 2 DEBUG nova.compute.manager [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:07 compute-0 nova_compute[256940]: 2025-10-02 12:49:07.906 2 DEBUG nova.compute.manager [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:07 compute-0 nova_compute[256940]: 2025-10-02 12:49:07.906 2 DEBUG oslo_concurrency.lockutils [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.644 2 DEBUG nova.network.neutron [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:08 compute-0 ceph-mon[73668]: pgmap v2247: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.690 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.690 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance network_info: |[{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.690 2 DEBUG oslo_concurrency.lockutils [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.690 2 DEBUG nova.network.neutron [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.693 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start _get_guest_xml network_info=[{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.697 2 WARNING nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.723 2 DEBUG nova.virt.libvirt.host [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.724 2 DEBUG nova.virt.libvirt.host [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.749 2 DEBUG nova.virt.libvirt.host [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.750 2 DEBUG nova.virt.libvirt.host [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.751 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.752 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.752 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.752 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.753 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.753 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.753 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.753 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.753 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.754 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.754 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.754 2 DEBUG nova.virt.hardware [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:49:08 compute-0 nova_compute[256940]: 2025-10-02 12:49:08.756 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:49:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194072597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.214 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.238 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.242 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:49:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080643873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:09.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.680 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.681 2 DEBUG nova.virt.libvirt.vif [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:49:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.682 2 DEBUG nova.network.os_vif_util [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.683 2 DEBUG nova.network.os_vif_util [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.684 2 DEBUG nova.objects.instance [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/194072597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1080643873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:09.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.777 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <uuid>74af08e5-d1ea-478b-ace8-00363679ec4d</uuid>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <name>instance-00000087</name>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherB-server-1460832742</nova:name>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:49:08</nova:creationTime>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <nova:port uuid="97f82dce-0b1b-4848-bd1a-7ec40fbf49ae">
Oct 02 12:49:09 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <system>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="serial">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="uuid">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </system>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <os>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </os>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <features>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </features>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk">
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config">
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:49:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:42:83:29"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <target dev="tap97f82dce-0b"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/console.log" append="off"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <video>
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </video>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:49:09 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:49:09 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:49:09 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:49:09 compute-0 nova_compute[256940]: </domain>
Oct 02 12:49:09 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.779 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Preparing to wait for external event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.779 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.779 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.779 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.780 2 DEBUG nova.virt.libvirt.vif [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:49:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.780 2 DEBUG nova.network.os_vif_util [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.781 2 DEBUG nova.network.os_vif_util [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.781 2 DEBUG os_vif [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97f82dce-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97f82dce-0b, col_values=(('external_ids', {'iface-id': '97f82dce-0b1b-4848-bd1a-7ec40fbf49ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:83:29', 'vm-uuid': '74af08e5-d1ea-478b-ace8-00363679ec4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-0 NetworkManager[44981]: <info>  [1759409349.7905] manager: (tap97f82dce-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.797 2 INFO os_vif [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')
Oct 02 12:49:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.993 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.994 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.994 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:42:83:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:09 compute-0 nova_compute[256940]: 2025-10-02 12:49:09.995 2 INFO nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Using config drive
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.028 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.309 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.309 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.309 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.310 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.310 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206636422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.743 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:10 compute-0 ceph-mon[73668]: pgmap v2248: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.864 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.864 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.868 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.869 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.869 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.869 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.900 2 DEBUG nova.compute.manager [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.901 2 DEBUG nova.compute.manager [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.901 2 DEBUG oslo_concurrency.lockutils [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.901 2 DEBUG oslo_concurrency.lockutils [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:10 compute-0 nova_compute[256940]: 2025-10-02 12:49:10.902 2 DEBUG nova.network.neutron [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.058 2 INFO nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Creating config drive at /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.064 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0k6hdw8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.125 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.126 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4107MB free_disk=20.881046295166016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.126 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.127 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.209 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0k6hdw8" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.235 2 DEBUG nova.storage.rbd_utils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.240 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config 74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.277 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409336.2221792, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.278 2 INFO nova.compute.manager [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Stopped (Lifecycle Event)
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.305 2 DEBUG nova.compute.manager [None req-03b0e965-d9fd-454a-be1c-64b30c44ee5a - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.315 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.316 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 74af08e5-d1ea-478b-ace8-00363679ec4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.316 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.317 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.403 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.437 2 DEBUG oslo_concurrency.processutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config 74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.438 2 INFO nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Deleting local config drive /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/disk.config because it was imported into RBD.
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.4887] manager: (tap97f82dce-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct 02 12:49:11 compute-0 kernel: tap97f82dce-0b: entered promiscuous mode
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00662|binding|INFO|Claiming lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for this chassis.
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00663|binding|INFO|97f82dce-0b1b-4848-bd1a-7ec40fbf49ae: Claiming fa:16:3e:42:83:29 10.100.0.11
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.503 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.505 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.506 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.519 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12adcc3f-7a67-4bde-87b7-25a93d0dfb5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.520 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.520 2 DEBUG nova.network.neutron [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.520 2 DEBUG nova.network.neutron [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.522 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.522 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e13e28f6-5ef7-40c0-b87b-bc001c6b032a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.523 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4479d4-f95d-42ed-959a-f7c3c17ca7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00664|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae ovn-installed in OVS
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00665|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae up in Southbound
Oct 02 12:49:11 compute-0 systemd-machined[210927]: New machine qemu-70-instance-00000087.
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.535 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[4a20bae1-680c-4bdc-81d0-5cd48cec58a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.540 2 DEBUG oslo_concurrency.lockutils [req-7dc395da-42ee-41e9-8547-b1623a8ed21a req-34f7f315-da3c-4e43-81e2-cefaa8e2a4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:11 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-00000087.
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.551 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b11408-5783-44da-938c-df8923e86015]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 systemd-udevd[346206]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.5686] device (tap97f82dce-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.5700] device (tap97f82dce-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.598 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8293564c-cec8-4d5d-bae1-262712d61c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 systemd-udevd[346209]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.603 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[66dd574d-81e1-4811-9080-cfea746709bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.6051] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.642 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[46856062-9c33-43e2-be56-bad16027fe02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.645 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c4eb5240-f225-434b-8467-be476dae2107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:11.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.6713] device (tap9266ebd7-30): carrier: link connected
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.679 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[29e2bd4f-9807-4a66-8672-a4ad48d1b330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.697 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93ce773e-a76d-4c65-a64e-951560d45f47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724715, 'reachable_time': 27153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346236, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:11.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.716 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0855fc-4687-4c7b-bbc3-1d007a96b460]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724715, 'tstamp': 724715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346237, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.738 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[eafd7b72-d4b5-467a-b04d-66e4332b3cc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724715, 'reachable_time': 27153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346238, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.768 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[00ed92fa-e14e-443c-a658-4228d60afae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 02 12:49:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1206636422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.834 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f529adbb-ab40-4bfa-b972-0dda4bc2d84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.838 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.839 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.839 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 NetworkManager[44981]: <info>  [1759409351.8420] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Oct 02 12:49:11 compute-0 kernel: tap9266ebd7-30: entered promiscuous mode
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.848 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00666|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 ovn_controller[148123]: 2025-10-02T12:49:11Z|00667|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.863 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.864 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3e213052-c359-49e6-b579-60c0da440882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.864 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:49:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:11.865 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:49:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483710548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.904 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.919 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.972 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.982 2 DEBUG nova.compute.manager [req-57e846cb-1e58-4a2e-a969-7659ea4654df req-dff679a7-4fc8-4398-9124-931e005907f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.983 2 DEBUG oslo_concurrency.lockutils [req-57e846cb-1e58-4a2e-a969-7659ea4654df req-dff679a7-4fc8-4398-9124-931e005907f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.984 2 DEBUG oslo_concurrency.lockutils [req-57e846cb-1e58-4a2e-a969-7659ea4654df req-dff679a7-4fc8-4398-9124-931e005907f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.984 2 DEBUG oslo_concurrency.lockutils [req-57e846cb-1e58-4a2e-a969-7659ea4654df req-dff679a7-4fc8-4398-9124-931e005907f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.985 2 DEBUG nova.compute.manager [req-57e846cb-1e58-4a2e-a969-7659ea4654df req-dff679a7-4fc8-4398-9124-931e005907f5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Processing event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:49:11 compute-0 nova_compute[256940]: 2025-10-02 12:49:11.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:12 compute-0 nova_compute[256940]: 2025-10-02 12:49:12.051 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:49:12 compute-0 nova_compute[256940]: 2025-10-02 12:49:12.052 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:12 compute-0 podman[346272]: 2025-10-02 12:49:12.263702223 +0000 UTC m=+0.074751802 container create 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:49:12 compute-0 podman[346272]: 2025-10-02 12:49:12.212095017 +0000 UTC m=+0.023144616 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:49:12 compute-0 systemd[1]: Started libpod-conmon-1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0.scope.
Oct 02 12:49:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18933c5c4683bd1bfd8cacc95e6aa8cb21a9e6e50e9e207744f5f70e14018b91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:12 compute-0 podman[346272]: 2025-10-02 12:49:12.358924939 +0000 UTC m=+0.169974538 container init 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:49:12 compute-0 podman[346272]: 2025-10-02 12:49:12.364734579 +0000 UTC m=+0.175784148 container start 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:12 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [NOTICE]   (346292) : New worker (346294) forked
Oct 02 12:49:12 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [NOTICE]   (346292) : Loading success.
Oct 02 12:49:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:12 compute-0 ceph-mon[73668]: pgmap v2249: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 02 12:49:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3483710548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.660 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.661 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409353.6612394, 74af08e5-d1ea-478b-ace8-00363679ec4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.661 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Started (Lifecycle Event)
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.668 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:49:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.673 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance spawned successfully.
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.673 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:49:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:13.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.703 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.708 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.713 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.713 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.714 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.714 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.715 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.715 2 DEBUG nova.virt.libvirt.driver [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.747 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.747 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409353.661546, 74af08e5-d1ea-478b-ace8-00363679ec4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.748 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Paused (Lifecycle Event)
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.785 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.789 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409353.6667278, 74af08e5-d1ea-478b-ace8-00363679ec4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.789 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Resumed (Lifecycle Event)
Oct 02 12:49:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.810 2 INFO nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Took 8.96 seconds to spawn the instance on the hypervisor.
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.811 2 DEBUG nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.821 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.825 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.865 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.896 2 INFO nova.compute.manager [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Took 10.05 seconds to build instance.
Oct 02 12:49:13 compute-0 nova_compute[256940]: 2025-10-02 12:49:13.939 2 DEBUG oslo_concurrency.lockutils [None req-2ea4fdbe-5062-4958-ae1b-a2f1a3bbe15f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.053 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.054 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.478 2 DEBUG nova.compute.manager [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.478 2 DEBUG nova.compute.manager [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.479 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:14.727 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:14.731 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:14 compute-0 nova_compute[256940]: 2025-10-02 12:49:14.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-0 ceph-mon[73668]: pgmap v2250: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Oct 02 12:49:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3924265383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2830854995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:15 compute-0 nova_compute[256940]: 2025-10-02 12:49:15.367 2 DEBUG nova.network.neutron [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:15 compute-0 nova_compute[256940]: 2025-10-02 12:49:15.367 2 DEBUG nova.network.neutron [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:15 compute-0 nova_compute[256940]: 2025-10-02 12:49:15.551 2 DEBUG oslo_concurrency.lockutils [req-3f9d5201-88e2-454d-9d72-29a544676f2e req-889b40a2-aa2f-4150-a85c-156249a9e298 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:15 compute-0 nova_compute[256940]: 2025-10-02 12:49:15.552 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:15 compute-0 nova_compute[256940]: 2025-10-02 12:49:15.552 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:15.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 495 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 509 KiB/s rd, 5.1 MiB/s wr, 150 op/s
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.633 2 DEBUG nova.compute.manager [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.633 2 DEBUG oslo_concurrency.lockutils [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.633 2 DEBUG oslo_concurrency.lockutils [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.634 2 DEBUG oslo_concurrency.lockutils [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.634 2 DEBUG nova.compute.manager [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.635 2 WARNING nova.compute.manager [req-a30dac9f-410f-4d43-b5f5-75d5d6ae850b req-8124251c-8b59-4929-8ec8-a000e363d27c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state None.
Oct 02 12:49:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:16.733 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:16 compute-0 ceph-mon[73668]: pgmap v2251: 305 pgs: 305 active+clean; 495 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 509 KiB/s rd, 5.1 MiB/s wr, 150 op/s
Oct 02 12:49:16 compute-0 nova_compute[256940]: 2025-10-02 12:49:16.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:17 compute-0 nova_compute[256940]: 2025-10-02 12:49:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:17 compute-0 nova_compute[256940]: 2025-10-02 12:49:17.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:49:17 compute-0 nova_compute[256940]: 2025-10-02 12:49:17.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:49:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:17 compute-0 nova_compute[256940]: 2025-10-02 12:49:17.620 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:17.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Oct 02 12:49:18 compute-0 ceph-mon[73668]: pgmap v2252: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Oct 02 12:49:19 compute-0 podman[346350]: 2025-10-02 12:49:19.403498403 +0000 UTC m=+0.063503552 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:19 compute-0 podman[346349]: 2025-10-02 12:49:19.40608481 +0000 UTC m=+0.070589385 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:49:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:19.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:19 compute-0 nova_compute[256940]: 2025-10-02 12:49:19.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct 02 12:49:20 compute-0 ceph-mon[73668]: pgmap v2253: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.138 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.138 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.265 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.265 2 DEBUG nova.compute.manager [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.266 2 DEBUG nova.compute.manager [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.266 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.266 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.266 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 12:49:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:21.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 12:49:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Oct 02 12:49:21 compute-0 nova_compute[256940]: 2025-10-02 12:49:21.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:22 compute-0 ceph-mon[73668]: pgmap v2254: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Oct 02 12:49:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:23.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:23 compute-0 nova_compute[256940]: 2025-10-02 12:49:23.710 2 DEBUG nova.compute.manager [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:23 compute-0 nova_compute[256940]: 2025-10-02 12:49:23.710 2 DEBUG nova.compute.manager [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:23 compute-0 nova_compute[256940]: 2025-10-02 12:49:23.711 2 DEBUG oslo_concurrency.lockutils [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:23 compute-0 nova_compute[256940]: 2025-10-02 12:49:23.711 2 DEBUG oslo_concurrency.lockutils [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:23 compute-0 nova_compute[256940]: 2025-10-02 12:49:23.711 2 DEBUG nova.network.neutron [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:23.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 02 12:49:23 compute-0 sudo[346388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:23 compute-0 sudo[346388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:23 compute-0 sudo[346388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:24 compute-0 sudo[346413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:24 compute-0 sudo[346413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:24 compute-0 sudo[346413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:24 compute-0 nova_compute[256940]: 2025-10-02 12:49:24.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:25 compute-0 ceph-mon[73668]: pgmap v2255: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 02 12:49:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/526865586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:25.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 216 op/s
Oct 02 12:49:25 compute-0 nova_compute[256940]: 2025-10-02 12:49:25.996 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:25 compute-0 nova_compute[256940]: 2025-10-02 12:49:25.997 2 DEBUG nova.network.neutron [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.031 2 DEBUG oslo_concurrency.lockutils [req-d6281e2c-9b64-45a8-9768-768b2785a7bc req-fb6cfe8b-7268-491b-a923-99d6a8ce02df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.031 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.032 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.032 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:26.487 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:26.487 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:26.488 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:26 compute-0 ovn_controller[148123]: 2025-10-02T12:49:26Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:83:29 10.100.0.11
Oct 02 12:49:26 compute-0 ovn_controller[148123]: 2025-10-02T12:49:26Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:83:29 10.100.0.11
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.905 2 DEBUG nova.network.neutron [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.905 2 DEBUG nova.network.neutron [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.934 2 DEBUG oslo_concurrency.lockutils [req-37030508-8cb1-4e54-b51b-522760d72bc5 req-bf51f21a-a7c2-41ac-a955-6b8703b9aff0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:26 compute-0 nova_compute[256940]: 2025-10-02 12:49:26.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:27 compute-0 ceph-mon[73668]: pgmap v2256: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 216 op/s
Oct 02 12:49:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:27.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:49:28 compute-0 nova_compute[256940]: 2025-10-02 12:49:28.008 2 DEBUG nova.compute.manager [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:28 compute-0 nova_compute[256940]: 2025-10-02 12:49:28.009 2 DEBUG nova.compute.manager [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:28 compute-0 nova_compute[256940]: 2025-10-02 12:49:28.009 2 DEBUG oslo_concurrency.lockutils [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:49:28
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'images']
Oct 02 12:49:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.105 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.133 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.133 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.133 2 DEBUG oslo_concurrency.lockutils [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.134 2 DEBUG nova.network.neutron [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:49:29 compute-0 ceph-mon[73668]: pgmap v2257: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:49:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:29.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 455 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Oct 02 12:49:29 compute-0 nova_compute[256940]: 2025-10-02 12:49:29.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:31 compute-0 ceph-mon[73668]: pgmap v2258: 305 pgs: 305 active+clean; 455 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Oct 02 12:49:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:31.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1010 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Oct 02 12:49:31 compute-0 nova_compute[256940]: 2025-10-02 12:49:31.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.489 2 DEBUG nova.network.neutron [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.489 2 DEBUG nova.network.neutron [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.507 2 DEBUG oslo_concurrency.lockutils [req-22615ae0-cd42-4574-b61f-3353f4988a8d req-c6c327ec-8efe-417e-a1be-70ab3d1033f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.650 2 DEBUG oslo_concurrency.lockutils [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.651 2 DEBUG oslo_concurrency.lockutils [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.679 2 INFO nova.compute.manager [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Detaching volume b6b70d12-1f08-4fe5-adac-6ced9b30e04f
Oct 02 12:49:32 compute-0 nova_compute[256940]: 2025-10-02 12:49:32.995 2 INFO nova.virt.block_device [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attempting to driver detach volume b6b70d12-1f08-4fe5-adac-6ced9b30e04f from mountpoint /dev/vdb
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.007 2 DEBUG nova.virt.libvirt.driver [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdb from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.008 2 DEBUG nova.virt.libvirt.guest [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b6b70d12-1f08-4fe5-adac-6ced9b30e04f">
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <serial>b6b70d12-1f08-4fe5-adac-6ced9b30e04f</serial>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:33 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.021 2 INFO nova.virt.libvirt.driver [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the persistent domain config.
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.022 2 DEBUG nova.virt.libvirt.driver [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.023 2 DEBUG nova.virt.libvirt.guest [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b6b70d12-1f08-4fe5-adac-6ced9b30e04f">
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <serial>b6b70d12-1f08-4fe5-adac-6ced9b30e04f</serial>
Oct 02 12:49:33 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:49:33 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:33 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.398 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759409373.398052, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.400 2 DEBUG nova.virt.libvirt.driver [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.402 2 INFO nova.virt.libvirt.driver [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the live domain config.
Oct 02 12:49:33 compute-0 ceph-mon[73668]: pgmap v2259: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1010 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Oct 02 12:49:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:33.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.833 2 DEBUG nova.objects.instance [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:33 compute-0 nova_compute[256940]: 2025-10-02 12:49:33.871 2 DEBUG oslo_concurrency.lockutils [None req-d49cb7a5-0b49-4b94-b64d-f146c39f0356 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:34 compute-0 nova_compute[256940]: 2025-10-02 12:49:34.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:34 compute-0 nova_compute[256940]: 2025-10-02 12:49:34.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:35 compute-0 ceph-mon[73668]: pgmap v2260: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:49:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:35.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:35.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:49:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:49:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Oct 02 12:49:36 compute-0 podman[346446]: 2025-10-02 12:49:36.394826013 +0000 UTC m=+0.057869128 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:49:36 compute-0 podman[346447]: 2025-10-02 12:49:36.442772565 +0000 UTC m=+0.104116366 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:49:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:36 compute-0 nova_compute[256940]: 2025-10-02 12:49:36.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.545 2 DEBUG oslo_concurrency.lockutils [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.545 2 DEBUG oslo_concurrency.lockutils [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.564 2 INFO nova.compute.manager [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Detaching volume 9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2
Oct 02 12:49:37 compute-0 ceph-mon[73668]: pgmap v2261: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Oct 02 12:49:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 462 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.983 2 INFO nova.virt.block_device [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Attempting to driver detach volume 9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2 from mountpoint /dev/vdc
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.990 2 DEBUG nova.virt.libvirt.driver [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdc from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:49:37 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.990 2 DEBUG nova.virt.libvirt.guest [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:49:37 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2">
Oct 02 12:49:37 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:37 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]:   <serial>9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2</serial>
Oct 02 12:49:37 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:49:37 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:37 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:37.999 2 INFO nova.virt.libvirt.driver [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the persistent domain config.
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.000 2 DEBUG nova.virt.libvirt.driver [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.000 2 DEBUG nova.virt.libvirt.guest [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:49:38 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2">
Oct 02 12:49:38 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:38 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]:   <serial>9a0d7849-aa2a-475d-a21d-0d4dd0da6ab2</serial>
Oct 02 12:49:38 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:49:38 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:38 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.108 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759409378.1085038, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.110 2 DEBUG nova.virt.libvirt.driver [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.112 2 INFO nova.virt.libvirt.driver [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 from the live domain config.
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.432 2 DEBUG nova.objects.instance [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:38 compute-0 nova_compute[256940]: 2025-10-02 12:49:38.474 2 DEBUG oslo_concurrency.lockutils [None req-e364fac0-b94c-463d-8eff-91652415c621 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:38 compute-0 ceph-mon[73668]: pgmap v2262: 305 pgs: 305 active+clean; 462 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Oct 02 12:49:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:39 compute-0 nova_compute[256940]: 2025-10-02 12:49:39.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct 02 12:49:40 compute-0 nova_compute[256940]: 2025-10-02 12:49:40.374 2 DEBUG nova.compute.manager [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:49:40 compute-0 nova_compute[256940]: 2025-10-02 12:49:40.375 2 DEBUG nova.compute.manager [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:49:40 compute-0 nova_compute[256940]: 2025-10-02 12:49:40.375 2 DEBUG oslo_concurrency.lockutils [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:40 compute-0 nova_compute[256940]: 2025-10-02 12:49:40.375 2 DEBUG oslo_concurrency.lockutils [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:40 compute-0 nova_compute[256940]: 2025-10-02 12:49:40.375 2 DEBUG nova.network.neutron [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004359361913735343 of space, bias 1.0, pg target 1.3078085741206031 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006598748953098827 of space, bias 1.0, pg target 1.9730259369765493 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:49:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:49:41 compute-0 ceph-mon[73668]: pgmap v2263: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct 02 12:49:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 401 KiB/s wr, 46 op/s
Oct 02 12:49:41 compute-0 sudo[346495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:41 compute-0 sudo[346495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:41 compute-0 sudo[346495]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:41 compute-0 sudo[346520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:41 compute-0 sudo[346520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:41 compute-0 sudo[346520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:41 compute-0 nova_compute[256940]: 2025-10-02 12:49:41.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:41 compute-0 sudo[346545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:42 compute-0 sudo[346545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:42 compute-0 sudo[346545]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:42 compute-0 sudo[346570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:49:42 compute-0 sudo[346570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:49:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:49:42 compute-0 sudo[346570]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 12:49:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 12:49:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.185 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.185 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.203 2 DEBUG nova.objects.instance [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.264 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.321 2 DEBUG nova.network.neutron [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.322 2 DEBUG nova.network.neutron [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:43 compute-0 nova_compute[256940]: 2025-10-02 12:49:43.438 2 DEBUG oslo_concurrency.lockutils [req-ce4b0d97-908c-42b0-b382-79fbb4dedb6c req-9275ada2-1517-4223-bef3-28634c0db793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 795b3b54-3ecd-43b8-a2f2-8ced940635b0 does not exist
Oct 02 12:49:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d0f9d4af-688c-44f2-9a8c-a1de42ddb68e does not exist
Oct 02 12:49:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 45e8ee69-ec25-4da7-86a2-934320e93d70 does not exist
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:49:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:43 compute-0 sudo[346627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:43 compute-0 sudo[346627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:43 compute-0 ceph-mon[73668]: pgmap v2264: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 401 KiB/s wr, 46 op/s
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:49:43 compute-0 sudo[346627]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:43 compute-0 sudo[346652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:43 compute-0 sudo[346652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:43 compute-0 sudo[346652]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:43.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:43 compute-0 sudo[346677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:43 compute-0 sudo[346677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:43 compute-0 sudo[346677]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:43 compute-0 sudo[346702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:49:43 compute-0 sudo[346702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 368 KiB/s wr, 30 op/s
Oct 02 12:49:44 compute-0 sudo[346753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:44 compute-0 sudo[346753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:44 compute-0 sudo[346753]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:44 compute-0 sudo[346792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:44 compute-0 sudo[346792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:44 compute-0 sudo[346792]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.169 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.169 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.170 2 INFO nova.compute.manager [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Attaching volume ac1432a4-bab5-43b6-871c-71608985c7ae to /dev/vdb
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.141004236 +0000 UTC m=+0.039835175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.474402982 +0000 UTC m=+0.373233901 container create 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.478 2 DEBUG os_brick.utils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.481 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.498 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.499 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0b6ed7-dbef-4e1b-9631-0de91d3558e9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.501 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.513 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.513 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[c6668835-2951-4eb3-bb22-7a454406ce03]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.516 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.528 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.529 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[023b7e63-8ac4-47c6-8660-a3829dba4d11]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.531 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[33d1e01e-8d09-4dc8-98e0-0b107e596b0f]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.531 2 DEBUG oslo_concurrency.processutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:44 compute-0 systemd[1]: Started libpod-conmon-631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd.scope.
Oct 02 12:49:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.572 2 DEBUG oslo_concurrency.processutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.574 2 DEBUG os_brick.initiator.connectors.lightos [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.575 2 DEBUG os_brick.initiator.connectors.lightos [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.575 2 DEBUG os_brick.initiator.connectors.lightos [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.575 2 DEBUG os_brick.utils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.576 2 DEBUG nova.virt.block_device [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating existing volume attachment record: 224833de-8155-4c7a-9ea1-ae45f41d1ad3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.678585048 +0000 UTC m=+0.577415987 container init 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:49:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:49:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:49:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.690185346 +0000 UTC m=+0.589016265 container start 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.694532308 +0000 UTC m=+0.593363247 container attach 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:49:44 compute-0 funny_sutherland[346837]: 167 167
Oct 02 12:49:44 compute-0 systemd[1]: libpod-631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd.scope: Deactivated successfully.
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.697861534 +0000 UTC m=+0.596692453 container died 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-57bf87ea2b95a25887d3495e2030ee0a7722610ebec47da35cb0fedf187db50f-merged.mount: Deactivated successfully.
Oct 02 12:49:44 compute-0 podman[346789]: 2025-10-02 12:49:44.760260237 +0000 UTC m=+0.659091156 container remove 631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sutherland, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:49:44 compute-0 systemd[1]: libpod-conmon-631cefc5344053166507b2b55de22b8d392117e179e9d9e64d9dd75f9b286ebd.scope: Deactivated successfully.
Oct 02 12:49:44 compute-0 nova_compute[256940]: 2025-10-02 12:49:44.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:45 compute-0 podman[346862]: 2025-10-02 12:49:45.014570742 +0000 UTC m=+0.097274171 container create 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:49:45 compute-0 podman[346862]: 2025-10-02 12:49:44.940602491 +0000 UTC m=+0.023305940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:45 compute-0 systemd[1]: Started libpod-conmon-06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad.scope.
Oct 02 12:49:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:45 compute-0 podman[346862]: 2025-10-02 12:49:45.210095236 +0000 UTC m=+0.292798675 container init 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:45 compute-0 podman[346862]: 2025-10-02 12:49:45.221010927 +0000 UTC m=+0.303714356 container start 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:49:45 compute-0 podman[346862]: 2025-10-02 12:49:45.250969296 +0000 UTC m=+0.333672725 container attach 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.549 2 DEBUG nova.objects.instance [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.595 2 DEBUG nova.virt.libvirt.driver [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Attempting to attach volume ac1432a4-bab5-43b6-871c-71608985c7ae with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.599 2 DEBUG nova.virt.libvirt.guest [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:49:45 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-ac1432a4-bab5-43b6-871c-71608985c7ae">
Oct 02 12:49:45 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   </source>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 12:49:45 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   </auth>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:49:45 compute-0 nova_compute[256940]:   <serial>ac1432a4-bab5-43b6-871c-71608985c7ae</serial>
Oct 02 12:49:45 compute-0 nova_compute[256940]: </disk>
Oct 02 12:49:45 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:45.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.772 2 DEBUG nova.virt.libvirt.driver [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.772 2 DEBUG nova.virt.libvirt.driver [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.772 2 DEBUG nova.virt.libvirt.driver [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:45 compute-0 nova_compute[256940]: 2025-10-02 12:49:45.772 2 DEBUG nova.virt.libvirt.driver [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:42:83:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 370 KiB/s wr, 41 op/s
Oct 02 12:49:45 compute-0 ceph-mon[73668]: pgmap v2265: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 368 KiB/s wr, 30 op/s
Oct 02 12:49:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/574049216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:46 compute-0 happy_hamilton[346879]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:49:46 compute-0 happy_hamilton[346879]: --> relative data size: 1.0
Oct 02 12:49:46 compute-0 happy_hamilton[346879]: --> All data devices are unavailable
Oct 02 12:49:46 compute-0 systemd[1]: libpod-06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad.scope: Deactivated successfully.
Oct 02 12:49:46 compute-0 podman[346862]: 2025-10-02 12:49:46.146281142 +0000 UTC m=+1.228984581 container died 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:49:46 compute-0 nova_compute[256940]: 2025-10-02 12:49:46.200 2 DEBUG oslo_concurrency.lockutils [None req-35dedf17-46c7-44a4-a638-8da5849a9425 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2280019817b0cd95446e8d41403ee2031c6e381181212592551fbd1a3e9cf44-merged.mount: Deactivated successfully.
Oct 02 12:49:46 compute-0 podman[346862]: 2025-10-02 12:49:46.596374568 +0000 UTC m=+1.679077997 container remove 06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:49:46 compute-0 systemd[1]: libpod-conmon-06927216489b92a52dd6a28eb6bcc0cd820b5c65bb0b540a74cabb5d35b938ad.scope: Deactivated successfully.
Oct 02 12:49:46 compute-0 sudo[346702]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:46 compute-0 sudo[346927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:46 compute-0 sudo[346927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:46 compute-0 sudo[346927]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:46 compute-0 sudo[346953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:46 compute-0 sudo[346953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:46 compute-0 sudo[346953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:46 compute-0 sudo[346978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:46 compute-0 sudo[346978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:46 compute-0 sudo[346978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:46 compute-0 sudo[347003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:49:46 compute-0 sudo[347003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:47 compute-0 nova_compute[256940]: 2025-10-02 12:49:46.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:47 compute-0 ceph-mon[73668]: pgmap v2266: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 370 KiB/s wr, 41 op/s
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.278061804 +0000 UTC m=+0.090760533 container create 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.210214311 +0000 UTC m=+0.022913070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:47 compute-0 systemd[1]: Started libpod-conmon-16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c.scope.
Oct 02 12:49:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.451129041 +0000 UTC m=+0.263827810 container init 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.459980869 +0000 UTC m=+0.272679608 container start 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.463597992 +0000 UTC m=+0.276296751 container attach 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:49:47 compute-0 wizardly_boyd[347086]: 167 167
Oct 02 12:49:47 compute-0 systemd[1]: libpod-16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c.scope: Deactivated successfully.
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.465168612 +0000 UTC m=+0.277867351 container died 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:49:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-44cf75ababc068c9d0be695f049f813855939d167a3fffcfd0a2629274fbdc4e-merged.mount: Deactivated successfully.
Oct 02 12:49:47 compute-0 podman[347069]: 2025-10-02 12:49:47.523349407 +0000 UTC m=+0.336048146 container remove 16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 12:49:47 compute-0 systemd[1]: libpod-conmon-16011062a04dae98f4b7d2ad6fbbd172eaf4395df7fd04fc1407d9b328c5412c.scope: Deactivated successfully.
Oct 02 12:49:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:47.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:47 compute-0 podman[347110]: 2025-10-02 12:49:47.723855679 +0000 UTC m=+0.041985580 container create 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:49:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:47 compute-0 systemd[1]: Started libpod-conmon-696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a.scope.
Oct 02 12:49:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9e68b69cf72c5e0095ff1855acb0b2a245b543d14bc19cfba9ba7d75efa9ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9e68b69cf72c5e0095ff1855acb0b2a245b543d14bc19cfba9ba7d75efa9ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9e68b69cf72c5e0095ff1855acb0b2a245b543d14bc19cfba9ba7d75efa9ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9e68b69cf72c5e0095ff1855acb0b2a245b543d14bc19cfba9ba7d75efa9ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:47 compute-0 podman[347110]: 2025-10-02 12:49:47.706705399 +0000 UTC m=+0.024835320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:47 compute-0 podman[347110]: 2025-10-02 12:49:47.805463386 +0000 UTC m=+0.123593307 container init 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:49:47 compute-0 podman[347110]: 2025-10-02 12:49:47.812906597 +0000 UTC m=+0.131036498 container start 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:49:47 compute-0 podman[347110]: 2025-10-02 12:49:47.816171131 +0000 UTC m=+0.134301032 container attach 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 187 KiB/s wr, 59 op/s
Oct 02 12:49:48 compute-0 competent_lewin[347127]: {
Oct 02 12:49:48 compute-0 competent_lewin[347127]:     "1": [
Oct 02 12:49:48 compute-0 competent_lewin[347127]:         {
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "devices": [
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "/dev/loop3"
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             ],
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "lv_name": "ceph_lv0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "lv_size": "7511998464",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "name": "ceph_lv0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "tags": {
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.cluster_name": "ceph",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.crush_device_class": "",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.encrypted": "0",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.osd_id": "1",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.type": "block",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:                 "ceph.vdo": "0"
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             },
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "type": "block",
Oct 02 12:49:48 compute-0 competent_lewin[347127]:             "vg_name": "ceph_vg0"
Oct 02 12:49:48 compute-0 competent_lewin[347127]:         }
Oct 02 12:49:48 compute-0 competent_lewin[347127]:     ]
Oct 02 12:49:48 compute-0 competent_lewin[347127]: }
Oct 02 12:49:48 compute-0 systemd[1]: libpod-696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a.scope: Deactivated successfully.
Oct 02 12:49:48 compute-0 podman[347110]: 2025-10-02 12:49:48.640596555 +0000 UTC m=+0.958726456 container died 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:49:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-da9e68b69cf72c5e0095ff1855acb0b2a245b543d14bc19cfba9ba7d75efa9ba-merged.mount: Deactivated successfully.
Oct 02 12:49:48 compute-0 podman[347110]: 2025-10-02 12:49:48.695244129 +0000 UTC m=+1.013374030 container remove 696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:48 compute-0 systemd[1]: libpod-conmon-696c410d6caf7a5f940ec2428639f9577e8c578620048541a8ae83ae39cc9b0a.scope: Deactivated successfully.
Oct 02 12:49:48 compute-0 sudo[347003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:48 compute-0 sudo[347148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:48 compute-0 sudo[347148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:48 compute-0 sudo[347148]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:48 compute-0 sudo[347173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:48 compute-0 sudo[347173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:48 compute-0 sudo[347173]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:48 compute-0 sudo[347198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:48 compute-0 sudo[347198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:48 compute-0 sudo[347198]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:48 compute-0 sudo[347223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:49:48 compute-0 sudo[347223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:49 compute-0 ceph-mon[73668]: pgmap v2267: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 187 KiB/s wr, 59 op/s
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.308832796 +0000 UTC m=+0.036732775 container create 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:49:49 compute-0 systemd[1]: Started libpod-conmon-8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356.scope.
Oct 02 12:49:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.379399239 +0000 UTC m=+0.107299248 container init 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.387458556 +0000 UTC m=+0.115358535 container start 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.292691341 +0000 UTC m=+0.020591330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.390559366 +0000 UTC m=+0.118459345 container attach 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:49:49 compute-0 quirky_ishizaka[347305]: 167 167
Oct 02 12:49:49 compute-0 systemd[1]: libpod-8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356.scope: Deactivated successfully.
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.393352507 +0000 UTC m=+0.121252486 container died 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-83367ef6b8e81782d81d5c0e2924d94ed1f77d58efe4f3e34aaa3d2649aad743-merged.mount: Deactivated successfully.
Oct 02 12:49:49 compute-0 podman[347289]: 2025-10-02 12:49:49.456572322 +0000 UTC m=+0.184472311 container remove 8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:49:49 compute-0 systemd[1]: libpod-conmon-8abd55f894c10f3393f04e091df63706592357eee71609f3b15ef0650c782356.scope: Deactivated successfully.
Oct 02 12:49:49 compute-0 podman[347322]: 2025-10-02 12:49:49.527279969 +0000 UTC m=+0.062667231 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:49:49 compute-0 podman[347321]: 2025-10-02 12:49:49.560179834 +0000 UTC m=+0.096336496 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:49:49 compute-0 podman[347367]: 2025-10-02 12:49:49.675084047 +0000 UTC m=+0.049199245 container create 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:49:49 compute-0 systemd[1]: Started libpod-conmon-54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf.scope.
Oct 02 12:49:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413c2661cd753ac8e8345fa033b34d120aae17de5f8f7d7020ee05417e470218/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:49 compute-0 podman[347367]: 2025-10-02 12:49:49.655450022 +0000 UTC m=+0.029565240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413c2661cd753ac8e8345fa033b34d120aae17de5f8f7d7020ee05417e470218/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413c2661cd753ac8e8345fa033b34d120aae17de5f8f7d7020ee05417e470218/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413c2661cd753ac8e8345fa033b34d120aae17de5f8f7d7020ee05417e470218/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:49:49 compute-0 podman[347367]: 2025-10-02 12:49:49.76236881 +0000 UTC m=+0.136484008 container init 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:49:49 compute-0 podman[347367]: 2025-10-02 12:49:49.770623652 +0000 UTC m=+0.144738850 container start 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:49:49 compute-0 podman[347367]: 2025-10-02 12:49:49.788577753 +0000 UTC m=+0.162692971 container attach 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:49:49 compute-0 nova_compute[256940]: 2025-10-02 12:49:49.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.4 KiB/s wr, 41 op/s
Oct 02 12:49:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2086177414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]: {
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:         "osd_id": 1,
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:         "type": "bluestore"
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]:     }
Oct 02 12:49:50 compute-0 nostalgic_montalcini[347384]: }
Oct 02 12:49:50 compute-0 systemd[1]: libpod-54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf.scope: Deactivated successfully.
Oct 02 12:49:50 compute-0 conmon[347384]: conmon 54c9d1efc75c2568ac7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf.scope/container/memory.events
Oct 02 12:49:50 compute-0 podman[347367]: 2025-10-02 12:49:50.722187533 +0000 UTC m=+1.096302741 container died 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-413c2661cd753ac8e8345fa033b34d120aae17de5f8f7d7020ee05417e470218-merged.mount: Deactivated successfully.
Oct 02 12:49:51 compute-0 podman[347367]: 2025-10-02 12:49:51.490633018 +0000 UTC m=+1.864748216 container remove 54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:49:51 compute-0 systemd[1]: libpod-conmon-54c9d1efc75c2568ac7df1a1febb4cfae7b562e56f7cd13d74e0a0d14b846baf.scope: Deactivated successfully.
Oct 02 12:49:51 compute-0 sudo[347223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:49:51 compute-0 ceph-mon[73668]: pgmap v2268: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.4 KiB/s wr, 41 op/s
Oct 02 12:49:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 KiB/s wr, 42 op/s
Oct 02 12:49:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:49:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7112d881-2ce9-4d14-9fa4-74e855095f2c does not exist
Oct 02 12:49:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e4d083bd-9686-49fb-9a87-bb6f8ecc3d8e does not exist
Oct 02 12:49:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f2e30c53-bfca-40cf-9896-18fe341fa504 does not exist
Oct 02 12:49:51 compute-0 sudo[347418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:51 compute-0 sudo[347418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:51 compute-0 sudo[347418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:52 compute-0 nova_compute[256940]: 2025-10-02 12:49:52.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:52 compute-0 sudo[347443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:49:52 compute-0 sudo[347443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:52 compute-0 sudo[347443]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:52 compute-0 ceph-mon[73668]: pgmap v2269: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 KiB/s wr, 42 op/s
Oct 02 12:49:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:52 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:53 compute-0 nova_compute[256940]: 2025-10-02 12:49:53.531 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:53 compute-0 nova_compute[256940]: 2025-10-02 12:49:53.532 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:53 compute-0 nova_compute[256940]: 2025-10-02 12:49:53.532 2 DEBUG nova.network.neutron [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:49:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:49:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:53.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:49:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 KiB/s wr, 36 op/s
Oct 02 12:49:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3670348211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:54 compute-0 nova_compute[256940]: 2025-10-02 12:49:54.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:55.096 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:55 compute-0 nova_compute[256940]: 2025-10-02 12:49:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:49:55.097 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:55 compute-0 ceph-mon[73668]: pgmap v2270: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 KiB/s wr, 36 op/s
Oct 02 12:49:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1772711169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 KiB/s wr, 37 op/s
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.484 2 DEBUG nova.network.neutron [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.518 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.648 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.648 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Creating file /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/91a2056ae1d54fcdb06584c88df49ded.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:49:56 compute-0 nova_compute[256940]: 2025-10-02 12:49:56.649 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/91a2056ae1d54fcdb06584c88df49ded.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.073 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/91a2056ae1d54fcdb06584c88df49ded.tmp" returned: 1 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.074 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/91a2056ae1d54fcdb06584c88df49ded.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.074 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Creating directory /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.075 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.295 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:57 compute-0 nova_compute[256940]: 2025-10-02 12:49:57.300 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:49:57 compute-0 ceph-mon[73668]: pgmap v2271: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 KiB/s wr, 37 op/s
Oct 02 12:49:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:49:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:57.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:49:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 706 KiB/s wr, 46 op/s
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:49:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:49:59 compute-0 ceph-mon[73668]: pgmap v2272: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 706 KiB/s wr, 46 op/s
Oct 02 12:49:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:59.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:49:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:59 compute-0 nova_compute[256940]: 2025-10-02 12:49:59.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 497 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 12:50:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 12:50:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 12:50:00 compute-0 kernel: tap97f82dce-0b (unregistering): left promiscuous mode
Oct 02 12:50:00 compute-0 NetworkManager[44981]: <info>  [1759409400.6971] device (tap97f82dce-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:00 compute-0 nova_compute[256940]: 2025-10-02 12:50:00.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:00 compute-0 ovn_controller[148123]: 2025-10-02T12:50:00Z|00668|binding|INFO|Releasing lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae from this chassis (sb_readonly=0)
Oct 02 12:50:00 compute-0 ovn_controller[148123]: 2025-10-02T12:50:00Z|00669|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae down in Southbound
Oct 02 12:50:00 compute-0 ovn_controller[148123]: 2025-10-02T12:50:00Z|00670|binding|INFO|Removing iface tap97f82dce-0b ovn-installed in OVS
Oct 02 12:50:00 compute-0 nova_compute[256940]: 2025-10-02 12:50:00.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:00.715 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:00.716 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis
Oct 02 12:50:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:00.718 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:00.719 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc92ace-5e4d-4666-8ef3-4b31d7e31664]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:00.720 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore
Oct 02 12:50:00 compute-0 nova_compute[256940]: 2025-10-02 12:50:00.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:00 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 02 12:50:00 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000087.scope: Consumed 17.076s CPU time.
Oct 02 12:50:00 compute-0 systemd-machined[210927]: Machine qemu-70-instance-00000087 terminated.
Oct 02 12:50:00 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [NOTICE]   (346292) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:00 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [NOTICE]   (346292) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:00 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [WARNING]  (346292) : Exiting Master process...
Oct 02 12:50:00 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [ALERT]    (346292) : Current worker (346294) exited with code 143 (Terminated)
Oct 02 12:50:00 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[346288]: [WARNING]  (346292) : All workers exited. Exiting... (0)
Oct 02 12:50:00 compute-0 systemd[1]: libpod-1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0.scope: Deactivated successfully.
Oct 02 12:50:00 compute-0 podman[347500]: 2025-10-02 12:50:00.878535785 +0000 UTC m=+0.063477963 container died 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-18933c5c4683bd1bfd8cacc95e6aa8cb21a9e6e50e9e207744f5f70e14018b91-merged.mount: Deactivated successfully.
Oct 02 12:50:01 compute-0 podman[347500]: 2025-10-02 12:50:01.024242919 +0000 UTC m=+0.209185097 container cleanup 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:01 compute-0 systemd[1]: libpod-conmon-1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0.scope: Deactivated successfully.
Oct 02 12:50:01 compute-0 podman[347540]: 2025-10-02 12:50:01.182226558 +0000 UTC m=+0.134612920 container remove 1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.188 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[de86a708-14a5-4598-89df-a593126f4e20]: (4, ('Thu Oct  2 12:50:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0)\n1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0\nThu Oct  2 12:50:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0)\n1412bd701da06bd0e1dcd89f8ba1501b850386fdd628e0bee78dca75274320b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.190 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a9c456-0f31-43f0-872d-06df2854bee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.191 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:01 compute-0 kernel: tap9266ebd7-30: left promiscuous mode
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.214 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c326adf-3957-41fe-ad61-a6d3e5a6895f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 ceph-mon[73668]: pgmap v2273: 305 pgs: 305 active+clean; 497 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.249 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[74e64c01-e4a1-44e7-be1f-4b5e67f35596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.250 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[933c7b81-64ba-4730-aeaa-61044c7a9732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.268 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a70795f0-e24a-4625-b0fb-fd6b067607e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724707, 'reachable_time': 28918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347558, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.274 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:01.274 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc6d87b-cf71-4f5e-8c98-4b4eb7381b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.324 2 INFO nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance shutdown successfully after 4 seconds.
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.330 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance destroyed successfully.
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.331 2 DEBUG nova.virt.libvirt.vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:49:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:49:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.332 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.332 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.333 2 DEBUG os_vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97f82dce-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.339 2 DEBUG nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.340 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.340 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.340 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.340 2 DEBUG nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.341 2 WARNING nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_migrating.
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.345 2 INFO os_vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.351 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.352 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:01 compute-0 nova_compute[256940]: 2025-10-02 12:50:01.352 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:01.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 12:50:02 compute-0 nova_compute[256940]: 2025-10-02 12:50:02.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:02 compute-0 nova_compute[256940]: 2025-10-02 12:50:02.458 2 DEBUG neutronclient.v2_0.client [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:50:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.133 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.133 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.133 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:03 compute-0 ceph-mon[73668]: pgmap v2274: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 12:50:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:03.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.893 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.893 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.893 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.893 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.894 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:03 compute-0 nova_compute[256940]: 2025-10-02 12:50:03.894 2 WARNING nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_migrated.
Oct 02 12:50:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:04.099 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:04 compute-0 sudo[347560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:04 compute-0 sudo[347560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:04 compute-0 sudo[347560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:04 compute-0 sudo[347585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:04 compute-0 sudo[347585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:04 compute-0 sudo[347585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:04 compute-0 ceph-mon[73668]: pgmap v2275: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/836225235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:04 compute-0 nova_compute[256940]: 2025-10-02 12:50:04.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:05.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.492 2 DEBUG nova.compute.manager [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.492 2 DEBUG nova.compute.manager [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.492 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.493 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:06 compute-0 nova_compute[256940]: 2025-10-02 12:50:06.493 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:07 compute-0 nova_compute[256940]: 2025-10-02 12:50:07.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:07 compute-0 ceph-mon[73668]: pgmap v2276: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1879367731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4131426592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:07 compute-0 nova_compute[256940]: 2025-10-02 12:50:07.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:07 compute-0 podman[347612]: 2025-10-02 12:50:07.389964267 +0000 UTC m=+0.060013553 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:50:07 compute-0 podman[347613]: 2025-10-02 12:50:07.422220046 +0000 UTC m=+0.090194709 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.737002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407737085, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2006, "num_deletes": 260, "total_data_size": 3463513, "memory_usage": 3534384, "flush_reason": "Manual Compaction"}
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 02 12:50:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:07.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:07.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407814625, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3348140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48798, "largest_seqno": 50803, "table_properties": {"data_size": 3339020, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19755, "raw_average_key_size": 20, "raw_value_size": 3320426, "raw_average_value_size": 3451, "num_data_blocks": 244, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409237, "oldest_key_time": 1759409237, "file_creation_time": 1759409407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 77773 microseconds, and 7229 cpu microseconds.
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.814784) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3348140 bytes OK
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.814870) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.838251) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.838293) EVENT_LOG_v1 {"time_micros": 1759409407838284, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.838316) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3455091, prev total WAL file size 3455091, number of live WAL files 2.
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.839785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303134' seq:0, type:0; will stop at (end)
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3269KB)], [107(10MB)]
Oct 02 12:50:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407839822, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 14507313, "oldest_snapshot_seqno": -1}
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8090 keys, 14348119 bytes, temperature: kUnknown
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408014250, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 14348119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14290831, "index_size": 35928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208338, "raw_average_key_size": 25, "raw_value_size": 14143694, "raw_average_value_size": 1748, "num_data_blocks": 1427, "num_entries": 8090, "num_filter_entries": 8090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.014635) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14348119 bytes
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.026466) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.1 rd, 82.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.6 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.6) write-amplify(4.3) OK, records in: 8630, records dropped: 540 output_compression: NoCompression
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.026499) EVENT_LOG_v1 {"time_micros": 1759409408026486, "job": 64, "event": "compaction_finished", "compaction_time_micros": 174653, "compaction_time_cpu_micros": 32123, "output_level": 6, "num_output_files": 1, "total_output_size": 14348119, "num_input_records": 8630, "num_output_records": 8090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408027690, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408030716, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:07.839647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.030839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.030847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.030849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.030851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:08.030853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1040149648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-0 nova_compute[256940]: 2025-10-02 12:50:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-0 nova_compute[256940]: 2025-10-02 12:50:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-0 nova_compute[256940]: 2025-10-02 12:50:09.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:50:09 compute-0 ceph-mon[73668]: pgmap v2277: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 12:50:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/793802527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2755918285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.1 MiB/s wr, 19 op/s
Oct 02 12:50:09 compute-0 nova_compute[256940]: 2025-10-02 12:50:09.990 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:09 compute-0 nova_compute[256940]: 2025-10-02 12:50:09.991 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.012 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3290704519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597875286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.676 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.783 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.784 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.784 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.787 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.787 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3290704519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.940 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.941 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4158MB free_disk=20.896709442138672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.942 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:10 compute-0 nova_compute[256940]: 2025-10-02 12:50:10.942 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.040 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration for instance 74af08e5-d1ea-478b-ace8-00363679ec4d refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.067 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating resource usage from migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.067 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting to track outgoing migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.098 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.098 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.099 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.099 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.190 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005696517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.640 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.647 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.664 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.705 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:50:11 compute-0 nova_compute[256940]: 2025-10-02 12:50:11.706 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 301 KiB/s wr, 8 op/s
Oct 02 12:50:12 compute-0 nova_compute[256940]: 2025-10-02 12:50:12.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:12 compute-0 ceph-mon[73668]: pgmap v2278: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.1 MiB/s wr, 19 op/s
Oct 02 12:50:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3597875286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3005696517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 02 12:50:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 02 12:50:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 02 12:50:13 compute-0 nova_compute[256940]: 2025-10-02 12:50:13.701 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:13 compute-0 nova_compute[256940]: 2025-10-02 12:50:13.702 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:13 compute-0 nova_compute[256940]: 2025-10-02 12:50:13.702 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Oct 02 12:50:14 compute-0 ceph-mon[73668]: pgmap v2279: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 301 KiB/s wr, 8 op/s
Oct 02 12:50:14 compute-0 ceph-mon[73668]: osdmap e316: 3 total, 3 up, 3 in
Oct 02 12:50:15 compute-0 ceph-mon[73668]: pgmap v2281: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Oct 02 12:50:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 02 12:50:15 compute-0 nova_compute[256940]: 2025-10-02 12:50:15.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409400.9376223, 74af08e5-d1ea-478b-ace8-00363679ec4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:15 compute-0 nova_compute[256940]: 2025-10-02 12:50:15.939 2 INFO nova.compute.manager [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Stopped (Lifecycle Event)
Oct 02 12:50:15 compute-0 nova_compute[256940]: 2025-10-02 12:50:15.961 2 DEBUG nova.compute.manager [None req-5d21349c-befa-4f9c-bc0b-018195a92328 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:15 compute-0 nova_compute[256940]: 2025-10-02 12:50:15.966 2 DEBUG nova.compute.manager [None req-5d21349c-befa-4f9c-bc0b-018195a92328 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:16 compute-0 nova_compute[256940]: 2025-10-02 12:50:16.001 2 INFO nova.compute.manager [None req-5d21349c-befa-4f9c-bc0b-018195a92328 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:50:16 compute-0 nova_compute[256940]: 2025-10-02 12:50:16.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2961129839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/550337763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2081917777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:50:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:50:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:17.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.777 2 DEBUG nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.777 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.777 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.778 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.778 2 DEBUG nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:17 compute-0 nova_compute[256940]: 2025-10-02 12:50:17.778 2 WARNING nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_finish.
Oct 02 12:50:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 223 KiB/s wr, 19 op/s
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.083 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.084 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.280 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.301 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.302 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:50:18 compute-0 ceph-mon[73668]: pgmap v2282: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.743 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.744 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.750 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:50:18 compute-0 nova_compute[256940]: 2025-10-02 12:50:18.750 2 INFO nova.compute.claims [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.038 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613571210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.510 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.517 2 DEBUG nova.compute.provider_tree [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.565 2 DEBUG nova.scheduler.client.report [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:19 compute-0 ceph-mon[73668]: pgmap v2283: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 223 KiB/s wr, 19 op/s
Oct 02 12:50:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1613571210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 241 KiB/s wr, 58 op/s
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.892 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.893 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.993 2 DEBUG nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.994 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.994 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.994 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.995 2 DEBUG nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:19 compute-0 nova_compute[256940]: 2025-10-02 12:50:19.995 2 WARNING nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state None.
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.020 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.020 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.050 2 INFO nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.085 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.210 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.212 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.213 2 INFO nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Creating image(s)
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.241 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.268 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.297 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.304 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.346 2 DEBUG nova.policy [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.350 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.395 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.396 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:20 compute-0 podman[347784]: 2025-10-02 12:50:20.398337313 +0000 UTC m=+0.064832177 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.397 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.397 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:20 compute-0 podman[347782]: 2025-10-02 12:50:20.405667451 +0000 UTC m=+0.070329948 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.429 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:20 compute-0 nova_compute[256940]: 2025-10-02 12:50:20.433 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 840e5e36-5883-4877-a201-6f2da064a653_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:20 compute-0 ceph-mon[73668]: pgmap v2284: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 241 KiB/s wr, 58 op/s
Oct 02 12:50:21 compute-0 nova_compute[256940]: 2025-10-02 12:50:21.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:21.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:21.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 171 op/s
Oct 02 12:50:21 compute-0 nova_compute[256940]: 2025-10-02 12:50:21.935 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 840e5e36-5883-4877-a201-6f2da064a653_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.008 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.057 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Successfully created port: 61335795-d917-4083-8d64-d9c93685176a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.643 2 DEBUG nova.objects.instance [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 840e5e36-5883-4877-a201-6f2da064a653 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.662 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.663 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Ensure instance console log exists: /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.663 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.664 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:22 compute-0 nova_compute[256940]: 2025-10-02 12:50:22.664 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.620 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Successfully updated port: 61335795-d917-4083-8d64-d9c93685176a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.647 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.647 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.647 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.741 2 DEBUG nova.compute.manager [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-changed-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.742 2 DEBUG nova.compute.manager [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing instance network info cache due to event network-changed-61335795-d917-4083-8d64-d9c93685176a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.742 2 DEBUG oslo_concurrency.lockutils [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:23.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:23.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Oct 02 12:50:23 compute-0 nova_compute[256940]: 2025-10-02 12:50:23.921 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:50:23 compute-0 ceph-mon[73668]: pgmap v2285: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 171 op/s
Oct 02 12:50:24 compute-0 sudo[347937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:24 compute-0 sudo[347937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:24 compute-0 sudo[347937]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:24 compute-0 sudo[347962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:24 compute-0 sudo[347962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:24 compute-0 sudo[347962]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:25 compute-0 ceph-mon[73668]: pgmap v2286: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Oct 02 12:50:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.691 2 DEBUG nova.network.neutron [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.731 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.732 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Instance network_info: |[{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.732 2 DEBUG oslo_concurrency.lockutils [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.733 2 DEBUG nova.network.neutron [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing network info cache for port 61335795-d917-4083-8d64-d9c93685176a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.735 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Start _get_guest_xml network_info=[{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.740 2 WARNING nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.750 2 DEBUG nova.virt.libvirt.host [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.751 2 DEBUG nova.virt.libvirt.host [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.759 2 DEBUG nova.virt.libvirt.host [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.759 2 DEBUG nova.virt.libvirt.host [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.760 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.761 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.761 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.761 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.761 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.762 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.762 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.762 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.763 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.763 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.763 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.763 2 DEBUG nova.virt.hardware [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.766 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc5564cb6f0 =====
Oct 02 12:50:25 compute-0 radosgw[92108]: ====== req done req=0x7fc5564cb6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:25 compute-0 radosgw[92108]: beast: 0x7fc5564cb6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:25.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.796 2 INFO nova.compute.manager [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Swapping old allocation on dict_keys(['8733289a-aa77-4139-9e88-bac686174c8d']) held by migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6 for instance
Oct 02 12:50:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.857 2 DEBUG nova.scheduler.client.report [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Overwriting current allocation {'allocations': {'f694d536-1dcd-4bb3-8516-534a40cdf6d7': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 68}}, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'consumer_generation': 1} on consumer 74af08e5-d1ea-478b-ace8-00363679ec4d move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.961 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.961 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.961 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.962 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.962 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.962 2 WARNING nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.962 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.962 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.963 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.963 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.963 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:25 compute-0 nova_compute[256940]: 2025-10-02 12:50:25.964 2 WARNING nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.147 2 INFO nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:50:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1189858071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.215 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.252 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.257 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:26.488 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:26.488 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:26.488 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1660636347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1189858071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239796276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.786 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.788 2 DEBUG nova.virt.libvirt.vif [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1000491090',display_name='tempest-TestNetworkBasicOps-server-1000491090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1000491090',id=137,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGUNQF5uVCgqVXfKNK2KewQskNOuXMMHyJUyre0VdRp0qITEt7wId97Vu+ucnBohv/eWvftOnp9CrKI/Rz8sbPuPz70I98sLptzESRExzUtekk4TsNdR7Ukdf//JzAwSg==',key_name='tempest-TestNetworkBasicOps-1083850395',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-gh52ajua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:20Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=840e5e36-5883-4877-a201-6f2da064a653,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.788 2 DEBUG nova.network.os_vif_util [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.789 2 DEBUG nova.network.os_vif_util [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.790 2 DEBUG nova.objects.instance [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 840e5e36-5883-4877-a201-6f2da064a653 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.817 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <uuid>840e5e36-5883-4877-a201-6f2da064a653</uuid>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <name>instance-00000089</name>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkBasicOps-server-1000491090</nova:name>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:50:25</nova:creationTime>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <nova:port uuid="61335795-d917-4083-8d64-d9c93685176a">
Oct 02 12:50:26 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <system>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="serial">840e5e36-5883-4877-a201-6f2da064a653</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="uuid">840e5e36-5883-4877-a201-6f2da064a653</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </system>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <os>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </os>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <features>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </features>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/840e5e36-5883-4877-a201-6f2da064a653_disk">
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </source>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/840e5e36-5883-4877-a201-6f2da064a653_disk.config">
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </source>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:50:26 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:b5:66:44"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <target dev="tap61335795-d9"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/console.log" append="off"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <video>
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </video>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:50:26 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:50:26 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:50:26 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:50:26 compute-0 nova_compute[256940]: </domain>
Oct 02 12:50:26 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.818 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Preparing to wait for external event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.818 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.819 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.819 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.820 2 DEBUG nova.virt.libvirt.vif [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1000491090',display_name='tempest-TestNetworkBasicOps-server-1000491090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1000491090',id=137,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGUNQF5uVCgqVXfKNK2KewQskNOuXMMHyJUyre0VdRp0qITEt7wId97Vu+ucnBohv/eWvftOnp9CrKI/Rz8sbPuPz70I98sLptzESRExzUtekk4TsNdR7Ukdf//JzAwSg==',key_name='tempest-TestNetworkBasicOps-1083850395',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-gh52ajua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:20Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=840e5e36-5883-4877-a201-6f2da064a653,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.820 2 DEBUG nova.network.os_vif_util [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.820 2 DEBUG nova.network.os_vif_util [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.821 2 DEBUG os_vif [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61335795-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61335795-d9, col_values=(('external_ids', {'iface-id': '61335795-d917-4083-8d64-d9c93685176a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:66:44', 'vm-uuid': '840e5e36-5883-4877-a201-6f2da064a653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:26 compute-0 NetworkManager[44981]: <info>  [1759409426.8275] manager: (tap61335795-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.833 2 INFO os_vif [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9')
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.963 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.963 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.963 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:b5:66:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.964 2 INFO nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Using config drive
Oct 02 12:50:26 compute-0 nova_compute[256940]: 2025-10-02 12:50:26.990 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.614 2 INFO nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Creating config drive at /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.619 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufocrpct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.651 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.651 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.652 2 DEBUG nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:27 compute-0 ceph-mon[73668]: pgmap v2287: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 12:50:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2239796276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.758 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufocrpct" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:27.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:50:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:27.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.792 2 DEBUG nova.storage.rbd_utils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 840e5e36-5883-4877-a201-6f2da064a653_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.795 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config 840e5e36-5883-4877-a201-6f2da064a653_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.932 2 DEBUG nova.network.neutron [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updated VIF entry in instance network info cache for port 61335795-d917-4083-8d64-d9c93685176a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.933 2 DEBUG nova.network.neutron [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:27 compute-0 nova_compute[256940]: 2025-10-02 12:50:27.950 2 DEBUG oslo_concurrency.lockutils [req-36727cbe-cd4d-41c3-aa81-f3273b84271e req-25b8e90c-ef26-43fb-a99b-71973c313849 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:27 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 02 12:50:27 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:27.985213) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:50:27 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 02 12:50:27 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409427985268, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 450, "num_deletes": 251, "total_data_size": 370246, "memory_usage": 378776, "flush_reason": "Manual Compaction"}
Oct 02 12:50:27 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428032717, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 366281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50804, "largest_seqno": 51253, "table_properties": {"data_size": 363680, "index_size": 637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 358421, "raw_average_value_size": 1051, "num_data_blocks": 28, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409408, "oldest_key_time": 1759409408, "file_creation_time": 1759409427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 47541 microseconds, and 1858 cpu microseconds.
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.032758) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 366281 bytes OK
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.032777) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.072240) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.072280) EVENT_LOG_v1 {"time_micros": 1759409428072271, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.072301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 367529, prev total WAL file size 367529, number of live WAL files 2.
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.072746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(357KB)], [110(13MB)]
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428072766, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 14714400, "oldest_snapshot_seqno": -1}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7917 keys, 12844573 bytes, temperature: kUnknown
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428383899, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 12844573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12789792, "index_size": 33869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205491, "raw_average_key_size": 25, "raw_value_size": 12646939, "raw_average_value_size": 1597, "num_data_blocks": 1333, "num_entries": 7917, "num_filter_entries": 7917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.384199) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12844573 bytes
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.415176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.3 rd, 41.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(75.2) write-amplify(35.1) OK, records in: 8431, records dropped: 514 output_compression: NoCompression
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.415209) EVENT_LOG_v1 {"time_micros": 1759409428415196, "job": 66, "event": "compaction_finished", "compaction_time_micros": 311203, "compaction_time_cpu_micros": 30728, "output_level": 6, "num_output_files": 1, "total_output_size": 12844573, "num_input_records": 8431, "num_output_records": 7917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428415570, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428418547, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.072697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.418622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.418627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.418629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.418630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:50:28.418633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:50:28
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root']
Oct 02 12:50:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:50:29 compute-0 ceph-mon[73668]: pgmap v2288: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Oct 02 12:50:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:29.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.148 2 DEBUG nova.compute.manager [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.149 2 DEBUG nova.compute.manager [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.149 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.179 2 DEBUG nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.216 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.217 2 DEBUG os_brick.utils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.218 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.218 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.218 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.230 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.230 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[8f026b30-c8f7-4274-a644-30209c1ca88e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.232 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.240 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.241 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[6f216202-be9a-431c-b5d9-7741cbf3a9c6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.242 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.251 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.251 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fcc9ae-c922-433f-903a-0f2034d944db]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.253 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3ac3d6-40d5-41e0-b091-08b527dbd572]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.254 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.295 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.298 2 DEBUG os_brick.initiator.connectors.lightos [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.298 2 DEBUG os_brick.initiator.connectors.lightos [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.298 2 DEBUG os_brick.initiator.connectors.lightos [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.299 2 DEBUG os_brick.utils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.493 2 DEBUG oslo_concurrency.processutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config 840e5e36-5883-4877-a201-6f2da064a653_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.494 2 INFO nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Deleting local config drive /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653/disk.config because it was imported into RBD.
Oct 02 12:50:30 compute-0 kernel: tap61335795-d9: entered promiscuous mode
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.5540] manager: (tap61335795-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 ovn_controller[148123]: 2025-10-02T12:50:30Z|00671|binding|INFO|Claiming lport 61335795-d917-4083-8d64-d9c93685176a for this chassis.
Oct 02 12:50:30 compute-0 ovn_controller[148123]: 2025-10-02T12:50:30Z|00672|binding|INFO|61335795-d917-4083-8d64-d9c93685176a: Claiming fa:16:3e:b5:66:44 10.100.0.3
Oct 02 12:50:30 compute-0 ovn_controller[148123]: 2025-10-02T12:50:30Z|00673|binding|INFO|Setting lport 61335795-d917-4083-8d64-d9c93685176a ovn-installed in OVS
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 systemd-udevd[348132]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:30 compute-0 systemd-machined[210927]: New machine qemu-71-instance-00000089.
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.6059] device (tap61335795-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.6070] device (tap61335795-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.610 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:66:44 10.100.0.3'], port_security=['fa:16:3e:b5:66:44 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '840e5e36-5883-4877-a201-6f2da064a653', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-262284a0-f21a-41fb-8b6f-64675f7281e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba945eea-78b3-4f2f-aa85-13aebdaf7c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38238194-c8a0-4bed-bef0-d7e9370604e9, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=61335795-d917-4083-8d64-d9c93685176a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.611 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 61335795-d917-4083-8d64-d9c93685176a in datapath 262284a0-f21a-41fb-8b6f-64675f7281e1 bound to our chassis
Oct 02 12:50:30 compute-0 ovn_controller[148123]: 2025-10-02T12:50:30Z|00674|binding|INFO|Setting lport 61335795-d917-4083-8d64-d9c93685176a up in Southbound
Oct 02 12:50:30 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000089.
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.613 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 262284a0-f21a-41fb-8b6f-64675f7281e1
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.627 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f7a914-9c8d-475a-9ba1-b4ac66a5b9c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.628 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap262284a0-f1 in ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.630 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap262284a0-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.630 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7460e296-6ad5-472c-ae43-80475198ea8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.631 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[51881473-712e-43b8-ba40-44a127e105d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.646 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c28855-1144-47bb-a102-21a02c3581cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.671 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3a190d48-0ea6-458e-ad5d-68c9adfd11fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.705 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7231117a-280c-4ed4-a5e3-141f837fe280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.7118] manager: (tap262284a0-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.711 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f844a7-ad90-4776-a1d5-7ce651bcc525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.749 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dafa822f-d557-425b-a308-9bbc1bf67697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.753 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[483160fd-396b-459e-b478-3e6c8d97741c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.7771] device (tap262284a0-f0): carrier: link connected
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.783 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[14f042c7-3529-425f-8de5-dd332522e8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.802 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d27c88-8970-4364-bc20-eda3f6282022]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap262284a0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:63:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732626, 'reachable_time': 32908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348167, 'error': None, 'target': 'ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.819 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9121667c-bcf4-4068-95a4-7c8442ec5218]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:6395'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732626, 'tstamp': 732626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348168, 'error': None, 'target': 'ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.841 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[96e29f63-a460-4839-b75a-c74c56433f9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap262284a0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:63:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732626, 'reachable_time': 32908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348169, 'error': None, 'target': 'ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.876 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f93d11e5-d3ab-4b36-8732-c09ce28c360b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.939 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9f0962-9060-4a61-bbc2-a86941272b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.940 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap262284a0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.941 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.941 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap262284a0-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:30 compute-0 NetworkManager[44981]: <info>  [1759409430.9433] manager: (tap262284a0-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct 02 12:50:30 compute-0 kernel: tap262284a0-f0: entered promiscuous mode
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.946 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap262284a0-f0, col_values=(('external_ids', {'iface-id': '3de92422-e907-4445-a7b3-06c347bd213b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:30 compute-0 ovn_controller[148123]: 2025-10-02T12:50:30Z|00675|binding|INFO|Releasing lport 3de92422-e907-4445-a7b3-06c347bd213b from this chassis (sb_readonly=0)
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 nova_compute[256940]: 2025-10-02 12:50:30.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.963 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/262284a0-f21a-41fb-8b6f-64675f7281e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/262284a0-f21a-41fb-8b6f-64675f7281e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.963 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7083eaac-7ae3-4574-bd83-f7c22433ef6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.964 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-262284a0-f21a-41fb-8b6f-64675f7281e1
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/262284a0-f21a-41fb-8b6f-64675f7281e1.pid.haproxy
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 262284a0-f21a-41fb-8b6f-64675f7281e1
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:30.964 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1', 'env', 'PROCESS_TAG=haproxy-262284a0-f21a-41fb-8b6f-64675f7281e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/262284a0-f21a-41fb-8b6f-64675f7281e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:31 compute-0 ceph-mon[73668]: pgmap v2289: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.083 2 DEBUG nova.compute.manager [req-f222d58a-41b8-40f8-b761-f44235a0fb5a req-f988d39a-8418-40b2-9bb4-caa3be341074 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.084 2 DEBUG oslo_concurrency.lockutils [req-f222d58a-41b8-40f8-b761-f44235a0fb5a req-f988d39a-8418-40b2-9bb4-caa3be341074 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.084 2 DEBUG oslo_concurrency.lockutils [req-f222d58a-41b8-40f8-b761-f44235a0fb5a req-f988d39a-8418-40b2-9bb4-caa3be341074 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.084 2 DEBUG oslo_concurrency.lockutils [req-f222d58a-41b8-40f8-b761-f44235a0fb5a req-f988d39a-8418-40b2-9bb4-caa3be341074 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.084 2 DEBUG nova.compute.manager [req-f222d58a-41b8-40f8-b761-f44235a0fb5a req-f988d39a-8418-40b2-9bb4-caa3be341074 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Processing event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:50:31 compute-0 podman[348237]: 2025-10-02 12:50:31.311564734 +0000 UTC m=+0.022820037 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:31 compute-0 podman[348237]: 2025-10-02 12:50:31.627647846 +0000 UTC m=+0.338903139 container create ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.668 2 DEBUG nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:50:31 compute-0 systemd[1]: Started libpod-conmon-ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5.scope.
Oct 02 12:50:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cb8891cb20a71e785b4fba7b3b3480f8a27cff158ccac85cdde7435f02d0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Oct 02 12:50:31 compute-0 podman[348237]: 2025-10-02 12:50:31.853819958 +0000 UTC m=+0.565075261 container init ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:50:31 compute-0 podman[348237]: 2025-10-02 12:50:31.86714809 +0000 UTC m=+0.578403373 container start ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:50:31 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [NOTICE]   (348300) : New worker (348302) forked
Oct 02 12:50:31 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [NOTICE]   (348300) : Loading success.
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.966 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409431.9109263, 840e5e36-5883-4877-a201-6f2da064a653 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.969 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] VM Started (Lifecycle Event)
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.976 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.977 2 DEBUG nova.storage.rbd_utils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rolling back rbd image(74af08e5-d1ea-478b-ace8-00363679ec4d_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.990 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.997 2 INFO nova.virt.libvirt.driver [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Instance spawned successfully.
Oct 02 12:50:31 compute-0 nova_compute[256940]: 2025-10-02 12:50:31.997 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.001 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.005 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.031 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.032 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.032 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.033 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.033 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.033 2 DEBUG nova.virt.libvirt.driver [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.038 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.038 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409431.9110787, 840e5e36-5883-4877-a201-6f2da064a653 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.038 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] VM Paused (Lifecycle Event)
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.107 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.112 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409431.9865131, 840e5e36-5883-4877-a201-6f2da064a653 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.112 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] VM Resumed (Lifecycle Event)
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.138 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.142 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.157 2 INFO nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Took 11.95 seconds to spawn the instance on the hypervisor.
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.157 2 DEBUG nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.170 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2356732532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.252 2 INFO nova.compute.manager [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Took 13.56 seconds to build instance.
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.267 2 DEBUG oslo_concurrency.lockutils [None req-67ed2de3-3194-46fb-b769-72086b9e7388 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.436 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.436 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:32 compute-0 nova_compute[256940]: 2025-10-02 12:50:32.549 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.227 2 DEBUG nova.compute.manager [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.227 2 DEBUG oslo_concurrency.lockutils [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.227 2 DEBUG oslo_concurrency.lockutils [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.228 2 DEBUG oslo_concurrency.lockutils [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.228 2 DEBUG nova.compute.manager [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] No waiting events found dispatching network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:33 compute-0 nova_compute[256940]: 2025-10-02 12:50:33.228 2 WARNING nova.compute.manager [req-e1146da2-51d3-4f19-b17b-a2edd4d09480 req-205e8209-7e83-4e0e-9203-dfe8bf8e28db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received unexpected event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a for instance with vm_state active and task_state None.
Oct 02 12:50:33 compute-0 ceph-mon[73668]: pgmap v2290: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Oct 02 12:50:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1010 KiB/s wr, 105 op/s
Oct 02 12:50:35 compute-0 ceph-mon[73668]: pgmap v2291: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1010 KiB/s wr, 105 op/s
Oct 02 12:50:35 compute-0 nova_compute[256940]: 2025-10-02 12:50:35.282 2 DEBUG nova.storage.rbd_utils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(nova-resize) on rbd image(74af08e5-d1ea-478b-ace8-00363679ec4d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:50:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:35.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Oct 02 12:50:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 02 12:50:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 02 12:50:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 02 12:50:36 compute-0 nova_compute[256940]: 2025-10-02 12:50:36.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.094 2 DEBUG nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start _get_guest_xml network_info=[{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ac1432a4-bab5-43b6-871c-71608985c7ae', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ac1432a4-bab5-43b6-871c-71608985c7ae', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'attached_at': '2025-10-02T12:50:31.000000', 'detached_at': '', 'volume_id': 'ac1432a4-bab5-43b6-871c-71608985c7ae', 'serial': 'ac1432a4-bab5-43b6-871c-71608985c7ae'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '887d737f-0249-43af-a5f4-dcf24c024b39', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.098 2 WARNING nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.105 2 DEBUG nova.virt.libvirt.host [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.106 2 DEBUG nova.virt.libvirt.host [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.110 2 DEBUG nova.virt.libvirt.host [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.111 2 DEBUG nova.virt.libvirt.host [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.112 2 DEBUG nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.112 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.112 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.112 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.113 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.113 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.113 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.113 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.114 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.114 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.114 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.114 2 DEBUG nova.virt.hardware [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.114 2 DEBUG nova.objects.instance [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.135 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115229878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.595 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:37 compute-0 nova_compute[256940]: 2025-10-02 12:50:37.637 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:37 compute-0 ceph-mon[73668]: pgmap v2292: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Oct 02 12:50:37 compute-0 ceph-mon[73668]: osdmap e317: 3 total, 3 up, 3 in
Oct 02 12:50:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:50:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:50:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Oct 02 12:50:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816795570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.119 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.144 2 DEBUG nova.virt.libvirt.vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.145 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.146 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.149 2 DEBUG nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <uuid>74af08e5-d1ea-478b-ace8-00363679ec4d</uuid>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <name>instance-00000087</name>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherB-server-1460832742</nova:name>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:50:37</nova:creationTime>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <nova:port uuid="97f82dce-0b1b-4848-bd1a-7ec40fbf49ae">
Oct 02 12:50:38 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <system>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="serial">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="uuid">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </system>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <os>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </os>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <features>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </features>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-ac1432a4-bab5-43b6-871c-71608985c7ae">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:50:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <serial>ac1432a4-bab5-43b6-871c-71608985c7ae</serial>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:42:83:29"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <target dev="tap97f82dce-0b"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/console.log" append="off"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <video>
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </video>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:50:38 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:50:38 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:50:38 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:50:38 compute-0 nova_compute[256940]: </domain>
Oct 02 12:50:38 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.156 2 DEBUG nova.compute.manager [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Preparing to wait for external event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.157 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.157 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.158 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.159 2 DEBUG nova.virt.libvirt.vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.159 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.160 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.161 2 DEBUG os_vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97f82dce-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97f82dce-0b, col_values=(('external_ids', {'iface-id': '97f82dce-0b1b-4848-bd1a-7ec40fbf49ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:83:29', 'vm-uuid': '74af08e5-d1ea-478b-ace8-00363679ec4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.1695] manager: (tap97f82dce-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.176 2 INFO os_vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')
Oct 02 12:50:38 compute-0 kernel: tap97f82dce-0b: entered promiscuous mode
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.2893] manager: (tap97f82dce-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 ovn_controller[148123]: 2025-10-02T12:50:38Z|00676|binding|INFO|Claiming lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for this chassis.
Oct 02 12:50:38 compute-0 ovn_controller[148123]: 2025-10-02T12:50:38Z|00677|binding|INFO|97f82dce-0b1b-4848-bd1a-7ec40fbf49ae: Claiming fa:16:3e:42:83:29 10.100.0.11
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.301 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '10', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.303 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.305 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:50:38 compute-0 ovn_controller[148123]: 2025-10-02T12:50:38Z|00678|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae ovn-installed in OVS
Oct 02 12:50:38 compute-0 ovn_controller[148123]: 2025-10-02T12:50:38Z|00679|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae up in Southbound
Oct 02 12:50:38 compute-0 podman[348397]: 2025-10-02 12:50:38.312033965 +0000 UTC m=+0.095359382 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.319 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[217601ce-edae-4a4a-a662-160ff8d0bd04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.320 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.322 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.322 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[850041d8-b3fb-4bde-b439-0a127e131261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.323 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fa891f87-d383-4ecb-807d-13fc2781c603]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 systemd-machined[210927]: New machine qemu-72-instance-00000087.
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.338 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9e46ccf2-3f49-47d8-a577-eaadc7a1badb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 podman[348398]: 2025-10-02 12:50:38.341744248 +0000 UTC m=+0.124075059 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:50:38 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000087.
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.365 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2a84e4-9374-47cf-821a-4a03708f2838]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 systemd-udevd[348455]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.3815] device (tap97f82dce-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.3828] device (tap97f82dce-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.400 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fd79f1-150f-4308-a98c-5a328f170c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.4077] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.409 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd60372-6a57-4d12-82db-fbff4ed94a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.445 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4da942ae-2808-4386-9909-582d8de3f89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.449 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2710453f-1975-4aa9-a9cc-b2aa4ab82c95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.4714] device (tap9266ebd7-30): carrier: link connected
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.478 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6cf54c-fcc0-44a1-bcee-784cbce1eb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.497 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6d39ec-62b7-436c-8738-fdd12c7d9acf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733395, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348485, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.514 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c08589cc-c44c-4a1f-9913-3e355699f4b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733395, 'tstamp': 733395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348486, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.536 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e4eca9b2-4d0f-4337-9617-80fe0c9f2c81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733395, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348487, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.571 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[01ab83dc-3fd2-4ac0-b845-e9827c296b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[deac6db0-1e94-4d62-9565-6640c4e3e0ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 kernel: tap9266ebd7-30: entered promiscuous mode
Oct 02 12:50:38 compute-0 NetworkManager[44981]: <info>  [1759409438.6470] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.650 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:38 compute-0 ovn_controller[148123]: 2025-10-02T12:50:38Z|00680|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 nova_compute[256940]: 2025-10-02 12:50:38.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.668 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.669 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[95dee5a8-8406-4d36-9187-6c2a102050a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.670 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:38.670 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1115229878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:38 compute-0 ceph-mon[73668]: pgmap v2294: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Oct 02 12:50:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3816795570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:39 compute-0 nova_compute[256940]: 2025-10-02 12:50:39.078 2 DEBUG nova.compute.manager [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:39 compute-0 nova_compute[256940]: 2025-10-02 12:50:39.079 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:39 compute-0 nova_compute[256940]: 2025-10-02 12:50:39.079 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:39 compute-0 nova_compute[256940]: 2025-10-02 12:50:39.079 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:39 compute-0 nova_compute[256940]: 2025-10-02 12:50:39.080 2 DEBUG nova.compute.manager [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Processing event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:50:39 compute-0 podman[348536]: 2025-10-02 12:50:38.990604 +0000 UTC m=+0.019390469 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:39 compute-0 podman[348536]: 2025-10-02 12:50:39.186568135 +0000 UTC m=+0.215354594 container create e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:50:39 compute-0 systemd[1]: Started libpod-conmon-e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987.scope.
Oct 02 12:50:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd199f030cbe663881153a6f08f8062550ae2b22234b486c2226ddd8b6721283/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:39 compute-0 podman[348536]: 2025-10-02 12:50:39.328755958 +0000 UTC m=+0.357542427 container init e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:50:39 compute-0 podman[348536]: 2025-10-02 12:50:39.340765667 +0000 UTC m=+0.369552126 container start e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:39 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [NOTICE]   (348589) : New worker (348594) forked
Oct 02 12:50:39 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [NOTICE]   (348589) : Loading success.
Oct 02 12:50:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.011 2 DEBUG nova.compute.manager [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.012 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409440.0106509, 74af08e5-d1ea-478b-ace8-00363679ec4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.012 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Started (Lifecycle Event)
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.019 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance running successfully.
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.019 2 DEBUG nova.virt.libvirt.driver [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.041 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.045 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.079 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.079 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409440.0108361, 74af08e5-d1ea-478b-ace8-00363679ec4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.080 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Paused (Lifecycle Event)
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.104 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.110 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409440.015345, 74af08e5-d1ea-478b-ace8-00363679ec4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.111 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Resumed (Lifecycle Event)
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.132 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.136 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.156 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:50:40 compute-0 nova_compute[256940]: 2025-10-02 12:50:40.311 2 INFO nova.compute.manager [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance to original state: 'active'
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005366640784943234 of space, bias 1.0, pg target 1.6099922354829703 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008634753629257398 of space, bias 1.0, pg target 2.581791335147962 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:50:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.015 2 DEBUG nova.compute.manager [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-changed-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.016 2 DEBUG nova.compute.manager [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing instance network info cache due to event network-changed-61335795-d917-4083-8d64-d9c93685176a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.016 2 DEBUG oslo_concurrency.lockutils [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.016 2 DEBUG oslo_concurrency.lockutils [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.017 2 DEBUG nova.network.neutron [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing network info cache for port 61335795-d917-4083-8d64-d9c93685176a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.237 2 DEBUG nova.compute.manager [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.237 2 DEBUG oslo_concurrency.lockutils [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.238 2 DEBUG oslo_concurrency.lockutils [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.238 2 DEBUG oslo_concurrency.lockutils [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.239 2 DEBUG nova.compute.manager [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:41 compute-0 nova_compute[256940]: 2025-10-02 12:50:41.239 2 WARNING nova.compute.manager [req-99a8a19e-de7d-4c4c-922c-a52e9ef31760 req-9b345603-0b5f-40af-86b8-341481c43f5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state None.
Oct 02 12:50:41 compute-0 ceph-mon[73668]: pgmap v2295: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Oct 02 12:50:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:41.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:41.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:42 compute-0 nova_compute[256940]: 2025-10-02 12:50:42.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:42.871 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:42 compute-0 nova_compute[256940]: 2025-10-02 12:50:42.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:42.872 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:50:43 compute-0 ceph-mon[73668]: pgmap v2296: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.354 2 DEBUG nova.compute.manager [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.355 2 DEBUG nova.compute.manager [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing instance network info cache due to event network-changed-e45b089f-5ee7-489a-8871-2386b27282c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.356 2 DEBUG oslo_concurrency.lockutils [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.356 2 DEBUG oslo_concurrency.lockutils [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:43 compute-0 nova_compute[256940]: 2025-10-02 12:50:43.356 2 DEBUG nova.network.neutron [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Refreshing network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:43.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:43.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:44 compute-0 sudo[348611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:44 compute-0 sudo[348611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:44 compute-0 sudo[348611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:44 compute-0 sudo[348636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:44 compute-0 sudo[348636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:44 compute-0 sudo[348636]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:44.874 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:44 compute-0 nova_compute[256940]: 2025-10-02 12:50:44.970 2 DEBUG nova.network.neutron [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updated VIF entry in instance network info cache for port 61335795-d917-4083-8d64-d9c93685176a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:44 compute-0 nova_compute[256940]: 2025-10-02 12:50:44.971 2 DEBUG nova.network.neutron [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.053 2 DEBUG oslo_concurrency.lockutils [req-d68cffe0-20e3-4340-8d1c-ca61038a26ff req-79744bd0-cd3d-4288-ad8d-2d08adb286d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:45 compute-0 ceph-mon[73668]: pgmap v2297: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 213 op/s
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.972 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.974 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.974 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.975 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.975 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.976 2 INFO nova.compute.manager [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Terminating instance
Oct 02 12:50:45 compute-0 nova_compute[256940]: 2025-10-02 12:50:45.978 2 DEBUG nova.compute.manager [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.272 2 DEBUG nova.network.neutron [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updated VIF entry in instance network info cache for port e45b089f-5ee7-489a-8871-2386b27282c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.273 2 DEBUG nova.network.neutron [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [{"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.287 2 DEBUG oslo_concurrency.lockutils [req-244939aa-360e-449e-9cd9-25d6261350f9 req-a24fb6ba-3240-4a35-ac5f-08166dc76b75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:46 compute-0 kernel: tap97f82dce-0b (unregistering): left promiscuous mode
Oct 02 12:50:46 compute-0 NetworkManager[44981]: <info>  [1759409446.5047] device (tap97f82dce-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 ovn_controller[148123]: 2025-10-02T12:50:46Z|00681|binding|INFO|Releasing lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae from this chassis (sb_readonly=0)
Oct 02 12:50:46 compute-0 ovn_controller[148123]: 2025-10-02T12:50:46Z|00682|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae down in Southbound
Oct 02 12:50:46 compute-0 ovn_controller[148123]: 2025-10-02T12:50:46Z|00683|binding|INFO|Removing iface tap97f82dce-0b ovn-installed in OVS
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:46.524 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '12', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:46.525 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis
Oct 02 12:50:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:46.527 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:46.528 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[40f55061-8160-4b5d-970e-95a1c37b9d2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:46.529 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 02 12:50:46 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000087.scope: Consumed 7.484s CPU time.
Oct 02 12:50:46 compute-0 systemd-machined[210927]: Machine qemu-72-instance-00000087 terminated.
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.615 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance destroyed successfully.
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.615 2 DEBUG nova.objects.instance [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.634 2 DEBUG nova.virt.libvirt.vif [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.634 2 DEBUG nova.network.os_vif_util [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.635 2 DEBUG nova.network.os_vif_util [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.635 2 DEBUG os_vif [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97f82dce-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.645 2 INFO os_vif [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')
Oct 02 12:50:46 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [NOTICE]   (348589) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:46 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [NOTICE]   (348589) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:46 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [WARNING]  (348589) : Exiting Master process...
Oct 02 12:50:46 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [ALERT]    (348589) : Current worker (348594) exited with code 143 (Terminated)
Oct 02 12:50:46 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[348569]: [WARNING]  (348589) : All workers exited. Exiting... (0)
Oct 02 12:50:46 compute-0 systemd[1]: libpod-e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987.scope: Deactivated successfully.
Oct 02 12:50:46 compute-0 podman[348692]: 2025-10-02 12:50:46.857611086 +0000 UTC m=+0.223163906 container died e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.925 2 DEBUG nova.compute.manager [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.925 2 DEBUG oslo_concurrency.lockutils [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.925 2 DEBUG oslo_concurrency.lockutils [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.926 2 DEBUG oslo_concurrency.lockutils [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.926 2 DEBUG nova.compute.manager [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:46 compute-0 nova_compute[256940]: 2025-10-02 12:50:46.926 2 DEBUG nova.compute.manager [req-6ca9f330-18d0-44df-ba9f-836c33054fde req-53aaf07f-032d-4348-95da-efdd96858859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:50:47 compute-0 nova_compute[256940]: 2025-10-02 12:50:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd199f030cbe663881153a6f08f8062550ae2b22234b486c2226ddd8b6721283-merged.mount: Deactivated successfully.
Oct 02 12:50:47 compute-0 podman[348692]: 2025-10-02 12:50:47.433811072 +0000 UTC m=+0.799363892 container cleanup e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:50:47 compute-0 systemd[1]: libpod-conmon-e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987.scope: Deactivated successfully.
Oct 02 12:50:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Oct 02 12:50:47 compute-0 ceph-mon[73668]: pgmap v2298: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 213 op/s
Oct 02 12:50:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:47.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 593 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 204 op/s
Oct 02 12:50:47 compute-0 podman[348742]: 2025-10-02 12:50:47.852332497 +0000 UTC m=+0.395648789 container remove e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.860 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c085987e-4483-431a-b44f-cd9949758a22]: (4, ('Thu Oct  2 12:50:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987)\ne6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987\nThu Oct  2 12:50:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (e6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987)\ne6e46fe06c02e2fd507a77f1c354939710fcc77f18b0ced89e69eaa1dc64e987\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.863 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0dcee7-b928-4b3f-88c5-e66b5e106988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.865 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:47 compute-0 nova_compute[256940]: 2025-10-02 12:50:47.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:47 compute-0 kernel: tap9266ebd7-30: left promiscuous mode
Oct 02 12:50:47 compute-0 nova_compute[256940]: 2025-10-02 12:50:47.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.872 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b8a743-b58f-4402-8695-4c57e165fc32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 nova_compute[256940]: 2025-10-02 12:50:47.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.910 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[541c9a93-3209-4c70-9ec2-ebcb5e774a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.912 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[86c0d7ed-a585-4fbb-8570-cac23a245e5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.932 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac166aa-31f3-42ff-83a9-38ac2b5a6fa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733387, 'reachable_time': 32103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348758, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.937 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:47.937 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c58a5786-a2af-4f6e-aedf-6e409789fcaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct 02 12:50:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Oct 02 12:50:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.094 2 DEBUG nova.compute.manager [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.095 2 DEBUG oslo_concurrency.lockutils [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.095 2 DEBUG oslo_concurrency.lockutils [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.095 2 DEBUG oslo_concurrency.lockutils [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.095 2 DEBUG nova.compute.manager [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:49 compute-0 nova_compute[256940]: 2025-10-02 12:50:49.096 2 WARNING nova.compute.manager [req-58d4e518-bbb1-4ec5-9259-3cd584496a50 req-18d5342b-d39e-43b6-abb6-bb95dd50a25d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state deleting.
Oct 02 12:50:49 compute-0 ceph-mon[73668]: pgmap v2299: 305 pgs: 305 active+clean; 593 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 204 op/s
Oct 02 12:50:49 compute-0 ceph-mon[73668]: osdmap e318: 3 total, 3 up, 3 in
Oct 02 12:50:49 compute-0 ovn_controller[148123]: 2025-10-02T12:50:49Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:66:44 10.100.0.3
Oct 02 12:50:49 compute-0 ovn_controller[148123]: 2025-10-02T12:50:49Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:66:44 10.100.0.3
Oct 02 12:50:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:49.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:49.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.286 2 INFO nova.virt.libvirt.driver [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Deleting instance files /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d_del
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.286 2 INFO nova.virt.libvirt.driver [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Deletion of /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d_del complete
Oct 02 12:50:51 compute-0 ceph-mon[73668]: pgmap v2301: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.351 2 INFO nova.compute.manager [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Took 5.37 seconds to destroy the instance on the hypervisor.
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.351 2 DEBUG oslo.service.loopingcall [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.352 2 DEBUG nova.compute.manager [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.352 2 DEBUG nova.network.neutron [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:50:51 compute-0 podman[348763]: 2025-10-02 12:50:51.406987306 +0000 UTC m=+0.063986385 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:51 compute-0 podman[348762]: 2025-10-02 12:50:51.416226483 +0000 UTC m=+0.074976837 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:51 compute-0 nova_compute[256940]: 2025-10-02 12:50:51.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:51.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:52 compute-0 nova_compute[256940]: 2025-10-02 12:50:52.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:52 compute-0 sudo[348804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:52 compute-0 sudo[348804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-0 sudo[348804]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-0 sudo[348829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:52 compute-0 sudo[348829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-0 sudo[348829]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:52 compute-0 sudo[348854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:52 compute-0 sudo[348854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-0 sudo[348854]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-0 sudo[348879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:50:52 compute-0 sudo[348879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-0 nova_compute[256940]: 2025-10-02 12:50:52.763 2 DEBUG nova.network.neutron [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:52 compute-0 nova_compute[256940]: 2025-10-02 12:50:52.808 2 INFO nova.compute.manager [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Took 1.46 seconds to deallocate network for instance.
Oct 02 12:50:52 compute-0 nova_compute[256940]: 2025-10-02 12:50:52.896 2 DEBUG nova.compute.manager [req-99101043-4227-4a57-81bf-45ce576e5ec9 req-5582f3db-053d-4027-8f9e-63f216aea8eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-deleted-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.062 2 INFO nova.compute.manager [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Took 0.25 seconds to detach 1 volumes for instance.
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.106 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.106 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.111 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.158 2 INFO nova.scheduler.client.report [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance 74af08e5-d1ea-478b-ace8-00363679ec4d
Oct 02 12:50:53 compute-0 sudo[348879]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:53 compute-0 nova_compute[256940]: 2025-10-02 12:50:53.249 2 DEBUG oslo_concurrency.lockutils [None req-2e1fb48d-df67-40eb-b75b-f03bb2ee288c b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:50:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 37244a77-9575-4a1f-bf72-03616bf82adf does not exist
Oct 02 12:50:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 53a34a01-1d32-419b-b00a-a74a5df46c62 does not exist
Oct 02 12:50:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 05768f56-36a6-44d8-815e-e009210479c3 does not exist
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:50:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:50:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:53 compute-0 sudo[348935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:53 compute-0 sudo[348935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:53 compute-0 sudo[348935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:53 compute-0 sudo[348960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:53 compute-0 sudo[348960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:53 compute-0 sudo[348960]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:53 compute-0 sudo[348985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:53 compute-0 sudo[348985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:53 compute-0 sudo[348985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:50:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:53.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:50:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:53 compute-0 sudo[349010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:50:53 compute-0 sudo[349010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:53 compute-0 ceph-mon[73668]: pgmap v2302: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.191487605 +0000 UTC m=+0.023452313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.325828667 +0000 UTC m=+0.157793365 container create 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:50:54 compute-0 systemd[1]: Started libpod-conmon-60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c.scope.
Oct 02 12:50:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.672322261 +0000 UTC m=+0.504286989 container init 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.679436594 +0000 UTC m=+0.511401292 container start 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:50:54 compute-0 charming_volhard[349091]: 167 167
Oct 02 12:50:54 compute-0 systemd[1]: libpod-60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c.scope: Deactivated successfully.
Oct 02 12:50:54 compute-0 conmon[349091]: conmon 60ad4706f91a4bc1d482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c.scope/container/memory.events
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.859815689 +0000 UTC m=+0.691780407 container attach 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:50:54 compute-0 podman[349076]: 2025-10-02 12:50:54.860915547 +0000 UTC m=+0.692880245 container died 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:50:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:50:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:50:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:55 compute-0 ceph-mon[73668]: pgmap v2303: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdccbd34a9ddc31462286c2b5a6906243e4c1996769d2cb9de6d63fa096c5c06-merged.mount: Deactivated successfully.
Oct 02 12:50:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:55.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:55 compute-0 podman[349076]: 2025-10-02 12:50:55.833508539 +0000 UTC m=+1.665473237 container remove 60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:50:55 compute-0 systemd[1]: libpod-conmon-60ad4706f91a4bc1d482fcb678975fda631cccda329c1aa5d1bb04ba300e774c.scope: Deactivated successfully.
Oct 02 12:50:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Oct 02 12:50:56 compute-0 nova_compute[256940]: 2025-10-02 12:50:56.040 2 INFO nova.compute.manager [None req-cfdf432f-ec79-4d1d-9b73-0d5a7a30a05a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Get console output
Oct 02 12:50:56 compute-0 nova_compute[256940]: 2025-10-02 12:50:56.045 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:50:56 compute-0 podman[349117]: 2025-10-02 12:50:56.014462087 +0000 UTC m=+0.031188042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:56 compute-0 podman[349117]: 2025-10-02 12:50:56.162582124 +0000 UTC m=+0.179308069 container create eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:50:56 compute-0 systemd[1]: Started libpod-conmon-eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a.scope.
Oct 02 12:50:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:56 compute-0 podman[349117]: 2025-10-02 12:50:56.481276083 +0000 UTC m=+0.498002058 container init eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 12:50:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2193676697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:56 compute-0 podman[349117]: 2025-10-02 12:50:56.492253405 +0000 UTC m=+0.508979350 container start eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:50:56 compute-0 podman[349117]: 2025-10-02 12:50:56.589432132 +0000 UTC m=+0.606158097 container attach eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:50:56 compute-0 nova_compute[256940]: 2025-10-02 12:50:56.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:57 compute-0 friendly_edison[349134]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:50:57 compute-0 friendly_edison[349134]: --> relative data size: 1.0
Oct 02 12:50:57 compute-0 friendly_edison[349134]: --> All data devices are unavailable
Oct 02 12:50:57 compute-0 systemd[1]: libpod-eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a.scope: Deactivated successfully.
Oct 02 12:50:57 compute-0 podman[349117]: 2025-10-02 12:50:57.423280468 +0000 UTC m=+1.440006413 container died eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-714002ff4f69b3f0b6bee228b25368b347bbddccafd48853f838d295c42e7364-merged.mount: Deactivated successfully.
Oct 02 12:50:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:57 compute-0 podman[349117]: 2025-10-02 12:50:57.574959346 +0000 UTC m=+1.591685291 container remove eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:50:57 compute-0 systemd[1]: libpod-conmon-eb9b91214ea944a7ee7bf6c1cc48411fdccc7ec5dd6e3de6c4b88047703dcc4a.scope: Deactivated successfully.
Oct 02 12:50:57 compute-0 sudo[349010]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:57 compute-0 ceph-mon[73668]: pgmap v2304: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Oct 02 12:50:57 compute-0 sudo[349162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:57 compute-0 sudo[349162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:57 compute-0 sudo[349162]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:57 compute-0 sudo[349187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:57 compute-0 sudo[349187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:57 compute-0 sudo[349187]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:57 compute-0 sudo[349212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:57 compute-0 sudo[349212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:57 compute-0 sudo[349212]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:57.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.5 MiB/s wr, 109 op/s
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.859 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.860 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.861 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.861 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.861 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.862 2 INFO nova.compute.manager [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Terminating instance
Oct 02 12:50:57 compute-0 nova_compute[256940]: 2025-10-02 12:50:57.863 2 DEBUG nova.compute.manager [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:50:57 compute-0 sudo[349237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:50:57 compute-0 sudo[349237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.258659384 +0000 UTC m=+0.056198105 container create 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:50:58 compute-0 systemd[1]: Started libpod-conmon-07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9.scope.
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.226058326 +0000 UTC m=+0.023597087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.350406061 +0000 UTC m=+0.147944802 container init 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.358298444 +0000 UTC m=+0.155837165 container start 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:50:58 compute-0 blissful_grothendieck[349319]: 167 167
Oct 02 12:50:58 compute-0 systemd[1]: libpod-07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9.scope: Deactivated successfully.
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.371855583 +0000 UTC m=+0.169394324 container attach 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.372395336 +0000 UTC m=+0.169934057 container died 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:50:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-431c6a0dc34e2493cc594c6ecbf8afa1333e1e1ba4471e03b56f222649304f4c-merged.mount: Deactivated successfully.
Oct 02 12:50:58 compute-0 kernel: tape45b089f-5e (unregistering): left promiscuous mode
Oct 02 12:50:58 compute-0 NetworkManager[44981]: <info>  [1759409458.5774] device (tape45b089f-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:58 compute-0 ovn_controller[148123]: 2025-10-02T12:50:58Z|00684|binding|INFO|Releasing lport e45b089f-5ee7-489a-8871-2386b27282c1 from this chassis (sb_readonly=0)
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 ovn_controller[148123]: 2025-10-02T12:50:58Z|00685|binding|INFO|Setting lport e45b089f-5ee7-489a-8871-2386b27282c1 down in Southbound
Oct 02 12:50:58 compute-0 ovn_controller[148123]: 2025-10-02T12:50:58Z|00686|binding|INFO|Removing iface tape45b089f-5e ovn-installed in OVS
Oct 02 12:50:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:58.597 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:6e:9c 10.100.0.6'], port_security=['fa:16:3e:ec:6e:9c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2755e56a-ea2b-43ad-8bae-29eb7fc0ed60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e45b089f-5ee7-489a-8871-2386b27282c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:58.598 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e45b089f-5ee7-489a-8871-2386b27282c1 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa unbound from our chassis
Oct 02 12:50:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:58.599 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa3b4df3-6044-4a53-8039-c9a5c05725aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:50:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:58.601 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff29e75-0d09-40e8-b1db-ad1a1a53334b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:58.602 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace which is not needed anymore
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 podman[349303]: 2025-10-02 12:50:58.624945966 +0000 UTC m=+0.422484687 container remove 07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_grothendieck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:50:58 compute-0 systemd[1]: libpod-conmon-07c00b7fa1fb3ebf222ad7834b9be31f696a84c4b0026625c77692bf1d86b8d9.scope: Deactivated successfully.
Oct 02 12:50:58 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 02 12:50:58 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000083.scope: Consumed 21.239s CPU time.
Oct 02 12:50:58 compute-0 systemd-machined[210927]: Machine qemu-67-instance-00000083 terminated.
Oct 02 12:50:58 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.709 2 INFO nova.virt.libvirt.driver [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Instance destroyed successfully.
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.709 2 DEBUG nova.objects.instance [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'resources' on Instance uuid 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.727 2 DEBUG nova.virt.libvirt.vif [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1310446020',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1310446020',id=131,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-dckwdwmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:02Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=2755e56a-ea2b-43ad-8bae-29eb7fc0ed60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.727 2 DEBUG nova.network.os_vif_util [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "e45b089f-5ee7-489a-8871-2386b27282c1", "address": "fa:16:3e:ec:6e:9c", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape45b089f-5e", "ovs_interfaceid": "e45b089f-5ee7-489a-8871-2386b27282c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.728 2 DEBUG nova.network.os_vif_util [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.728 2 DEBUG os_vif [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape45b089f-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.736 2 INFO os_vif [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:6e:9c,bridge_name='br-int',has_traffic_filtering=True,id=e45b089f-5ee7-489a-8871-2386b27282c1,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape45b089f-5e')
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.787 2 DEBUG nova.compute.manager [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-unplugged-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.787 2 DEBUG oslo_concurrency.lockutils [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.787 2 DEBUG oslo_concurrency.lockutils [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.787 2 DEBUG oslo_concurrency.lockutils [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.788 2 DEBUG nova.compute.manager [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] No waiting events found dispatching network-vif-unplugged-e45b089f-5ee7-489a-8871-2386b27282c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:58 compute-0 nova_compute[256940]: 2025-10-02 12:50:58.788 2 DEBUG nova.compute.manager [req-c107936e-df8e-49d9-ba43-9c7b253e603b req-7c0732a8-9090-4100-9f0f-feff7c7cda99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-unplugged-e45b089f-5ee7-489a-8871-2386b27282c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [NOTICE]   (343626) : haproxy version is 2.8.14-c23fe91
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [NOTICE]   (343626) : path to executable is /usr/sbin/haproxy
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [WARNING]  (343626) : Exiting Master process...
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [WARNING]  (343626) : Exiting Master process...
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [ALERT]    (343626) : Current worker (343628) exited with code 143 (Terminated)
Oct 02 12:50:58 compute-0 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[343622]: [WARNING]  (343626) : All workers exited. Exiting... (0)
Oct 02 12:50:58 compute-0 systemd[1]: libpod-2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb.scope: Deactivated successfully.
Oct 02 12:50:58 compute-0 podman[349370]: 2025-10-02 12:50:58.836651926 +0000 UTC m=+0.108091569 container died 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:50:58 compute-0 podman[349402]: 2025-10-02 12:50:58.835977939 +0000 UTC m=+0.059467030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:50:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:50:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-402927eae0ce9b9ac99512f564211e8b49d50c16bb139c4cd43e5f5e1b133404-merged.mount: Deactivated successfully.
Oct 02 12:50:59 compute-0 podman[349370]: 2025-10-02 12:50:59.041695845 +0000 UTC m=+0.313135488 container cleanup 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:50:59 compute-0 podman[349402]: 2025-10-02 12:50:59.057246654 +0000 UTC m=+0.280735715 container create 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 12:50:59 compute-0 systemd[1]: libpod-conmon-2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb.scope: Deactivated successfully.
Oct 02 12:50:59 compute-0 systemd[1]: Started libpod-conmon-5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c.scope.
Oct 02 12:50:59 compute-0 podman[349437]: 2025-10-02 12:50:59.113928751 +0000 UTC m=+0.041204070 container remove 2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.123 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0b483c1d-3ef1-4fbc-be23-160a62b2c9ce]: (4, ('Thu Oct  2 12:50:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb)\n2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb\nThu Oct  2 12:50:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb)\n2821538b0bb6845d157d746d570bba1948ae75143ffc2a5d6f2a220c2a1b9dfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.126 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[79a2ba04-d5b4-4303-9409-f49d0cc7d94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.127 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:59 compute-0 nova_compute[256940]: 2025-10-02 12:50:59.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:59 compute-0 kernel: tapaa3b4df3-60: left promiscuous mode
Oct 02 12:50:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fd07fbd95599222eb53e23e945d6d8ec63d4403dfa91ee8fef31445c28a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fd07fbd95599222eb53e23e945d6d8ec63d4403dfa91ee8fef31445c28a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fd07fbd95599222eb53e23e945d6d8ec63d4403dfa91ee8fef31445c28a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fd07fbd95599222eb53e23e945d6d8ec63d4403dfa91ee8fef31445c28a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:59 compute-0 nova_compute[256940]: 2025-10-02 12:50:59.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.149 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e53522e1-c3c2-4631-bac0-d9d48511a011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 podman[349402]: 2025-10-02 12:50:59.162357535 +0000 UTC m=+0.385846616 container init 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:50:59 compute-0 podman[349402]: 2025-10-02 12:50:59.170535775 +0000 UTC m=+0.394024836 container start 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:50:59 compute-0 podman[349402]: 2025-10-02 12:50:59.17772538 +0000 UTC m=+0.401214441 container attach 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.177 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c15ad-6677-4a3f-8eec-a013367320e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.179 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e3c96a-592f-4c3c-94d8-63f0d583ec06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.197 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc00a68-0fc1-4e6c-9fe7-cab866a447b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717282, 'reachable_time': 25807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349460, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 systemd[1]: run-netns-ovnmeta\x2daa3b4df3\x2d6044\x2d4a53\x2d8039\x2dc9a5c05725aa.mount: Deactivated successfully.
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.203 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:50:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:50:59.203 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[20121091-60a3-4cc5-adb7-85a7c5a22bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:59 compute-0 ceph-mon[73668]: pgmap v2305: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.5 MiB/s wr, 109 op/s
Oct 02 12:50:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:50:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:59.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 1.3 MiB/s wr, 94 op/s
Oct 02 12:51:00 compute-0 keen_hertz[349453]: {
Oct 02 12:51:00 compute-0 keen_hertz[349453]:     "1": [
Oct 02 12:51:00 compute-0 keen_hertz[349453]:         {
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "devices": [
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "/dev/loop3"
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             ],
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "lv_name": "ceph_lv0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "lv_size": "7511998464",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "name": "ceph_lv0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "tags": {
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.cluster_name": "ceph",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.crush_device_class": "",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.encrypted": "0",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.osd_id": "1",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.type": "block",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:                 "ceph.vdo": "0"
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             },
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "type": "block",
Oct 02 12:51:00 compute-0 keen_hertz[349453]:             "vg_name": "ceph_vg0"
Oct 02 12:51:00 compute-0 keen_hertz[349453]:         }
Oct 02 12:51:00 compute-0 keen_hertz[349453]:     ]
Oct 02 12:51:00 compute-0 keen_hertz[349453]: }
Oct 02 12:51:00 compute-0 systemd[1]: libpod-5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c.scope: Deactivated successfully.
Oct 02 12:51:00 compute-0 conmon[349453]: conmon 5e32f8f62a55df07ca8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c.scope/container/memory.events
Oct 02 12:51:00 compute-0 podman[349402]: 2025-10-02 12:51:00.070542311 +0000 UTC m=+1.294031382 container died 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04fd07fbd95599222eb53e23e945d6d8ec63d4403dfa91ee8fef31445c28a2f-merged.mount: Deactivated successfully.
Oct 02 12:51:00 compute-0 podman[349402]: 2025-10-02 12:51:00.135486539 +0000 UTC m=+1.358975600 container remove 5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 12:51:00 compute-0 systemd[1]: libpod-conmon-5e32f8f62a55df07ca8d97b0ecc30310ab590e9a028ef5d3b0fc23f44ccadf5c.scope: Deactivated successfully.
Oct 02 12:51:00 compute-0 sudo[349237]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:00 compute-0 sudo[349477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:00 compute-0 sudo[349477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:00 compute-0 sudo[349477]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:00 compute-0 sudo[349502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:51:00 compute-0 sudo[349502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:00 compute-0 sudo[349502]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:00 compute-0 sudo[349527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:00 compute-0 sudo[349527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:00 compute-0 sudo[349527]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:00 compute-0 sudo[349552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:51:00 compute-0 sudo[349552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1132038165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.860988282 +0000 UTC m=+0.067735122 container create 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:51:00 compute-0 systemd[1]: Started libpod-conmon-157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb.scope.
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.910 2 DEBUG nova.compute.manager [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.81809711 +0000 UTC m=+0.024843960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.913 2 DEBUG oslo_concurrency.lockutils [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.914 2 DEBUG oslo_concurrency.lockutils [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.914 2 DEBUG oslo_concurrency.lockutils [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.915 2 DEBUG nova.compute.manager [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] No waiting events found dispatching network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:00 compute-0 nova_compute[256940]: 2025-10-02 12:51:00.915 2 WARNING nova.compute.manager [req-5df3cfbc-aa35-4cea-bb5f-bf513eb3fd2f req-08f83268-26ae-4c7a-a49b-d3b00cf7a95d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received unexpected event network-vif-plugged-e45b089f-5ee7-489a-8871-2386b27282c1 for instance with vm_state active and task_state deleting.
Oct 02 12:51:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.966432951 +0000 UTC m=+0.173179821 container init 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.97532429 +0000 UTC m=+0.182071130 container start 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.979866366 +0000 UTC m=+0.186613216 container attach 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:00 compute-0 charming_diffie[349634]: 167 167
Oct 02 12:51:00 compute-0 systemd[1]: libpod-157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb.scope: Deactivated successfully.
Oct 02 12:51:00 compute-0 podman[349617]: 2025-10-02 12:51:00.982590466 +0000 UTC m=+0.189337326 container died 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f0e9b09823a409b09b972c56e9ecd0e306120d909ba8798868bd70b397f518-merged.mount: Deactivated successfully.
Oct 02 12:51:01 compute-0 podman[349617]: 2025-10-02 12:51:01.040452813 +0000 UTC m=+0.247199653 container remove 157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_diffie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 12:51:01 compute-0 systemd[1]: libpod-conmon-157fecd6ce28e30a307b5d9a29c409ee73dc5a0656c290ac3d66313d8aa3ddcb.scope: Deactivated successfully.
Oct 02 12:51:01 compute-0 podman[349658]: 2025-10-02 12:51:01.219180506 +0000 UTC m=+0.046857025 container create 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 12:51:01 compute-0 systemd[1]: Started libpod-conmon-8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12.scope.
Oct 02 12:51:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:01 compute-0 podman[349658]: 2025-10-02 12:51:01.195836516 +0000 UTC m=+0.023513065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de4f4dff84f5f7854f6d4484fbc83d683af45ba94292bbd485477ce4703acc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de4f4dff84f5f7854f6d4484fbc83d683af45ba94292bbd485477ce4703acc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de4f4dff84f5f7854f6d4484fbc83d683af45ba94292bbd485477ce4703acc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de4f4dff84f5f7854f6d4484fbc83d683af45ba94292bbd485477ce4703acc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:01 compute-0 podman[349658]: 2025-10-02 12:51:01.310465281 +0000 UTC m=+0.138141820 container init 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:51:01 compute-0 podman[349658]: 2025-10-02 12:51:01.318425696 +0000 UTC m=+0.146102215 container start 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:51:01 compute-0 podman[349658]: 2025-10-02 12:51:01.323069015 +0000 UTC m=+0.150745534 container attach 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:51:01 compute-0 nova_compute[256940]: 2025-10-02 12:51:01.614 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409446.6135771, 74af08e5-d1ea-478b-ace8-00363679ec4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:01 compute-0 nova_compute[256940]: 2025-10-02 12:51:01.615 2 INFO nova.compute.manager [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Stopped (Lifecycle Event)
Oct 02 12:51:01 compute-0 nova_compute[256940]: 2025-10-02 12:51:01.636 2 DEBUG nova.compute.manager [None req-74978d49-bad5-44ba-8f72-885e8465c84e - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:01.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:01.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:51:01 compute-0 ceph-mon[73668]: pgmap v2306: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 1.3 MiB/s wr, 94 op/s
Oct 02 12:51:02 compute-0 nova_compute[256940]: 2025-10-02 12:51:02.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-0 quizzical_easley[349675]: {
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:         "osd_id": 1,
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:         "type": "bluestore"
Oct 02 12:51:02 compute-0 quizzical_easley[349675]:     }
Oct 02 12:51:02 compute-0 quizzical_easley[349675]: }
Oct 02 12:51:02 compute-0 systemd[1]: libpod-8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12.scope: Deactivated successfully.
Oct 02 12:51:02 compute-0 podman[349658]: 2025-10-02 12:51:02.266886487 +0000 UTC m=+1.094563006 container died 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de4f4dff84f5f7854f6d4484fbc83d683af45ba94292bbd485477ce4703acc1-merged.mount: Deactivated successfully.
Oct 02 12:51:02 compute-0 podman[349658]: 2025-10-02 12:51:02.443473695 +0000 UTC m=+1.271150224 container remove 8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:51:02 compute-0 systemd[1]: libpod-conmon-8311829c815449f9775cd9df0752a99a6dc02d4198083c069a33b8d9a3906c12.scope: Deactivated successfully.
Oct 02 12:51:02 compute-0 sudo[349552]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:51:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:51:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d5518dda-8790-4cea-9a94-eba28395f299 does not exist
Oct 02 12:51:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 798966fd-8367-4c03-b0aa-f6e45bb96ae9 does not exist
Oct 02 12:51:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7bde0997-7a9b-4539-8290-43b85fd6e3d2 does not exist
Oct 02 12:51:02 compute-0 sudo[349711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:03 compute-0 sudo[349711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:03 compute-0 sudo[349711]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:03 compute-0 sudo[349736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:51:03 compute-0 sudo[349736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:03 compute-0 sudo[349736]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.206 2 INFO nova.virt.libvirt.driver [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Deleting instance files /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_del
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.206 2 INFO nova.virt.libvirt.driver [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Deletion of /var/lib/nova/instances/2755e56a-ea2b-43ad-8bae-29eb7fc0ed60_del complete
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.268 2 INFO nova.compute.manager [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Took 5.40 seconds to destroy the instance on the hypervisor.
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.271 2 DEBUG oslo.service.loopingcall [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.272 2 DEBUG nova.compute.manager [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.272 2 DEBUG nova.network.neutron [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:03 compute-0 ceph-mon[73668]: pgmap v2307: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:51:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:03.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:03.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.896 2 DEBUG nova.network.neutron [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:03 compute-0 nova_compute[256940]: 2025-10-02 12:51:03.912 2 INFO nova.compute.manager [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Took 0.64 seconds to deallocate network for instance.
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.098 2 INFO nova.compute.manager [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Took 0.19 seconds to detach 1 volumes for instance.
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.148 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.149 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.247 2 DEBUG oslo_concurrency.processutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:04 compute-0 sudo[349781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:04 compute-0 sudo[349781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:04 compute-0 sudo[349781]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145012872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.737 2 DEBUG oslo_concurrency.processutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.745 2 DEBUG nova.compute.provider_tree [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:04 compute-0 sudo[349807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:04 compute-0 sudo[349807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:04 compute-0 sudo[349807]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.763 2 DEBUG nova.scheduler.client.report [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.788 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.831 2 INFO nova.scheduler.client.report [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Deleted allocations for instance 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60
Oct 02 12:51:04 compute-0 nova_compute[256940]: 2025-10-02 12:51:04.903 2 DEBUG oslo_concurrency.lockutils [None req-5d03b3d7-0905-48e2-8cc5-3278dca01042 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "2755e56a-ea2b-43ad-8bae-29eb7fc0ed60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:05 compute-0 ceph-mon[73668]: pgmap v2308: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 12:51:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4145012872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1436376951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:05 compute-0 nova_compute[256940]: 2025-10-02 12:51:05.243 2 DEBUG nova.compute.manager [req-ae9bbed6-c197-405d-b144-1618f7bb991a req-528b87d7-b1ff-4906-8042-72e7b3a986fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Received event network-vif-deleted-e45b089f-5ee7-489a-8871-2386b27282c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:51:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:05.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:51:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.9 MiB/s wr, 63 op/s
Oct 02 12:51:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2214410976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4258114913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:07 compute-0 nova_compute[256940]: 2025-10-02 12:51:07.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:07 compute-0 nova_compute[256940]: 2025-10-02 12:51:07.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:07 compute-0 ceph-mon[73668]: pgmap v2309: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.9 MiB/s wr, 63 op/s
Oct 02 12:51:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4101096271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:07.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.301 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.301 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.321 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.431 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.432 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.438 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.439 2 INFO nova.compute.claims [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:51:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3298766319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2420490110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.702 2 DEBUG nova.scheduler.client.report [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.741 2 DEBUG nova.scheduler.client.report [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.742 2 DEBUG nova.compute.provider_tree [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.780 2 DEBUG nova.scheduler.client.report [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.816 2 DEBUG nova.scheduler.client.report [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:51:08 compute-0 nova_compute[256940]: 2025-10-02 12:51:08.897 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069559069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.384 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.391 2 DEBUG nova.compute.provider_tree [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:09 compute-0 podman[349856]: 2025-10-02 12:51:09.397956044 +0000 UTC m=+0.060863975 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.416 2 DEBUG nova.scheduler.client.report [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:09 compute-0 podman[349857]: 2025-10-02 12:51:09.428053967 +0000 UTC m=+0.089287895 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.437 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.438 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.494 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.495 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:51:09 compute-0 ceph-mon[73668]: pgmap v2310: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:51:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3069559069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.519 2 INFO nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.539 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.651 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.652 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.652 2 INFO nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Creating image(s)
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.683 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.713 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.741 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.745 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.780 2 DEBUG nova.policy [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.825 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.826 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.827 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.827 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 583 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.857 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:09 compute-0 nova_compute[256940]: 2025-10-02 12:51:09.862 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.569 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Successfully created port: e4176135-8965-4856-86cc-6a0172ab2f82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:51:10 compute-0 ceph-mon[73668]: pgmap v2311: 305 pgs: 305 active+clean; 583 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.742 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.824 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.976 2 DEBUG nova.objects.instance [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.994 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.995 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Ensure instance console log exists: /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.995 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.995 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:10 compute-0 nova_compute[256940]: 2025-10-02 12:51:10.996 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.318 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.318 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.319 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.319 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.319 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.532 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Successfully updated port: e4176135-8965-4856-86cc-6a0172ab2f82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.561 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.561 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.561 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.635 2 DEBUG nova.compute.manager [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-changed-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.636 2 DEBUG nova.compute.manager [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Refreshing instance network info cache due to event network-changed-e4176135-8965-4856-86cc-6a0172ab2f82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.636 2 DEBUG oslo_concurrency.lockutils [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.730 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:51:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944792511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3025908862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.787 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Oct 02 12:51:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:11.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.858 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:11 compute-0 nova_compute[256940]: 2025-10-02 12:51:11.858 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.006 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=20.876338958740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.007 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.007 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.077 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 840e5e36-5883-4877-a201-6f2da064a653 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.078 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.078 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.078 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.169 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728589029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.613 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.618 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1944792511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:12 compute-0 ceph-mon[73668]: pgmap v2312: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Oct 02 12:51:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3728589029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:12 compute-0 nova_compute[256940]: 2025-10-02 12:51:12.970 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.039 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.040 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.146 2 DEBUG nova.network.neutron [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Updating instance_info_cache with network_info: [{"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.172 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.172 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Instance network_info: |[{"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.173 2 DEBUG oslo_concurrency.lockutils [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.173 2 DEBUG nova.network.neutron [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Refreshing network info cache for port e4176135-8965-4856-86cc-6a0172ab2f82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.176 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Start _get_guest_xml network_info=[{"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.181 2 WARNING nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.185 2 DEBUG nova.virt.libvirt.host [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.186 2 DEBUG nova.virt.libvirt.host [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.191 2 DEBUG nova.virt.libvirt.host [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.192 2 DEBUG nova.virt.libvirt.host [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.193 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.193 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.193 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.194 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.194 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.194 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.194 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.194 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.195 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.195 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.195 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.195 2 DEBUG nova.virt.hardware [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.198 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485776948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.671 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.704 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.710 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.741 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409458.7063308, 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.741 2 INFO nova.compute.manager [-] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] VM Stopped (Lifecycle Event)
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:13 compute-0 nova_compute[256940]: 2025-10-02 12:51:13.764 2 DEBUG nova.compute.manager [None req-a0c3d3c0-5bf4-4221-bfaa-59ebb11c2006 - - - - - -] [instance: 2755e56a-ea2b-43ad-8bae-29eb7fc0ed60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1485776948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Oct 02 12:51:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:13.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1009412585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.235 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.237 2 DEBUG nova.virt.libvirt.vif [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417791477',display_name='tempest-TestNetworkBasicOps-server-417791477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417791477',id=139,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF82fCF1SF1KJiJrnYadzVYZaSvONmAt3bXnRyCERpaKckTczgdOo+cHIfObHk6y+n+oLP/rWcqZN3cI/vFe5pkLm4V974Swcppdf+a/ebRcVClFeVkEpMfmUYdlLKz24g==',key_name='tempest-TestNetworkBasicOps-72798498',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7c3lrod0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:09Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=27ae070b-22a6-4afa-9e25-e769a7fbcdc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.237 2 DEBUG nova.network.os_vif_util [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.238 2 DEBUG nova.network.os_vif_util [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.239 2 DEBUG nova.objects.instance [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.381 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <uuid>27ae070b-22a6-4afa-9e25-e769a7fbcdc6</uuid>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <name>instance-0000008b</name>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkBasicOps-server-417791477</nova:name>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:51:13</nova:creationTime>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <nova:port uuid="e4176135-8965-4856-86cc-6a0172ab2f82">
Oct 02 12:51:14 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <system>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="serial">27ae070b-22a6-4afa-9e25-e769a7fbcdc6</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="uuid">27ae070b-22a6-4afa-9e25-e769a7fbcdc6</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </system>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <os>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </os>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <features>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </features>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk">
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </source>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config">
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </source>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:51:14 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:51:40:45"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <target dev="tape4176135-89"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/console.log" append="off"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <video>
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </video>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:51:14 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:51:14 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:51:14 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:51:14 compute-0 nova_compute[256940]: </domain>
Oct 02 12:51:14 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.382 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Preparing to wait for external event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.382 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.382 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.382 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.383 2 DEBUG nova.virt.libvirt.vif [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417791477',display_name='tempest-TestNetworkBasicOps-server-417791477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417791477',id=139,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF82fCF1SF1KJiJrnYadzVYZaSvONmAt3bXnRyCERpaKckTczgdOo+cHIfObHk6y+n+oLP/rWcqZN3cI/vFe5pkLm4V974Swcppdf+a/ebRcVClFeVkEpMfmUYdlLKz24g==',key_name='tempest-TestNetworkBasicOps-72798498',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7c3lrod0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:09Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=27ae070b-22a6-4afa-9e25-e769a7fbcdc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.383 2 DEBUG nova.network.os_vif_util [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.384 2 DEBUG nova.network.os_vif_util [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.384 2 DEBUG os_vif [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4176135-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4176135-89, col_values=(('external_ids', {'iface-id': 'e4176135-8965-4856-86cc-6a0172ab2f82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:40:45', 'vm-uuid': '27ae070b-22a6-4afa-9e25-e769a7fbcdc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 NetworkManager[44981]: <info>  [1759409474.3929] manager: (tape4176135-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.399 2 INFO os_vif [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89')
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.493 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.493 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.493 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:51:40:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.494 2 INFO nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Using config drive
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.526 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.821 2 DEBUG nova.network.neutron [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Updated VIF entry in instance network info cache for port e4176135-8965-4856-86cc-6a0172ab2f82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.822 2 DEBUG nova.network.neutron [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Updating instance_info_cache with network_info: [{"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.849 2 DEBUG oslo_concurrency.lockutils [req-34c804f9-17af-44d7-a1dd-f23a997dc94e req-78877ea3-a240-44ea-99bd-02f1537d9287 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-27ae070b-22a6-4afa-9e25-e769a7fbcdc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:14 compute-0 nova_compute[256940]: 2025-10-02 12:51:14.995 2 INFO nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Creating config drive at /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.000 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqdass9qg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:15 compute-0 ceph-mon[73668]: pgmap v2313: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Oct 02 12:51:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1009412585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.040 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.041 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.041 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.139 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqdass9qg" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.177 2 DEBUG nova.storage.rbd_utils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.185 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.381 2 DEBUG oslo_concurrency.processutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config 27ae070b-22a6-4afa-9e25-e769a7fbcdc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.382 2 INFO nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Deleting local config drive /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6/disk.config because it was imported into RBD.
Oct 02 12:51:15 compute-0 kernel: tape4176135-89: entered promiscuous mode
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.4363] manager: (tape4176135-89): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Oct 02 12:51:15 compute-0 ovn_controller[148123]: 2025-10-02T12:51:15Z|00687|binding|INFO|Claiming lport e4176135-8965-4856-86cc-6a0172ab2f82 for this chassis.
Oct 02 12:51:15 compute-0 ovn_controller[148123]: 2025-10-02T12:51:15Z|00688|binding|INFO|e4176135-8965-4856-86cc-6a0172ab2f82: Claiming fa:16:3e:51:40:45 10.100.0.30
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.454 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:40:45 10.100.0.30'], port_security=['fa:16:3e:51:40:45 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '27ae070b-22a6-4afa-9e25-e769a7fbcdc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-622def0a-7579-48c4-82ac-fec4b9a4625b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc2c998f-f12a-421d-b6c0-4bbd836580c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06223162-269e-4e92-b330-1d1e95c775cf, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e4176135-8965-4856-86cc-6a0172ab2f82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.455 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e4176135-8965-4856-86cc-6a0172ab2f82 in datapath 622def0a-7579-48c4-82ac-fec4b9a4625b bound to our chassis
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.457 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 622def0a-7579-48c4-82ac-fec4b9a4625b
Oct 02 12:51:15 compute-0 systemd-udevd[350252]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.472 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[01df5e5d-b81f-4f7e-b3e2-1c568b5072a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.473 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap622def0a-71 in ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.475 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap622def0a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.475 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee153f31-3a21-405a-943d-78a620359529]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.476 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a284bf-ef1d-457d-9418-d75d1e33d248]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 systemd-machined[210927]: New machine qemu-73-instance-0000008b.
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.4885] device (tape4176135-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:51:15 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000008b.
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.4896] device (tape4176135-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.489 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e000b8d7-c69a-4f71-9edc-7f59279740d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_controller[148123]: 2025-10-02T12:51:15Z|00689|binding|INFO|Setting lport e4176135-8965-4856-86cc-6a0172ab2f82 ovn-installed in OVS
Oct 02 12:51:15 compute-0 ovn_controller[148123]: 2025-10-02T12:51:15Z|00690|binding|INFO|Setting lport e4176135-8965-4856-86cc-6a0172ab2f82 up in Southbound
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.510 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6685d97d-e268-44d8-bb9c-3024cf651e61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.543 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0d07f4fd-59f2-4be3-b55b-5723c8a33831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 systemd-udevd[350256]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.5497] manager: (tap622def0a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/308)
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.548 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c45b94e4-0b30-42ab-a148-c0d10ec7bb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.587 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4a8ea2-9fc8-4730-bd31-093676a50795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.591 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba6e78b-91d8-416a-9fd0-c3a738789194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.6190] device (tap622def0a-70): carrier: link connected
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.625 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[64cd5f59-28dd-4124-a375-f2274c928949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.650 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2a215bfe-5243-433a-a0d2-095c92809eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap622def0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:7f:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737110, 'reachable_time': 29562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350285, 'error': None, 'target': 'ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.670 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a696939-0f18-43ef-bbd9-278f376c86c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:7f45'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737110, 'tstamp': 737110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350286, 'error': None, 'target': 'ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.691 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5b677505-67b6-4f80-8221-26117c4ecd8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap622def0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:7f:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737110, 'reachable_time': 29562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350287, 'error': None, 'target': 'ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.734 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb4ca0a-cf5c-4e49-8143-649d1783913d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.808 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee88b9c6-7cb6-4c0f-ba12-b7a735f49570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.810 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap622def0a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.810 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.810 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap622def0a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 NetworkManager[44981]: <info>  [1759409475.8134] manager: (tap622def0a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Oct 02 12:51:15 compute-0 kernel: tap622def0a-70: entered promiscuous mode
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.816 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap622def0a-70, col_values=(('external_ids', {'iface-id': '4e903847-610e-40a6-888f-be8005fed9c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_controller[148123]: 2025-10-02T12:51:15Z|00691|binding|INFO|Releasing lport 4e903847-610e-40a6-888f-be8005fed9c9 from this chassis (sb_readonly=0)
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.836 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/622def0a-7579-48c4-82ac-fec4b9a4625b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/622def0a-7579-48c4-82ac-fec4b9a4625b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.837 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9e24a08a-8c8d-4755-be42-a8347fbf4e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.838 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-622def0a-7579-48c4-82ac-fec4b9a4625b
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/622def0a-7579-48c4-82ac-fec4b9a4625b.pid.haproxy
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 622def0a-7579-48c4-82ac-fec4b9a4625b
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:51:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:15.838 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b', 'env', 'PROCESS_TAG=haproxy-622def0a-7579-48c4-82ac-fec4b9a4625b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/622def0a-7579-48c4-82ac-fec4b9a4625b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:51:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:15.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:51:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:15.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.992 2 DEBUG nova.compute.manager [req-b8797f5e-51ed-4f9a-bc35-0d2ff905cd84 req-e5d65f46-737f-432d-897f-7769e8f1020e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.992 2 DEBUG oslo_concurrency.lockutils [req-b8797f5e-51ed-4f9a-bc35-0d2ff905cd84 req-e5d65f46-737f-432d-897f-7769e8f1020e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.992 2 DEBUG oslo_concurrency.lockutils [req-b8797f5e-51ed-4f9a-bc35-0d2ff905cd84 req-e5d65f46-737f-432d-897f-7769e8f1020e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.993 2 DEBUG oslo_concurrency.lockutils [req-b8797f5e-51ed-4f9a-bc35-0d2ff905cd84 req-e5d65f46-737f-432d-897f-7769e8f1020e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:15 compute-0 nova_compute[256940]: 2025-10-02 12:51:15.993 2 DEBUG nova.compute.manager [req-b8797f5e-51ed-4f9a-bc35-0d2ff905cd84 req-e5d65f46-737f-432d-897f-7769e8f1020e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Processing event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:51:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:16 compute-0 podman[350336]: 2025-10-02 12:51:16.231714 +0000 UTC m=+0.055808945 container create 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:51:16 compute-0 systemd[1]: Started libpod-conmon-4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1.scope.
Oct 02 12:51:16 compute-0 podman[350336]: 2025-10-02 12:51:16.201921145 +0000 UTC m=+0.026016110 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:51:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26200216b27cd38537f9ff103a8832dfb30d4e3cfb93f86ac2d4cd4937c14a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:16 compute-0 podman[350336]: 2025-10-02 12:51:16.318706476 +0000 UTC m=+0.142801441 container init 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:51:16 compute-0 podman[350336]: 2025-10-02 12:51:16.324952716 +0000 UTC m=+0.149047651 container start 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:51:16 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [NOTICE]   (350373) : New worker (350375) forked
Oct 02 12:51:16 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [NOTICE]   (350373) : Loading success.
Oct 02 12:51:16 compute-0 nova_compute[256940]: 2025-10-02 12:51:16.992 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:51:16 compute-0 nova_compute[256940]: 2025-10-02 12:51:16.993 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409476.9919035, 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:16 compute-0 nova_compute[256940]: 2025-10-02 12:51:16.993 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] VM Started (Lifecycle Event)
Oct 02 12:51:16 compute-0 nova_compute[256940]: 2025-10-02 12:51:16.996 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.000 2 INFO nova.virt.libvirt.driver [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Instance spawned successfully.
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.000 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.136 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.141 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.190 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.190 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.191 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.191 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.192 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.192 2 DEBUG nova.virt.libvirt.driver [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.196 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.197 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409476.9933176, 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.197 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] VM Paused (Lifecycle Event)
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.225 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.230 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409476.9961994, 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.230 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] VM Resumed (Lifecycle Event)
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.260 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.263 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.284 2 INFO nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Took 7.63 seconds to spawn the instance on the hypervisor.
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.285 2 DEBUG nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.298 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.373 2 INFO nova.compute.manager [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Took 9.00 seconds to build instance.
Oct 02 12:51:17 compute-0 nova_compute[256940]: 2025-10-02 12:51:17.388 2 DEBUG oslo_concurrency.lockutils [None req-5b6e79da-938f-4215-b197-7647f3f6dd59 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:17 compute-0 ceph-mon[73668]: pgmap v2314: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:51:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2108244912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:17.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Oct 02 12:51:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:17.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.115 2 DEBUG nova.compute.manager [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.115 2 DEBUG oslo_concurrency.lockutils [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.116 2 DEBUG oslo_concurrency.lockutils [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.116 2 DEBUG oslo_concurrency.lockutils [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.116 2 DEBUG nova.compute.manager [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] No waiting events found dispatching network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.116 2 WARNING nova.compute.manager [req-e1d18459-6495-4a2a-9f1a-6f755bbbc30d req-dc048603-878c-4401-a77c-e4f250f2184f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received unexpected event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 for instance with vm_state active and task_state None.
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.397 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:51:18 compute-0 nova_compute[256940]: 2025-10-02 12:51:18.398 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 840e5e36-5883-4877-a201-6f2da064a653 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:19 compute-0 nova_compute[256940]: 2025-10-02 12:51:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 267 op/s
Oct 02 12:51:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:19 compute-0 nova_compute[256940]: 2025-10-02 12:51:19.966 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:20 compute-0 nova_compute[256940]: 2025-10-02 12:51:20.054 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:20 compute-0 nova_compute[256940]: 2025-10-02 12:51:20.055 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:51:20 compute-0 ceph-mon[73668]: pgmap v2315: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Oct 02 12:51:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/100893316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:21 compute-0 ceph-mon[73668]: pgmap v2316: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 267 op/s
Oct 02 12:51:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 276 op/s
Oct 02 12:51:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:22 compute-0 nova_compute[256940]: 2025-10-02 12:51:22.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-0 podman[350394]: 2025-10-02 12:51:22.39000648 +0000 UTC m=+0.061739988 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:51:22 compute-0 podman[350393]: 2025-10-02 12:51:22.395545372 +0000 UTC m=+0.065873283 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:51:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Oct 02 12:51:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Oct 02 12:51:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Oct 02 12:51:22 compute-0 ceph-mon[73668]: pgmap v2317: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 276 op/s
Oct 02 12:51:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 223 op/s
Oct 02 12:51:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:24.296 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:24 compute-0 nova_compute[256940]: 2025-10-02 12:51:24.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:24.297 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:51:24 compute-0 ceph-mon[73668]: osdmap e319: 3 total, 3 up, 3 in
Oct 02 12:51:24 compute-0 nova_compute[256940]: 2025-10-02 12:51:24.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-0 sudo[350435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:24 compute-0 sudo[350435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:24 compute-0 sudo[350435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:24 compute-0 sudo[350460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:24 compute-0 sudo[350460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:24 compute-0 sudo[350460]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:25 compute-0 ceph-mon[73668]: pgmap v2319: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 223 op/s
Oct 02 12:51:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:25.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 367 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:51:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:26.489 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:26.490 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:26.491 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:26 compute-0 ceph-mon[73668]: pgmap v2320: 305 pgs: 305 active+clean; 367 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:51:27 compute-0 nova_compute[256940]: 2025-10-02 12:51:27.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 355 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.0 MiB/s wr, 233 op/s
Oct 02 12:51:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:27.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:51:28
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Oct 02 12:51:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:51:29 compute-0 ceph-mon[73668]: pgmap v2321: 305 pgs: 305 active+clean; 355 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.0 MiB/s wr, 233 op/s
Oct 02 12:51:29 compute-0 nova_compute[256940]: 2025-10-02 12:51:29.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:51:29 compute-0 ovn_controller[148123]: 2025-10-02T12:51:29Z|00692|binding|INFO|Releasing lport 4e903847-610e-40a6-888f-be8005fed9c9 from this chassis (sb_readonly=0)
Oct 02 12:51:29 compute-0 ovn_controller[148123]: 2025-10-02T12:51:29Z|00693|binding|INFO|Releasing lport 3de92422-e907-4445-a7b3-06c347bd213b from this chassis (sb_readonly=0)
Oct 02 12:51:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 189 op/s
Oct 02 12:51:29 compute-0 nova_compute[256940]: 2025-10-02 12:51:29.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:31 compute-0 ceph-mon[73668]: pgmap v2322: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 189 op/s
Oct 02 12:51:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:31.304 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:31.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:32 compute-0 nova_compute[256940]: 2025-10-02 12:51:32.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Oct 02 12:51:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Oct 02 12:51:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Oct 02 12:51:33 compute-0 ovn_controller[148123]: 2025-10-02T12:51:33Z|00694|binding|INFO|Releasing lport 4e903847-610e-40a6-888f-be8005fed9c9 from this chassis (sb_readonly=0)
Oct 02 12:51:33 compute-0 ovn_controller[148123]: 2025-10-02T12:51:33Z|00695|binding|INFO|Releasing lport 3de92422-e907-4445-a7b3-06c347bd213b from this chassis (sb_readonly=0)
Oct 02 12:51:33 compute-0 nova_compute[256940]: 2025-10-02 12:51:33.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:33 compute-0 ceph-mon[73668]: pgmap v2323: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:33 compute-0 ceph-mon[73668]: osdmap e320: 3 total, 3 up, 3 in
Oct 02 12:51:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:33.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:33.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:34 compute-0 nova_compute[256940]: 2025-10-02 12:51:34.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:35 compute-0 ovn_controller[148123]: 2025-10-02T12:51:35Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:40:45 10.100.0.30
Oct 02 12:51:35 compute-0 ovn_controller[148123]: 2025-10-02T12:51:35Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:40:45 10.100.0.30
Oct 02 12:51:35 compute-0 ceph-mon[73668]: pgmap v2325: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 213 op/s
Oct 02 12:51:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:51:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:51:37 compute-0 ceph-mon[73668]: pgmap v2326: 305 pgs: 305 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 213 op/s
Oct 02 12:51:37 compute-0 nova_compute[256940]: 2025-10-02 12:51:37.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:37 compute-0 nova_compute[256940]: 2025-10-02 12:51:37.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:37 compute-0 nova_compute[256940]: 2025-10-02 12:51:37.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:51:37 compute-0 nova_compute[256940]: 2025-10-02 12:51:37.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:51:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 4.6 MiB/s wr, 135 op/s
Oct 02 12:51:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:38 compute-0 nova_compute[256940]: 2025-10-02 12:51:38.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:38 compute-0 nova_compute[256940]: 2025-10-02 12:51:38.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:51:39 compute-0 ceph-mon[73668]: pgmap v2327: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 4.6 MiB/s wr, 135 op/s
Oct 02 12:51:39 compute-0 nova_compute[256940]: 2025-10-02 12:51:39.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 660 KiB/s rd, 4.2 MiB/s wr, 131 op/s
Oct 02 12:51:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:40 compute-0 podman[350493]: 2025-10-02 12:51:40.396536935 +0000 UTC m=+0.062544948 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:51:40 compute-0 podman[350494]: 2025-10-02 12:51:40.422937834 +0000 UTC m=+0.086945185 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008674012423227247 of space, bias 1.0, pg target 2.602203726968174 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0018682460334077796 of space, bias 1.0, pg target 0.5567373179555183 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:51:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:51:41 compute-0 ceph-mon[73668]: pgmap v2328: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 660 KiB/s rd, 4.2 MiB/s wr, 131 op/s
Oct 02 12:51:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 3.8 MiB/s wr, 119 op/s
Oct 02 12:51:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.992 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.993 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.993 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.993 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.993 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.994 2 INFO nova.compute.manager [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Terminating instance
Oct 02 12:51:42 compute-0 nova_compute[256940]: 2025-10-02 12:51:42.995 2 DEBUG nova.compute.manager [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:51:43 compute-0 kernel: tape4176135-89 (unregistering): left promiscuous mode
Oct 02 12:51:43 compute-0 NetworkManager[44981]: <info>  [1759409503.0772] device (tape4176135-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:51:43 compute-0 ovn_controller[148123]: 2025-10-02T12:51:43Z|00696|binding|INFO|Releasing lport e4176135-8965-4856-86cc-6a0172ab2f82 from this chassis (sb_readonly=0)
Oct 02 12:51:43 compute-0 ovn_controller[148123]: 2025-10-02T12:51:43Z|00697|binding|INFO|Setting lport e4176135-8965-4856-86cc-6a0172ab2f82 down in Southbound
Oct 02 12:51:43 compute-0 ovn_controller[148123]: 2025-10-02T12:51:43Z|00698|binding|INFO|Removing iface tape4176135-89 ovn-installed in OVS
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.090 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:40:45 10.100.0.30'], port_security=['fa:16:3e:51:40:45 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '27ae070b-22a6-4afa-9e25-e769a7fbcdc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-622def0a-7579-48c4-82ac-fec4b9a4625b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc2c998f-f12a-421d-b6c0-4bbd836580c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06223162-269e-4e92-b330-1d1e95c775cf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e4176135-8965-4856-86cc-6a0172ab2f82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.091 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e4176135-8965-4856-86cc-6a0172ab2f82 in datapath 622def0a-7579-48c4-82ac-fec4b9a4625b unbound from our chassis
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.093 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 622def0a-7579-48c4-82ac-fec4b9a4625b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.094 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f15f112-7608-4933-b39b-f1fd52be9787]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.094 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b namespace which is not needed anymore
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 02 12:51:43 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000008b.scope: Consumed 14.623s CPU time.
Oct 02 12:51:43 compute-0 systemd-machined[210927]: Machine qemu-73-instance-0000008b terminated.
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.234 2 INFO nova.virt.libvirt.driver [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Instance destroyed successfully.
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.234 2 DEBUG nova.objects.instance [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.255 2 DEBUG nova.virt.libvirt.vif [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417791477',display_name='tempest-TestNetworkBasicOps-server-417791477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417791477',id=139,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF82fCF1SF1KJiJrnYadzVYZaSvONmAt3bXnRyCERpaKckTczgdOo+cHIfObHk6y+n+oLP/rWcqZN3cI/vFe5pkLm4V974Swcppdf+a/ebRcVClFeVkEpMfmUYdlLKz24g==',key_name='tempest-TestNetworkBasicOps-72798498',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7c3lrod0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:51:17Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=27ae070b-22a6-4afa-9e25-e769a7fbcdc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.256 2 DEBUG nova.network.os_vif_util [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e4176135-8965-4856-86cc-6a0172ab2f82", "address": "fa:16:3e:51:40:45", "network": {"id": "622def0a-7579-48c4-82ac-fec4b9a4625b", "bridge": "br-int", "label": "tempest-network-smoke--1428542795", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4176135-89", "ovs_interfaceid": "e4176135-8965-4856-86cc-6a0172ab2f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.257 2 DEBUG nova.network.os_vif_util [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.257 2 DEBUG os_vif [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4176135-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.267 2 INFO os_vif [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:40:45,bridge_name='br-int',has_traffic_filtering=True,id=e4176135-8965-4856-86cc-6a0172ab2f82,network=Network(622def0a-7579-48c4-82ac-fec4b9a4625b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4176135-89')
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.368 2 DEBUG nova.compute.manager [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-unplugged-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.369 2 DEBUG oslo_concurrency.lockutils [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.369 2 DEBUG oslo_concurrency.lockutils [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.369 2 DEBUG oslo_concurrency.lockutils [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.370 2 DEBUG nova.compute.manager [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] No waiting events found dispatching network-vif-unplugged-e4176135-8965-4856-86cc-6a0172ab2f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.370 2 DEBUG nova.compute.manager [req-e994ac12-6d20-458b-b2cd-9aa5a6e0034b req-7bcb9005-f3a7-4fbf-ae81-f8f4e8da7b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-unplugged-e4176135-8965-4856-86cc-6a0172ab2f82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:51:43 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [NOTICE]   (350373) : haproxy version is 2.8.14-c23fe91
Oct 02 12:51:43 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [NOTICE]   (350373) : path to executable is /usr/sbin/haproxy
Oct 02 12:51:43 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [WARNING]  (350373) : Exiting Master process...
Oct 02 12:51:43 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [ALERT]    (350373) : Current worker (350375) exited with code 143 (Terminated)
Oct 02 12:51:43 compute-0 neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b[350351]: [WARNING]  (350373) : All workers exited. Exiting... (0)
Oct 02 12:51:43 compute-0 systemd[1]: libpod-4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1.scope: Deactivated successfully.
Oct 02 12:51:43 compute-0 podman[350557]: 2025-10-02 12:51:43.528056861 +0000 UTC m=+0.340983243 container died 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1-userdata-shm.mount: Deactivated successfully.
Oct 02 12:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b26200216b27cd38537f9ff103a8832dfb30d4e3cfb93f86ac2d4cd4937c14a3-merged.mount: Deactivated successfully.
Oct 02 12:51:43 compute-0 podman[350557]: 2025-10-02 12:51:43.642201594 +0000 UTC m=+0.455127966 container cleanup 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:51:43 compute-0 systemd[1]: libpod-conmon-4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1.scope: Deactivated successfully.
Oct 02 12:51:43 compute-0 podman[350617]: 2025-10-02 12:51:43.725512715 +0000 UTC m=+0.060770983 container remove 4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:51:43 compute-0 ceph-mon[73668]: pgmap v2329: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 3.8 MiB/s wr, 119 op/s
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.731 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9028e2-b357-4b01-914d-52c9410170ea]: (4, ('Thu Oct  2 12:51:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b (4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1)\n4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1\nThu Oct  2 12:51:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b (4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1)\n4620b658e47435b7688df13d11ef418c67488fe8b456a8c9e2d3b9c7c7bf6af1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.733 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2774d1-f0d8-41a7-b297-202fb253b9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.733 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap622def0a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 kernel: tap622def0a-70: left promiscuous mode
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 nova_compute[256940]: 2025-10-02 12:51:43.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.753 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b8b31a-5f8b-4bd2-a6a5-a612677c26f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.788 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0c075eb6-972b-4d74-8143-751124cf4d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.789 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a7249455-b063-418a-87e1-9f407e2e4a16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.807 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26cfa788-df08-42e2-9c93-50ab50ba292f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737102, 'reachable_time': 34618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350632, 'error': None, 'target': 'ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d622def0a\x2d7579\x2d48c4\x2d82ac\x2dfec4b9a4625b.mount: Deactivated successfully.
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.810 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-622def0a-7579-48c4-82ac-fec4b9a4625b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:51:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:43.810 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[7806497e-35e8-4b6b-9a07-af68b0662b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Oct 02 12:51:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.683 2 INFO nova.virt.libvirt.driver [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Deleting instance files /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6_del
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.685 2 INFO nova.virt.libvirt.driver [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Deletion of /var/lib/nova/instances/27ae070b-22a6-4afa-9e25-e769a7fbcdc6_del complete
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.739 2 INFO nova.compute.manager [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Took 1.74 seconds to destroy the instance on the hypervisor.
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.740 2 DEBUG oslo.service.loopingcall [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.741 2 DEBUG nova.compute.manager [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:51:44 compute-0 nova_compute[256940]: 2025-10-02 12:51:44.741 2 DEBUG nova.network.neutron [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:51:44 compute-0 ceph-mon[73668]: pgmap v2330: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Oct 02 12:51:44 compute-0 sudo[350635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:44 compute-0 sudo[350635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:44 compute-0 sudo[350635]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:45 compute-0 sudo[350660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:45 compute-0 sudo[350660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:45 compute-0 sudo[350660]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.463 2 DEBUG nova.compute.manager [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.464 2 DEBUG oslo_concurrency.lockutils [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.464 2 DEBUG oslo_concurrency.lockutils [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.464 2 DEBUG oslo_concurrency.lockutils [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.465 2 DEBUG nova.compute.manager [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] No waiting events found dispatching network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:45 compute-0 nova_compute[256940]: 2025-10-02 12:51:45.465 2 WARNING nova.compute.manager [req-fc556a24-7361-4003-9afb-22c91c981448 req-88a6f237-f6ba-4f29-a0ac-ae62b72968eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received unexpected event network-vif-plugged-e4176135-8965-4856-86cc-6a0172ab2f82 for instance with vm_state active and task_state deleting.
Oct 02 12:51:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.3 MiB/s wr, 109 op/s
Oct 02 12:51:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:47 compute-0 ceph-mon[73668]: pgmap v2331: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.3 MiB/s wr, 109 op/s
Oct 02 12:51:47 compute-0 nova_compute[256940]: 2025-10-02 12:51:47.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 536 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Oct 02 12:51:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:51:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:51:48 compute-0 nova_compute[256940]: 2025-10-02 12:51:48.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-0 ceph-mon[73668]: pgmap v2332: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 536 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.314 2 DEBUG nova.network.neutron [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.369 2 INFO nova.compute.manager [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Took 4.63 seconds to deallocate network for instance.
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.401 2 DEBUG nova.compute.manager [req-e26f741b-0620-4dbe-b9f6-2aaba162bdf1 req-c253dd10-efd0-4fec-85a2-4f2aa2d2e437 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Received event network-vif-deleted-e4176135-8965-4856-86cc-6a0172ab2f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.429 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.430 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.495 2 DEBUG oslo_concurrency.processutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 985 KiB/s wr, 84 op/s
Oct 02 12:51:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327210934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.976 2 DEBUG oslo_concurrency.processutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:49 compute-0 nova_compute[256940]: 2025-10-02 12:51:49.983 2 DEBUG nova.compute.provider_tree [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.002 2 DEBUG nova.scheduler.client.report [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.020 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.050 2 INFO nova.scheduler.client.report [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 27ae070b-22a6-4afa-9e25-e769a7fbcdc6
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.122 2 DEBUG oslo_concurrency.lockutils [None req-457f48e0-6408-4f4f-9b2e-dd0337070fa9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "27ae070b-22a6-4afa-9e25-e769a7fbcdc6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2327210934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.450 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.450 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.465 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.539 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.539 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.545 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.546 2 INFO nova.compute.claims [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:51:50 compute-0 nova_compute[256940]: 2025-10-02 12:51:50.683 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790795647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.127 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.132 2 DEBUG nova.compute.provider_tree [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.147 2 DEBUG nova.scheduler.client.report [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.204 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.205 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.258 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.258 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.276 2 INFO nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.297 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.394 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.395 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.396 2 INFO nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Creating image(s)
Oct 02 12:51:51 compute-0 ceph-mon[73668]: pgmap v2333: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 985 KiB/s wr, 84 op/s
Oct 02 12:51:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2790795647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.424 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.455 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.483 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.486 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.562 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.563 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.564 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.564 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.587 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.591 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:51 compute-0 nova_compute[256940]: 2025-10-02 12:51:51.642 2 DEBUG nova.policy [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:51:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 659 KiB/s wr, 73 op/s
Oct 02 12:51:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:51.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.010 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.082 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] resizing rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.267 2 DEBUG nova.objects.instance [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid b461d8a1-bc66-4c64-87ec-dac07e9585c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.283 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.284 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Ensure instance console log exists: /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.284 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.284 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.285 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.347 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Successfully created port: 2c773e57-b93e-4b14-812e-ac1359508815 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:51:52 compute-0 ovn_controller[148123]: 2025-10-02T12:51:52Z|00699|binding|INFO|Releasing lport 3de92422-e907-4445-a7b3-06c347bd213b from this chassis (sb_readonly=0)
Oct 02 12:51:52 compute-0 nova_compute[256940]: 2025-10-02 12:51:52.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.335 2 DEBUG nova.compute.manager [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-changed-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.336 2 DEBUG nova.compute.manager [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing instance network info cache due to event network-changed-61335795-d917-4083-8d64-d9c93685176a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.336 2 DEBUG oslo_concurrency.lockutils [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.336 2 DEBUG oslo_concurrency.lockutils [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.337 2 DEBUG nova.network.neutron [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Refreshing network info cache for port 61335795-d917-4083-8d64-d9c93685176a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:51:53 compute-0 podman[350900]: 2025-10-02 12:51:53.39591008 +0000 UTC m=+0.056875662 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:51:53 compute-0 podman[350899]: 2025-10-02 12:51:53.40019551 +0000 UTC m=+0.062014724 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.409 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.409 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.410 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.410 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.410 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.411 2 INFO nova.compute.manager [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Terminating instance
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.412 2 DEBUG nova.compute.manager [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:51:53 compute-0 ceph-mon[73668]: pgmap v2334: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 659 KiB/s wr, 73 op/s
Oct 02 12:51:53 compute-0 kernel: tap61335795-d9 (unregistering): left promiscuous mode
Oct 02 12:51:53 compute-0 NetworkManager[44981]: <info>  [1759409513.5090] device (tap61335795-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:51:53 compute-0 ovn_controller[148123]: 2025-10-02T12:51:53Z|00700|binding|INFO|Releasing lport 61335795-d917-4083-8d64-d9c93685176a from this chassis (sb_readonly=0)
Oct 02 12:51:53 compute-0 ovn_controller[148123]: 2025-10-02T12:51:53Z|00701|binding|INFO|Setting lport 61335795-d917-4083-8d64-d9c93685176a down in Southbound
Oct 02 12:51:53 compute-0 ovn_controller[148123]: 2025-10-02T12:51:53Z|00702|binding|INFO|Removing iface tap61335795-d9 ovn-installed in OVS
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:53.526 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:66:44 10.100.0.3'], port_security=['fa:16:3e:b5:66:44 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '840e5e36-5883-4877-a201-6f2da064a653', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-262284a0-f21a-41fb-8b6f-64675f7281e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba945eea-78b3-4f2f-aa85-13aebdaf7c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38238194-c8a0-4bed-bef0-d7e9370604e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=61335795-d917-4083-8d64-d9c93685176a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:53.527 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 61335795-d917-4083-8d64-d9c93685176a in datapath 262284a0-f21a-41fb-8b6f-64675f7281e1 unbound from our chassis
Oct 02 12:51:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:53.528 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 262284a0-f21a-41fb-8b6f-64675f7281e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:51:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:53.529 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[996dc573-3eb2-4461-ac35-bb17dbab402e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:53.529 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1 namespace which is not needed anymore
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 02 12:51:53 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000089.scope: Consumed 18.695s CPU time.
Oct 02 12:51:53 compute-0 systemd-machined[210927]: Machine qemu-71-instance-00000089 terminated.
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.644 2 INFO nova.virt.libvirt.driver [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Instance destroyed successfully.
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.645 2 DEBUG nova.objects.instance [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 840e5e36-5883-4877-a201-6f2da064a653 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.679 2 DEBUG nova.virt.libvirt.vif [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1000491090',display_name='tempest-TestNetworkBasicOps-server-1000491090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1000491090',id=137,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGUNQF5uVCgqVXfKNK2KewQskNOuXMMHyJUyre0VdRp0qITEt7wId97Vu+ucnBohv/eWvftOnp9CrKI/Rz8sbPuPz70I98sLptzESRExzUtekk4TsNdR7Ukdf//JzAwSg==',key_name='tempest-TestNetworkBasicOps-1083850395',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-gh52ajua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:32Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=840e5e36-5883-4877-a201-6f2da064a653,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.679 2 DEBUG nova.network.os_vif_util [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.680 2 DEBUG nova.network.os_vif_util [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.680 2 DEBUG os_vif [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61335795-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.687 2 INFO os_vif [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:66:44,bridge_name='br-int',has_traffic_filtering=True,id=61335795-d917-4083-8d64-d9c93685176a,network=Network(262284a0-f21a-41fb-8b6f-64675f7281e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61335795-d9')
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.715 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Successfully updated port: 2c773e57-b93e-4b14-812e-ac1359508815 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:51:53 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [NOTICE]   (348300) : haproxy version is 2.8.14-c23fe91
Oct 02 12:51:53 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [NOTICE]   (348300) : path to executable is /usr/sbin/haproxy
Oct 02 12:51:53 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [WARNING]  (348300) : Exiting Master process...
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.745 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.746 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.746 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:51:53 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [ALERT]    (348300) : Current worker (348302) exited with code 143 (Terminated)
Oct 02 12:51:53 compute-0 neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1[348288]: [WARNING]  (348300) : All workers exited. Exiting... (0)
Oct 02 12:51:53 compute-0 systemd[1]: libpod-ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5.scope: Deactivated successfully.
Oct 02 12:51:53 compute-0 conmon[348288]: conmon ca5cb8fcab0bfa75acc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5.scope/container/memory.events
Oct 02 12:51:53 compute-0 podman[350961]: 2025-10-02 12:51:53.756385983 +0000 UTC m=+0.144270558 container died ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.822 2 DEBUG nova.compute.manager [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-unplugged-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.822 2 DEBUG oslo_concurrency.lockutils [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.822 2 DEBUG oslo_concurrency.lockutils [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.822 2 DEBUG oslo_concurrency.lockutils [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.823 2 DEBUG nova.compute.manager [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] No waiting events found dispatching network-vif-unplugged-61335795-d917-4083-8d64-d9c93685176a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.823 2 DEBUG nova.compute.manager [req-c0852b42-aa0f-45e3-b463-ffbbe4e1147c req-9744f005-128a-4d2d-a1c6-a608d93d8b70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-unplugged-61335795-d917-4083-8d64-d9c93685176a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.870 2 DEBUG nova.compute.manager [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-changed-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.870 2 DEBUG nova.compute.manager [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Refreshing instance network info cache due to event network-changed-2c773e57-b93e-4b14-812e-ac1359508815. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:51:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 121 KiB/s wr, 45 op/s
Oct 02 12:51:53 compute-0 nova_compute[256940]: 2025-10-02 12:51:53.870 2 DEBUG oslo_concurrency.lockutils [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:51:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:53.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:53.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a06cb8891cb20a71e785b4fba7b3b3480f8a27cff158ccac85cdde7435f02d0b-merged.mount: Deactivated successfully.
Oct 02 12:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5-userdata-shm.mount: Deactivated successfully.
Oct 02 12:51:54 compute-0 podman[350961]: 2025-10-02 12:51:54.012030672 +0000 UTC m=+0.399915227 container cleanup ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:51:54 compute-0 systemd[1]: libpod-conmon-ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5.scope: Deactivated successfully.
Oct 02 12:51:54 compute-0 podman[351020]: 2025-10-02 12:51:54.159724497 +0000 UTC m=+0.127603420 container remove ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.165 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6328462a-1655-4d6d-a985-6ffe429a567e]: (4, ('Thu Oct  2 12:51:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1 (ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5)\nca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5\nThu Oct  2 12:51:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1 (ca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5)\nca5cb8fcab0bfa75acc4d2bb954e21ffe09f33d1fa144782c5a9cd7cd8a50fb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.167 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[80227297-974f-4511-af60-d650c94d2a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.167 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap262284a0-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:54 compute-0 nova_compute[256940]: 2025-10-02 12:51:54.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-0 kernel: tap262284a0-f0: left promiscuous mode
Oct 02 12:51:54 compute-0 nova_compute[256940]: 2025-10-02 12:51:54.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.185 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[041be53c-c547-4ad2-927d-be29d1b18182]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 nova_compute[256940]: 2025-10-02 12:51:54.190 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.211 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[323098e3-efb9-480c-b4fa-f7fa7d466d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.213 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8ebf2b-2a91-4faf-932d-76dcefa9151f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.232 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd03362-bf4b-437c-8d17-03b36e9c9d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732618, 'reachable_time': 34231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351035, 'error': None, 'target': 'ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.234 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-262284a0-f21a-41fb-8b6f-64675f7281e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:51:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:54.234 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[1d96fab9-bcc9-47ed-8222-9bf41c25dfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d262284a0\x2df21a\x2d41fb\x2d8b6f\x2d64675f7281e1.mount: Deactivated successfully.
Oct 02 12:51:54 compute-0 nova_compute[256940]: 2025-10-02 12:51:54.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-0 ceph-mon[73668]: pgmap v2335: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 121 KiB/s wr, 45 op/s
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.035 2 DEBUG nova.network.neutron [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updated VIF entry in instance network info cache for port 61335795-d917-4083-8d64-d9c93685176a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.035 2 DEBUG nova.network.neutron [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [{"id": "61335795-d917-4083-8d64-d9c93685176a", "address": "fa:16:3e:b5:66:44", "network": {"id": "262284a0-f21a-41fb-8b6f-64675f7281e1", "bridge": "br-int", "label": "tempest-network-smoke--692162899", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61335795-d9", "ovs_interfaceid": "61335795-d917-4083-8d64-d9c93685176a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.065 2 DEBUG oslo_concurrency.lockutils [req-15f18256-dfb3-4743-a903-c9127f1b9447 req-211fdd11-a87f-4d14-a522-418fe0cd5853 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-840e5e36-5883-4877-a201-6f2da064a653" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.246 2 INFO nova.virt.libvirt.driver [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Deleting instance files /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653_del
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.247 2 INFO nova.virt.libvirt.driver [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Deletion of /var/lib/nova/instances/840e5e36-5883-4877-a201-6f2da064a653_del complete
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.296 2 INFO nova.compute.manager [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Took 1.88 seconds to destroy the instance on the hypervisor.
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.296 2 DEBUG oslo.service.loopingcall [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.297 2 DEBUG nova.compute.manager [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:51:55 compute-0 nova_compute[256940]: 2025-10-02 12:51:55.297 2 DEBUG nova.network.neutron [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:51:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.5 MiB/s wr, 69 op/s
Oct 02 12:51:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:55.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:51:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:55.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.109 2 DEBUG nova.compute.manager [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.110 2 DEBUG oslo_concurrency.lockutils [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "840e5e36-5883-4877-a201-6f2da064a653-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.110 2 DEBUG oslo_concurrency.lockutils [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.111 2 DEBUG oslo_concurrency.lockutils [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.111 2 DEBUG nova.compute.manager [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] No waiting events found dispatching network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.111 2 WARNING nova.compute.manager [req-abe13050-eb0b-4631-9166-18c4011b0afc req-a3f6e8fc-262d-409d-8844-9ee12b24ddfe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received unexpected event network-vif-plugged-61335795-d917-4083-8d64-d9c93685176a for instance with vm_state active and task_state deleting.
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.142 2 DEBUG nova.network.neutron [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updating instance_info_cache with network_info: [{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.161 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.161 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance network_info: |[{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.161 2 DEBUG oslo_concurrency.lockutils [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.162 2 DEBUG nova.network.neutron [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Refreshing network info cache for port 2c773e57-b93e-4b14-812e-ac1359508815 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.164 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Start _get_guest_xml network_info=[{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.168 2 WARNING nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.174 2 DEBUG nova.virt.libvirt.host [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.174 2 DEBUG nova.virt.libvirt.host [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.177 2 DEBUG nova.virt.libvirt.host [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.177 2 DEBUG nova.virt.libvirt.host [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.178 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.178 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.179 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.179 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.179 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.179 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.179 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.180 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.180 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.180 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.180 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.181 2 DEBUG nova.virt.hardware [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.183 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079673580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.630 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.657 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:56 compute-0 nova_compute[256940]: 2025-10-02 12:51:56.660 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737708049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.128 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.131 2 DEBUG nova.virt.libvirt.vif [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-417900419',display_name='tempest-ServerActionsTestOtherB-server-417900419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-417900419',id=140,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-b53m285n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:51Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=b461d8a1-bc66-4c64-87ec-dac07e9585c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.131 2 DEBUG nova.network.os_vif_util [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.133 2 DEBUG nova.network.os_vif_util [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.135 2 DEBUG nova.objects.instance [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid b461d8a1-bc66-4c64-87ec-dac07e9585c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:57 compute-0 ceph-mon[73668]: pgmap v2336: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.5 MiB/s wr, 69 op/s
Oct 02 12:51:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4079673580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.270 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <uuid>b461d8a1-bc66-4c64-87ec-dac07e9585c4</uuid>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <name>instance-0000008c</name>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerActionsTestOtherB-server-417900419</nova:name>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:51:56</nova:creationTime>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <nova:port uuid="2c773e57-b93e-4b14-812e-ac1359508815">
Oct 02 12:51:57 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <system>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="serial">b461d8a1-bc66-4c64-87ec-dac07e9585c4</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="uuid">b461d8a1-bc66-4c64-87ec-dac07e9585c4</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </system>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <os>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </os>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <features>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </features>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk">
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </source>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config">
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </source>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:51:57 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:c5:02:b2"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <target dev="tap2c773e57-b9"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/console.log" append="off"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <video>
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </video>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:51:57 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:51:57 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:51:57 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:51:57 compute-0 nova_compute[256940]: </domain>
Oct 02 12:51:57 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.272 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Preparing to wait for external event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.272 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.273 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.273 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.274 2 DEBUG nova.virt.libvirt.vif [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-417900419',display_name='tempest-ServerActionsTestOtherB-server-417900419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-417900419',id=140,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-b53m285n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:51Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=b461d8a1-bc66-4c64-87ec-dac07e9585c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.274 2 DEBUG nova.network.os_vif_util [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.274 2 DEBUG nova.network.os_vif_util [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.275 2 DEBUG os_vif [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c773e57-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c773e57-b9, col_values=(('external_ids', {'iface-id': '2c773e57-b93e-4b14-812e-ac1359508815', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:02:b2', 'vm-uuid': 'b461d8a1-bc66-4c64-87ec-dac07e9585c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:57 compute-0 NetworkManager[44981]: <info>  [1759409517.2826] manager: (tap2c773e57-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.291 2 INFO os_vif [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9')
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.436 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.436 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.437 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:c5:02:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.437 2 INFO nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Using config drive
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.460 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.465 2 DEBUG nova.network.neutron [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.498 2 INFO nova.compute.manager [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Took 2.20 seconds to deallocate network for instance.
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.516 2 DEBUG nova.compute.manager [req-272b6baf-27c5-40c6-b37d-4207d59d821b req-a006105f-4436-4b75-a15e-9c6513a2ca8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Received event network-vif-deleted-61335795-d917-4083-8d64-d9c93685176a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.516 2 INFO nova.compute.manager [req-272b6baf-27c5-40c6-b37d-4207d59d821b req-a006105f-4436-4b75-a15e-9c6513a2ca8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Neutron deleted interface 61335795-d917-4083-8d64-d9c93685176a; detaching it from the instance and deleting it from the info cache
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.516 2 DEBUG nova.network.neutron [req-272b6baf-27c5-40c6-b37d-4207d59d821b req-a006105f-4436-4b75-a15e-9c6513a2ca8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.599 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.599 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.611 2 DEBUG nova.compute.manager [req-272b6baf-27c5-40c6-b37d-4207d59d821b req-a006105f-4436-4b75-a15e-9c6513a2ca8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Detach interface failed, port_id=61335795-d917-4083-8d64-d9c93685176a, reason: Instance 840e5e36-5883-4877-a201-6f2da064a653 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:51:57 compute-0 nova_compute[256940]: 2025-10-02 12:51:57.679 2 DEBUG oslo_concurrency.processutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:51:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:51:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:57.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.019 2 INFO nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Creating config drive at /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.024 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdi72qbtk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812003800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.111 2 DEBUG oslo_concurrency.processutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.117 2 DEBUG nova.compute.provider_tree [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.139 2 DEBUG nova.scheduler.client.report [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.168 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdi72qbtk" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.199 2 DEBUG nova.storage.rbd_utils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.202 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.233 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409503.2317135, 27ae070b-22a6-4afa-9e25-e769a7fbcdc6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.234 2 INFO nova.compute.manager [-] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] VM Stopped (Lifecycle Event)
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.236 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.266 2 DEBUG nova.compute.manager [None req-082ed4f2-7b15-4c8f-898a-bd73bd5a176b - - - - - -] [instance: 27ae070b-22a6-4afa-9e25-e769a7fbcdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.278 2 INFO nova.scheduler.client.report [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 840e5e36-5883-4877-a201-6f2da064a653
Oct 02 12:51:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3737708049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3812003800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.345 2 DEBUG oslo_concurrency.lockutils [None req-e6be997c-019e-4779-b468-392e0a24a54f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "840e5e36-5883-4877-a201-6f2da064a653" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:51:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.783 2 DEBUG oslo_concurrency.processutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.783 2 INFO nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Deleting local config drive /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4/disk.config because it was imported into RBD.
Oct 02 12:51:58 compute-0 kernel: tap2c773e57-b9: entered promiscuous mode
Oct 02 12:51:58 compute-0 NetworkManager[44981]: <info>  [1759409518.8330] manager: (tap2c773e57-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Oct 02 12:51:58 compute-0 ovn_controller[148123]: 2025-10-02T12:51:58Z|00703|binding|INFO|Claiming lport 2c773e57-b93e-4b14-812e-ac1359508815 for this chassis.
Oct 02 12:51:58 compute-0 ovn_controller[148123]: 2025-10-02T12:51:58Z|00704|binding|INFO|2c773e57-b93e-4b14-812e-ac1359508815: Claiming fa:16:3e:c5:02:b2 10.100.0.10
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.846 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:02:b2 10.100.0.10'], port_security=['fa:16:3e:c5:02:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b461d8a1-bc66-4c64-87ec-dac07e9585c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c38185a-c389-4d04-8fc6-53a62e6c5352', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2c773e57-b93e-4b14-812e-ac1359508815) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.847 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2c773e57-b93e-4b14-812e-ac1359508815 in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.848 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:51:58 compute-0 ovn_controller[148123]: 2025-10-02T12:51:58Z|00705|binding|INFO|Setting lport 2c773e57-b93e-4b14-812e-ac1359508815 ovn-installed in OVS
Oct 02 12:51:58 compute-0 ovn_controller[148123]: 2025-10-02T12:51:58Z|00706|binding|INFO|Setting lport 2c773e57-b93e-4b14-812e-ac1359508815 up in Southbound
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:58 compute-0 systemd-udevd[351198]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:58 compute-0 systemd-machined[210927]: New machine qemu-74-instance-0000008c.
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.864 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[53fbf79d-0809-43ff-8af8-b706803d7a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.866 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:51:58 compute-0 NetworkManager[44981]: <info>  [1759409518.8693] device (tap2c773e57-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.868 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.868 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f48b2c0-d12e-4477-ad3e-da38f0ff5359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 NetworkManager[44981]: <info>  [1759409518.8701] device (tap2c773e57-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.870 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dc186317-a88a-4013-823e-439bd3081336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-0000008c.
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.881 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c5b5b4-a833-46dc-9abe-ab4c5f316a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.907 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11da0c58-147d-4e05-9285-ab2ab4960dd0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.930 2 DEBUG nova.network.neutron [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updated VIF entry in instance network info cache for port 2c773e57-b93e-4b14-812e-ac1359508815. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.930 2 DEBUG nova.network.neutron [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updating instance_info_cache with network_info: [{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.936 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[00521743-58ff-4f68-b67b-fabce0702bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 NetworkManager[44981]: <info>  [1759409518.9424] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/312)
Oct 02 12:51:58 compute-0 systemd-udevd[351201]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.941 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09e18114-e6eb-4730-8ca6-273e8cf81587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.974 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3641da-9fb0-4ab9-8284-81d8baacf445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:58 compute-0 nova_compute[256940]: 2025-10-02 12:51:58.977 2 DEBUG oslo_concurrency.lockutils [req-db96f27d-0cd0-4aea-a03f-9295c832eea7 req-fd769fd7-94e9-4284-bae1-f88066fdecd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:51:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:58.977 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5943a4bd-e0ba-4c94-be86-781c454e80f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 NetworkManager[44981]: <info>  [1759409519.0012] device (tap9266ebd7-30): carrier: link connected
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.006 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c5df626a-7f29-4019-942c-13462e8fd8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.024 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[219c496b-3052-4710-be92-9bf053dd91c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741448, 'reachable_time': 20853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351231, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22c6f930-c4de-4e93-9df7-b457d3aeac72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741448, 'tstamp': 741448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351232, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.071 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fae3997f-5552-40a0-ba85-e62c6ecec00c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741448, 'reachable_time': 20853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351240, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.108 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[94f3a933-0df8-4eb6-a00f-f0b577549056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.164 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.165 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.188 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8dc4bb-1d71-4bf4-90a7-e281e76f6d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.190 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.191 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.191 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:59 compute-0 kernel: tap9266ebd7-30: entered promiscuous mode
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-0 NetworkManager[44981]: <info>  [1759409519.1957] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.196 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-0 ovn_controller[148123]: 2025-10-02T12:51:59Z|00707|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.199 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.200 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26c2cd90-9654-49c1-9626-37b07ddaadba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.201 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:51:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:51:59.202 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.220 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.311 2 DEBUG nova.compute.manager [req-dc4cac32-f5d6-4f4a-b916-d01619b1c467 req-078b17c9-2058-4198-8e5f-89d0309629e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.312 2 DEBUG oslo_concurrency.lockutils [req-dc4cac32-f5d6-4f4a-b916-d01619b1c467 req-078b17c9-2058-4198-8e5f-89d0309629e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.312 2 DEBUG oslo_concurrency.lockutils [req-dc4cac32-f5d6-4f4a-b916-d01619b1c467 req-078b17c9-2058-4198-8e5f-89d0309629e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.312 2 DEBUG oslo_concurrency.lockutils [req-dc4cac32-f5d6-4f4a-b916-d01619b1c467 req-078b17c9-2058-4198-8e5f-89d0309629e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.312 2 DEBUG nova.compute.manager [req-dc4cac32-f5d6-4f4a-b916-d01619b1c467 req-078b17c9-2058-4198-8e5f-89d0309629e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Processing event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.315 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.316 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.323 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.324 2 INFO nova.compute.claims [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:51:59 compute-0 nova_compute[256940]: 2025-10-02 12:51:59.509 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:59 compute-0 ceph-mon[73668]: pgmap v2337: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:51:59 compute-0 podman[351300]: 2025-10-02 12:51:59.686577902 +0000 UTC m=+0.108837367 container create 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:51:59 compute-0 podman[351300]: 2025-10-02 12:51:59.605483818 +0000 UTC m=+0.027743363 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:51:59 compute-0 systemd[1]: Started libpod-conmon-40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c.scope.
Oct 02 12:51:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3872dc3747f708cd4d88078cb106fb12cb8c6f456ac667d4809271dbbf4b4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:59 compute-0 podman[351300]: 2025-10-02 12:51:59.819155009 +0000 UTC m=+0.241414494 container init 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:51:59 compute-0 podman[351300]: 2025-10-02 12:51:59.82622002 +0000 UTC m=+0.248479485 container start 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:51:59 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [NOTICE]   (351345) : New worker (351347) forked
Oct 02 12:51:59 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [NOTICE]   (351345) : Loading success.
Oct 02 12:51:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:51:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:51:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620616246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.016 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.022 2 DEBUG nova.compute.provider_tree [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.052 2 DEBUG nova.scheduler.client.report [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.095 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.096 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.131 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409520.1307511, b461d8a1-bc66-4c64-87ec-dac07e9585c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.132 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] VM Started (Lifecycle Event)
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.135 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.139 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.150 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.150 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.158 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.160 2 INFO nova.virt.libvirt.driver [-] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance spawned successfully.
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.161 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.167 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.187 2 INFO nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.193 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.193 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409520.130853, b461d8a1-bc66-4c64-87ec-dac07e9585c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.193 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] VM Paused (Lifecycle Event)
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.198 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.199 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.199 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.200 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.200 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.201 2 DEBUG nova.virt.libvirt.driver [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.235 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.238 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.241 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409520.1383643, b461d8a1-bc66-4c64-87ec-dac07e9585c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.242 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] VM Resumed (Lifecycle Event)
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.296 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.302 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.319 2 INFO nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Took 8.92 seconds to spawn the instance on the hypervisor.
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.319 2 DEBUG nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.350 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.398 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.400 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.400 2 INFO nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Creating image(s)
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.430 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.464 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.497 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.501 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.544 2 INFO nova.compute.manager [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Took 10.03 seconds to build instance.
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.572 2 DEBUG oslo_concurrency.lockutils [None req-72b8983d-598f-4fcc-90ce-75e2fa3c9512 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.579 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.580 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.580 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.580 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.626 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.630 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 063c7d3e-98b4-46a4-a75e-de10a2135604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:00 compute-0 nova_compute[256940]: 2025-10-02 12:52:00.680 2 DEBUG nova.policy [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:52:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1620616246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.421 2 DEBUG nova.compute.manager [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.422 2 DEBUG oslo_concurrency.lockutils [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.422 2 DEBUG oslo_concurrency.lockutils [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.422 2 DEBUG oslo_concurrency.lockutils [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.422 2 DEBUG nova.compute.manager [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] No waiting events found dispatching network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.423 2 WARNING nova.compute.manager [req-6535e89c-cba3-4bcd-a3b3-a0cef5bc24aa req-d4b9aad5-bf46-4acd-992d-80a09d2f0ff3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received unexpected event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 for instance with vm_state active and task_state None.
Oct 02 12:52:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.877 2 INFO nova.compute.manager [None req-8e7a5bd6-644d-45fb-8a7a-e5a0e75a2f0b b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Pausing
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.879 2 DEBUG nova.objects.instance [None req-8e7a5bd6-644d-45fb-8a7a-e5a0e75a2f0b b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid b461d8a1-bc66-4c64-87ec-dac07e9585c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.924 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409521.9242022, b461d8a1-bc66-4c64-87ec-dac07e9585c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.925 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] VM Paused (Lifecycle Event)
Oct 02 12:52:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.926 2 DEBUG nova.compute.manager [None req-8e7a5bd6-644d-45fb-8a7a-e5a0e75a2f0b b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.954 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:01 compute-0 nova_compute[256940]: 2025-10-02 12:52:01.959 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:02 compute-0 nova_compute[256940]: 2025-10-02 12:52:02.034 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:52:02 compute-0 nova_compute[256940]: 2025-10-02 12:52:02.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:02 compute-0 ceph-mon[73668]: pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:52:02 compute-0 nova_compute[256940]: 2025-10-02 12:52:02.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:03 compute-0 nova_compute[256940]: 2025-10-02 12:52:03.074 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Successfully created port: 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:52:03 compute-0 sudo[351454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:03 compute-0 sudo[351454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:03.410 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:03.411 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:52:03 compute-0 nova_compute[256940]: 2025-10-02 12:52:03.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:03 compute-0 sudo[351454]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-0 sudo[351479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:03 compute-0 sudo[351479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-0 sudo[351479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-0 sudo[351504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:03 compute-0 sudo[351504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-0 sudo[351504]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-0 ceph-mon[73668]: pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 02 12:52:03 compute-0 sudo[351529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:52:03 compute-0 sudo[351529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-0 ovn_controller[148123]: 2025-10-02T12:52:03Z|00708|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:52:03 compute-0 nova_compute[256940]: 2025-10-02 12:52:03.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 12:52:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.057 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.057 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.058 2 INFO nova.compute.manager [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Shelving
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.069 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 063c7d3e-98b4-46a4-a75e-de10a2135604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:04 compute-0 podman[351627]: 2025-10-02 12:52:04.080517266 +0000 UTC m=+0.068269775 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.156 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:52:04 compute-0 podman[351627]: 2025-10-02 12:52:04.186547841 +0000 UTC m=+0.174300350 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:04 compute-0 kernel: tap2c773e57-b9 (unregistering): left promiscuous mode
Oct 02 12:52:04 compute-0 NetworkManager[44981]: <info>  [1759409524.3518] device (tap2c773e57-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:52:04 compute-0 ovn_controller[148123]: 2025-10-02T12:52:04Z|00709|binding|INFO|Releasing lport 2c773e57-b93e-4b14-812e-ac1359508815 from this chassis (sb_readonly=0)
Oct 02 12:52:04 compute-0 ovn_controller[148123]: 2025-10-02T12:52:04Z|00710|binding|INFO|Setting lport 2c773e57-b93e-4b14-812e-ac1359508815 down in Southbound
Oct 02 12:52:04 compute-0 ovn_controller[148123]: 2025-10-02T12:52:04Z|00711|binding|INFO|Removing iface tap2c773e57-b9 ovn-installed in OVS
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.376 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:02:b2 10.100.0.10'], port_security=['fa:16:3e:c5:02:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b461d8a1-bc66-4c64-87ec-dac07e9585c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c38185a-c389-4d04-8fc6-53a62e6c5352', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2c773e57-b93e-4b14-812e-ac1359508815) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.377 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2c773e57-b93e-4b14-812e-ac1359508815 in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.379 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.380 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8af45b0-9dc2-4a4f-9b98-eb0d8361032d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.381 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 02 12:52:04 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000008c.scope: Consumed 2.606s CPU time.
Oct 02 12:52:04 compute-0 systemd-machined[210927]: Machine qemu-74-instance-0000008c terminated.
Oct 02 12:52:04 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [NOTICE]   (351345) : haproxy version is 2.8.14-c23fe91
Oct 02 12:52:04 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [NOTICE]   (351345) : path to executable is /usr/sbin/haproxy
Oct 02 12:52:04 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [WARNING]  (351345) : Exiting Master process...
Oct 02 12:52:04 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [ALERT]    (351345) : Current worker (351347) exited with code 143 (Terminated)
Oct 02 12:52:04 compute-0 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[351341]: [WARNING]  (351345) : All workers exited. Exiting... (0)
Oct 02 12:52:04 compute-0 systemd[1]: libpod-40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c.scope: Deactivated successfully.
Oct 02 12:52:04 compute-0 podman[351786]: 2025-10-02 12:52:04.530635692 +0000 UTC m=+0.050922069 container died 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc3872dc3747f708cd4d88078cb106fb12cb8c6f456ac667d4809271dbbf4b4e-merged.mount: Deactivated successfully.
Oct 02 12:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:52:04 compute-0 podman[351786]: 2025-10-02 12:52:04.568433133 +0000 UTC m=+0.088719510 container cleanup 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 systemd[1]: libpod-conmon-40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c.scope: Deactivated successfully.
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.581 2 INFO nova.virt.libvirt.driver [-] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance destroyed successfully.
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.582 2 DEBUG nova.objects.instance [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid b461d8a1-bc66-4c64-87ec-dac07e9585c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:04 compute-0 podman[351847]: 2025-10-02 12:52:04.638176616 +0000 UTC m=+0.044161056 container remove 40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.644 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7d145374-e2bc-4e24-b74e-4151ec21e300]: (4, ('Thu Oct  2 12:52:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c)\n40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c\nThu Oct  2 12:52:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c)\n40619522b990db81ad95783b8eb7a405d92f2fab6d59f665baa2633c9b1b269c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.646 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ed5a61-be45-4723-996b-b24b433c2201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.647 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:04 compute-0 kernel: tap9266ebd7-30: left promiscuous mode
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.653 2 DEBUG nova.compute.manager [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-vif-unplugged-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.655 2 DEBUG oslo_concurrency.lockutils [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.655 2 DEBUG oslo_concurrency.lockutils [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.655 2 DEBUG oslo_concurrency.lockutils [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.656 2 DEBUG nova.compute.manager [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] No waiting events found dispatching network-vif-unplugged-2c773e57-b93e-4b14-812e-ac1359508815 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.656 2 WARNING nova.compute.manager [req-d1813ad6-35be-45a0-ae4f-6eb13d2efd5a req-6f460aee-ccca-4f37-a9ef-f49b2ee252b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received unexpected event network-vif-unplugged-2c773e57-b93e-4b14-812e-ac1359508815 for instance with vm_state paused and task_state shelving.
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.672 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c58465e-3413-4bfa-8fbc-e828a60bb77e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.692 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5c278d-0502-466e-8da1-25c76ba85632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.693 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcc965c-6ac0-4a4b-bd5a-4c10c4252742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.709 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f44092a0-9b7b-42d5-adcb-dc9c2bf36ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741441, 'reachable_time': 31364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351904, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.711 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:52:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:04.711 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[139cb37e-5dc7-4a55-b994-7850b4251ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct 02 12:52:04 compute-0 podman[351888]: 2025-10-02 12:52:04.742686951 +0000 UTC m=+0.066624873 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:52:04 compute-0 podman[351888]: 2025-10-02 12:52:04.752510503 +0000 UTC m=+0.076448385 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.948 2 DEBUG nova.objects.instance [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:04 compute-0 podman[351959]: 2025-10-02 12:52:04.95436293 +0000 UTC m=+0.056852642 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived)
Oct 02 12:52:04 compute-0 podman[351959]: 2025-10-02 12:52:04.967537739 +0000 UTC m=+0.070027441 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, version=2.2.4, io.openshift.expose-services=, release=1793, distribution-scope=public, name=keepalived, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.976 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.976 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Ensure instance console log exists: /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.977 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.977 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:04 compute-0 nova_compute[256940]: 2025-10-02 12:52:04.977 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:05 compute-0 sudo[351529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.109 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Successfully updated port: 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.125 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.126 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.126 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:52:05 compute-0 sudo[352029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-0 sudo[352029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 sudo[352029]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.180 2 INFO nova.virt.libvirt.driver [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Beginning cold snapshot process
Oct 02 12:52:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:52:05 compute-0 sudo[352054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:52:05 compute-0 sudo[352054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 sudo[352054]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.339 2 DEBUG nova.virt.libvirt.imagebackend [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:52:05 compute-0 sudo[352112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-0 sudo[352112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 sudo[352112]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 sudo[352137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:05 compute-0 sudo[352137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 sudo[352137]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 sudo[352162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-0 sudo[352162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 sudo[352162]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-0 sudo[352187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:52:05 compute-0 sudo[352187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.690 2 DEBUG nova.storage.rbd_utils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(a1cd384245634a13b06ca9fcdd753642) on rbd image(b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:52:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 nova_compute[256940]: 2025-10-02 12:52:05.782 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:52:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 344 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Oct 02 12:52:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:05.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:05 compute-0 ceph-mon[73668]: pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 12:52:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:05 compute-0 sudo[352187]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.452 2 DEBUG nova.compute.manager [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.452 2 DEBUG nova.compute.manager [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing instance network info cache due to event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.453 2 DEBUG oslo_concurrency.lockutils [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 42d15e7e-3d75-4d26-afca-371d31196653 does not exist
Oct 02 12:52:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c53c4802-02bf-42fe-94f2-27c78fd1073c does not exist
Oct 02 12:52:06 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b0d9607d-7046-462a-b8a7-fdae633e4a7a does not exist
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:52:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:06 compute-0 sudo[352260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:06 compute-0 sudo[352260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:06 compute-0 sudo[352260]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:06 compute-0 sudo[352286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:06 compute-0 sudo[352286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:06 compute-0 sudo[352286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:06 compute-0 sudo[352311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:06 compute-0 sudo[352311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:06 compute-0 sudo[352311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.783 2 DEBUG nova.compute.manager [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.784 2 DEBUG oslo_concurrency.lockutils [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.784 2 DEBUG oslo_concurrency.lockutils [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.784 2 DEBUG oslo_concurrency.lockutils [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.784 2 DEBUG nova.compute.manager [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] No waiting events found dispatching network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:06 compute-0 nova_compute[256940]: 2025-10-02 12:52:06.785 2 WARNING nova.compute.manager [req-cf8fddef-0875-4db0-9c41-fa52f276bf5b req-beca86e7-66a9-4c44-9fc7-453f7a0f2318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received unexpected event network-vif-plugged-2c773e57-b93e-4b14-812e-ac1359508815 for instance with vm_state paused and task_state shelving_image_uploading.
Oct 02 12:52:06 compute-0 sudo[352336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:52:06 compute-0 sudo[352336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4094236586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: pgmap v2341: 305 pgs: 305 active+clean; 344 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/723638720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3880822609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:52:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.149281261 +0000 UTC m=+0.040220935 container create 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:52:07 compute-0 systemd[1]: Started libpod-conmon-8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983.scope.
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.129529113 +0000 UTC m=+0.020468777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.342064294 +0000 UTC m=+0.233003988 container init 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.351501547 +0000 UTC m=+0.242441211 container start 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:52:07 compute-0 focused_brahmagupta[352420]: 167 167
Oct 02 12:52:07 compute-0 systemd[1]: libpod-8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983.scope: Deactivated successfully.
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.404526349 +0000 UTC m=+0.295466033 container attach 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.40492029 +0000 UTC m=+0.295859964 container died 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.467 2 DEBUG nova.network.neutron [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.507 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.508 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance network_info: |[{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.509 2 DEBUG oslo_concurrency.lockutils [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.510 2 DEBUG nova.network.neutron [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.513 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start _get_guest_xml network_info=[{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:52:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-581b46543d2f8f1384958b62835859329e237327a5d2fe07d9f34eea5b67192b-merged.mount: Deactivated successfully.
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.520 2 WARNING nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.527 2 DEBUG nova.virt.libvirt.host [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.529 2 DEBUG nova.virt.libvirt.host [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.533 2 DEBUG nova.virt.libvirt.host [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.534 2 DEBUG nova.virt.libvirt.host [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.535 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.536 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.536 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.536 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.536 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.537 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.537 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.537 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.537 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.537 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.538 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.538 2 DEBUG nova.virt.hardware [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:52:07 compute-0 nova_compute[256940]: 2025-10-02 12:52:07.540 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:07 compute-0 podman[352404]: 2025-10-02 12:52:07.553883196 +0000 UTC m=+0.444822860 container remove 8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 12:52:07 compute-0 systemd[1]: libpod-conmon-8cb216ebd9c65c40e473470c3644abf6a0509a10e1aae46b965ba2480434d983.scope: Deactivated successfully.
Oct 02 12:52:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Oct 02 12:52:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Oct 02 12:52:07 compute-0 podman[352458]: 2025-10-02 12:52:07.733083381 +0000 UTC m=+0.048592170 container create 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 12:52:07 compute-0 systemd[1]: Started libpod-conmon-117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58.scope.
Oct 02 12:52:07 compute-0 podman[352458]: 2025-10-02 12:52:07.705046911 +0000 UTC m=+0.020555720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:07 compute-0 podman[352458]: 2025-10-02 12:52:07.871044756 +0000 UTC m=+0.186553575 container init 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:52:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:52:07 compute-0 podman[352458]: 2025-10-02 12:52:07.879329489 +0000 UTC m=+0.194838278 container start 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:52:07 compute-0 podman[352458]: 2025-10-02 12:52:07.88792734 +0000 UTC m=+0.203436149 container attach 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:52:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:07.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:07.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.938269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409527938336, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 256, "total_data_size": 1913052, "memory_usage": 1942600, "flush_reason": "Manual Compaction"}
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409527953731, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1204616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51254, "largest_seqno": 52485, "table_properties": {"data_size": 1199873, "index_size": 2139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12972, "raw_average_key_size": 21, "raw_value_size": 1189398, "raw_average_value_size": 1962, "num_data_blocks": 94, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409428, "oldest_key_time": 1759409428, "file_creation_time": 1759409527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 15524 microseconds, and 4500 cpu microseconds.
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.953797) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1204616 bytes OK
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.953824) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.961332) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.961373) EVENT_LOG_v1 {"time_micros": 1759409527961364, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.961395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1907496, prev total WAL file size 1907496, number of live WAL files 2.
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.962216) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1176KB)], [113(12MB)]
Oct 02 12:52:07 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409527962278, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 14049189, "oldest_snapshot_seqno": -1}
Oct 02 12:52:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918952010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.021 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.057 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.061 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8036 keys, 10810407 bytes, temperature: kUnknown
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528105587, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10810407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10758237, "index_size": 31002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208297, "raw_average_key_size": 25, "raw_value_size": 10616762, "raw_average_value_size": 1321, "num_data_blocks": 1215, "num_entries": 8036, "num_filter_entries": 8036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.106744) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10810407 bytes
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.109863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.4 rd, 74.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(20.6) write-amplify(9.0) OK, records in: 8523, records dropped: 487 output_compression: NoCompression
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.109891) EVENT_LOG_v1 {"time_micros": 1759409528109879, "job": 68, "event": "compaction_finished", "compaction_time_micros": 144239, "compaction_time_cpu_micros": 25660, "output_level": 6, "num_output_files": 1, "total_output_size": 10810407, "num_input_records": 8523, "num_output_records": 8036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528110282, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528113234, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:07.962096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.113286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.113292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.113293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.113294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:52:08.113296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.222 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392679263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.535 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.538 2 DEBUG nova.virt.libvirt.vif [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:00Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.538 2 DEBUG nova.network.os_vif_util [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.539 2 DEBUG nova.network.os_vif_util [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.541 2 DEBUG nova.objects.instance [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.558 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <uuid>063c7d3e-98b4-46a4-a75e-de10a2135604</uuid>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <name>instance-0000008d</name>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-657670950</nova:name>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:52:07</nova:creationTime>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <nova:port uuid="3b7a0e63-af58-4d73-8bc7-684e63bb5e96">
Oct 02 12:52:08 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <system>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="serial">063c7d3e-98b4-46a4-a75e-de10a2135604</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="uuid">063c7d3e-98b4-46a4-a75e-de10a2135604</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </system>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <os>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </os>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <features>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </features>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/063c7d3e-98b4-46a4-a75e-de10a2135604_disk">
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </source>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config">
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </source>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:52:08 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:f0:dd:e3"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <target dev="tap3b7a0e63-af"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/console.log" append="off"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <video>
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </video>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:52:08 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:52:08 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:52:08 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:52:08 compute-0 nova_compute[256940]: </domain>
Oct 02 12:52:08 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.566 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Preparing to wait for external event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.566 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.567 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.567 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.568 2 DEBUG nova.virt.libvirt.vif [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:00Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.569 2 DEBUG nova.network.os_vif_util [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.569 2 DEBUG nova.network.os_vif_util [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.570 2 DEBUG os_vif [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b7a0e63-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b7a0e63-af, col_values=(('external_ids', {'iface-id': '3b7a0e63-af58-4d73-8bc7-684e63bb5e96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:dd:e3', 'vm-uuid': '063c7d3e-98b4-46a4-a75e-de10a2135604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:52:08 compute-0 NetworkManager[44981]: <info>  [1759409528.5802] manager: (tap3b7a0e63-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.591 2 INFO os_vif [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af')
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.643 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409513.6427188, 840e5e36-5883-4877-a201-6f2da064a653 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.645 2 INFO nova.compute.manager [-] [instance: 840e5e36-5883-4877-a201-6f2da064a653] VM Stopped (Lifecycle Event)
Oct 02 12:52:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1114343258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 ceph-mon[73668]: osdmap e321: 3 total, 3 up, 3 in
Oct 02 12:52:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2918952010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1669602666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/392679263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.664 2 DEBUG nova.storage.rbd_utils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning vms/b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk@a1cd384245634a13b06ca9fcdd753642 to images/06496a06-9459-4a12-80f1-2f72dd513f0f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.704 2 DEBUG nova.compute.manager [None req-c7b275c5-5f3d-462e-8697-a28c6b1c401d - - - - - -] [instance: 840e5e36-5883-4877-a201-6f2da064a653] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.726 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.727 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.727 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:f0:dd:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.728 2 INFO nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Using config drive
Oct 02 12:52:08 compute-0 nova_compute[256940]: 2025-10-02 12:52:08.757 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:08 compute-0 pensive_jepsen[352481]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:52:08 compute-0 pensive_jepsen[352481]: --> relative data size: 1.0
Oct 02 12:52:08 compute-0 pensive_jepsen[352481]: --> All data devices are unavailable
Oct 02 12:52:08 compute-0 systemd[1]: libpod-117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58.scope: Deactivated successfully.
Oct 02 12:52:08 compute-0 podman[352458]: 2025-10-02 12:52:08.84761103 +0000 UTC m=+1.163119819 container died 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-17bbf4b226752b1bf54c0785a09c4c7a533e2149f5d77df17033a5b1baa78bce-merged.mount: Deactivated successfully.
Oct 02 12:52:08 compute-0 podman[352458]: 2025-10-02 12:52:08.902390697 +0000 UTC m=+1.217899486 container remove 117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:08 compute-0 systemd[1]: libpod-conmon-117e06511c51e298f4f995db5725730b9425855f2eec62a86c5a618d893f6d58.scope: Deactivated successfully.
Oct 02 12:52:08 compute-0 sudo[352336]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:09 compute-0 sudo[352607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:09 compute-0 sudo[352607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:09 compute-0 sudo[352607]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:09 compute-0 sudo[352632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:09 compute-0 sudo[352632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:09 compute-0 sudo[352632]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:09 compute-0 sudo[352657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:09 compute-0 sudo[352657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:09 compute-0 sudo[352657]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:09 compute-0 sudo[352685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:52:09 compute-0 sudo[352685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.218 2 DEBUG nova.storage.rbd_utils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening images/06496a06-9459-4a12-80f1-2f72dd513f0f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.540661428 +0000 UTC m=+0.039920017 container create 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:09 compute-0 systemd[1]: Started libpod-conmon-9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b.scope.
Oct 02 12:52:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.523439226 +0000 UTC m=+0.022697835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.619685539 +0000 UTC m=+0.118944138 container init 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.625919649 +0000 UTC m=+0.125178228 container start 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.62908092 +0000 UTC m=+0.128339499 container attach 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:09 compute-0 friendly_morse[352783]: 167 167
Oct 02 12:52:09 compute-0 systemd[1]: libpod-9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b.scope: Deactivated successfully.
Oct 02 12:52:09 compute-0 conmon[352783]: conmon 9b86671982e50c31d7e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b.scope/container/memory.events
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.632133799 +0000 UTC m=+0.131392378 container died 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-96448604b46be5a11363129344b978f9e58cc7458ec29ff894e5f028a1153793-merged.mount: Deactivated successfully.
Oct 02 12:52:09 compute-0 podman[352767]: 2025-10-02 12:52:09.671544631 +0000 UTC m=+0.170803200 container remove 9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 12:52:09 compute-0 systemd[1]: libpod-conmon-9b86671982e50c31d7e1bc03539e8a0de1967c8cb422d3ee17c56b1799dcf76b.scope: Deactivated successfully.
Oct 02 12:52:09 compute-0 ceph-mon[73668]: pgmap v2343: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.743 2 INFO nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Creating config drive at /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.749 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fz72ebq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:09 compute-0 podman[352807]: 2025-10-02 12:52:09.834656433 +0000 UTC m=+0.042768790 container create f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:52:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 365 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:52:09 compute-0 systemd[1]: Started libpod-conmon-f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf.scope.
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.889 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fz72ebq" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:09.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:09 compute-0 podman[352807]: 2025-10-02 12:52:09.817246275 +0000 UTC m=+0.025358632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf1cad4889676515c9e487bd51a3af5818fdd4b6835ac5e8fe512ab0c367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf1cad4889676515c9e487bd51a3af5818fdd4b6835ac5e8fe512ab0c367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf1cad4889676515c9e487bd51a3af5818fdd4b6835ac5e8fe512ab0c367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf1cad4889676515c9e487bd51a3af5818fdd4b6835ac5e8fe512ab0c367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.923 2 DEBUG nova.storage.rbd_utils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:09 compute-0 nova_compute[256940]: 2025-10-02 12:52:09.930 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config 063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:09 compute-0 podman[352807]: 2025-10-02 12:52:09.934010256 +0000 UTC m=+0.142122613 container init f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:52:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:09 compute-0 podman[352807]: 2025-10-02 12:52:09.942155135 +0000 UTC m=+0.150267472 container start f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:52:09 compute-0 podman[352807]: 2025-10-02 12:52:09.949321709 +0000 UTC m=+0.157434076 container attach f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 12:52:10 compute-0 nova_compute[256940]: 2025-10-02 12:52:10.374 2 DEBUG nova.storage.rbd_utils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(a1cd384245634a13b06ca9fcdd753642) on rbd image(b461d8a1-bc66-4c64-87ec-dac07e9585c4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:52:10 compute-0 nova_compute[256940]: 2025-10-02 12:52:10.489 2 DEBUG nova.network.neutron [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updated VIF entry in instance network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:10 compute-0 nova_compute[256940]: 2025-10-02 12:52:10.490 2 DEBUG nova.network.neutron [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:10 compute-0 nova_compute[256940]: 2025-10-02 12:52:10.522 2 DEBUG oslo_concurrency.lockutils [req-cdb1f073-0795-4fd9-a114-f797b32e856b req-92943a02-c3d7-4770-b485-3bcfc8962ff4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:10 compute-0 goofy_bouman[352825]: {
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:     "1": [
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:         {
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "devices": [
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "/dev/loop3"
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             ],
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "lv_name": "ceph_lv0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "lv_size": "7511998464",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "name": "ceph_lv0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "tags": {
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.cluster_name": "ceph",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.crush_device_class": "",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.encrypted": "0",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.osd_id": "1",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.type": "block",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:                 "ceph.vdo": "0"
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             },
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "type": "block",
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:             "vg_name": "ceph_vg0"
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:         }
Oct 02 12:52:10 compute-0 goofy_bouman[352825]:     ]
Oct 02 12:52:10 compute-0 goofy_bouman[352825]: }
Oct 02 12:52:10 compute-0 systemd[1]: libpod-f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf.scope: Deactivated successfully.
Oct 02 12:52:10 compute-0 podman[352807]: 2025-10-02 12:52:10.770447329 +0000 UTC m=+0.978559696 container died f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 12:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9385bf1cad4889676515c9e487bd51a3af5818fdd4b6835ac5e8fe512ab0c367-merged.mount: Deactivated successfully.
Oct 02 12:52:11 compute-0 podman[352894]: 2025-10-02 12:52:11.026811016 +0000 UTC m=+0.221327608 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:52:11 compute-0 podman[352888]: 2025-10-02 12:52:11.037785288 +0000 UTC m=+0.231274104 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:52:11 compute-0 podman[352807]: 2025-10-02 12:52:11.056109819 +0000 UTC m=+1.264222156 container remove f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:11 compute-0 ceph-mon[73668]: pgmap v2344: 305 pgs: 305 active+clean; 365 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:52:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:11 compute-0 systemd[1]: libpod-conmon-f4604002a2684144dec421beaa21b41a36f574d0747327e99c22d9e6f38d57bf.scope: Deactivated successfully.
Oct 02 12:52:11 compute-0 sudo[352685]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:11 compute-0 sudo[352945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:11 compute-0 sudo[352945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:11 compute-0 sudo[352945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:11 compute-0 sudo[352970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:11 compute-0 sudo[352970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:11 compute-0 sudo[352970]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:11 compute-0 sudo[352999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:11 compute-0 sudo[352999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:11 compute-0 sudo[352999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:11 compute-0 sudo[353024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:52:11 compute-0 sudo[353024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444992257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.723 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:11 compute-0 podman[353110]: 2025-10-02 12:52:11.726232927 +0000 UTC m=+0.023808493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:11 compute-0 podman[353110]: 2025-10-02 12:52:11.839222381 +0000 UTC m=+0.136797917 container create cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:52:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 02 12:52:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:11.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.931 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.931 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.934 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:11 compute-0 nova_compute[256940]: 2025-10-02 12:52:11.934 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:52:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:11 compute-0 systemd[1]: Started libpod-conmon-cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891.scope.
Oct 02 12:52:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Oct 02 12:52:12 compute-0 podman[353110]: 2025-10-02 12:52:12.141732404 +0000 UTC m=+0.439307950 container init cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 podman[353110]: 2025-10-02 12:52:12.159357607 +0000 UTC m=+0.456933153 container start cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:52:12 compute-0 pedantic_noether[353128]: 167 167
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.171 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:12 compute-0 podman[353110]: 2025-10-02 12:52:12.17234109 +0000 UTC m=+0.469916616 container attach cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.172 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4278MB free_disk=20.855548858642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.172 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.173 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:12 compute-0 systemd[1]: libpod-cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891.scope: Deactivated successfully.
Oct 02 12:52:12 compute-0 podman[353110]: 2025-10-02 12:52:12.174175078 +0000 UTC m=+0.471750604 container died cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:52:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2444992257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-00e10256b3d29f9ad22e2c6ee8902cc588e94377bda47b48398a23487ac5d2db-merged.mount: Deactivated successfully.
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.275 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance b461d8a1-bc66-4c64-87ec-dac07e9585c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.275 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 063c7d3e-98b4-46a4-a75e-de10a2135604 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.275 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.276 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:52:12 compute-0 podman[353110]: 2025-10-02 12:52:12.284641426 +0000 UTC m=+0.582216962 container remove cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:52:12 compute-0 systemd[1]: libpod-conmon-cccd27fee22ec5799ddaa0560314d78002b47bb5fea4761dd1c1afe21b1a9891.scope: Deactivated successfully.
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.345 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.386 2 DEBUG oslo_concurrency.processutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config 063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.388 2 INFO nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Deleting local config drive /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/disk.config because it was imported into RBD.
Oct 02 12:52:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.413 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:12 compute-0 kernel: tap3b7a0e63-af: entered promiscuous mode
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.4752] manager: (tap3b7a0e63-af): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Oct 02 12:52:12 compute-0 ovn_controller[148123]: 2025-10-02T12:52:12Z|00712|binding|INFO|Claiming lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for this chassis.
Oct 02 12:52:12 compute-0 ovn_controller[148123]: 2025-10-02T12:52:12Z|00713|binding|INFO|3b7a0e63-af58-4d73-8bc7-684e63bb5e96: Claiming fa:16:3e:f0:dd:e3 10.100.0.6
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.487 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:dd:e3 10.100.0.6'], port_security=['fa:16:3e:f0:dd:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '063c7d3e-98b4-46a4-a75e-de10a2135604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'de779411-ca14-48cf-b925-43960a45cd14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=184be778-bd1f-45cf-8f02-03b61731fc05, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=3b7a0e63-af58-4d73-8bc7-684e63bb5e96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.488 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 in datapath c011060c-3c24-4fdd-8151-c45f0e81f0db bound to our chassis
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.490 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c011060c-3c24-4fdd-8151-c45f0e81f0db
Oct 02 12:52:12 compute-0 ovn_controller[148123]: 2025-10-02T12:52:12Z|00714|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 ovn-installed in OVS
Oct 02 12:52:12 compute-0 ovn_controller[148123]: 2025-10-02T12:52:12Z|00715|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 up in Southbound
Oct 02 12:52:12 compute-0 podman[353157]: 2025-10-02 12:52:12.506861426 +0000 UTC m=+0.064711104 container create ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.505 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[57d34c50-9a0a-4aba-b955-f85ca21f19f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.507 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc011060c-31 in ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.509 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc011060c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.509 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[44585e82-f016-4fd0-8284-56d921c62276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.510 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a056fb2d-b8c7-4776-9c02-5b8f6f2bafda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 systemd-udevd[353199]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 systemd-machined[210927]: New machine qemu-75-instance-0000008d.
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.527 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[29c0801d-5c32-41ef-b1f4-7d33803f3604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.5338] device (tap3b7a0e63-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:52:12 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-0000008d.
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.5358] device (tap3b7a0e63-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.551 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6c213798-0f17-4bd5-a758-5a736178f765]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:12 compute-0 podman[353157]: 2025-10-02 12:52:12.481080334 +0000 UTC m=+0.038930032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:52:12 compute-0 systemd[1]: Started libpod-conmon-ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9.scope.
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.597 2 DEBUG nova.storage.rbd_utils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(snap) on rbd image(06496a06-9459-4a12-80f1-2f72dd513f0f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.599 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[000179b0-1c03-4dd3-aaf9-93edd71003f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.607 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c25cea4-0738-4d23-8f5f-9d43bfce7f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.6089] manager: (tapc011060c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Oct 02 12:52:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfc8949058194a560db410d6003b30ffeb7f9b1a73d33063320ccc3ac0debf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfc8949058194a560db410d6003b30ffeb7f9b1a73d33063320ccc3ac0debf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfc8949058194a560db410d6003b30ffeb7f9b1a73d33063320ccc3ac0debf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfc8949058194a560db410d6003b30ffeb7f9b1a73d33063320ccc3ac0debf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:12 compute-0 podman[353157]: 2025-10-02 12:52:12.640950462 +0000 UTC m=+0.198800170 container init ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:52:12 compute-0 podman[353157]: 2025-10-02 12:52:12.65217612 +0000 UTC m=+0.210025788 container start ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.652 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8c8d46-7307-4caf-a699-47ffd9fdbfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 podman[353157]: 2025-10-02 12:52:12.656770498 +0000 UTC m=+0.214620176 container attach ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.661 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[248b459c-a671-401e-bace-2a1e0975332f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.6954] device (tapc011060c-30): carrier: link connected
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.704 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[36690f9a-e9c0-4d9d-9088-ad9feef6c00e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.728 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd1e5a7-9612-4965-9230-61d3d0785b6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc011060c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:3a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742817, 'reachable_time': 32035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353267, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.754 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec88974-d8ed-48f2-b27f-5b9a0e016aa4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:3a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742817, 'tstamp': 742817}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353268, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.777 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[578d3875-7550-4c87-813e-77e6adc7b1cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc011060c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:3a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742817, 'reachable_time': 32035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353269, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.818 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90df72ab-9203-4bfc-bc5e-312bbdae8cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378941821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.906 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c0fa39-3e05-4599-997a-8a950088739f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.908 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc011060c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.908 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.909 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc011060c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 NetworkManager[44981]: <info>  [1759409532.9116] manager: (tapc011060c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Oct 02 12:52:12 compute-0 kernel: tapc011060c-30: entered promiscuous mode
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.918 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.918 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc011060c-30, col_values=(('external_ids', {'iface-id': 'b93b6a15-3b4f-4af6-9700-32891dbbf041'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 ovn_controller[148123]: 2025-10-02T12:52:12Z|00716|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.924 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.941 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.942 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1144b8e4-a350-4898-bc52-531e724f43d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.943 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-c011060c-3c24-4fdd-8151-c45f0e81f0db
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID c011060c-3c24-4fdd-8151-c45f0e81f0db
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:52:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:12.943 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'env', 'PROCESS_TAG=haproxy-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c011060c-3c24-4fdd-8151-c45f0e81f0db.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.946 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.975 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:52:12 compute-0 nova_compute[256940]: 2025-10-02 12:52:12.975 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.248 2 DEBUG nova.compute.manager [req-b63acd31-e8d4-443c-b7d5-9fdeb679bc69 req-7eac9a0c-7045-4299-9802-9408df5760c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.249 2 DEBUG oslo_concurrency.lockutils [req-b63acd31-e8d4-443c-b7d5-9fdeb679bc69 req-7eac9a0c-7045-4299-9802-9408df5760c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.250 2 DEBUG oslo_concurrency.lockutils [req-b63acd31-e8d4-443c-b7d5-9fdeb679bc69 req-7eac9a0c-7045-4299-9802-9408df5760c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.250 2 DEBUG oslo_concurrency.lockutils [req-b63acd31-e8d4-443c-b7d5-9fdeb679bc69 req-7eac9a0c-7045-4299-9802-9408df5760c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.250 2 DEBUG nova.compute.manager [req-b63acd31-e8d4-443c-b7d5-9fdeb679bc69 req-7eac9a0c-7045-4299-9802-9408df5760c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Processing event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:52:13 compute-0 podman[353306]: 2025-10-02 12:52:13.335337865 +0000 UTC m=+0.056133144 container create 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:52:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Oct 02 12:52:13 compute-0 systemd[1]: Started libpod-conmon-4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef.scope.
Oct 02 12:52:13 compute-0 podman[353306]: 2025-10-02 12:52:13.302790508 +0000 UTC m=+0.023585807 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:52:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598a037951341fa7eb2ba68c49df285c8eb78e1f55329c0d25abb26bb6d18550/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:13 compute-0 podman[353306]: 2025-10-02 12:52:13.445224728 +0000 UTC m=+0.166020017 container init 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:52:13 compute-0 podman[353306]: 2025-10-02 12:52:13.451887579 +0000 UTC m=+0.172682868 container start 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:52:13 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [NOTICE]   (353372) : New worker (353375) forked
Oct 02 12:52:13 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [NOTICE]   (353372) : Loading success.
Oct 02 12:52:13 compute-0 ceph-mon[73668]: pgmap v2345: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 02 12:52:13 compute-0 ceph-mon[73668]: osdmap e322: 3 total, 3 up, 3 in
Oct 02 12:52:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3378941821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]: {
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:         "osd_id": 1,
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:         "type": "bluestore"
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]:     }
Oct 02 12:52:13 compute-0 gifted_mcclintock[353217]: }
Oct 02 12:52:13 compute-0 systemd[1]: libpod-ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9.scope: Deactivated successfully.
Oct 02 12:52:13 compute-0 podman[353157]: 2025-10-02 12:52:13.56322371 +0000 UTC m=+1.121073408 container died ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfc8949058194a560db410d6003b30ffeb7f9b1a73d33063320ccc3ac0debf8-merged.mount: Deactivated successfully.
Oct 02 12:52:13 compute-0 podman[353157]: 2025-10-02 12:52:13.627755538 +0000 UTC m=+1.185605216 container remove ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 12:52:13 compute-0 systemd[1]: libpod-conmon-ec0cf1388d952587d688056cefa7e0f0c65573ee7235496b2a509f3552595aa9.scope: Deactivated successfully.
Oct 02 12:52:13 compute-0 sudo[353024]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:52:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 128 op/s
Oct 02 12:52:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Oct 02 12:52:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Oct 02 12:52:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:13.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:13 compute-0 nova_compute[256940]: 2025-10-02 12:52:13.973 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:52:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 50cf9556-502f-4320-b4ae-de81854597b8 does not exist
Oct 02 12:52:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3b8e0a7b-d4c2-4b96-962d-90aeb7ac97ad does not exist
Oct 02 12:52:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 17102e7a-75c1-422c-aad4-1b61256c661d does not exist
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.145 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409534.144622, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.145 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Started (Lifecycle Event)
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.147 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.152 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.156 2 INFO nova.virt.libvirt.driver [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance spawned successfully.
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.156 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:52:14 compute-0 sudo[353411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:14 compute-0 sudo[353411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:14 compute-0 sudo[353411]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.180 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.187 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.191 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.192 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.192 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.193 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.193 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.194 2 DEBUG nova.virt.libvirt.driver [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.229 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.230 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409534.1447213, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.230 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Paused (Lifecycle Event)
Oct 02 12:52:14 compute-0 sudo[353436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:52:14 compute-0 sudo[353436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:14 compute-0 sudo[353436]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.263 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.283 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409534.150745, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.284 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Resumed (Lifecycle Event)
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.298 2 INFO nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Took 13.90 seconds to spawn the instance on the hypervisor.
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.299 2 DEBUG nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.310 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.315 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.366 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.397 2 INFO nova.compute.manager [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Took 15.11 seconds to build instance.
Oct 02 12:52:14 compute-0 nova_compute[256940]: 2025-10-02 12:52:14.417 2 DEBUG oslo_concurrency.lockutils [None req-852b2cf4-fb44-40d4-abd2-79af301b7bd7 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:14 compute-0 ceph-mon[73668]: pgmap v2347: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 128 op/s
Oct 02 12:52:14 compute-0 ceph-mon[73668]: osdmap e323: 3 total, 3 up, 3 in
Oct 02 12:52:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.391 2 DEBUG nova.compute.manager [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.392 2 DEBUG oslo_concurrency.lockutils [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.392 2 DEBUG oslo_concurrency.lockutils [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.393 2 DEBUG oslo_concurrency.lockutils [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.393 2 DEBUG nova.compute.manager [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.393 2 WARNING nova.compute.manager [req-a2a01ab6-ddb5-4a8c-85e1-abdc8638acb5 req-4b477472-90c6-4f8d-9f57-89cb3e901171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state None.
Oct 02 12:52:15 compute-0 ovn_controller[148123]: 2025-10-02T12:52:15Z|00717|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct 02 12:52:15 compute-0 nova_compute[256940]: 2025-10-02 12:52:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Oct 02 12:52:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:15.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:16 compute-0 ovn_controller[148123]: 2025-10-02T12:52:16Z|00718|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.499 2 INFO nova.virt.libvirt.driver [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Snapshot image upload complete
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.500 2 DEBUG nova.compute.manager [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.556 2 INFO nova.compute.manager [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Shelve offloading
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.563 2 INFO nova.virt.libvirt.driver [-] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance destroyed successfully.
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.564 2 DEBUG nova.compute.manager [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.567 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.567 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:16 compute-0 nova_compute[256940]: 2025-10-02 12:52:16.567 2 DEBUG nova.network.neutron [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:52:16 compute-0 ceph-mon[73668]: pgmap v2349: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Oct 02 12:52:17 compute-0 nova_compute[256940]: 2025-10-02 12:52:17.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:52:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:18 compute-0 nova_compute[256940]: 2025-10-02 12:52:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-0 NetworkManager[44981]: <info>  [1759409539.0609] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Oct 02 12:52:19 compute-0 NetworkManager[44981]: <info>  [1759409539.0619] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 02 12:52:19 compute-0 ceph-mon[73668]: pgmap v2350: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-0 ovn_controller[148123]: 2025-10-02T12:52:19Z|00719|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.531 2 DEBUG nova.network.neutron [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updating instance_info_cache with network_info: [{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.536 2 DEBUG nova.compute.manager [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.537 2 DEBUG nova.compute.manager [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing instance network info cache due to event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.538 2 DEBUG oslo_concurrency.lockutils [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.538 2 DEBUG oslo_concurrency.lockutils [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.538 2 DEBUG nova.network.neutron [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.551 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.580 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409524.579084, b461d8a1-bc66-4c64-87ec-dac07e9585c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.581 2 INFO nova.compute.manager [-] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] VM Stopped (Lifecycle Event)
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.614 2 DEBUG nova.compute.manager [None req-34f20149-e1a5-4c35-9fa2-462df7b7c5f1 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.617 2 DEBUG nova.compute.manager [None req-34f20149-e1a5-4c35-9fa2-462df7b7c5f1 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:19 compute-0 nova_compute[256940]: 2025-10-02 12:52:19.653 2 INFO nova.compute.manager [None req-34f20149-e1a5-4c35-9fa2-462df7b7c5f1 - - - - - -] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct 02 12:52:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 544 KiB/s wr, 139 op/s
Oct 02 12:52:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:19.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:19.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:20 compute-0 nova_compute[256940]: 2025-10-02 12:52:20.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:20 compute-0 nova_compute[256940]: 2025-10-02 12:52:20.249 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:20 compute-0 nova_compute[256940]: 2025-10-02 12:52:20.249 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:52:20 compute-0 nova_compute[256940]: 2025-10-02 12:52:20.276 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:52:21 compute-0 ceph-mon[73668]: pgmap v2351: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 544 KiB/s wr, 139 op/s
Oct 02 12:52:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 458 KiB/s wr, 117 op/s
Oct 02 12:52:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.375 2 INFO nova.virt.libvirt.driver [-] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Instance destroyed successfully.
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.379 2 DEBUG nova.objects.instance [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid b461d8a1-bc66-4c64-87ec-dac07e9585c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.431 2 DEBUG nova.virt.libvirt.vif [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-417900419',display_name='tempest-ServerActionsTestOtherB-server-417900419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-417900419',id=140,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-b53m285n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:52:16.500524',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='06496a06-9459-4a12-80f1-2f72dd513f0f'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:05Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=b461d8a1-bc66-4c64-87ec-dac07e9585c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.432 2 DEBUG nova.network.os_vif_util [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c773e57-b9", "ovs_interfaceid": "2c773e57-b93e-4b14-812e-ac1359508815", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.433 2 DEBUG nova.network.os_vif_util [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.433 2 DEBUG os_vif [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c773e57-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.443 2 INFO os_vif [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:02:b2,bridge_name='br-int',has_traffic_filtering=True,id=2c773e57-b93e-4b14-812e-ac1359508815,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c773e57-b9')
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.527 2 DEBUG nova.compute.manager [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Received event network-changed-2c773e57-b93e-4b14-812e-ac1359508815 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.527 2 DEBUG nova.compute.manager [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Refreshing instance network info cache due to event network-changed-2c773e57-b93e-4b14-812e-ac1359508815. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.528 2 DEBUG oslo_concurrency.lockutils [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.528 2 DEBUG oslo_concurrency.lockutils [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:22 compute-0 nova_compute[256940]: 2025-10-02 12:52:22.529 2 DEBUG nova.network.neutron [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Refreshing network info cache for port 2c773e57-b93e-4b14-812e-ac1359508815 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Oct 02 12:52:23 compute-0 nova_compute[256940]: 2025-10-02 12:52:23.022 2 DEBUG nova.network.neutron [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updated VIF entry in instance network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:23 compute-0 nova_compute[256940]: 2025-10-02 12:52:23.022 2 DEBUG nova.network.neutron [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:23 compute-0 nova_compute[256940]: 2025-10-02 12:52:23.067 2 DEBUG oslo_concurrency.lockutils [req-483340da-5619-4727-ba50-709c58f9b17f req-42eb66cf-6f08-4447-96ed-325f96a66e5f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-0 ceph-mon[73668]: pgmap v2352: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 458 KiB/s wr, 117 op/s
Oct 02 12:52:23 compute-0 ceph-mon[73668]: osdmap e324: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 436 KiB/s wr, 111 op/s
Oct 02 12:52:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:24 compute-0 podman[353488]: 2025-10-02 12:52:24.404459171 +0000 UTC m=+0.062083826 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:52:24 compute-0 podman[353489]: 2025-10-02 12:52:24.413544565 +0000 UTC m=+0.071696783 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 12:52:24 compute-0 nova_compute[256940]: 2025-10-02 12:52:24.961 2 DEBUG nova.network.neutron [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updated VIF entry in instance network info cache for port 2c773e57-b93e-4b14-812e-ac1359508815. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:24 compute-0 nova_compute[256940]: 2025-10-02 12:52:24.962 2 DEBUG nova.network.neutron [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Updating instance_info_cache with network_info: [{"id": "2c773e57-b93e-4b14-812e-ac1359508815", "address": "fa:16:3e:c5:02:b2", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2c773e57-b9", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:24 compute-0 nova_compute[256940]: 2025-10-02 12:52:24.990 2 DEBUG oslo_concurrency.lockutils [req-54a323f8-4eba-49a1-a203-ac44172c17f6 req-50ad83b7-d38e-412c-98b6-c75e95a026ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b461d8a1-bc66-4c64-87ec-dac07e9585c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:25 compute-0 ceph-mon[73668]: pgmap v2354: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 436 KiB/s wr, 111 op/s
Oct 02 12:52:25 compute-0 sudo[353529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:25 compute-0 sudo[353529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:25 compute-0 sudo[353529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:25 compute-0 sudo[353554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:25 compute-0 sudo[353554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:25 compute-0 sudo[353554]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct 02 12:52:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:25.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:26.490 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:26.490 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:26.490 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1864420010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:27 compute-0 nova_compute[256940]: 2025-10-02 12:52:27.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:27 compute-0 nova_compute[256940]: 2025-10-02 12:52:27.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 296 KiB/s wr, 62 op/s
Oct 02 12:52:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:27 compute-0 ceph-mon[73668]: pgmap v2355: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct 02 12:52:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3065473880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:52:28
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Oct 02 12:52:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:52:29 compute-0 ceph-mon[73668]: pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 296 KiB/s wr, 62 op/s
Oct 02 12:52:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.3 MiB/s wr, 48 op/s
Oct 02 12:52:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:29.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4066300551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3447191113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:31 compute-0 ceph-mon[73668]: pgmap v2357: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.3 MiB/s wr, 48 op/s
Oct 02 12:52:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 6.2 MiB/s wr, 107 op/s
Oct 02 12:52:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:31.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:32 compute-0 nova_compute[256940]: 2025-10-02 12:52:32.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:32 compute-0 nova_compute[256940]: 2025-10-02 12:52:32.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:32 compute-0 ovn_controller[148123]: 2025-10-02T12:52:32Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:dd:e3 10.100.0.6
Oct 02 12:52:32 compute-0 ovn_controller[148123]: 2025-10-02T12:52:32Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:dd:e3 10.100.0.6
Oct 02 12:52:32 compute-0 ceph-mon[73668]: pgmap v2358: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 6.2 MiB/s wr, 107 op/s
Oct 02 12:52:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 5.8 MiB/s wr, 100 op/s
Oct 02 12:52:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 411 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 264 KiB/s rd, 5.6 MiB/s wr, 122 op/s
Oct 02 12:52:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:35 compute-0 ceph-mon[73668]: pgmap v2359: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 5.8 MiB/s wr, 100 op/s
Oct 02 12:52:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:36 compute-0 nova_compute[256940]: 2025-10-02 12:52:36.580 2 INFO nova.virt.libvirt.driver [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Deleting instance files /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4_del
Oct 02 12:52:36 compute-0 nova_compute[256940]: 2025-10-02 12:52:36.581 2 INFO nova.virt.libvirt.driver [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: b461d8a1-bc66-4c64-87ec-dac07e9585c4] Deletion of /var/lib/nova/instances/b461d8a1-bc66-4c64-87ec-dac07e9585c4_del complete
Oct 02 12:52:36 compute-0 nova_compute[256940]: 2025-10-02 12:52:36.692 2 INFO nova.scheduler.client.report [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance b461d8a1-bc66-4c64-87ec-dac07e9585c4
Oct 02 12:52:36 compute-0 nova_compute[256940]: 2025-10-02 12:52:36.818 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:36 compute-0 nova_compute[256940]: 2025-10-02 12:52:36.818 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.161 2 DEBUG oslo_concurrency.processutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-0 ceph-mon[73668]: pgmap v2360: 305 pgs: 305 active+clean; 411 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 264 KiB/s rd, 5.6 MiB/s wr, 122 op/s
Oct 02 12:52:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1082652408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1889640611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297911692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.619 2 DEBUG oslo_concurrency.processutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.625 2 DEBUG nova.compute.provider_tree [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.639 2 DEBUG nova.scheduler.client.report [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.668 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:37 compute-0 nova_compute[256940]: 2025-10-02 12:52:37.723 2 DEBUG oslo_concurrency.lockutils [None req-c7c01eb8-db90-4339-8c6e-e753af8fdf96 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "b461d8a1-bc66-4c64-87ec-dac07e9585c4" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 33.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 415 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 5.7 MiB/s wr, 142 op/s
Oct 02 12:52:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:38 compute-0 nova_compute[256940]: 2025-10-02 12:52:38.146 2 INFO nova.compute.manager [None req-192988dc-6b50-4249-83f9-b465f9fb5b0b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Get console output
Oct 02 12:52:38 compute-0 nova_compute[256940]: 2025-10-02 12:52:38.152 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:52:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/297911692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:39 compute-0 ceph-mon[73668]: pgmap v2361: 305 pgs: 305 active+clean; 415 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 5.7 MiB/s wr, 142 op/s
Oct 02 12:52:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 166 op/s
Oct 02 12:52:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008487533151870463 of space, bias 1.0, pg target 2.5462599455611388 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8618352177554439 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:52:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:52:40 compute-0 ceph-mon[73668]: pgmap v2362: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 166 op/s
Oct 02 12:52:41 compute-0 podman[353610]: 2025-10-02 12:52:41.378289324 +0000 UTC m=+0.050572951 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:52:41 compute-0 podman[353611]: 2025-10-02 12:52:41.430034823 +0000 UTC m=+0.100254067 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 12:52:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 187 op/s
Oct 02 12:52:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:42 compute-0 nova_compute[256940]: 2025-10-02 12:52:42.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:42 compute-0 nova_compute[256940]: 2025-10-02 12:52:42.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:42 compute-0 nova_compute[256940]: 2025-10-02 12:52:42.486 2 INFO nova.compute.manager [None req-7f40f1fd-b3c8-4e26-90c2-200dc7d9f55d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Get console output
Oct 02 12:52:42 compute-0 nova_compute[256940]: 2025-10-02 12:52:42.489 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:52:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:43 compute-0 ceph-mon[73668]: pgmap v2363: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 187 op/s
Oct 02 12:52:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 555 KiB/s wr, 138 op/s
Oct 02 12:52:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:45 compute-0 sudo[353657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:45 compute-0 sudo[353657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:45 compute-0 sudo[353657]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:45 compute-0 sudo[353682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:45 compute-0 sudo[353682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:45 compute-0 sudo[353682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:45 compute-0 ceph-mon[73668]: pgmap v2364: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 555 KiB/s wr, 138 op/s
Oct 02 12:52:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 567 KiB/s wr, 159 op/s
Oct 02 12:52:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:47 compute-0 ceph-mon[73668]: pgmap v2365: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 567 KiB/s wr, 159 op/s
Oct 02 12:52:47 compute-0 nova_compute[256940]: 2025-10-02 12:52:47.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:47 compute-0 nova_compute[256940]: 2025-10-02 12:52:47.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 140 KiB/s wr, 194 op/s
Oct 02 12:52:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:49 compute-0 ceph-mon[73668]: pgmap v2366: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 140 KiB/s wr, 194 op/s
Oct 02 12:52:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2136771465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 178 op/s
Oct 02 12:52:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:52:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:52:50 compute-0 nova_compute[256940]: 2025-10-02 12:52:50.293 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:50 compute-0 nova_compute[256940]: 2025-10-02 12:52:50.293 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:50 compute-0 nova_compute[256940]: 2025-10-02 12:52:50.293 2 DEBUG nova.network.neutron [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:52:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3424068853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct 02 12:52:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:51 compute-0 ceph-mon[73668]: pgmap v2367: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 178 op/s
Oct 02 12:52:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:52 compute-0 nova_compute[256940]: 2025-10-02 12:52:52.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Oct 02 12:52:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:52.334 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:52:52.335 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:52:52 compute-0 nova_compute[256940]: 2025-10-02 12:52:52.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-0 nova_compute[256940]: 2025-10-02 12:52:52.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Oct 02 12:52:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Oct 02 12:52:53 compute-0 ceph-mon[73668]: pgmap v2368: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct 02 12:52:53 compute-0 ceph-mon[73668]: osdmap e325: 3 total, 3 up, 3 in
Oct 02 12:52:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 40 KiB/s wr, 120 op/s
Oct 02 12:52:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:55 compute-0 podman[353712]: 2025-10-02 12:52:55.394095151 +0000 UTC m=+0.068364967 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 02 12:52:55 compute-0 podman[353713]: 2025-10-02 12:52:55.410819891 +0000 UTC m=+0.084601495 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:52:55 compute-0 ceph-mon[73668]: pgmap v2370: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 40 KiB/s wr, 120 op/s
Oct 02 12:52:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 725 KiB/s wr, 112 op/s
Oct 02 12:52:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:52:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:52:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:56 compute-0 nova_compute[256940]: 2025-10-02 12:52:56.864 2 DEBUG nova.network.neutron [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:56 compute-0 nova_compute[256940]: 2025-10-02 12:52:56.906 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.086 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.087 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating file /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/d9aa394df99649788e5b42f63d0dde19.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.087 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/d9aa394df99649788e5b42f63d0dde19.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:57 compute-0 ceph-mon[73668]: pgmap v2371: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 725 KiB/s wr, 112 op/s
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.518 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/d9aa394df99649788e5b42f63d0dde19.tmp" returned: 1 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.519 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/d9aa394df99649788e5b42f63d0dde19.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.519 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating directory /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.520 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.741 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:57 compute-0 nova_compute[256940]: 2025-10-02 12:52:57.759 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:52:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 91 op/s
Oct 02 12:52:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:57.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:52:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:52:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:52:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:52:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:52:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Oct 02 12:52:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Oct 02 12:52:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Oct 02 12:52:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 138 op/s
Oct 02 12:52:59 compute-0 ceph-mon[73668]: pgmap v2372: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 91 op/s
Oct 02 12:52:59 compute-0 ceph-mon[73668]: osdmap e326: 3 total, 3 up, 3 in
Oct 02 12:52:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:52:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:52:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:59.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:00.336 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Oct 02 12:53:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Oct 02 12:53:01 compute-0 ceph-mon[73668]: pgmap v2374: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 138 op/s
Oct 02 12:53:01 compute-0 ceph-mon[73668]: osdmap e327: 3 total, 3 up, 3 in
Oct 02 12:53:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 8.9 MiB/s wr, 179 op/s
Oct 02 12:53:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:01.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:02 compute-0 nova_compute[256940]: 2025-10-02 12:53:02.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:02 compute-0 nova_compute[256940]: 2025-10-02 12:53:02.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 ceph-mon[73668]: pgmap v2376: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 8.9 MiB/s wr, 179 op/s
Oct 02 12:53:03 compute-0 kernel: tap3b7a0e63-af (unregistering): left promiscuous mode
Oct 02 12:53:03 compute-0 NetworkManager[44981]: <info>  [1759409583.4800] device (tap3b7a0e63-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:53:03 compute-0 ovn_controller[148123]: 2025-10-02T12:53:03Z|00720|binding|INFO|Releasing lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 from this chassis (sb_readonly=0)
Oct 02 12:53:03 compute-0 ovn_controller[148123]: 2025-10-02T12:53:03Z|00721|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 down in Southbound
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 ovn_controller[148123]: 2025-10-02T12:53:03Z|00722|binding|INFO|Removing iface tap3b7a0e63-af ovn-installed in OVS
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:03.497 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:dd:e3 10.100.0.6'], port_security=['fa:16:3e:f0:dd:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '063c7d3e-98b4-46a4-a75e-de10a2135604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'de779411-ca14-48cf-b925-43960a45cd14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=184be778-bd1f-45cf-8f02-03b61731fc05, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=3b7a0e63-af58-4d73-8bc7-684e63bb5e96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:03.499 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 in datapath c011060c-3c24-4fdd-8151-c45f0e81f0db unbound from our chassis
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:03.501 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c011060c-3c24-4fdd-8151-c45f0e81f0db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:03.502 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0fe825-fa08-4ca6-90ba-f7144f3610f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:03.503 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db namespace which is not needed anymore
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 02 12:53:03 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000008d.scope: Consumed 15.780s CPU time.
Oct 02 12:53:03 compute-0 systemd-machined[210927]: Machine qemu-75-instance-0000008d terminated.
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.790 2 INFO nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance shutdown successfully after 6 seconds.
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.797 2 INFO nova.virt.libvirt.driver [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance destroyed successfully.
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.799 2 DEBUG nova.virt.libvirt.vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:48Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.799 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.800 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.801 2 DEBUG os_vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b7a0e63-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.808 2 INFO os_vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af')
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.814 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.815 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.899 2 DEBUG nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.899 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.899 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.900 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.900 2 DEBUG nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:03 compute-0 nova_compute[256940]: 2025-10-02 12:53:03.900 2 WARNING nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:53:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.1 MiB/s wr, 157 op/s
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [NOTICE]   (353372) : haproxy version is 2.8.14-c23fe91
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [NOTICE]   (353372) : path to executable is /usr/sbin/haproxy
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [WARNING]  (353372) : Exiting Master process...
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [WARNING]  (353372) : Exiting Master process...
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [ALERT]    (353372) : Current worker (353375) exited with code 143 (Terminated)
Oct 02 12:53:03 compute-0 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[353344]: [WARNING]  (353372) : All workers exited. Exiting... (0)
Oct 02 12:53:03 compute-0 systemd[1]: libpod-4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef.scope: Deactivated successfully.
Oct 02 12:53:03 compute-0 podman[353784]: 2025-10-02 12:53:03.933152178 +0000 UTC m=+0.296721515 container died 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:53:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:04 compute-0 nova_compute[256940]: 2025-10-02 12:53:04.125 2 DEBUG neutronclient.v2_0.client [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef-userdata-shm.mount: Deactivated successfully.
Oct 02 12:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-598a037951341fa7eb2ba68c49df285c8eb78e1f55329c0d25abb26bb6d18550-merged.mount: Deactivated successfully.
Oct 02 12:53:04 compute-0 nova_compute[256940]: 2025-10-02 12:53:04.470 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:04 compute-0 nova_compute[256940]: 2025-10-02 12:53:04.472 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:04 compute-0 nova_compute[256940]: 2025-10-02 12:53:04.473 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:04 compute-0 podman[353784]: 2025-10-02 12:53:04.92096537 +0000 UTC m=+1.284534697 container cleanup 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:53:04 compute-0 systemd[1]: libpod-conmon-4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef.scope: Deactivated successfully.
Oct 02 12:53:05 compute-0 podman[353826]: 2025-10-02 12:53:05.245188061 +0000 UTC m=+0.293932914 container remove 4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.258 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8a49c9d3-9a72-4145-a939-db9fdf3284d4]: (4, ('Thu Oct  2 12:53:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db (4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef)\n4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef\nThu Oct  2 12:53:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db (4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef)\n4c2fe332b7f489a4126c79a0950f0d9f05036f4ed9065690f645699ed667f0ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.260 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7bae87f7-4350-4d0d-8652-c253b70f47be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.261 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc011060c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:05 compute-0 kernel: tapc011060c-30: left promiscuous mode
Oct 02 12:53:05 compute-0 nova_compute[256940]: 2025-10-02 12:53:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:05 compute-0 nova_compute[256940]: 2025-10-02 12:53:05.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:05 compute-0 nova_compute[256940]: 2025-10-02 12:53:05.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.299 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a1669928-d36d-451f-9e91-4b7fef43820f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.327 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[390f15fd-01fa-4bbb-8279-ab079fa370d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.328 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c0f003-1682-440e-8186-a6bad3a327cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:53:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:53:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.350 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ab538f-d198-4df5-9593-eba95ba80a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742807, 'reachable_time': 35415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353842, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 systemd[1]: run-netns-ovnmeta\x2dc011060c\x2d3c24\x2d4fdd\x2d8151\x2dc45f0e81f0db.mount: Deactivated successfully.
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.356 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:53:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:05.356 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[04e65b77-70f0-4329-beee-86635e338575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:05 compute-0 sudo[353843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:05 compute-0 sudo[353843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:05 compute-0 sudo[353843]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:05 compute-0 sudo[353868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:05 compute-0 sudo[353868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:05 compute-0 sudo[353868]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:05 compute-0 ceph-mon[73668]: pgmap v2377: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.1 MiB/s wr, 157 op/s
Oct 02 12:53:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 478 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct 02 12:53:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.118 2 DEBUG nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.119 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.119 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.119 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.120 2 DEBUG nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.120 2 WARNING nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.491 2 DEBUG nova.compute.manager [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.491 2 DEBUG nova.compute.manager [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing instance network info cache due to event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.492 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.492 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:06 compute-0 nova_compute[256940]: 2025-10-02 12:53:06.492 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:53:06 compute-0 ceph-mon[73668]: pgmap v2378: 305 pgs: 305 active+clean; 478 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct 02 12:53:07 compute-0 nova_compute[256940]: 2025-10-02 12:53:07.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Oct 02 12:53:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Oct 02 12:53:07 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Oct 02 12:53:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 1.2 MiB/s wr, 77 op/s
Oct 02 12:53:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:08 compute-0 nova_compute[256940]: 2025-10-02 12:53:08.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:08 compute-0 nova_compute[256940]: 2025-10-02 12:53:08.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:08 compute-0 ceph-mon[73668]: osdmap e328: 3 total, 3 up, 3 in
Oct 02 12:53:08 compute-0 ceph-mon[73668]: pgmap v2380: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 1.2 MiB/s wr, 77 op/s
Oct 02 12:53:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1871141040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2500340169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-0 nova_compute[256940]: 2025-10-02 12:53:08.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:09 compute-0 nova_compute[256940]: 2025-10-02 12:53:09.483 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updated VIF entry in instance network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:53:09 compute-0 nova_compute[256940]: 2025-10-02 12:53:09.484 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:09 compute-0 nova_compute[256940]: 2025-10-02 12:53:09.505 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Oct 02 12:53:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 1014 KiB/s wr, 82 op/s
Oct 02 12:53:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Oct 02 12:53:09 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Oct 02 12:53:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2854629276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-0 ceph-mon[73668]: pgmap v2381: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 1014 KiB/s wr, 82 op/s
Oct 02 12:53:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/67641422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-0 ceph-mon[73668]: osdmap e329: 3 total, 3 up, 3 in
Oct 02 12:53:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3794160655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2288584448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526815352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.686 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:11 compute-0 podman[353918]: 2025-10-02 12:53:11.794418677 +0000 UTC m=+0.060644899 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:53:11 compute-0 podman[353919]: 2025-10-02 12:53:11.83655721 +0000 UTC m=+0.092118988 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.866 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:11 compute-0 nova_compute[256940]: 2025-10-02 12:53:11.866 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 145 KiB/s wr, 96 op/s
Oct 02 12:53:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.083 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.085 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=20.820602416992188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.085 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.086 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.098 2 DEBUG nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.099 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.100 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.100 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.100 2 DEBUG nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.101 2 WARNING nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_finish.
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.144 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration for instance 063c7d3e-98b4-46a4-a75e-de10a2135604 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:53:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/301379997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2526815352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.189 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating resource usage from migration 5556a224-fcba-479b-9e8e-b1ada4008517
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.190 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Starting to track outgoing migration 5556a224-fcba-479b-9e8e-b1ada4008517 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration 5556a224-fcba-479b-9e8e-b1ada4008517 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.249 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.312 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446103540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.756 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.762 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.791 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.852 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:53:12 compute-0 nova_compute[256940]: 2025-10-02 12:53:12.853 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:13 compute-0 ceph-mon[73668]: pgmap v2383: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 145 KiB/s wr, 96 op/s
Oct 02 12:53:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/446103540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:13 compute-0 nova_compute[256940]: 2025-10-02 12:53:13.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:13 compute-0 nova_compute[256940]: 2025-10-02 12:53:13.852 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:13 compute-0 nova_compute[256940]: 2025-10-02 12:53:13.853 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:13 compute-0 nova_compute[256940]: 2025-10-02 12:53:13.853 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:13 compute-0 nova_compute[256940]: 2025-10-02 12:53:13.853 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:53:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 33 KiB/s wr, 75 op/s
Oct 02 12:53:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:14.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.258 2 DEBUG nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.258 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.258 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.259 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.259 2 DEBUG nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.259 2 WARNING nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state resized and task_state None.
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.429 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.429 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:14 compute-0 nova_compute[256940]: 2025-10-02 12:53:14.430 2 DEBUG nova.compute.manager [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:53:14 compute-0 sudo[353989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:14 compute-0 sudo[353989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-0 sudo[353989]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-0 sudo[354014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:14 compute-0 sudo[354014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-0 sudo[354014]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-0 sudo[354040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:14 compute-0 sudo[354040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-0 sudo[354040]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-0 sudo[354065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:53:14 compute-0 sudo[354065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-0 nova_compute[256940]: 2025-10-02 12:53:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:15 compute-0 ceph-mon[73668]: pgmap v2384: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 33 KiB/s wr, 75 op/s
Oct 02 12:53:15 compute-0 sudo[354065]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1f9258af-eaf4-4735-b561-9a0049ba4244 does not exist
Oct 02 12:53:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1fbc11fc-3d9c-4d04-a7ae-b9cfc1ab2ceb does not exist
Oct 02 12:53:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ab26b2b4-e4fa-47f7-a98d-fe826fdf8763 does not exist
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:53:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:53:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:15 compute-0 sudo[354121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:15 compute-0 sudo[354121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-0 sudo[354121]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:15 compute-0 sudo[354146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:15 compute-0 sudo[354146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-0 sudo[354146]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:15 compute-0 sudo[354171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:15 compute-0 sudo[354171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-0 sudo[354171]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:15 compute-0 sudo[354196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:53:15 compute-0 sudo[354196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 21 KiB/s wr, 113 op/s
Oct 02 12:53:15 compute-0 podman[354262]: 2025-10-02 12:53:15.927127999 +0000 UTC m=+0.038390167 container create c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:53:15 compute-0 systemd[1]: Started libpod-conmon-c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1.scope.
Oct 02 12:53:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:15.910510372 +0000 UTC m=+0.021772560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:16.013605581 +0000 UTC m=+0.124867759 container init c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:16.021266828 +0000 UTC m=+0.132528996 container start c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:16.024279056 +0000 UTC m=+0.135541224 container attach c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:16 compute-0 upbeat_brown[354278]: 167 167
Oct 02 12:53:16 compute-0 systemd[1]: libpod-c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1.scope: Deactivated successfully.
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:16.027779866 +0000 UTC m=+0.139042034 container died c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8998acc2e6af95fb128faefd434082ff07cf8009a62ef41b3e1d4fd40c04f95b-merged.mount: Deactivated successfully.
Oct 02 12:53:16 compute-0 podman[354262]: 2025-10-02 12:53:16.064491809 +0000 UTC m=+0.175753977 container remove c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:53:16 compute-0 systemd[1]: libpod-conmon-c3d22d1059696245aa62d7584367826b4bd93ad28ebd9b8b23c5d07e319956a1.scope: Deactivated successfully.
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.199 2 DEBUG neutronclient.v2_0.client [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.201 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.201 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.201 2 DEBUG nova.network.neutron [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.201 2 DEBUG nova.objects.instance [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'info_cache' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:16 compute-0 nova_compute[256940]: 2025-10-02 12:53:16.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:16 compute-0 podman[354302]: 2025-10-02 12:53:16.216019913 +0000 UTC m=+0.043740775 container create c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:53:16 compute-0 systemd[1]: Started libpod-conmon-c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86.scope.
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:53:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:16 compute-0 podman[354302]: 2025-10-02 12:53:16.198925333 +0000 UTC m=+0.026646225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:16 compute-0 podman[354302]: 2025-10-02 12:53:16.296911241 +0000 UTC m=+0.124632123 container init c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:53:16 compute-0 podman[354302]: 2025-10-02 12:53:16.308183951 +0000 UTC m=+0.135904813 container start c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:16 compute-0 podman[354302]: 2025-10-02 12:53:16.311461125 +0000 UTC m=+0.139182007 container attach c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:53:17 compute-0 happy_nash[354318]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:53:17 compute-0 happy_nash[354318]: --> relative data size: 1.0
Oct 02 12:53:17 compute-0 happy_nash[354318]: --> All data devices are unavailable
Oct 02 12:53:17 compute-0 systemd[1]: libpod-c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86.scope: Deactivated successfully.
Oct 02 12:53:17 compute-0 nova_compute[256940]: 2025-10-02 12:53:17.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:17 compute-0 podman[354334]: 2025-10-02 12:53:17.215735341 +0000 UTC m=+0.024253414 container died c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-42ed242805dc7db27fbccc69341d987ba6b1691a40254bf6119d6412c9a078c0-merged.mount: Deactivated successfully.
Oct 02 12:53:17 compute-0 podman[354334]: 2025-10-02 12:53:17.270263902 +0000 UTC m=+0.078781945 container remove c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:53:17 compute-0 systemd[1]: libpod-conmon-c565610be9c70b4cdb740359d0193f8d6d0f31ba08a32245d64b791ab6310a86.scope: Deactivated successfully.
Oct 02 12:53:17 compute-0 sudo[354196]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 sudo[354349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:17 compute-0 sudo[354349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 sudo[354349]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 sudo[354374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:17 compute-0 sudo[354374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 sudo[354374]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 sudo[354399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:17 compute-0 sudo[354399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 sudo[354399]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:17 compute-0 ceph-mon[73668]: pgmap v2385: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 21 KiB/s wr, 113 op/s
Oct 02 12:53:17 compute-0 sudo[354424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:53:17 compute-0 sudo[354424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 137 op/s
Oct 02 12:53:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:17 compute-0 podman[354492]: 2025-10-02 12:53:17.984634088 +0000 UTC m=+0.050316314 container create ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:53:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:18 compute-0 systemd[1]: Started libpod-conmon-ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d.scope.
Oct 02 12:53:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:17.963261149 +0000 UTC m=+0.028943375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:18.070905135 +0000 UTC m=+0.136587361 container init ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:18.078794028 +0000 UTC m=+0.144476234 container start ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:18.082065322 +0000 UTC m=+0.147747528 container attach ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:53:18 compute-0 compassionate_tesla[354508]: 167 167
Oct 02 12:53:18 compute-0 systemd[1]: libpod-ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d.scope: Deactivated successfully.
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:18.084527795 +0000 UTC m=+0.150210001 container died ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7ff855bfd9250e7d3c1ec9e68ae6a3f54d114a2a99b18560f46cb72f51c8fd3-merged.mount: Deactivated successfully.
Oct 02 12:53:18 compute-0 podman[354492]: 2025-10-02 12:53:18.129042729 +0000 UTC m=+0.194724935 container remove ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:53:18 compute-0 systemd[1]: libpod-conmon-ef67755d6f3ec15d749e686cf00afef8c9210bbf88f4004253d7de1fdb62312d.scope: Deactivated successfully.
Oct 02 12:53:18 compute-0 podman[354532]: 2025-10-02 12:53:18.308832589 +0000 UTC m=+0.054081811 container create 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:18 compute-0 systemd[1]: Started libpod-conmon-18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513.scope.
Oct 02 12:53:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:18 compute-0 podman[354532]: 2025-10-02 12:53:18.287440279 +0000 UTC m=+0.032689521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4b961f34aca0ff071598cfeb68335f62178e0c457fee6bd77ef2facb6629c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4b961f34aca0ff071598cfeb68335f62178e0c457fee6bd77ef2facb6629c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4b961f34aca0ff071598cfeb68335f62178e0c457fee6bd77ef2facb6629c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4b961f34aca0ff071598cfeb68335f62178e0c457fee6bd77ef2facb6629c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:18 compute-0 podman[354532]: 2025-10-02 12:53:18.395863495 +0000 UTC m=+0.141112717 container init 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:53:18 compute-0 podman[354532]: 2025-10-02 12:53:18.403246165 +0000 UTC m=+0.148495387 container start 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:18 compute-0 podman[354532]: 2025-10-02 12:53:18.406600701 +0000 UTC m=+0.151849943 container attach 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.739 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409583.7386465, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.743 2 INFO nova.compute.manager [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Stopped (Lifecycle Event)
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.812 2 DEBUG nova.compute.manager [None req-18b90e03-3f31-467c-865a-3bb84101a143 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.817 2 DEBUG nova.compute.manager [None req-18b90e03-3f31-467c-865a-3bb84101a143 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:53:18 compute-0 nova_compute[256940]: 2025-10-02 12:53:18.841 2 INFO nova.compute.manager [None req-18b90e03-3f31-467c-865a-3bb84101a143 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]: {
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:     "1": [
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:         {
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "devices": [
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "/dev/loop3"
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             ],
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "lv_name": "ceph_lv0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "lv_size": "7511998464",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "name": "ceph_lv0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "tags": {
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.cluster_name": "ceph",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.crush_device_class": "",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.encrypted": "0",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.osd_id": "1",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.type": "block",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:                 "ceph.vdo": "0"
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             },
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "type": "block",
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:             "vg_name": "ceph_vg0"
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:         }
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]:     ]
Oct 02 12:53:19 compute-0 pedantic_lamport[354549]: }
Oct 02 12:53:19 compute-0 systemd[1]: libpod-18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513.scope: Deactivated successfully.
Oct 02 12:53:19 compute-0 podman[354532]: 2025-10-02 12:53:19.213823722 +0000 UTC m=+0.959072944 container died 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:53:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad4b961f34aca0ff071598cfeb68335f62178e0c457fee6bd77ef2facb6629c7-merged.mount: Deactivated successfully.
Oct 02 12:53:19 compute-0 podman[354532]: 2025-10-02 12:53:19.301335771 +0000 UTC m=+1.046584983 container remove 18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:19 compute-0 systemd[1]: libpod-conmon-18f4a7ef0632d4f94fa0f3d9238b65c2920bd5e8a420603297b30adadafe6513.scope: Deactivated successfully.
Oct 02 12:53:19 compute-0 sudo[354424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:19 compute-0 sudo[354571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:19 compute-0 sudo[354571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:19 compute-0 sudo[354571]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:19 compute-0 sudo[354596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:19 compute-0 sudo[354596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:19 compute-0 sudo[354596]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:19 compute-0 ceph-mon[73668]: pgmap v2386: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 137 op/s
Oct 02 12:53:19 compute-0 sudo[354621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:19 compute-0 sudo[354621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:19 compute-0 sudo[354621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:19 compute-0 sudo[354646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:53:19 compute-0 sudo[354646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 121 op/s
Oct 02 12:53:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.014038394 +0000 UTC m=+0.063058511 container create efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:20 compute-0 systemd[1]: Started libpod-conmon-efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743.scope.
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:19.971777278 +0000 UTC m=+0.020797495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.115229055 +0000 UTC m=+0.164249172 container init efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.123302442 +0000 UTC m=+0.172322559 container start efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:20 compute-0 youthful_gates[354727]: 167 167
Oct 02 12:53:20 compute-0 systemd[1]: libpod-efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743.scope: Deactivated successfully.
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.139323944 +0000 UTC m=+0.188344111 container attach efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.139780436 +0000 UTC m=+0.188800573 container died efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4afc3b734029723ab6bfb4ee95d302f346607895289d6d7272d4887cff1c804-merged.mount: Deactivated successfully.
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:20 compute-0 podman[354711]: 2025-10-02 12:53:20.251683921 +0000 UTC m=+0.300704038 container remove efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:53:20 compute-0 systemd[1]: libpod-conmon-efe9191cff1a15c1a0add8973163a2a753faf39ab5339af776caa2b432f57743.scope: Deactivated successfully.
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.356 2 DEBUG nova.network.neutron [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.387 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.387 2 DEBUG nova.objects.instance [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:20 compute-0 podman[354753]: 2025-10-02 12:53:20.475149303 +0000 UTC m=+0.109139795 container create 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:53:20 compute-0 podman[354753]: 2025-10-02 12:53:20.390776935 +0000 UTC m=+0.024767427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:53:20 compute-0 systemd[1]: Started libpod-conmon-64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc.scope.
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:20 compute-0 nova_compute[256940]: 2025-10-02 12:53:20.529 2 DEBUG nova.storage.rbd_utils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] removing snapshot(nova-resize) on rbd image(063c7d3e-98b4-46a4-a75e-de10a2135604_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:53:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03d832850cad887b0ce6f53ba1bbc677b66a989e0e314f66e2707fddb853187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03d832850cad887b0ce6f53ba1bbc677b66a989e0e314f66e2707fddb853187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03d832850cad887b0ce6f53ba1bbc677b66a989e0e314f66e2707fddb853187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03d832850cad887b0ce6f53ba1bbc677b66a989e0e314f66e2707fddb853187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:20 compute-0 podman[354753]: 2025-10-02 12:53:20.568849741 +0000 UTC m=+0.202840263 container init 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:53:20 compute-0 podman[354753]: 2025-10-02 12:53:20.576005675 +0000 UTC m=+0.209996167 container start 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:53:20 compute-0 podman[354753]: 2025-10-02 12:53:20.583556619 +0000 UTC m=+0.217547161 container attach 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.244 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:53:21 compute-0 pensive_hellman[354806]: {
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:         "osd_id": 1,
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:         "type": "bluestore"
Oct 02 12:53:21 compute-0 pensive_hellman[354806]:     }
Oct 02 12:53:21 compute-0 pensive_hellman[354806]: }
Oct 02 12:53:21 compute-0 systemd[1]: libpod-64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc.scope: Deactivated successfully.
Oct 02 12:53:21 compute-0 podman[354753]: 2025-10-02 12:53:21.439554374 +0000 UTC m=+1.073544866 container died 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:53:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a03d832850cad887b0ce6f53ba1bbc677b66a989e0e314f66e2707fddb853187-merged.mount: Deactivated successfully.
Oct 02 12:53:21 compute-0 podman[354753]: 2025-10-02 12:53:21.521568342 +0000 UTC m=+1.155558834 container remove 64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:53:21 compute-0 systemd[1]: libpod-conmon-64ca36147f92ce4a49ed7ce7c44b03cea171499f63aabe0b82ffee5dd8e13ddc.scope: Deactivated successfully.
Oct 02 12:53:21 compute-0 sudo[354646]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:53:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Oct 02 12:53:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:53:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Oct 02 12:53:21 compute-0 ceph-mon[73668]: pgmap v2387: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 121 op/s
Oct 02 12:53:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1354063422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Oct 02 12:53:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 119c8ca5-3ac3-4d8c-b0e7-4026c41a242e does not exist
Oct 02 12:53:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d9ac46c2-02fe-4be3-8bca-c36fc39eb69e does not exist
Oct 02 12:53:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f46c81ce-41a2-4bc7-a59c-3c6d0287a5ba does not exist
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.647 2 DEBUG nova.virt.libvirt.vif [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:53:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:53:12Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.647 2 DEBUG nova.network.os_vif_util [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.648 2 DEBUG nova.network.os_vif_util [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.648 2 DEBUG os_vif [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b7a0e63-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.653 2 INFO os_vif [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af')
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.654 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.654 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:21 compute-0 sudo[354840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:21 compute-0 sudo[354840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:21 compute-0 sudo[354840]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:21 compute-0 sudo[354865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:53:21 compute-0 sudo[354865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:21 compute-0 sudo[354865]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:21 compute-0 nova_compute[256940]: 2025-10-02 12:53:21.752 2 DEBUG oslo_concurrency.processutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898851491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.195 2 DEBUG oslo_concurrency.processutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.202 2 DEBUG nova.compute.provider_tree [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.228 2 DEBUG nova.scheduler.client.report [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.286 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.444 2 INFO nova.scheduler.client.report [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocation for migration 5556a224-fcba-479b-9e8e-b1ada4008517
Oct 02 12:53:22 compute-0 nova_compute[256940]: 2025-10-02 12:53:22.542 2 DEBUG oslo_concurrency.lockutils [None req-d76ead56-18d2-4b97-b0b3-190dd248af4c 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:22 compute-0 ceph-mon[73668]: osdmap e330: 3 total, 3 up, 3 in
Oct 02 12:53:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1898851491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:23 compute-0 ceph-mon[73668]: pgmap v2389: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:23 compute-0 nova_compute[256940]: 2025-10-02 12:53:23.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:25 compute-0 ceph-mon[73668]: pgmap v2390: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:25 compute-0 sudo[354914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:25 compute-0 sudo[354914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:25 compute-0 sudo[354914]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:25 compute-0 sudo[354951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:25 compute-0 sudo[354951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:25 compute-0 podman[354938]: 2025-10-02 12:53:25.861535843 +0000 UTC m=+0.060249575 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:25 compute-0 sudo[354951]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:25 compute-0 podman[354939]: 2025-10-02 12:53:25.867131108 +0000 UTC m=+0.063255423 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct 02 12:53:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 KiB/s wr, 65 op/s
Oct 02 12:53:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:26.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:26.491 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:26.492 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:26.493 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:27 compute-0 nova_compute[256940]: 2025-10-02 12:53:27.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:27 compute-0 ceph-mon[73668]: pgmap v2391: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 KiB/s wr, 65 op/s
Oct 02 12:53:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 102 op/s
Oct 02 12:53:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:27.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:28 compute-0 nova_compute[256940]: 2025-10-02 12:53:28.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:53:28
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', '.rgw.root']
Oct 02 12:53:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:53:29 compute-0 ceph-mon[73668]: pgmap v2392: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 102 op/s
Oct 02 12:53:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Oct 02 12:53:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1327837543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:31 compute-0 ceph-mon[73668]: pgmap v2393: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Oct 02 12:53:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/297649764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.5 MiB/s wr, 155 op/s
Oct 02 12:53:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:32.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:32 compute-0 nova_compute[256940]: 2025-10-02 12:53:32.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Oct 02 12:53:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Oct 02 12:53:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Oct 02 12:53:33 compute-0 ceph-mon[73668]: pgmap v2394: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.5 MiB/s wr, 155 op/s
Oct 02 12:53:33 compute-0 ceph-mon[73668]: osdmap e331: 3 total, 3 up, 3 in
Oct 02 12:53:33 compute-0 nova_compute[256940]: 2025-10-02 12:53:33.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 150 op/s
Oct 02 12:53:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:34 compute-0 ceph-mon[73668]: pgmap v2396: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 150 op/s
Oct 02 12:53:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Oct 02 12:53:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:36.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.122 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.122 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.138 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.244 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.245 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.253 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.253 2 INFO nova.compute.claims [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.371 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781070299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.836 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.844 2 DEBUG nova.compute.provider_tree [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.863 2 DEBUG nova.scheduler.client.report [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.895 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.896 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.940 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.941 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.957 2 INFO nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:53:36 compute-0 nova_compute[256940]: 2025-10-02 12:53:36.981 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.072 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.073 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.074 2 INFO nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Creating image(s)
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.098 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.121 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.145 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.148 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:37 compute-0 ceph-mon[73668]: pgmap v2397: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Oct 02 12:53:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2781070299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.180 2 DEBUG nova.policy [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00be63ea13c84e3d9419078865524099', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.219 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.219 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.220 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.221 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.250 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:37 compute-0 nova_compute[256940]: 2025-10-02 12:53:37.253 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e22a7944-9200-4204-a219-3f7bd720667b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 105 op/s
Oct 02 12:53:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:38.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.462 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e22a7944-9200-4204-a219-3f7bd720667b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.529 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] resizing rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.661 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Successfully created port: b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.803 2 DEBUG nova.objects.instance [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'migration_context' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.819 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.819 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Ensure instance console log exists: /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.819 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.820 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.820 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:38 compute-0 nova_compute[256940]: 2025-10-02 12:53:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:39 compute-0 ceph-mon[73668]: pgmap v2398: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 105 op/s
Oct 02 12:53:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:53:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 53K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1732 writes, 7969 keys, 1732 commit groups, 1.0 writes per commit group, ingest: 11.44 MB, 0.02 MB/s
                                           Interval WAL: 1733 writes, 1733 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.3      1.86              0.21        34    0.055       0      0       0.0       0.0
                                             L6      1/0   10.31 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6     68.8     58.0      5.49              0.94        33    0.166    206K    18K       0.0       0.0
                                            Sum      1/0   10.31 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     51.4     52.8      7.36              1.15        67    0.110    206K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     42.7     42.4      2.16              0.23        14    0.154     57K   3637       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     68.8     58.0      5.49              0.94        33    0.166    206K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     37.4      1.86              0.21        33    0.056       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.068, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.38 GB write, 0.09 MB/s write, 0.37 GB read, 0.09 MB/s read, 7.4 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 2.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 41.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000262 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2431,39.84 MB,13.1052%) FilterBlock(68,582.11 KB,0.186995%) IndexBlock(68,1011.17 KB,0.324826%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:53:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Oct 02 12:53:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Oct 02 12:53:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Oct 02 12:53:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Oct 02 12:53:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2661338451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:53:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006078751686674111 of space, bias 1.0, pg target 1.8236255060022333 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0050583865508096034 of space, bias 1.0, pg target 1.5124575786920715 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:53:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.614 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Successfully updated port: b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:53:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:40.641 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:40.642 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.642 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.642 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.643 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.826 2 DEBUG nova.compute.manager [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-changed-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.827 2 DEBUG nova.compute.manager [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Refreshing instance network info cache due to event network-changed-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.827 2 DEBUG oslo_concurrency.lockutils [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:40 compute-0 nova_compute[256940]: 2025-10-02 12:53:40.966 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:53:41 compute-0 ceph-mon[73668]: pgmap v2399: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Oct 02 12:53:41 compute-0 ceph-mon[73668]: osdmap e332: 3 total, 3 up, 3 in
Oct 02 12:53:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1513231900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 196 op/s
Oct 02 12:53:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:42.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.113 2 DEBUG nova.network.neutron [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.138 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.139 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Instance network_info: |[{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.139 2 DEBUG oslo_concurrency.lockutils [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.139 2 DEBUG nova.network.neutron [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Refreshing network info cache for port b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.143 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Start _get_guest_xml network_info=[{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.148 2 WARNING nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.155 2 DEBUG nova.virt.libvirt.host [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.156 2 DEBUG nova.virt.libvirt.host [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.160 2 DEBUG nova.virt.libvirt.host [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.161 2 DEBUG nova.virt.libvirt.host [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.162 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.162 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.163 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.163 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.163 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.163 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.164 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.164 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.164 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.164 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.165 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.165 2 DEBUG nova.virt.hardware [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.168 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:42 compute-0 podman[355221]: 2025-10-02 12:53:42.383637423 +0000 UTC m=+0.052395721 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:53:42 compute-0 podman[355222]: 2025-10-02 12:53:42.455381766 +0000 UTC m=+0.121291100 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:53:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:53:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245949945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.607 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.637 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:42 compute-0 nova_compute[256940]: 2025-10-02 12:53:42.641 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:42.645 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:53:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220270278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.088 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.090 2 DEBUG nova.virt.libvirt.vif [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-96780390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-96780390',id=144,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-a7fscw8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:37Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=e22a7944-9200-4204-a219-3f7bd720667b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.090 2 DEBUG nova.network.os_vif_util [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.091 2 DEBUG nova.network.os_vif_util [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.094 2 DEBUG nova.objects.instance [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.122 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <uuid>e22a7944-9200-4204-a219-3f7bd720667b</uuid>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <name>instance-00000090</name>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-96780390</nova:name>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:53:42</nova:creationTime>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <nova:port uuid="b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9">
Oct 02 12:53:43 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <system>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="serial">e22a7944-9200-4204-a219-3f7bd720667b</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="uuid">e22a7944-9200-4204-a219-3f7bd720667b</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </system>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <os>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </os>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <features>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </features>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e22a7944-9200-4204-a219-3f7bd720667b_disk">
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e22a7944-9200-4204-a219-3f7bd720667b_disk.config">
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </source>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:53:43 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:28:37:22"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <target dev="tapb9f9a641-2c"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/console.log" append="off"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <video>
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </video>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:53:43 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:53:43 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:53:43 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:53:43 compute-0 nova_compute[256940]: </domain>
Oct 02 12:53:43 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.123 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Preparing to wait for external event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.123 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.124 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.124 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.125 2 DEBUG nova.virt.libvirt.vif [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-96780390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-96780390',id=144,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-a7fscw8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:37Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=e22a7944-9200-4204-a219-3f7bd720667b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.126 2 DEBUG nova.network.os_vif_util [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.127 2 DEBUG nova.network.os_vif_util [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.127 2 DEBUG os_vif [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9a641-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9f9a641-2c, col_values=(('external_ids', {'iface-id': 'b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:37:22', 'vm-uuid': 'e22a7944-9200-4204-a219-3f7bd720667b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:43 compute-0 NetworkManager[44981]: <info>  [1759409623.1419] manager: (tapb9f9a641-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.152 2 INFO os_vif [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c')
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.351 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.352 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.352 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:28:37:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.354 2 INFO nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Using config drive
Oct 02 12:53:43 compute-0 nova_compute[256940]: 2025-10-02 12:53:43.385 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:43 compute-0 ceph-mon[73668]: pgmap v2401: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 196 op/s
Oct 02 12:53:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4245949945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/220270278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Oct 02 12:53:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:53:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:44.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:53:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Oct 02 12:53:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Oct 02 12:53:44 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.585 2 INFO nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Creating config drive at /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.591 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmp6ryjbg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.626 2 DEBUG nova.network.neutron [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updated VIF entry in instance network info cache for port b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.628 2 DEBUG nova.network.neutron [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.644 2 DEBUG oslo_concurrency.lockutils [req-70c2317a-8daf-4cd7-9db7-2204f0b12e0f req-558244ef-d2c8-49f7-a920-944cdc1c0ebb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.731 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmp6ryjbg" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.768 2 DEBUG nova.storage.rbd_utils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image e22a7944-9200-4204-a219-3f7bd720667b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:44 compute-0 nova_compute[256940]: 2025-10-02 12:53:44.772 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config e22a7944-9200-4204-a219-3f7bd720667b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.361 2 DEBUG oslo_concurrency.processutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config e22a7944-9200-4204-a219-3f7bd720667b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.363 2 INFO nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Deleting local config drive /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b/disk.config because it was imported into RBD.
Oct 02 12:53:45 compute-0 kernel: tapb9f9a641-2c: entered promiscuous mode
Oct 02 12:53:45 compute-0 ovn_controller[148123]: 2025-10-02T12:53:45Z|00723|binding|INFO|Claiming lport b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 for this chassis.
Oct 02 12:53:45 compute-0 ovn_controller[148123]: 2025-10-02T12:53:45Z|00724|binding|INFO|b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9: Claiming fa:16:3e:28:37:22 10.100.0.3
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.4261] manager: (tapb9f9a641-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.448 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:37:22 10.100.0.3'], port_security=['fa:16:3e:28:37:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e22a7944-9200-4204-a219-3f7bd720667b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.450 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.451 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct 02 12:53:45 compute-0 systemd-machined[210927]: New machine qemu-76-instance-00000090.
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.465 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7abfb815-abcd-4c3d-bbd4-3079136c76d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.466 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0ec11e-01 in ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.469 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0ec11e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.469 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e15dd157-ad50-4583-9d60-9665b735a1db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.470 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4135861f-7189-4661-8058-c95b08dd7fa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 systemd-udevd[355379]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.483 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[82f117a9-7ac3-4e60-81a8-83132995fa6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.4962] device (tapb9f9a641-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.4973] device (tapb9f9a641-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000090.
Oct 02 12:53:45 compute-0 ovn_controller[148123]: 2025-10-02T12:53:45Z|00725|binding|INFO|Setting lport b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 ovn-installed in OVS
Oct 02 12:53:45 compute-0 ovn_controller[148123]: 2025-10-02T12:53:45Z|00726|binding|INFO|Setting lport b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 up in Southbound
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.512 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd998b2-b472-4876-954b-c45006bc4adc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.542 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[56d1d55c-090b-4259-950d-1cd49aeeaa86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.5497] manager: (tap7b0ec11e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.548 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[319137a2-4a21-4ffb-b1aa-8c1bcca02f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ceph-mon[73668]: pgmap v2402: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Oct 02 12:53:45 compute-0 ceph-mon[73668]: osdmap e333: 3 total, 3 up, 3 in
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.570347) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625570384, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 255, "total_data_size": 1942738, "memory_usage": 1966112, "flush_reason": "Manual Compaction"}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625587664, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 1908022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52486, "largest_seqno": 53738, "table_properties": {"data_size": 1901941, "index_size": 3348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13366, "raw_average_key_size": 20, "raw_value_size": 1889629, "raw_average_value_size": 2902, "num_data_blocks": 147, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409528, "oldest_key_time": 1759409528, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 17401 microseconds, and 4654 cpu microseconds.
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.587738) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 1908022 bytes OK
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.587770) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590796) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590816) EVENT_LOG_v1 {"time_micros": 1759409625590809, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590843) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1937084, prev total WAL file size 1937084, number of live WAL files 2.
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.591852) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(1863KB)], [116(10MB)]
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625591904, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12718429, "oldest_snapshot_seqno": -1}
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.596 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2428ec24-1eaf-45c1-973e-3571a93d2c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.603 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2848b01e-9ad2-401e-a25a-4381e6f3838c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.6304] device (tap7b0ec11e-00): carrier: link connected
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.635 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[66157a35-4cdb-4b3a-b082-ec9bb71a6d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8163 keys, 10743098 bytes, temperature: kUnknown
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625654445, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10743098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10690147, "index_size": 31463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 211858, "raw_average_key_size": 25, "raw_value_size": 10546447, "raw_average_value_size": 1291, "num_data_blocks": 1228, "num_entries": 8163, "num_filter_entries": 8163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.654788) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10743098 bytes
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.656271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 8687, records dropped: 524 output_compression: NoCompression
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.656287) EVENT_LOG_v1 {"time_micros": 1759409625656279, "job": 70, "event": "compaction_finished", "compaction_time_micros": 62607, "compaction_time_cpu_micros": 24102, "output_level": 6, "num_output_files": 1, "total_output_size": 10743098, "num_input_records": 8687, "num_output_records": 8163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625656979, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625658958, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.591721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.659004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.659010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.659011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.659012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:53:45.659014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.659 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[34f6a07d-0991-4936-9666-93c497c5e226]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752111, 'reachable_time': 33746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355411, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.675 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60cc3d76-c04c-4762-b0c5-f75924bacf63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:f76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752111, 'tstamp': 752111}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355412, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.694 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[66aaffb9-5c28-4ab8-ade3-6c8c8bc9454f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752111, 'reachable_time': 33746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355413, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.731 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22aef5d8-4573-4002-af89-dba4c30ce15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.797 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[45736be0-4e39-4654-ba95-196ae893129b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.799 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.799 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.800 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:45 compute-0 kernel: tap7b0ec11e-00: entered promiscuous mode
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 NetworkManager[44981]: <info>  [1759409625.8024] manager: (tap7b0ec11e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.805 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 ovn_controller[148123]: 2025-10-02T12:53:45Z|00727|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.822 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.823 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3d0c0b-94c9-4f45-aa88-33df53ddf370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.824 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:53:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:53:45.824 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'env', 'PROCESS_TAG=haproxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:53:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Oct 02 12:53:45 compute-0 sudo[355465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:45 compute-0 sudo[355465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:45 compute-0 sudo[355465]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.955 2 DEBUG nova.compute.manager [req-931aafd2-bddf-4084-9f1a-ac83316c8557 req-0d2b70bb-b4f3-4d9e-8d42-96387e4aec8b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.955 2 DEBUG oslo_concurrency.lockutils [req-931aafd2-bddf-4084-9f1a-ac83316c8557 req-0d2b70bb-b4f3-4d9e-8d42-96387e4aec8b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.956 2 DEBUG oslo_concurrency.lockutils [req-931aafd2-bddf-4084-9f1a-ac83316c8557 req-0d2b70bb-b4f3-4d9e-8d42-96387e4aec8b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.956 2 DEBUG oslo_concurrency.lockutils [req-931aafd2-bddf-4084-9f1a-ac83316c8557 req-0d2b70bb-b4f3-4d9e-8d42-96387e4aec8b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:45 compute-0 nova_compute[256940]: 2025-10-02 12:53:45.956 2 DEBUG nova.compute.manager [req-931aafd2-bddf-4084-9f1a-ac83316c8557 req-0d2b70bb-b4f3-4d9e-8d42-96387e4aec8b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Processing event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:53:46 compute-0 sudo[355490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:46 compute-0 sudo[355490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:46 compute-0 sudo[355490]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:46.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:46 compute-0 podman[355536]: 2025-10-02 12:53:46.232230044 +0000 UTC m=+0.073728605 container create 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.265 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.266 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409626.2653387, e22a7944-9200-4204-a219-3f7bd720667b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.267 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] VM Started (Lifecycle Event)
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.269 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.272 2 INFO nova.virt.libvirt.driver [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Instance spawned successfully.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.272 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:53:46 compute-0 podman[355536]: 2025-10-02 12:53:46.183199352 +0000 UTC m=+0.024697933 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:53:46 compute-0 systemd[1]: Started libpod-conmon-266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46.scope.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.301 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.303 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.304 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.304 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.304 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.305 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.307 2 DEBUG nova.virt.libvirt.driver [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40ab198a6fd60c5f020d5c05aec05ec2fbaecaeed1f470ec6b5f6231fcd1e0d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.315 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:53:46 compute-0 podman[355536]: 2025-10-02 12:53:46.326779509 +0000 UTC m=+0.168278090 container init 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:53:46 compute-0 podman[355536]: 2025-10-02 12:53:46.332041475 +0000 UTC m=+0.173540036 container start 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.354 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.355 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409626.266387, e22a7944-9200-4204-a219-3f7bd720667b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.355 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] VM Paused (Lifecycle Event)
Oct 02 12:53:46 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [NOTICE]   (355555) : New worker (355557) forked
Oct 02 12:53:46 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [NOTICE]   (355555) : Loading success.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.387 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.391 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409626.268675, e22a7944-9200-4204-a219-3f7bd720667b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.391 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] VM Resumed (Lifecycle Event)
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.400 2 INFO nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Took 9.33 seconds to spawn the instance on the hypervisor.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.401 2 DEBUG nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.415 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.418 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.442 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.470 2 INFO nova.compute.manager [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Took 10.27 seconds to build instance.
Oct 02 12:53:46 compute-0 nova_compute[256940]: 2025-10-02 12:53:46.488 2 DEBUG oslo_concurrency.lockutils [None req-e44f7026-d9b3-4935-ad4f-2148621d170a 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:47 compute-0 nova_compute[256940]: 2025-10-02 12:53:47.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-0 ceph-mon[73668]: pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Oct 02 12:53:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Oct 02 12:53:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Oct 02 12:53:47 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Oct 02 12:53:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct 02 12:53:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.058 2 DEBUG nova.compute.manager [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.059 2 DEBUG oslo_concurrency.lockutils [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.059 2 DEBUG oslo_concurrency.lockutils [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.059 2 DEBUG oslo_concurrency.lockutils [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.060 2 DEBUG nova.compute.manager [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] No waiting events found dispatching network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.060 2 WARNING nova.compute.manager [req-c46499c0-ddd6-4e79-bdd5-7da0300bd3d5 req-8ff3c8e5-ddc2-4c10-8dcb-7287c21f54bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received unexpected event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 for instance with vm_state active and task_state None.
Oct 02 12:53:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:48.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:48 compute-0 nova_compute[256940]: 2025-10-02 12:53:48.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:48 compute-0 ceph-mon[73668]: osdmap e334: 3 total, 3 up, 3 in
Oct 02 12:53:48 compute-0 ceph-mon[73668]: pgmap v2406: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct 02 12:53:49 compute-0 nova_compute[256940]: 2025-10-02 12:53:49.090 2 DEBUG nova.compute.manager [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:49 compute-0 nova_compute[256940]: 2025-10-02 12:53:49.297 2 INFO nova.compute.manager [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] instance snapshotting
Oct 02 12:53:49 compute-0 nova_compute[256940]: 2025-10-02 12:53:49.622 2 INFO nova.virt.libvirt.driver [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Beginning live snapshot process
Oct 02 12:53:49 compute-0 nova_compute[256940]: 2025-10-02 12:53:49.801 2 DEBUG nova.virt.libvirt.imagebackend [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:53:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 166 op/s
Oct 02 12:53:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:50 compute-0 nova_compute[256940]: 2025-10-02 12:53:50.063 2 DEBUG nova.storage.rbd_utils [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] creating snapshot(6c2038b2f1ca456b947db22b967049e2) on rbd image(e22a7944-9200-4204-a219-3f7bd720667b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:53:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Oct 02 12:53:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Oct 02 12:53:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Oct 02 12:53:51 compute-0 ceph-mon[73668]: pgmap v2407: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 166 op/s
Oct 02 12:53:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3360298542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 24 KiB/s wr, 250 op/s
Oct 02 12:53:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:52.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:52 compute-0 nova_compute[256940]: 2025-10-02 12:53:52.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:52 compute-0 nova_compute[256940]: 2025-10-02 12:53:52.406 2 DEBUG nova.storage.rbd_utils [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] cloning vms/e22a7944-9200-4204-a219-3f7bd720667b_disk@6c2038b2f1ca456b947db22b967049e2 to images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:53:52 compute-0 ceph-mon[73668]: osdmap e335: 3 total, 3 up, 3 in
Oct 02 12:53:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Oct 02 12:53:52 compute-0 nova_compute[256940]: 2025-10-02 12:53:52.750 2 DEBUG nova.storage.rbd_utils [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] flattening images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:53:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Oct 02 12:53:52 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Oct 02 12:53:53 compute-0 nova_compute[256940]: 2025-10-02 12:53:53.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:53 compute-0 ceph-mon[73668]: pgmap v2409: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 24 KiB/s wr, 250 op/s
Oct 02 12:53:53 compute-0 ceph-mon[73668]: osdmap e336: 3 total, 3 up, 3 in
Oct 02 12:53:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.9 KiB/s wr, 181 op/s
Oct 02 12:53:53 compute-0 nova_compute[256940]: 2025-10-02 12:53:53.965 2 DEBUG nova.storage.rbd_utils [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] removing snapshot(6c2038b2f1ca456b947db22b967049e2) on rbd image(e22a7944-9200-4204-a219-3f7bd720667b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:53:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:54.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:54.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Oct 02 12:53:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Oct 02 12:53:54 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:53:55 compute-0 nova_compute[256940]: 2025-10-02 12:53:55.350 2 DEBUG nova.storage.rbd_utils [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] creating snapshot(snap) on rbd image(38d5e3ab-8ce6-4053-8f4a-2a76356bf535) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Oct 02 12:53:55 compute-0 ceph-mon[73668]: pgmap v2411: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.9 KiB/s wr, 181 op/s
Oct 02 12:53:55 compute-0 ceph-mon[73668]: osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:53:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 172 op/s
Oct 02 12:53:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Oct 02 12:53:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Oct 02 12:53:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:56.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:56 compute-0 podman[355713]: 2025-10-02 12:53:56.403327061 +0000 UTC m=+0.066038756 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:53:56 compute-0 podman[355712]: 2025-10-02 12:53:56.404206523 +0000 UTC m=+0.066723403 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 02 12:53:57 compute-0 ceph-mon[73668]: pgmap v2413: 305 pgs: 305 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 172 op/s
Oct 02 12:53:57 compute-0 ceph-mon[73668]: osdmap e338: 3 total, 3 up, 3 in
Oct 02 12:53:57 compute-0 nova_compute[256940]: 2025-10-02 12:53:57.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Oct 02 12:53:57 compute-0 nova_compute[256940]: 2025-10-02 12:53:57.950 2 INFO nova.virt.libvirt.driver [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Snapshot image upload complete
Oct 02 12:53:57 compute-0 nova_compute[256940]: 2025-10-02 12:53:57.951 2 INFO nova.compute.manager [None req-8fa4b199-e295-4b1b-80b3-fe6e6d722819 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Took 8.65 seconds to snapshot the instance on the hypervisor.
Oct 02 12:53:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:53:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:58.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:58 compute-0 nova_compute[256940]: 2025-10-02 12:53:58.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:53:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:53:59 compute-0 ceph-mon[73668]: pgmap v2415: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Oct 02 12:53:59 compute-0 ovn_controller[148123]: 2025-10-02T12:53:59Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:37:22 10.100.0.3
Oct 02 12:53:59 compute-0 ovn_controller[148123]: 2025-10-02T12:53:59Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:37:22 10.100.0.3
Oct 02 12:53:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Oct 02 12:54:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:00.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:01 compute-0 ceph-mon[73668]: pgmap v2416: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Oct 02 12:54:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2879670665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4268433024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 236 op/s
Oct 02 12:54:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:02.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:02 compute-0 nova_compute[256940]: 2025-10-02 12:54:02.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Oct 02 12:54:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1477771253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-0 nova_compute[256940]: 2025-10-02 12:54:03.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Oct 02 12:54:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Oct 02 12:54:03 compute-0 ceph-mon[73668]: pgmap v2417: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 236 op/s
Oct 02 12:54:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2957687789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-0 ceph-mon[73668]: osdmap e339: 3 total, 3 up, 3 in
Oct 02 12:54:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 171 op/s
Oct 02 12:54:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:04.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:04.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:04 compute-0 ceph-mon[73668]: pgmap v2419: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 171 op/s
Oct 02 12:54:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 170 op/s
Oct 02 12:54:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:06.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:06 compute-0 sudo[355756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:06 compute-0 sudo[355756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:06 compute-0 sudo[355756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:06 compute-0 sudo[355781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:06 compute-0 sudo[355781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:06 compute-0 sudo[355781]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:54:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:54:07 compute-0 nova_compute[256940]: 2025-10-02 12:54:07.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-0 ceph-mon[73668]: pgmap v2420: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 170 op/s
Oct 02 12:54:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3687837290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1284789999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/651591536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 6.8 MiB/s wr, 188 op/s
Oct 02 12:54:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:08 compute-0 nova_compute[256940]: 2025-10-02 12:54:08.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3023350745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-0 nova_compute[256940]: 2025-10-02 12:54:09.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:09 compute-0 ceph-mon[73668]: pgmap v2421: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 6.8 MiB/s wr, 188 op/s
Oct 02 12:54:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2532031233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3621289055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:54:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:10.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/498936511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2341730290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4040131513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:11 compute-0 ceph-mon[73668]: pgmap v2422: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:54:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2087824584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Oct 02 12:54:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.237 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033829171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.719 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:12 compute-0 podman[355833]: 2025-10-02 12:54:12.832175982 +0000 UTC m=+0.058866119 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.838 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.839 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:12 compute-0 podman[355835]: 2025-10-02 12:54:12.868091135 +0000 UTC m=+0.092292487 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.995 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.996 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4103MB free_disk=20.900859832763672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.996 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:12 compute-0 nova_compute[256940]: 2025-10-02 12:54:12.996 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.077 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e22a7944-9200-4204-a219-3f7bd720667b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.078 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.078 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.221 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:13 compute-0 ceph-mon[73668]: pgmap v2423: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Oct 02 12:54:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3033829171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352908275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.668 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.674 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.695 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.732 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:54:13 compute-0 nova_compute[256940]: 2025-10-02 12:54:13.733 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 194 op/s
Oct 02 12:54:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1352908275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:14 compute-0 nova_compute[256940]: 2025-10-02 12:54:14.733 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:14 compute-0 nova_compute[256940]: 2025-10-02 12:54:14.733 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:14 compute-0 nova_compute[256940]: 2025-10-02 12:54:14.734 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:14 compute-0 nova_compute[256940]: 2025-10-02 12:54:14.734 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:54:15 compute-0 ceph-mon[73668]: pgmap v2424: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 194 op/s
Oct 02 12:54:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Oct 02 12:54:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:16.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:16 compute-0 nova_compute[256940]: 2025-10-02 12:54:16.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:17 compute-0 ceph-mon[73668]: pgmap v2425: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Oct 02 12:54:17 compute-0 nova_compute[256940]: 2025-10-02 12:54:17.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:17 compute-0 nova_compute[256940]: 2025-10-02 12:54:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:17 compute-0 nova_compute[256940]: 2025-10-02 12:54:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 196 op/s
Oct 02 12:54:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:18.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:18 compute-0 nova_compute[256940]: 2025-10-02 12:54:18.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:19 compute-0 ceph-mon[73668]: pgmap v2426: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 196 op/s
Oct 02 12:54:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 157 op/s
Oct 02 12:54:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:21 compute-0 nova_compute[256940]: 2025-10-02 12:54:21.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 852 KiB/s wr, 172 op/s
Oct 02 12:54:21 compute-0 ceph-mon[73668]: pgmap v2427: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 157 op/s
Oct 02 12:54:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:22 compute-0 sudo[355906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:22 compute-0 sudo[355906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-0 sudo[355906]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:22 compute-0 sudo[355931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:22 compute-0 sudo[355931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-0 sudo[355931]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-0 nova_compute[256940]: 2025-10-02 12:54:22.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:22 compute-0 sudo[355956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:22 compute-0 sudo[355956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-0 sudo[355956]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-0 sudo[355981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:54:22 compute-0 sudo[355981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:22 compute-0 sudo[355981]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:54:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:54:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:54:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:54:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b59d8dcb-16a1-4a43-bb40-8b9474fc64e5 does not exist
Oct 02 12:54:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 64dde99e-6a80-47e3-a007-3a2825813597 does not exist
Oct 02 12:54:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c3d89cea-162b-45e3-80e7-e07025207790 does not exist
Oct 02 12:54:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:54:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:54:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:54:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:23 compute-0 sudo[356040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:23 compute-0 sudo[356040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:23 compute-0 sudo[356040]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:23 compute-0 sudo[356065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:23 compute-0 sudo[356065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:23 compute-0 sudo[356065]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:54:23 compute-0 sudo[356090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:23 compute-0 sudo[356090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:23 compute-0 sudo[356090]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:23 compute-0 sudo[356115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:54:23 compute-0 sudo[356115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.413 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.413 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.414 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:54:23 compute-0 nova_compute[256940]: 2025-10-02 12:54:23.414 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.658520957 +0000 UTC m=+0.108541548 container create 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.573212353 +0000 UTC m=+0.023232964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:23 compute-0 systemd[1]: Started libpod-conmon-391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4.scope.
Oct 02 12:54:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:23 compute-0 ceph-mon[73668]: pgmap v2428: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 852 KiB/s wr, 172 op/s
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:54:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.937993581 +0000 UTC m=+0.388014192 container init 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:54:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 827 KiB/s wr, 70 op/s
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.946128933 +0000 UTC m=+0.396149524 container start 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:54:23 compute-0 objective_hugle[356195]: 167 167
Oct 02 12:54:23 compute-0 systemd[1]: libpod-391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4.scope: Deactivated successfully.
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.972515977 +0000 UTC m=+0.422536568 container attach 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:54:23 compute-0 podman[356179]: 2025-10-02 12:54:23.973823691 +0000 UTC m=+0.423844292 container died 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:54:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1512b61f790ad9e54aced9601758574a3e57f2ccb8746d2ecfc376453a8a03-merged.mount: Deactivated successfully.
Oct 02 12:54:24 compute-0 podman[356179]: 2025-10-02 12:54:24.224353155 +0000 UTC m=+0.674373746 container remove 391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:54:24 compute-0 systemd[1]: libpod-conmon-391d0bd26484ead6f3cfacf0dd524269134153997bce8a00cb1f5429e28330f4.scope: Deactivated successfully.
Oct 02 12:54:24 compute-0 podman[356220]: 2025-10-02 12:54:24.405384964 +0000 UTC m=+0.042838733 container create b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:54:24 compute-0 systemd[1]: Started libpod-conmon-b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5.scope.
Oct 02 12:54:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:24 compute-0 podman[356220]: 2025-10-02 12:54:24.386648628 +0000 UTC m=+0.024102427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:24 compute-0 podman[356220]: 2025-10-02 12:54:24.503213043 +0000 UTC m=+0.140666842 container init b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:54:24 compute-0 podman[356220]: 2025-10-02 12:54:24.514975999 +0000 UTC m=+0.152429768 container start b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:54:24 compute-0 podman[356220]: 2025-10-02 12:54:24.520884812 +0000 UTC m=+0.158338601 container attach b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:54:25 compute-0 ceph-mon[73668]: pgmap v2429: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 827 KiB/s wr, 70 op/s
Oct 02 12:54:25 compute-0 determined_grothendieck[356237]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:54:25 compute-0 determined_grothendieck[356237]: --> relative data size: 1.0
Oct 02 12:54:25 compute-0 determined_grothendieck[356237]: --> All data devices are unavailable
Oct 02 12:54:25 compute-0 nova_compute[256940]: 2025-10-02 12:54:25.462 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:25 compute-0 systemd[1]: libpod-b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5.scope: Deactivated successfully.
Oct 02 12:54:25 compute-0 podman[356220]: 2025-10-02 12:54:25.477155645 +0000 UTC m=+1.114609444 container died b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 12:54:25 compute-0 nova_compute[256940]: 2025-10-02 12:54:25.485 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:25 compute-0 nova_compute[256940]: 2025-10-02 12:54:25.485 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e36dcd757448ffb158810a821be998547c0ddfefbe7b9e2ea762787d4263dc-merged.mount: Deactivated successfully.
Oct 02 12:54:25 compute-0 podman[356220]: 2025-10-02 12:54:25.815675632 +0000 UTC m=+1.453129491 container remove b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:54:25 compute-0 systemd[1]: libpod-conmon-b292630b7f326225a6e2903a749818c362cdc9d30e474b76c45faefbe0d370a5.scope: Deactivated successfully.
Oct 02 12:54:25 compute-0 sudo[356115]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:25 compute-0 sudo[356268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:25 compute-0 sudo[356268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:25 compute-0 sudo[356268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 286 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Oct 02 12:54:25 compute-0 sudo[356293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:25 compute-0 sudo[356293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:25 compute-0 sudo[356293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:26 compute-0 sudo[356318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:26 compute-0 sudo[356318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:26 compute-0 sudo[356318]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:26.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:26 compute-0 sudo[356343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:54:26 compute-0 sudo[356343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:26 compute-0 sudo[356368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:26 compute-0 sudo[356368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:26 compute-0 sudo[356368]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:26 compute-0 sudo[356412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:26 compute-0 sudo[356412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:26 compute-0 sudo[356412]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.370238107 +0000 UTC m=+0.019730313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.492223844 +0000 UTC m=+0.141716030 container create 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:26.491 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:26.492 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:26.493 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:26 compute-0 systemd[1]: Started libpod-conmon-7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a.scope.
Oct 02 12:54:26 compute-0 podman[356471]: 2025-10-02 12:54:26.606066669 +0000 UTC m=+0.068709435 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:54:26 compute-0 podman[356472]: 2025-10-02 12:54:26.606294265 +0000 UTC m=+0.068854049 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.651697953 +0000 UTC m=+0.301190159 container init 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.66005323 +0000 UTC m=+0.309545416 container start 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:54:26 compute-0 naughty_shannon[356505]: 167 167
Oct 02 12:54:26 compute-0 systemd[1]: libpod-7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a.scope: Deactivated successfully.
Oct 02 12:54:26 compute-0 conmon[356505]: conmon 7797a117d01f764bd4d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a.scope/container/memory.events
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.872750801 +0000 UTC m=+0.522243077 container attach 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:54:26 compute-0 podman[356457]: 2025-10-02 12:54:26.874810065 +0000 UTC m=+0.524302271 container died 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7c2c7ed4e7604fee19112e2739723a64c94a8493bdd14c562788d96215dd75-merged.mount: Deactivated successfully.
Oct 02 12:54:27 compute-0 podman[356457]: 2025-10-02 12:54:27.177326006 +0000 UTC m=+0.826818182 container remove 7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:54:27 compute-0 systemd[1]: libpod-conmon-7797a117d01f764bd4d4e21068feda1db97919cca5aa48d7f2dfecdf6c1b3d0a.scope: Deactivated successfully.
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:27 compute-0 ceph-mon[73668]: pgmap v2430: 305 pgs: 305 active+clean; 286 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Oct 02 12:54:27 compute-0 podman[356541]: 2025-10-02 12:54:27.326302653 +0000 UTC m=+0.023653125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.555 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.555 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:27 compute-0 podman[356541]: 2025-10-02 12:54:27.579580388 +0000 UTC m=+0.276930850 container create 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.584 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.654 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.654 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.663 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.664 2 INFO nova.compute.claims [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:54:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:27 compute-0 systemd[1]: Started libpod-conmon-205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23.scope.
Oct 02 12:54:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd309a3c0d9a293b6f8dfaaaba2038587ce409429d9864b00ef26654170fb32b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd309a3c0d9a293b6f8dfaaaba2038587ce409429d9864b00ef26654170fb32b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd309a3c0d9a293b6f8dfaaaba2038587ce409429d9864b00ef26654170fb32b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd309a3c0d9a293b6f8dfaaaba2038587ce409429d9864b00ef26654170fb32b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:27 compute-0 podman[356541]: 2025-10-02 12:54:27.79076991 +0000 UTC m=+0.488120382 container init 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 12:54:27 compute-0 podman[356541]: 2025-10-02 12:54:27.800905753 +0000 UTC m=+0.498256215 container start 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:27 compute-0 nova_compute[256940]: 2025-10-02 12:54:27.808 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:27 compute-0 podman[356541]: 2025-10-02 12:54:27.885516119 +0000 UTC m=+0.582866581 container attach 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:54:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 4.1 MiB/s wr, 90 op/s
Oct 02 12:54:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:28.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280822267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.239 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.247 2 DEBUG nova.compute.provider_tree [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.263 2 DEBUG nova.scheduler.client.report [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.284 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.326 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "58c9377d-a280-4dd5-b479-074042fe3a93" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.327 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "58c9377d-a280-4dd5-b479-074042fe3a93" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.336 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "58c9377d-a280-4dd5-b479-074042fe3a93" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.337 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.376 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.376 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.396 2 INFO nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.415 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4280822267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]: {
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:     "1": [
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:         {
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "devices": [
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "/dev/loop3"
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             ],
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "lv_name": "ceph_lv0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "lv_size": "7511998464",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "name": "ceph_lv0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "tags": {
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.cluster_name": "ceph",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.crush_device_class": "",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.encrypted": "0",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.osd_id": "1",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.type": "block",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:                 "ceph.vdo": "0"
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             },
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "type": "block",
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:             "vg_name": "ceph_vg0"
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:         }
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]:     ]
Oct 02 12:54:28 compute-0 nervous_mcnulty[356557]: }
Oct 02 12:54:28 compute-0 systemd[1]: libpod-205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23.scope: Deactivated successfully.
Oct 02 12:54:28 compute-0 podman[356541]: 2025-10-02 12:54:28.656043511 +0000 UTC m=+1.353394003 container died 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.673 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.675 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.676 2 INFO nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Creating image(s)
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.724 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.763 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.797 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd309a3c0d9a293b6f8dfaaaba2038587ce409429d9864b00ef26654170fb32b-merged.mount: Deactivated successfully.
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.806 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:54:28
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Oct 02 12:54:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.893 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.895 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.896 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.896 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.926 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.931 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:28 compute-0 nova_compute[256940]: 2025-10-02 12:54:28.991 2 DEBUG nova.policy [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27c79bf1a6974758951669da9f6801c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd532dce9bec34631ae80bf8dea164f24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:54:29 compute-0 podman[356541]: 2025-10-02 12:54:29.021834446 +0000 UTC m=+1.719184908 container remove 205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 12:54:29 compute-0 systemd[1]: libpod-conmon-205a752d2d65c9240b1c1442c040e32bce95838d2a965f2031e8b19e3f0d3c23.scope: Deactivated successfully.
Oct 02 12:54:29 compute-0 sudo[356343]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:29 compute-0 sudo[356694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:29 compute-0 sudo[356694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:29 compute-0 sudo[356694]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:29 compute-0 sudo[356722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:29 compute-0 sudo[356722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:29 compute-0 sudo[356722]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:29 compute-0 sudo[356747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:29 compute-0 sudo[356747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:29 compute-0 sudo[356747]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:29 compute-0 sudo[356772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:54:29 compute-0 sudo[356772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:54:29 compute-0 ceph-mon[73668]: pgmap v2431: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 4.1 MiB/s wr, 90 op/s
Oct 02 12:54:29 compute-0 nova_compute[256940]: 2025-10-02 12:54:29.619 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:29 compute-0 nova_compute[256940]: 2025-10-02 12:54:29.726 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] resizing rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:54:29 compute-0 podman[356836]: 2025-10-02 12:54:29.671588902 +0000 UTC m=+0.031650243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:29 compute-0 podman[356836]: 2025-10-02 12:54:29.790269643 +0000 UTC m=+0.150330954 container create 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:54:29 compute-0 nova_compute[256940]: 2025-10-02 12:54:29.858 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Successfully created port: 4cafcc32-a350-4cae-bac5-13455086f2d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:54:29 compute-0 systemd[1]: Started libpod-conmon-9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571.scope.
Oct 02 12:54:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 564 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Oct 02 12:54:30 compute-0 podman[356836]: 2025-10-02 12:54:30.031861794 +0000 UTC m=+0.391923125 container init 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 12:54:30 compute-0 podman[356836]: 2025-10-02 12:54:30.040864698 +0000 UTC m=+0.400925999 container start 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 12:54:30 compute-0 crazy_greider[356907]: 167 167
Oct 02 12:54:30 compute-0 systemd[1]: libpod-9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571.scope: Deactivated successfully.
Oct 02 12:54:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:30 compute-0 podman[356836]: 2025-10-02 12:54:30.08604037 +0000 UTC m=+0.446101681 container attach 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:54:30 compute-0 podman[356836]: 2025-10-02 12:54:30.08680026 +0000 UTC m=+0.446861571 container died 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:54:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-011a4479ba17ef85c454ed9c02ebe39252afa64960487226f182695299e555d5-merged.mount: Deactivated successfully.
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.431 2 DEBUG nova.objects.instance [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f7bf190-7797-46d6-936a-b9eb2e85fba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.445 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.446 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Ensure instance console log exists: /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.447 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.447 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:30 compute-0 nova_compute[256940]: 2025-10-02 12:54:30.447 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:30 compute-0 podman[356836]: 2025-10-02 12:54:30.793564096 +0000 UTC m=+1.153625407 container remove 9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:54:30 compute-0 systemd[1]: libpod-conmon-9ec1e27de85dff17468e65b139fd053ad5047b632ed916f03ad3afdbfe0f8571.scope: Deactivated successfully.
Oct 02 12:54:30 compute-0 ceph-mon[73668]: pgmap v2432: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 564 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Oct 02 12:54:31 compute-0 podman[356952]: 2025-10-02 12:54:31.021477692 +0000 UTC m=+0.091761723 container create d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:54:31 compute-0 podman[356952]: 2025-10-02 12:54:30.956548917 +0000 UTC m=+0.026832968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:54:31 compute-0 systemd[1]: Started libpod-conmon-d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24.scope.
Oct 02 12:54:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902cf643ade7eb5ed905d72b5bd54904095012beee255319fb30c8f5f3a58e48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902cf643ade7eb5ed905d72b5bd54904095012beee255319fb30c8f5f3a58e48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902cf643ade7eb5ed905d72b5bd54904095012beee255319fb30c8f5f3a58e48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902cf643ade7eb5ed905d72b5bd54904095012beee255319fb30c8f5f3a58e48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:31 compute-0 podman[356952]: 2025-10-02 12:54:31.227547011 +0000 UTC m=+0.297831072 container init d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:54:31 compute-0 podman[356952]: 2025-10-02 12:54:31.236203536 +0000 UTC m=+0.306487567 container start d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:54:31 compute-0 podman[356952]: 2025-10-02 12:54:31.256651577 +0000 UTC m=+0.326935608 container attach d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.319 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Successfully updated port: 4cafcc32-a350-4cae-bac5-13455086f2d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.335 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.335 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquired lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.335 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.413 2 DEBUG nova.compute.manager [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-changed-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.413 2 DEBUG nova.compute.manager [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Refreshing instance network info cache due to event network-changed-4cafcc32-a350-4cae-bac5-13455086f2d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.413 2 DEBUG oslo_concurrency.lockutils [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:31 compute-0 nova_compute[256940]: 2025-10-02 12:54:31.920 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:54:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 6.0 MiB/s wr, 150 op/s
Oct 02 12:54:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:32 compute-0 stoic_williams[356969]: {
Oct 02 12:54:32 compute-0 stoic_williams[356969]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:54:32 compute-0 stoic_williams[356969]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:54:32 compute-0 stoic_williams[356969]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:54:32 compute-0 stoic_williams[356969]:         "osd_id": 1,
Oct 02 12:54:32 compute-0 stoic_williams[356969]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:54:32 compute-0 stoic_williams[356969]:         "type": "bluestore"
Oct 02 12:54:32 compute-0 stoic_williams[356969]:     }
Oct 02 12:54:32 compute-0 stoic_williams[356969]: }
Oct 02 12:54:32 compute-0 systemd[1]: libpod-d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24.scope: Deactivated successfully.
Oct 02 12:54:32 compute-0 podman[356990]: 2025-10-02 12:54:32.146230489 +0000 UTC m=+0.022709911 container died d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 12:54:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:32.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-902cf643ade7eb5ed905d72b5bd54904095012beee255319fb30c8f5f3a58e48-merged.mount: Deactivated successfully.
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:32 compute-0 podman[356990]: 2025-10-02 12:54:32.390688973 +0000 UTC m=+0.267168385 container remove d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_williams, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:54:32 compute-0 systemd[1]: libpod-conmon-d1bc863642434f218dabed962079c198c6f49ac04c5aa209de9c91abe3defb24.scope: Deactivated successfully.
Oct 02 12:54:32 compute-0 sudo[356772]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:54:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:54:32 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 61db68b6-a416-4d51-bed8-da313b8d4cd1 does not exist
Oct 02 12:54:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f8482f6a-cb7b-472a-b51d-ede7ba76b3c3 does not exist
Oct 02 12:54:32 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c7554f0c-a05f-44d6-a839-c622128bdfc0 does not exist
Oct 02 12:54:32 compute-0 sudo[357006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:32 compute-0 sudo[357006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:32 compute-0 sudo[357006]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.726 2 DEBUG nova.network.neutron [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Updating instance_info_cache with network_info: [{"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.754 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Releasing lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.755 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Instance network_info: |[{"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.755 2 DEBUG oslo_concurrency.lockutils [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.755 2 DEBUG nova.network.neutron [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Refreshing network info cache for port 4cafcc32-a350-4cae-bac5-13455086f2d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.759 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Start _get_guest_xml network_info=[{"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:54:32 compute-0 sudo[357032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:54:32 compute-0 sudo[357032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.767 2 WARNING nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:32 compute-0 sudo[357032]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.780 2 DEBUG nova.virt.libvirt.host [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.782 2 DEBUG nova.virt.libvirt.host [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.787 2 DEBUG nova.virt.libvirt.host [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.787 2 DEBUG nova.virt.libvirt.host [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.790 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.790 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.791 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.791 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.792 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.792 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.792 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.793 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.793 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.793 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.794 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.794 2 DEBUG nova.virt.hardware [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:54:32 compute-0 nova_compute[256940]: 2025-10-02 12:54:32.799 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433984474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.318 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.354 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.359 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:33 compute-0 ceph-mon[73668]: pgmap v2433: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 6.0 MiB/s wr, 150 op/s
Oct 02 12:54:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/433984474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457786826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.863 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.865 2 DEBUG nova.virt.libvirt.vif [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-867183804',display_name='tempest-ServerGroupTestJSON-server-867183804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-867183804',id=148,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d532dce9bec34631ae80bf8dea164f24',ramdisk_id='',reservation_id='r-5ar4a5n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-854462451',owner_user_name='tempest-ServerGroupTestJSON-854462451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:28Z,user_data=None,user_id='27c79bf1a6974758951669da9f6801c8',uuid=3f7bf190-7797-46d6-936a-b9eb2e85fba5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.865 2 DEBUG nova.network.os_vif_util [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converting VIF {"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.866 2 DEBUG nova.network.os_vif_util [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.867 2 DEBUG nova.objects.instance [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f7bf190-7797-46d6-936a-b9eb2e85fba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.883 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <uuid>3f7bf190-7797-46d6-936a-b9eb2e85fba5</uuid>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <name>instance-00000094</name>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerGroupTestJSON-server-867183804</nova:name>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:54:32</nova:creationTime>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:user uuid="27c79bf1a6974758951669da9f6801c8">tempest-ServerGroupTestJSON-854462451-project-member</nova:user>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:project uuid="d532dce9bec34631ae80bf8dea164f24">tempest-ServerGroupTestJSON-854462451</nova:project>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <nova:port uuid="4cafcc32-a350-4cae-bac5-13455086f2d3">
Oct 02 12:54:33 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <system>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="serial">3f7bf190-7797-46d6-936a-b9eb2e85fba5</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="uuid">3f7bf190-7797-46d6-936a-b9eb2e85fba5</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </system>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <os>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </os>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <features>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </features>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk">
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config">
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </source>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:54:33 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:56:10:b1"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <target dev="tap4cafcc32-a3"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/console.log" append="off"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <video>
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </video>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:54:33 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:54:33 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:54:33 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:54:33 compute-0 nova_compute[256940]: </domain>
Oct 02 12:54:33 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.883 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Preparing to wait for external event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.884 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.884 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.884 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.885 2 DEBUG nova.virt.libvirt.vif [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-867183804',display_name='tempest-ServerGroupTestJSON-server-867183804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-867183804',id=148,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d532dce9bec34631ae80bf8dea164f24',ramdisk_id='',reservation_id='r-5ar4a5n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-854462451',owner_user_name='tempest-ServerGroupTestJSON-85446245
1-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:28Z,user_data=None,user_id='27c79bf1a6974758951669da9f6801c8',uuid=3f7bf190-7797-46d6-936a-b9eb2e85fba5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.885 2 DEBUG nova.network.os_vif_util [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converting VIF {"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.885 2 DEBUG nova.network.os_vif_util [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.886 2 DEBUG os_vif [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.890 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cafcc32-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4cafcc32-a3, col_values=(('external_ids', {'iface-id': '4cafcc32-a350-4cae-bac5-13455086f2d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:10:b1', 'vm-uuid': '3f7bf190-7797-46d6-936a-b9eb2e85fba5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 NetworkManager[44981]: <info>  [1759409673.8944] manager: (tap4cafcc32-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.901 2 INFO os_vif [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3')
Oct 02 12:54:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 5.2 MiB/s wr, 131 op/s
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.993 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.994 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.994 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] No VIF found with MAC fa:16:3e:56:10:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:54:33 compute-0 nova_compute[256940]: 2025-10-02 12:54:33.994 2 INFO nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Using config drive
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.020 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 12:54:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:34.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.473 2 INFO nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Creating config drive at /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.479 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvwvmgc61 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.623 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvwvmgc61" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.655 2 DEBUG nova.storage.rbd_utils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] rbd image 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:34 compute-0 nova_compute[256940]: 2025-10-02 12:54:34.658 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2457786826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.271 2 DEBUG oslo_concurrency.processutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config 3f7bf190-7797-46d6-936a-b9eb2e85fba5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.271 2 INFO nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Deleting local config drive /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5/disk.config because it was imported into RBD.
Oct 02 12:54:35 compute-0 kernel: tap4cafcc32-a3: entered promiscuous mode
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.3339] manager: (tap4cafcc32-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Oct 02 12:54:35 compute-0 ovn_controller[148123]: 2025-10-02T12:54:35Z|00728|binding|INFO|Claiming lport 4cafcc32-a350-4cae-bac5-13455086f2d3 for this chassis.
Oct 02 12:54:35 compute-0 ovn_controller[148123]: 2025-10-02T12:54:35Z|00729|binding|INFO|4cafcc32-a350-4cae-bac5-13455086f2d3: Claiming fa:16:3e:56:10:b1 10.100.0.3
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.349 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:10:b1 10.100.0.3'], port_security=['fa:16:3e:56:10:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f7bf190-7797-46d6-936a-b9eb2e85fba5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea949c9b-7f43-40fc-983e-58016b188d39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd532dce9bec34631ae80bf8dea164f24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71cb45df-06c5-4642-aa25-5a4470e279a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d5a32b8-dd8a-4b17-87cc-7a465104726a, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4cafcc32-a350-4cae-bac5-13455086f2d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.351 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4cafcc32-a350-4cae-bac5-13455086f2d3 in datapath ea949c9b-7f43-40fc-983e-58016b188d39 bound to our chassis
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.353 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea949c9b-7f43-40fc-983e-58016b188d39
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.367 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba13814-4854-4b77-a6a9-722cdb21fe6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.368 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea949c9b-71 in ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.370 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea949c9b-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.370 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3abd450f-1df2-44c6-9ac2-101525622526]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 systemd-udevd[357193]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.371 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ec62e215-ddd6-4c64-abc0-92041897d492]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 systemd-machined[210927]: New machine qemu-77-instance-00000094.
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.3867] device (tap4cafcc32-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.385 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b78608-2ab2-4260-bd3d-cca10d34748f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.3904] device (tap4cafcc32-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_controller[148123]: 2025-10-02T12:54:35Z|00730|binding|INFO|Setting lport 4cafcc32-a350-4cae-bac5-13455086f2d3 ovn-installed in OVS
Oct 02 12:54:35 compute-0 ovn_controller[148123]: 2025-10-02T12:54:35Z|00731|binding|INFO|Setting lport 4cafcc32-a350-4cae-bac5-13455086f2d3 up in Southbound
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000094.
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.417 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fb219c6b-cf0e-482e-b5af-a3aa42de607f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.448 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fdcae5-e025-4c4b-9048-46bf53d3f06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 systemd-udevd[357197]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.4587] manager: (tapea949c9b-70): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.458 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[40168aba-80b0-4af9-ac58-bb6c18fdbd76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.513 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[470aee6c-ebf5-45cc-8490-56c7d609c683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.517 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1f111d5e-90bd-4a8c-8c93-4264eed94930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.5484] device (tapea949c9b-70): carrier: link connected
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.558 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[51c4fce5-4b4e-41f0-92ba-6a742f1b1b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.581 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9495896a-2c55-4938-9e56-236a471ed63c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea949c9b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757103, 'reachable_time': 22046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357226, 'error': None, 'target': 'ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.601 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f872751-166d-4d36-a633-e499d682e9d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:b737'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757103, 'tstamp': 757103}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357227, 'error': None, 'target': 'ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.625 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab8c0f6-93c2-49c1-90b3-3526c71c6cc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea949c9b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757103, 'reachable_time': 22046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357228, 'error': None, 'target': 'ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.665 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[85dab8ee-9eb6-4517-84ae-acc9410b8089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ceph-mon[73668]: pgmap v2434: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 5.2 MiB/s wr, 131 op/s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.751 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ea66cd-d02e-46b1-b227-bd04a394fad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.754 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea949c9b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.754 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.755 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea949c9b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 kernel: tapea949c9b-70: entered promiscuous mode
Oct 02 12:54:35 compute-0 NetworkManager[44981]: <info>  [1759409675.7583] manager: (tapea949c9b-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.768 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea949c9b-70, col_values=(('external_ids', {'iface-id': 'ba02a373-624c-4a1d-bde8-ca350f9a695b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:35 compute-0 ovn_controller[148123]: 2025-10-02T12:54:35Z|00732|binding|INFO|Releasing lport ba02a373-624c-4a1d-bde8-ca350f9a695b from this chassis (sb_readonly=0)
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.774 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea949c9b-7f43-40fc-983e-58016b188d39.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea949c9b-7f43-40fc-983e-58016b188d39.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.775 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5662c5-d4b0-4e6b-9a67-5403c7b8935d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.776 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-ea949c9b-7f43-40fc-983e-58016b188d39
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/ea949c9b-7f43-40fc-983e-58016b188d39.pid.haproxy
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID ea949c9b-7f43-40fc-983e-58016b188d39
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:54:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:35.777 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39', 'env', 'PROCESS_TAG=haproxy-ea949c9b-7f43-40fc-983e-58016b188d39', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea949c9b-7f43-40fc-983e-58016b188d39.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:54:35 compute-0 nova_compute[256940]: 2025-10-02 12:54:35.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 619 KiB/s rd, 5.2 MiB/s wr, 143 op/s
Oct 02 12:54:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:36.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:36.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.273 2 DEBUG nova.network.neutron [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Updated VIF entry in instance network info cache for port 4cafcc32-a350-4cae-bac5-13455086f2d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.274 2 DEBUG nova.network.neutron [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Updating instance_info_cache with network_info: [{"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:36 compute-0 podman[357271]: 2025-10-02 12:54:36.193605287 +0000 UTC m=+0.042531445 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.294 2 DEBUG oslo_concurrency.lockutils [req-3e57ea93-760c-453f-9110-1311ee94a47d req-73ec381e-377b-4b37-b0cb-7e03cde22d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3f7bf190-7797-46d6-936a-b9eb2e85fba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:36 compute-0 podman[357271]: 2025-10-02 12:54:36.496938701 +0000 UTC m=+0.345864839 container create 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:36 compute-0 systemd[1]: Started libpod-conmon-8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb.scope.
Oct 02 12:54:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c999df69927ecc05b486561bcfe153e5cc9571646d3160c60dfe226b1000319/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:36 compute-0 podman[357271]: 2025-10-02 12:54:36.751827618 +0000 UTC m=+0.600753756 container init 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:36 compute-0 podman[357271]: 2025-10-02 12:54:36.758502651 +0000 UTC m=+0.607428779 container start 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:54:36 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [NOTICE]   (357323) : New worker (357325) forked
Oct 02 12:54:36 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [NOTICE]   (357323) : Loading success.
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.932 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409676.931762, 3f7bf190-7797-46d6-936a-b9eb2e85fba5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.932 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] VM Started (Lifecycle Event)
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.949 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.954 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409676.9320862, 3f7bf190-7797-46d6-936a-b9eb2e85fba5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.954 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] VM Paused (Lifecycle Event)
Oct 02 12:54:36 compute-0 ceph-mon[73668]: pgmap v2435: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 619 KiB/s rd, 5.2 MiB/s wr, 143 op/s
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.969 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.972 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:36 compute-0 nova_compute[256940]: 2025-10-02 12:54:36.989 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.646 2 DEBUG nova.compute.manager [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.646 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.647 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.647 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.647 2 DEBUG nova.compute.manager [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Processing event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.647 2 DEBUG nova.compute.manager [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.648 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.648 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.648 2 DEBUG oslo_concurrency.lockutils [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.648 2 DEBUG nova.compute.manager [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] No waiting events found dispatching network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.648 2 WARNING nova.compute.manager [req-a2a23340-297f-4790-8746-f81e129d4cb6 req-e30f853b-778d-4e0a-bd4c-65559592619e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received unexpected event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 for instance with vm_state building and task_state spawning.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.649 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.652 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409677.6528304, 3f7bf190-7797-46d6-936a-b9eb2e85fba5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.653 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] VM Resumed (Lifecycle Event)
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.654 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.657 2 INFO nova.virt.libvirt.driver [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Instance spawned successfully.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.658 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.676 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.687 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.691 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.692 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.693 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.693 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.694 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.694 2 DEBUG nova.virt.libvirt.driver [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.723 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.758 2 INFO nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Took 9.08 seconds to spawn the instance on the hypervisor.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.759 2 DEBUG nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.852 2 INFO nova.compute.manager [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Took 10.22 seconds to build instance.
Oct 02 12:54:37 compute-0 nova_compute[256940]: 2025-10-02 12:54:37.869 2 DEBUG oslo_concurrency.lockutils [None req-2208c8c2-0b36-41db-8c35-cc68b6e044f4 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 3.7 MiB/s wr, 147 op/s
Oct 02 12:54:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/365730846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:38 compute-0 nova_compute[256940]: 2025-10-02 12:54:38.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 ceph-mon[73668]: pgmap v2436: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 3.7 MiB/s wr, 147 op/s
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.467 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.468 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.468 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.469 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.469 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.470 2 INFO nova.compute.manager [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Terminating instance
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.471 2 DEBUG nova.compute.manager [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:54:39 compute-0 kernel: tap4cafcc32-a3 (unregistering): left promiscuous mode
Oct 02 12:54:39 compute-0 NetworkManager[44981]: <info>  [1759409679.6912] device (tap4cafcc32-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:54:39 compute-0 ovn_controller[148123]: 2025-10-02T12:54:39Z|00733|binding|INFO|Releasing lport 4cafcc32-a350-4cae-bac5-13455086f2d3 from this chassis (sb_readonly=0)
Oct 02 12:54:39 compute-0 ovn_controller[148123]: 2025-10-02T12:54:39Z|00734|binding|INFO|Setting lport 4cafcc32-a350-4cae-bac5-13455086f2d3 down in Southbound
Oct 02 12:54:39 compute-0 ovn_controller[148123]: 2025-10-02T12:54:39Z|00735|binding|INFO|Removing iface tap4cafcc32-a3 ovn-installed in OVS
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:39.745 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:10:b1 10.100.0.3'], port_security=['fa:16:3e:56:10:b1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f7bf190-7797-46d6-936a-b9eb2e85fba5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea949c9b-7f43-40fc-983e-58016b188d39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd532dce9bec34631ae80bf8dea164f24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71cb45df-06c5-4642-aa25-5a4470e279a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d5a32b8-dd8a-4b17-87cc-7a465104726a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=4cafcc32-a350-4cae-bac5-13455086f2d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:39.747 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 4cafcc32-a350-4cae-bac5-13455086f2d3 in datapath ea949c9b-7f43-40fc-983e-58016b188d39 unbound from our chassis
Oct 02 12:54:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:39.749 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea949c9b-7f43-40fc-983e-58016b188d39, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:54:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:39.750 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2653b820-e9bd-4a94-98aa-241bfbaeff2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:39.751 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39 namespace which is not needed anymore
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 02 12:54:39 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000094.scope: Consumed 3.254s CPU time.
Oct 02 12:54:39 compute-0 systemd-machined[210927]: Machine qemu-77-instance-00000094 terminated.
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.906 2 INFO nova.virt.libvirt.driver [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Instance destroyed successfully.
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.906 2 DEBUG nova.objects.instance [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lazy-loading 'resources' on Instance uuid 3f7bf190-7797-46d6-936a-b9eb2e85fba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.918 2 DEBUG nova.virt.libvirt.vif [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-867183804',display_name='tempest-ServerGroupTestJSON-server-867183804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-867183804',id=148,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d532dce9bec34631ae80bf8dea164f24',ramdisk_id='',reservation_id='r-5ar4a5n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-854462451',owner_user_name='tempest-ServerGroupTestJSON-854462451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:37Z,user_data=None,user_id='27c79bf1a6974758951669da9f6801c8',uuid=3f7bf190-7797-46d6-936a-b9eb2e85fba5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.919 2 DEBUG nova.network.os_vif_util [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converting VIF {"id": "4cafcc32-a350-4cae-bac5-13455086f2d3", "address": "fa:16:3e:56:10:b1", "network": {"id": "ea949c9b-7f43-40fc-983e-58016b188d39", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-402387673-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d532dce9bec34631ae80bf8dea164f24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cafcc32-a3", "ovs_interfaceid": "4cafcc32-a350-4cae-bac5-13455086f2d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.920 2 DEBUG nova.network.os_vif_util [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.920 2 DEBUG os_vif [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cafcc32-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:39 compute-0 nova_compute[256940]: 2025-10-02 12:54:39.927 2 INFO os_vif [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:10:b1,bridge_name='br-int',has_traffic_filtering=True,id=4cafcc32-a350-4cae-bac5-13455086f2d3,network=Network(ea949c9b-7f43-40fc-983e-58016b188d39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cafcc32-a3')
Oct 02 12:54:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1011 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 12:54:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:40.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:40 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [NOTICE]   (357323) : haproxy version is 2.8.14-c23fe91
Oct 02 12:54:40 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [NOTICE]   (357323) : path to executable is /usr/sbin/haproxy
Oct 02 12:54:40 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [WARNING]  (357323) : Exiting Master process...
Oct 02 12:54:40 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [ALERT]    (357323) : Current worker (357325) exited with code 143 (Terminated)
Oct 02 12:54:40 compute-0 neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39[357318]: [WARNING]  (357323) : All workers exited. Exiting... (0)
Oct 02 12:54:40 compute-0 systemd[1]: libpod-8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb.scope: Deactivated successfully.
Oct 02 12:54:40 compute-0 podman[357358]: 2025-10-02 12:54:40.115004807 +0000 UTC m=+0.241658674 container died 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:54:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c999df69927ecc05b486561bcfe153e5cc9571646d3160c60dfe226b1000319-merged.mount: Deactivated successfully.
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.371 2 DEBUG nova.compute.manager [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-unplugged-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.373 2 DEBUG oslo_concurrency.lockutils [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.373 2 DEBUG oslo_concurrency.lockutils [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.374 2 DEBUG oslo_concurrency.lockutils [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.374 2 DEBUG nova.compute.manager [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] No waiting events found dispatching network-vif-unplugged-4cafcc32-a350-4cae-bac5-13455086f2d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.374 2 DEBUG nova.compute.manager [req-0642eba5-5339-403f-971e-f19651794c55 req-5d959bb7-ca82-49bc-b557-98a0b1f74504 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-unplugged-4cafcc32-a350-4cae-bac5-13455086f2d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:54:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1167518381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:40 compute-0 podman[357358]: 2025-10-02 12:54:40.450228629 +0000 UTC m=+0.576882516 container cleanup 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:40 compute-0 systemd[1]: libpod-conmon-8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb.scope: Deactivated successfully.
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005339922994602643 of space, bias 1.0, pg target 1.601976898380793 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:54:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:54:40 compute-0 podman[357417]: 2025-10-02 12:54:40.638659871 +0000 UTC m=+0.159494132 container remove 8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.645 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[54fae80d-9c48-483c-a95d-9e2f54b7a61b]: (4, ('Thu Oct  2 12:54:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39 (8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb)\n8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb\nThu Oct  2 12:54:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39 (8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb)\n8a4bfe592c49634a039d1f9df4762e022a44344b5988784ecfb81c59679cd5bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.647 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[67ba6397-2003-478d-a1be-38839ea1074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.648 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea949c9b-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:40 compute-0 kernel: tapea949c9b-70: left promiscuous mode
Oct 02 12:54:40 compute-0 nova_compute[256940]: 2025-10-02 12:54:40.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.670 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11b7d420-7773-4975-b6e0-00ceb50cf996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.694 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d7ad6e-df37-4b1f-89da-21a9d9e94fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbf3bb5-544b-4b9f-af03-02423e6ec920]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.714 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[107bb4cc-e630-4e71-8515-f0d654f29508]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757092, 'reachable_time': 21800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357433, 'error': None, 'target': 'ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.716 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea949c9b-7f43-40fc-983e-58016b188d39 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:54:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:40.716 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[fc87e59b-f8ef-49c5-a174-5c8c9df456da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dea949c9b\x2d7f43\x2d40fc\x2d983e\x2d58016b188d39.mount: Deactivated successfully.
Oct 02 12:54:41 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Oct 02 12:54:41 compute-0 ceph-mon[73668]: pgmap v2437: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1011 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 12:54:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:54:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:42.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:42.121 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:42.123 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.172 2 INFO nova.virt.libvirt.driver [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Deleting instance files /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5_del
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.174 2 INFO nova.virt.libvirt.driver [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Deletion of /var/lib/nova/instances/3f7bf190-7797-46d6-936a-b9eb2e85fba5_del complete
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.218 2 INFO nova.compute.manager [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Took 2.75 seconds to destroy the instance on the hypervisor.
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.220 2 DEBUG oslo.service.loopingcall [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.221 2 DEBUG nova.compute.manager [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.221 2 DEBUG nova.network.neutron [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.465 2 DEBUG nova.compute.manager [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.466 2 DEBUG oslo_concurrency.lockutils [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.466 2 DEBUG oslo_concurrency.lockutils [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.467 2 DEBUG oslo_concurrency.lockutils [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.467 2 DEBUG nova.compute.manager [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] No waiting events found dispatching network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:42 compute-0 nova_compute[256940]: 2025-10-02 12:54:42.467 2 WARNING nova.compute.manager [req-904b5d7a-bdd9-40dc-8fa2-64962797b978 req-bcab2ccc-8449-41ab-a3d4-1e460b655ad4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received unexpected event network-vif-plugged-4cafcc32-a350-4cae-bac5-13455086f2d3 for instance with vm_state active and task_state deleting.
Oct 02 12:54:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:54:43.125 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:43 compute-0 podman[357436]: 2025-10-02 12:54:43.404925164 +0000 UTC m=+0.070246064 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:43 compute-0 podman[357437]: 2025-10-02 12:54:43.458219017 +0000 UTC m=+0.110187501 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.519 2 DEBUG nova.network.neutron [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.535 2 INFO nova.compute.manager [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Took 1.31 seconds to deallocate network for instance.
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.576 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.577 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.593 2 DEBUG nova.compute.manager [req-417313e8-6f0f-417e-927e-f5b56dd57db4 req-c1b629f5-d193-4b16-9c95-7cb31c3e513f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Received event network-vif-deleted-4cafcc32-a350-4cae-bac5-13455086f2d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:43 compute-0 nova_compute[256940]: 2025-10-02 12:54:43.641 2 DEBUG oslo_concurrency.processutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:43 compute-0 ceph-mon[73668]: pgmap v2438: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:54:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 113 op/s
Oct 02 12:54:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011278444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.055 2 DEBUG oslo_concurrency.processutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.063 2 DEBUG nova.compute.provider_tree [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.078 2 DEBUG nova.scheduler.client.report [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.098 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.122 2 INFO nova.scheduler.client.report [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Deleted allocations for instance 3f7bf190-7797-46d6-936a-b9eb2e85fba5
Oct 02 12:54:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.218 2 DEBUG oslo_concurrency.lockutils [None req-a96e5965-c81f-4f5e-8621-b9d78fef5442 27c79bf1a6974758951669da9f6801c8 d532dce9bec34631ae80bf8dea164f24 - - default default] Lock "3f7bf190-7797-46d6-936a-b9eb2e85fba5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1011278444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:44 compute-0 nova_compute[256940]: 2025-10-02 12:54:44.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:45 compute-0 ceph-mon[73668]: pgmap v2439: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 113 op/s
Oct 02 12:54:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 118 op/s
Oct 02 12:54:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:46.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-0 sudo[357505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:46 compute-0 sudo[357505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:46 compute-0 sudo[357505]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:46 compute-0 sudo[357530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:46 compute-0 sudo[357530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:46 compute-0 sudo[357530]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:46 compute-0 ceph-mon[73668]: pgmap v2440: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 118 op/s
Oct 02 12:54:47 compute-0 nova_compute[256940]: 2025-10-02 12:54:47.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 47 KiB/s wr, 122 op/s
Oct 02 12:54:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:49 compute-0 ovn_controller[148123]: 2025-10-02T12:54:49Z|00736|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:54:49 compute-0 nova_compute[256940]: 2025-10-02 12:54:49.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Oct 02 12:54:49 compute-0 ceph-mon[73668]: pgmap v2441: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 47 KiB/s wr, 122 op/s
Oct 02 12:54:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Oct 02 12:54:49 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Oct 02 12:54:49 compute-0 nova_compute[256940]: 2025-10-02 12:54:49.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 81 op/s
Oct 02 12:54:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:54:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:50.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:54:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:50 compute-0 ceph-mon[73668]: osdmap e340: 3 total, 3 up, 3 in
Oct 02 12:54:51 compute-0 ceph-mon[73668]: pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 81 op/s
Oct 02 12:54:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3158757264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/121809429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:52 compute-0 nova_compute[256940]: 2025-10-02 12:54:52.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:53 compute-0 ceph-mon[73668]: pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:54:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:54:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:54 compute-0 nova_compute[256940]: 2025-10-02 12:54:54.904 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409679.9034722, 3f7bf190-7797-46d6-936a-b9eb2e85fba5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:54 compute-0 nova_compute[256940]: 2025-10-02 12:54:54.905 2 INFO nova.compute.manager [-] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] VM Stopped (Lifecycle Event)
Oct 02 12:54:54 compute-0 nova_compute[256940]: 2025-10-02 12:54:54.942 2 DEBUG nova.compute.manager [None req-8429343d-a55f-4b45-865d-28457664cd61 - - - - - -] [instance: 3f7bf190-7797-46d6-936a-b9eb2e85fba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:54 compute-0 nova_compute[256940]: 2025-10-02 12:54:54.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:55 compute-0 ceph-mon[73668]: pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 30 KiB/s wr, 67 op/s
Oct 02 12:54:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:56.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:54:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:54:57 compute-0 nova_compute[256940]: 2025-10-02 12:54:57.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:57 compute-0 podman[357562]: 2025-10-02 12:54:57.387873664 +0000 UTC m=+0.056511828 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:54:57 compute-0 podman[357561]: 2025-10-02 12:54:57.408950271 +0000 UTC m=+0.077584665 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 12:54:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:57 compute-0 ceph-mon[73668]: pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 30 KiB/s wr, 67 op/s
Oct 02 12:54:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/141177385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 KiB/s wr, 105 op/s
Oct 02 12:54:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:54:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:58.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:54:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.617 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.617 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.643 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.761 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.763 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.769 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.770 2 INFO nova.compute.claims [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:54:58 compute-0 ceph-mon[73668]: pgmap v2447: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 KiB/s wr, 105 op/s
Oct 02 12:54:58 compute-0 nova_compute[256940]: 2025-10-02 12:54:58.964 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007440849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.446 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.453 2 DEBUG nova.compute.provider_tree [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.480 2 DEBUG nova.scheduler.client.report [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.505 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.506 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.555 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.556 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.578 2 INFO nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.618 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.742 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.743 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.744 2 INFO nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Creating image(s)
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.771 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.803 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.836 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2007440849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.842 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.929 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.930 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.931 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.931 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.5 KiB/s wr, 98 op/s
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.963 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:59 compute-0 nova_compute[256940]: 2025-10-02 12:54:59.969 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a9564d79-5032-4cde-aff3-447400d7dc2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.014 2 DEBUG nova.policy [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:00.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.519 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a9564d79-5032-4cde-aff3-447400d7dc2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.600 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.777 2 DEBUG nova.objects.instance [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid a9564d79-5032-4cde-aff3-447400d7dc2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.815 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.815 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Ensure instance console log exists: /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.816 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.816 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:00 compute-0 nova_compute[256940]: 2025-10-02 12:55:00.816 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:00 compute-0 ceph-mon[73668]: pgmap v2448: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.5 KiB/s wr, 98 op/s
Oct 02 12:55:01 compute-0 nova_compute[256940]: 2025-10-02 12:55:01.678 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Successfully created port: 20aff22c-836c-41ba-b60f-462a819f7822 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Oct 02 12:55:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 02 12:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Oct 02 12:55:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Oct 02 12:55:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:02.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:02 compute-0 nova_compute[256940]: 2025-10-02 12:55:02.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:03 compute-0 ceph-mon[73668]: pgmap v2449: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 02 12:55:03 compute-0 ceph-mon[73668]: osdmap e341: 3 total, 3 up, 3 in
Oct 02 12:55:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3427044163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Oct 02 12:55:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:04.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.278 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Successfully updated port: 20aff22c-836c-41ba-b60f-462a819f7822 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.299 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.299 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.300 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:55:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1586802724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.420 2 DEBUG nova.compute.manager [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-changed-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.421 2 DEBUG nova.compute.manager [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing instance network info cache due to event network-changed-20aff22c-836c-41ba-b60f-462a819f7822. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:04 compute-0 nova_compute[256940]: 2025-10-02 12:55:04.421 2 DEBUG oslo_concurrency.lockutils [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:05 compute-0 nova_compute[256940]: 2025-10-02 12:55:05.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:05 compute-0 nova_compute[256940]: 2025-10-02 12:55:05.215 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:55:05 compute-0 ceph-mon[73668]: pgmap v2451: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Oct 02 12:55:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:55:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:55:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 12:55:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:06.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.160 2 DEBUG nova.network.neutron [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.182 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.182 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Instance network_info: |[{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.182 2 DEBUG oslo_concurrency.lockutils [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.183 2 DEBUG nova.network.neutron [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.186 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Start _get_guest_xml network_info=[{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.191 2 WARNING nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.197 2 DEBUG nova.virt.libvirt.host [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.197 2 DEBUG nova.virt.libvirt.host [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.202 2 DEBUG nova.virt.libvirt.host [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.203 2 DEBUG nova.virt.libvirt.host [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.204 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.204 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.205 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.205 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.205 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.205 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.205 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.206 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.206 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.206 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.206 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.206 2 DEBUG nova.virt.hardware [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.209 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:06 compute-0 sudo[357814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:06 compute-0 sudo[357814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:06 compute-0 sudo[357814]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:06 compute-0 sudo[357839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:06 compute-0 sudo[357839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:06 compute-0 sudo[357839]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717806451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.718 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.743 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:06 compute-0 nova_compute[256940]: 2025-10-02 12:55:06.747 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1717806451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772510293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.191 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.195 2 DEBUG nova.virt.libvirt.vif [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1284261477',display_name='tempest-TestNetworkBasicOps-server-1284261477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1284261477',id=149,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElQstkE41+GOd6moF9K3IstLgWEjClG13CyAdWSb2M73noXL86Vl5KH68ZUi95kvcmAkcIhDh++GVbB3a0pYM7imf/5hyDotrkezzAQqt6QG9gmL3EnE0E4J4p0Bwq+iw==',key_name='tempest-TestNetworkBasicOps-422991809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-fqh11rud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=a9564d79-5032-4cde-aff3-447400d7dc2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.196 2 DEBUG nova.network.os_vif_util [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.198 2 DEBUG nova.network.os_vif_util [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.200 2 DEBUG nova.objects.instance [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9564d79-5032-4cde-aff3-447400d7dc2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.221 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <uuid>a9564d79-5032-4cde-aff3-447400d7dc2f</uuid>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <name>instance-00000095</name>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkBasicOps-server-1284261477</nova:name>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:55:06</nova:creationTime>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <nova:port uuid="20aff22c-836c-41ba-b60f-462a819f7822">
Oct 02 12:55:07 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <system>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="serial">a9564d79-5032-4cde-aff3-447400d7dc2f</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="uuid">a9564d79-5032-4cde-aff3-447400d7dc2f</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </system>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <os>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </os>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <features>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </features>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a9564d79-5032-4cde-aff3-447400d7dc2f_disk">
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config">
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:55:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:49:d0:16"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <target dev="tap20aff22c-83"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/console.log" append="off"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <video>
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </video>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:55:07 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:55:07 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:55:07 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:55:07 compute-0 nova_compute[256940]: </domain>
Oct 02 12:55:07 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.223 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Preparing to wait for external event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.223 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.224 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.224 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.225 2 DEBUG nova.virt.libvirt.vif [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1284261477',display_name='tempest-TestNetworkBasicOps-server-1284261477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1284261477',id=149,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElQstkE41+GOd6moF9K3IstLgWEjClG13CyAdWSb2M73noXL86Vl5KH68ZUi95kvcmAkcIhDh++GVbB3a0pYM7imf/5hyDotrkezzAQqt6QG9gmL3EnE0E4J4p0Bwq+iw==',key_name='tempest-TestNetworkBasicOps-422991809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-fqh11rud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=a9564d79-5032-4cde-aff3-447400d7dc2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.225 2 DEBUG nova.network.os_vif_util [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.226 2 DEBUG nova.network.os_vif_util [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.226 2 DEBUG os_vif [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.227 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20aff22c-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20aff22c-83, col_values=(('external_ids', {'iface-id': '20aff22c-836c-41ba-b60f-462a819f7822', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:d0:16', 'vm-uuid': 'a9564d79-5032-4cde-aff3-447400d7dc2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 NetworkManager[44981]: <info>  [1759409707.2341] manager: (tap20aff22c-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.241 2 INFO os_vif [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83')
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.369 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.369 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.370 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:49:d0:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.370 2 INFO nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Using config drive
Oct 02 12:55:07 compute-0 nova_compute[256940]: 2025-10-02 12:55:07.394 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:07 compute-0 ceph-mon[73668]: pgmap v2452: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 12:55:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1772510293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:55:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:08.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.397 2 INFO nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Creating config drive at /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.402 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5neljdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.545 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5neljdy" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.569 2 DEBUG nova.storage.rbd_utils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.575 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.909 2 DEBUG oslo_concurrency.processutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config a9564d79-5032-4cde-aff3-447400d7dc2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.910 2 INFO nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Deleting local config drive /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f/disk.config because it was imported into RBD.
Oct 02 12:55:08 compute-0 kernel: tap20aff22c-83: entered promiscuous mode
Oct 02 12:55:08 compute-0 NetworkManager[44981]: <info>  [1759409708.9590] manager: (tap20aff22c-83): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Oct 02 12:55:08 compute-0 ovn_controller[148123]: 2025-10-02T12:55:08Z|00737|binding|INFO|Claiming lport 20aff22c-836c-41ba-b60f-462a819f7822 for this chassis.
Oct 02 12:55:08 compute-0 nova_compute[256940]: 2025-10-02 12:55:08.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:08 compute-0 ovn_controller[148123]: 2025-10-02T12:55:08Z|00738|binding|INFO|20aff22c-836c-41ba-b60f-462a819f7822: Claiming fa:16:3e:49:d0:16 10.100.0.8
Oct 02 12:55:08 compute-0 systemd-udevd[357981]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:09 compute-0 systemd-machined[210927]: New machine qemu-78-instance-00000095.
Oct 02 12:55:09 compute-0 NetworkManager[44981]: <info>  [1759409709.0082] device (tap20aff22c-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:09 compute-0 NetworkManager[44981]: <info>  [1759409709.0091] device (tap20aff22c-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.024 2 DEBUG nova.network.neutron [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updated VIF entry in instance network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.026 2 DEBUG nova.network.neutron [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.031 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:d0:16 10.100.0.8'], port_security=['fa:16:3e:49:d0:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a9564d79-5032-4cde-aff3-447400d7dc2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9fda5bf-9101-4cc5-8c4b-2704f6c0a12e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20aff22c-836c-41ba-b60f-462a819f7822) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.032 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20aff22c-836c-41ba-b60f-462a819f7822 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 bound to our chassis
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.034 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000095.
Oct 02 12:55:09 compute-0 ovn_controller[148123]: 2025-10-02T12:55:09Z|00739|binding|INFO|Setting lport 20aff22c-836c-41ba-b60f-462a819f7822 ovn-installed in OVS
Oct 02 12:55:09 compute-0 ovn_controller[148123]: 2025-10-02T12:55:09Z|00740|binding|INFO|Setting lport 20aff22c-836c-41ba-b60f-462a819f7822 up in Southbound
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.047 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2528ea5d-73ae-491d-92db-342addd67d79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.047 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24ea8f37-71 in ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.049 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24ea8f37-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.049 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[69d299a2-aad2-4cd6-82fc-7ed64613d226]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.050 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0195c6-bc46-4cb1-89c3-f1f0b4dd22f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.056 2 DEBUG oslo_concurrency.lockutils [req-4c72ef4c-70b3-4f72-b7fc-5ceb9445d7ca req-55162507-cbb3-446b-9d51-8ecb6c214133 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.060 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b07b32-8d4d-4253-a3b1-3d6e6187438b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ceph-mon[73668]: pgmap v2453: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.085 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[36715604-af25-4692-9814-c91a856b5d5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.118 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[24324f0d-fb5e-47b8-b1bc-a2eee376e3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.124 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e5da44-04f0-44f6-b1a4-2accb6299098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 NetworkManager[44981]: <info>  [1759409709.1258] manager: (tap24ea8f37-70): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Oct 02 12:55:09 compute-0 systemd-udevd[357984]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.163 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[08cbdce0-606c-4cd6-99e0-c6c628fd6a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.168 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[827896d2-35e3-4922-a631-aabe23abbaff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 NetworkManager[44981]: <info>  [1759409709.1973] device (tap24ea8f37-70): carrier: link connected
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.207 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b09fa27a-47f0-4b52-a297-9917f6190dd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.229 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[80e1ee23-5f75-494e-a710-566a4f4e7a62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760468, 'reachable_time': 34491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358030, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.251 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23b12d62-be0f-4fff-bbd0-104ce2900261]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:22dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 760468, 'tstamp': 760468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358034, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.273 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[497f4923-3b61-4f96-8a21-242846ec9f38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760468, 'reachable_time': 34491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358050, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.304 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[681489c4-d3c5-4b1c-98a7-dcc81677e414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.363 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71c1c334-511d-49d9-b650-26f9ee268a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.364 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.365 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.365 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24ea8f37-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 kernel: tap24ea8f37-70: entered promiscuous mode
Oct 02 12:55:09 compute-0 NetworkManager[44981]: <info>  [1759409709.3682] manager: (tap24ea8f37-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.371 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24ea8f37-70, col_values=(('external_ids', {'iface-id': '47294e02-9c49-4b4e-8e89-07ae56f17131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ovn_controller[148123]: 2025-10-02T12:55:09Z|00741|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.389 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.390 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[42e64907-eb1b-4053-8e21-bc83eb53bea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.391 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:55:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:09.393 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'env', 'PROCESS_TAG=haproxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.518 2 DEBUG nova.compute.manager [req-4e333e1e-bc48-4b48-a443-408734063c99 req-d8ebbd9a-bb22-4a2d-b12c-5d781f22ed7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.519 2 DEBUG oslo_concurrency.lockutils [req-4e333e1e-bc48-4b48-a443-408734063c99 req-d8ebbd9a-bb22-4a2d-b12c-5d781f22ed7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.519 2 DEBUG oslo_concurrency.lockutils [req-4e333e1e-bc48-4b48-a443-408734063c99 req-d8ebbd9a-bb22-4a2d-b12c-5d781f22ed7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.519 2 DEBUG oslo_concurrency.lockutils [req-4e333e1e-bc48-4b48-a443-408734063c99 req-d8ebbd9a-bb22-4a2d-b12c-5d781f22ed7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.519 2 DEBUG nova.compute.manager [req-4e333e1e-bc48-4b48-a443-408734063c99 req-d8ebbd9a-bb22-4a2d-b12c-5d781f22ed7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Processing event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:09 compute-0 podman[358087]: 2025-10-02 12:55:09.794225113 +0000 UTC m=+0.079315440 container create 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:55:09 compute-0 systemd[1]: Started libpod-conmon-181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf.scope.
Oct 02 12:55:09 compute-0 podman[358087]: 2025-10-02 12:55:09.740310733 +0000 UTC m=+0.025401110 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.839 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409709.8394864, a9564d79-5032-4cde-aff3-447400d7dc2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.840 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] VM Started (Lifecycle Event)
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.841 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.845 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1148ce710efd271df90f5cfe29ada769102eba9b1ca98508e41744916da34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.849 2 INFO nova.virt.libvirt.driver [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Instance spawned successfully.
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.849 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:55:09 compute-0 podman[358087]: 2025-10-02 12:55:09.867936736 +0000 UTC m=+0.153027103 container init 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:09 compute-0 podman[358087]: 2025-10-02 12:55:09.874095436 +0000 UTC m=+0.159185783 container start 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:55:09 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [NOTICE]   (358104) : New worker (358106) forked
Oct 02 12:55:09 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [NOTICE]   (358104) : Loading success.
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.918 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.926 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.927 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.927 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.928 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.928 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.929 2 DEBUG nova.virt.libvirt.driver [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:09 compute-0 nova_compute[256940]: 2025-10-02 12:55:09.936 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 02 12:55:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:10.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.122 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.122 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409709.8395534, a9564d79-5032-4cde-aff3-447400d7dc2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.122 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] VM Paused (Lifecycle Event)
Oct 02 12:55:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3776642989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2393667972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:10.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.289 2 INFO nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Took 10.55 seconds to spawn the instance on the hypervisor.
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.290 2 DEBUG nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.305 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.309 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409709.8448522, a9564d79-5032-4cde-aff3-447400d7dc2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.309 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] VM Resumed (Lifecycle Event)
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.445 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.449 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.529 2 INFO nova.compute.manager [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Took 11.81 seconds to build instance.
Oct 02 12:55:10 compute-0 nova_compute[256940]: 2025-10-02 12:55:10.693 2 DEBUG oslo_concurrency.lockutils [None req-31e750eb-f0d0-4b0d-94e1-ab870786b47e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:11 compute-0 ceph-mon[73668]: pgmap v2454: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 02 12:55:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3005431859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2956126067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.722 2 DEBUG nova.compute.manager [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.722 2 DEBUG oslo_concurrency.lockutils [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.722 2 DEBUG oslo_concurrency.lockutils [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.723 2 DEBUG oslo_concurrency.lockutils [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.723 2 DEBUG nova.compute.manager [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] No waiting events found dispatching network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:11 compute-0 nova_compute[256940]: 2025-10-02 12:55:11.723 2 WARNING nova.compute.manager [req-922e12cf-bfa1-4377-8033-ec1617aa34f1 req-3aafe51b-4230-4efe-8e4d-c041463c51d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received unexpected event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 for instance with vm_state active and task_state None.
Oct 02 12:55:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:12.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:12 compute-0 nova_compute[256940]: 2025-10-02 12:55:12.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:12 compute-0 nova_compute[256940]: 2025-10-02 12:55:12.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Oct 02 12:55:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Oct 02 12:55:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.254 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.255 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.255 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.255 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.256 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794191949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.718 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.802 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.803 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.810 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.811 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:13 compute-0 podman[358140]: 2025-10-02 12:55:13.848555857 +0000 UTC m=+0.070713306 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 12:55:13 compute-0 ceph-mon[73668]: pgmap v2455: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2576941608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:13 compute-0 ceph-mon[73668]: osdmap e342: 3 total, 3 up, 3 in
Oct 02 12:55:13 compute-0 podman[358142]: 2025-10-02 12:55:13.903797631 +0000 UTC m=+0.130985951 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.986 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.987 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3935MB free_disk=20.876136779785156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.988 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:13 compute-0 nova_compute[256940]: 2025-10-02 12:55:13.988 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.082 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e22a7944-9200-4204-a219-3f7bd720667b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.082 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance a9564d79-5032-4cde-aff3-447400d7dc2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.084 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.085 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:55:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:14.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.131 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140536109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.593 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.600 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.617 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.648 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:55:14 compute-0 nova_compute[256940]: 2025-10-02 12:55:14.649 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1794191949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:15 compute-0 ceph-mon[73668]: pgmap v2457: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/140536109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 190 op/s
Oct 02 12:55:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:16.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:16 compute-0 nova_compute[256940]: 2025-10-02 12:55:16.650 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:17 compute-0 ceph-mon[73668]: pgmap v2458: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 190 op/s
Oct 02 12:55:17 compute-0 nova_compute[256940]: 2025-10-02 12:55:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:17 compute-0 nova_compute[256940]: 2025-10-02 12:55:17.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:17 compute-0 nova_compute[256940]: 2025-10-02 12:55:17.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Oct 02 12:55:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:18 compute-0 NetworkManager[44981]: <info>  [1759409718.3139] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 NetworkManager[44981]: <info>  [1759409718.3161] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 ovn_controller[148123]: 2025-10-02T12:55:18Z|00742|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:18 compute-0 ovn_controller[148123]: 2025-10-02T12:55:18Z|00743|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.703 2 DEBUG nova.compute.manager [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-changed-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.703 2 DEBUG nova.compute.manager [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing instance network info cache due to event network-changed-20aff22c-836c-41ba-b60f-462a819f7822. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.704 2 DEBUG oslo_concurrency.lockutils [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.704 2 DEBUG oslo_concurrency.lockutils [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:18 compute-0 nova_compute[256940]: 2025-10-02 12:55:18.704 2 DEBUG nova.network.neutron [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:19 compute-0 nova_compute[256940]: 2025-10-02 12:55:19.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:19 compute-0 ceph-mon[73668]: pgmap v2459: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Oct 02 12:55:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Oct 02 12:55:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:20 compute-0 nova_compute[256940]: 2025-10-02 12:55:20.229 2 DEBUG nova.network.neutron [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updated VIF entry in instance network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:20 compute-0 nova_compute[256940]: 2025-10-02 12:55:20.229 2 DEBUG nova.network.neutron [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:20 compute-0 nova_compute[256940]: 2025-10-02 12:55:20.250 2 DEBUG oslo_concurrency.lockutils [req-5d61d8fb-53d7-4f58-a872-c2eff22746a8 req-75e26374-52b8-47bd-89b8-b750e759a9c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:21 compute-0 ceph-mon[73668]: pgmap v2460: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Oct 02 12:55:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1742637085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2386159734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3610127590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 143 op/s
Oct 02 12:55:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:22.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:22 compute-0 nova_compute[256940]: 2025-10-02 12:55:22.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:22 compute-0 nova_compute[256940]: 2025-10-02 12:55:22.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4102663619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:23 compute-0 nova_compute[256940]: 2025-10-02 12:55:23.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:23 compute-0 nova_compute[256940]: 2025-10-02 12:55:23.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:55:23 compute-0 nova_compute[256940]: 2025-10-02 12:55:23.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:55:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:55:24 compute-0 ceph-mon[73668]: pgmap v2461: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 143 op/s
Oct 02 12:55:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:24.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:24.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:24 compute-0 nova_compute[256940]: 2025-10-02 12:55:24.271 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:24 compute-0 nova_compute[256940]: 2025-10-02 12:55:24.272 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:24 compute-0 nova_compute[256940]: 2025-10-02 12:55:24.272 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:55:24 compute-0 nova_compute[256940]: 2025-10-02 12:55:24.272 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:25 compute-0 ceph-mon[73668]: pgmap v2462: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:55:25 compute-0 ovn_controller[148123]: 2025-10-02T12:55:25Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:d0:16 10.100.0.8
Oct 02 12:55:25 compute-0 ovn_controller[148123]: 2025-10-02T12:55:25Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:d0:16 10.100.0.8
Oct 02 12:55:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 345 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct 02 12:55:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:26.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:26.492 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:26.493 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:26.493 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:26 compute-0 sudo[358217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:26 compute-0 sudo[358217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:26 compute-0 sudo[358217]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:26 compute-0 sudo[358243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:26 compute-0 sudo[358243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:26 compute-0 sudo[358243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:27 compute-0 nova_compute[256940]: 2025-10-02 12:55:27.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:27 compute-0 nova_compute[256940]: 2025-10-02 12:55:27.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:27 compute-0 nova_compute[256940]: 2025-10-02 12:55:27.398 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:27 compute-0 nova_compute[256940]: 2025-10-02 12:55:27.426 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:27 compute-0 nova_compute[256940]: 2025-10-02 12:55:27.426 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:55:27 compute-0 ceph-mon[73668]: pgmap v2463: 305 pgs: 305 active+clean; 345 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct 02 12:55:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 364 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Oct 02 12:55:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:28 compute-0 podman[358269]: 2025-10-02 12:55:28.388033157 +0000 UTC m=+0.060509301 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:55:28 compute-0 podman[358268]: 2025-10-02 12:55:28.395027609 +0000 UTC m=+0.060648535 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:55:28
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta']
Oct 02 12:55:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:55:29 compute-0 ceph-mon[73668]: pgmap v2464: 305 pgs: 305 active+clean; 364 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:55:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Oct 02 12:55:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:30.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:30.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:31 compute-0 nova_compute[256940]: 2025-10-02 12:55:31.393 2 INFO nova.compute.manager [None req-432a3af0-a2d6-4cf5-88af-95e6c6d27026 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Get console output
Oct 02 12:55:31 compute-0 nova_compute[256940]: 2025-10-02 12:55:31.403 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:55:31 compute-0 ceph-mon[73668]: pgmap v2465: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Oct 02 12:55:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 257 op/s
Oct 02 12:55:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:32.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:32 compute-0 nova_compute[256940]: 2025-10-02 12:55:32.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:32 compute-0 nova_compute[256940]: 2025-10-02 12:55:32.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:33 compute-0 sudo[358311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:33 compute-0 sudo[358311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-0 sudo[358311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-0 sudo[358336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:55:33 compute-0 sudo[358336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-0 sudo[358336]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-0 sudo[358361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:33 compute-0 sudo[358361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-0 sudo[358361]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-0 sudo[358386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:55:33 compute-0 sudo[358386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-0 nova_compute[256940]: 2025-10-02 12:55:33.385 2 DEBUG nova.compute.manager [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-changed-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:33 compute-0 nova_compute[256940]: 2025-10-02 12:55:33.385 2 DEBUG nova.compute.manager [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing instance network info cache due to event network-changed-20aff22c-836c-41ba-b60f-462a819f7822. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:33 compute-0 nova_compute[256940]: 2025-10-02 12:55:33.385 2 DEBUG oslo_concurrency.lockutils [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:33 compute-0 nova_compute[256940]: 2025-10-02 12:55:33.386 2 DEBUG oslo_concurrency.lockutils [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:33 compute-0 nova_compute[256940]: 2025-10-02 12:55:33.386 2 DEBUG nova.network.neutron [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Refreshing network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:33 compute-0 ceph-mon[73668]: pgmap v2466: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 257 op/s
Oct 02 12:55:33 compute-0 sudo[358386]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 231 op/s
Oct 02 12:55:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2393714639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 12:55:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 12:55:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:35 compute-0 nova_compute[256940]: 2025-10-02 12:55:35.000 2 DEBUG nova.network.neutron [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updated VIF entry in instance network info cache for port 20aff22c-836c-41ba-b60f-462a819f7822. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:35 compute-0 nova_compute[256940]: 2025-10-02 12:55:35.001 2 DEBUG nova.network.neutron [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:35 compute-0 nova_compute[256940]: 2025-10-02 12:55:35.028 2 DEBUG oslo_concurrency.lockutils [req-3deefaed-3c78-4520-920e-b522b59e2e14 req-e18e398b-7ab5-41a9-9ba3-4de70c761004 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6b65625f-b2ed-40f4-a8fd-23678f1b9801 does not exist
Oct 02 12:55:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9524779d-05e7-4987-9971-5a1aee5e2c67 does not exist
Oct 02 12:55:35 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d713dea4-4d92-458f-bf76-ade9837f16f8 does not exist
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:55:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:55:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 263 op/s
Oct 02 12:55:35 compute-0 sudo[358442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:35 compute-0 sudo[358442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:35 compute-0 sudo[358442]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:36 compute-0 sudo[358467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:55:36 compute-0 sudo[358467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:36 compute-0 sudo[358467]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:36 compute-0 sudo[358492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:36 compute-0 sudo[358492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:36 compute-0 sudo[358492]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:36 compute-0 sudo[358517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:55:36 compute-0 sudo[358517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:36 compute-0 ceph-mon[73668]: pgmap v2467: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 231 op/s
Oct 02 12:55:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/428658818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:36 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:55:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:36.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.497944901 +0000 UTC m=+0.047739991 container create 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:55:36 compute-0 systemd[1]: Started libpod-conmon-0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528.scope.
Oct 02 12:55:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.477360556 +0000 UTC m=+0.027155676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.584482627 +0000 UTC m=+0.134277727 container init 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.592507925 +0000 UTC m=+0.142303005 container start 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 12:55:36 compute-0 sweet_wescoff[358599]: 167 167
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.60000026 +0000 UTC m=+0.149795360 container attach 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:55:36 compute-0 systemd[1]: libpod-0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528.scope: Deactivated successfully.
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.600672707 +0000 UTC m=+0.150467787 container died 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 12:55:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac69866064141af463e13fcbcb10a454cebb6f046c0a69ddd87ebf5127a2c79-merged.mount: Deactivated successfully.
Oct 02 12:55:36 compute-0 podman[358583]: 2025-10-02 12:55:36.640160752 +0000 UTC m=+0.189955842 container remove 0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wescoff, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:55:36 compute-0 systemd[1]: libpod-conmon-0dde785b28b1d6839758127ab85208cecc1eb81c3619849a7012ff819947d528.scope: Deactivated successfully.
Oct 02 12:55:36 compute-0 podman[358625]: 2025-10-02 12:55:36.854338832 +0000 UTC m=+0.056072667 container create ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:36 compute-0 systemd[1]: Started libpod-conmon-ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596.scope.
Oct 02 12:55:36 compute-0 podman[358625]: 2025-10-02 12:55:36.826652503 +0000 UTC m=+0.028386358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:36 compute-0 podman[358625]: 2025-10-02 12:55:36.967567171 +0000 UTC m=+0.169300996 container init ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:55:36 compute-0 podman[358625]: 2025-10-02 12:55:36.976275887 +0000 UTC m=+0.178009712 container start ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:55:36 compute-0 podman[358625]: 2025-10-02 12:55:36.98063059 +0000 UTC m=+0.182364475 container attach ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 12:55:37 compute-0 nova_compute[256940]: 2025-10-02 12:55:37.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:37 compute-0 nova_compute[256940]: 2025-10-02 12:55:37.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:55:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:55:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:37 compute-0 ceph-mon[73668]: pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 263 op/s
Oct 02 12:55:37 compute-0 magical_easley[358641]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:55:37 compute-0 magical_easley[358641]: --> relative data size: 1.0
Oct 02 12:55:37 compute-0 magical_easley[358641]: --> All data devices are unavailable
Oct 02 12:55:37 compute-0 systemd[1]: libpod-ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596.scope: Deactivated successfully.
Oct 02 12:55:37 compute-0 podman[358625]: 2025-10-02 12:55:37.838535739 +0000 UTC m=+1.040269594 container died ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:55:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.4 MiB/s wr, 269 op/s
Oct 02 12:55:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:55:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:55:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ee38b97c560a5bc49c6ac08e559fe16afb1ecef6809d2b1d3dcb3fc0a88f70-merged.mount: Deactivated successfully.
Oct 02 12:55:38 compute-0 podman[358625]: 2025-10-02 12:55:38.753388146 +0000 UTC m=+1.955122011 container remove ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:55:38 compute-0 systemd[1]: libpod-conmon-ae0f6c436f9eca773a8472f6417c432c61941813f88a945772a024a9d070a596.scope: Deactivated successfully.
Oct 02 12:55:38 compute-0 sudo[358517]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:38 compute-0 sudo[358671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:38 compute-0 sudo[358671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:38 compute-0 sudo[358671]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:38 compute-0 sudo[358696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:55:38 compute-0 sudo[358696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:38 compute-0 sudo[358696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:39 compute-0 ceph-mon[73668]: pgmap v2469: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.4 MiB/s wr, 269 op/s
Oct 02 12:55:39 compute-0 sudo[358721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:39 compute-0 sudo[358721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:39 compute-0 sudo[358721]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:39 compute-0 sudo[358746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:55:39 compute-0 sudo[358746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:55:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 40K writes, 164K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 13K syncs, 3.00 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8016 writes, 31K keys, 8016 commit groups, 1.0 writes per commit group, ingest: 32.90 MB, 0.05 MB/s
                                           Interval WAL: 8016 writes, 3112 syncs, 2.58 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.528573598 +0000 UTC m=+0.068357585 container create e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.493452367 +0000 UTC m=+0.033236354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:39 compute-0 systemd[1]: Started libpod-conmon-e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5.scope.
Oct 02 12:55:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.652461764 +0000 UTC m=+0.192245821 container init e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.660832591 +0000 UTC m=+0.200616558 container start e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 12:55:39 compute-0 sharp_williams[358827]: 167 167
Oct 02 12:55:39 compute-0 systemd[1]: libpod-e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5.scope: Deactivated successfully.
Oct 02 12:55:39 compute-0 conmon[358827]: conmon e7e8fb918189998e1604 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5.scope/container/memory.events
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.668167382 +0000 UTC m=+0.207951389 container attach e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.668927302 +0000 UTC m=+0.208711349 container died e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:55:39 compute-0 ovn_controller[148123]: 2025-10-02T12:55:39Z|00744|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:39 compute-0 ovn_controller[148123]: 2025-10-02T12:55:39Z|00745|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab348e8c4711fb0fa40e88f281f13a6315134346c99be14ee04c22edfc85db7-merged.mount: Deactivated successfully.
Oct 02 12:55:39 compute-0 nova_compute[256940]: 2025-10-02 12:55:39.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:39 compute-0 podman[358811]: 2025-10-02 12:55:39.745274163 +0000 UTC m=+0.285058130 container remove e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 12:55:39 compute-0 systemd[1]: libpod-conmon-e7e8fb918189998e160413cb78375d39a23a9cff8b48d894a6c8f492e1409ff5.scope: Deactivated successfully.
Oct 02 12:55:39 compute-0 ovn_controller[148123]: 2025-10-02T12:55:39Z|00746|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:39 compute-0 ovn_controller[148123]: 2025-10-02T12:55:39Z|00747|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:55:39 compute-0 nova_compute[256940]: 2025-10-02 12:55:39.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:39 compute-0 podman[358854]: 2025-10-02 12:55:39.965176811 +0000 UTC m=+0.057728379 container create 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 70 KiB/s wr, 189 op/s
Oct 02 12:55:40 compute-0 systemd[1]: Started libpod-conmon-8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea.scope.
Oct 02 12:55:40 compute-0 podman[358854]: 2025-10-02 12:55:39.928649753 +0000 UTC m=+0.021201351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56372b27ae8f0249adcbc3bfc139a3d2ba07f7b0b6202d9b92d0c6e73d889209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56372b27ae8f0249adcbc3bfc139a3d2ba07f7b0b6202d9b92d0c6e73d889209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56372b27ae8f0249adcbc3bfc139a3d2ba07f7b0b6202d9b92d0c6e73d889209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56372b27ae8f0249adcbc3bfc139a3d2ba07f7b0b6202d9b92d0c6e73d889209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:40 compute-0 podman[358854]: 2025-10-02 12:55:40.098075071 +0000 UTC m=+0.190626679 container init 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:55:40 compute-0 podman[358854]: 2025-10-02 12:55:40.105424312 +0000 UTC m=+0.197975880 container start 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:55:40 compute-0 podman[358854]: 2025-10-02 12:55:40.119643731 +0000 UTC m=+0.212195319 container attach 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:55:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:40.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004345548634375582 of space, bias 1.0, pg target 1.3036645903126745 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:55:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:55:40 compute-0 bold_cori[358871]: {
Oct 02 12:55:40 compute-0 bold_cori[358871]:     "1": [
Oct 02 12:55:40 compute-0 bold_cori[358871]:         {
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "devices": [
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "/dev/loop3"
Oct 02 12:55:40 compute-0 bold_cori[358871]:             ],
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "lv_name": "ceph_lv0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "lv_size": "7511998464",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "name": "ceph_lv0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "tags": {
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.cluster_name": "ceph",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.crush_device_class": "",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.encrypted": "0",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.osd_id": "1",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.type": "block",
Oct 02 12:55:40 compute-0 bold_cori[358871]:                 "ceph.vdo": "0"
Oct 02 12:55:40 compute-0 bold_cori[358871]:             },
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "type": "block",
Oct 02 12:55:40 compute-0 bold_cori[358871]:             "vg_name": "ceph_vg0"
Oct 02 12:55:40 compute-0 bold_cori[358871]:         }
Oct 02 12:55:40 compute-0 bold_cori[358871]:     ]
Oct 02 12:55:40 compute-0 bold_cori[358871]: }
Oct 02 12:55:40 compute-0 systemd[1]: libpod-8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea.scope: Deactivated successfully.
Oct 02 12:55:40 compute-0 podman[358881]: 2025-10-02 12:55:40.890518571 +0000 UTC m=+0.026450797 container died 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-56372b27ae8f0249adcbc3bfc139a3d2ba07f7b0b6202d9b92d0c6e73d889209-merged.mount: Deactivated successfully.
Oct 02 12:55:41 compute-0 podman[358881]: 2025-10-02 12:55:41.012512998 +0000 UTC m=+0.148445194 container remove 8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cori, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 12:55:41 compute-0 systemd[1]: libpod-conmon-8f31f339c5a95eeb7ff79cc61f2928582f03d095cc1543396acb29bcbd30ceea.scope: Deactivated successfully.
Oct 02 12:55:41 compute-0 sudo[358746]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:41 compute-0 sudo[358896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:41 compute-0 sudo[358896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:41 compute-0 sudo[358896]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:41 compute-0 sudo[358921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:55:41 compute-0 sudo[358921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:41 compute-0 sudo[358921]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:41 compute-0 sudo[358946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:41 compute-0 sudo[358946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:41 compute-0 sudo[358946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:41 compute-0 sudo[358971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:55:41 compute-0 sudo[358971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:41 compute-0 ceph-mon[73668]: pgmap v2470: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 70 KiB/s wr, 189 op/s
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.71333611 +0000 UTC m=+0.082513943 container create 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.656804772 +0000 UTC m=+0.025982645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:41 compute-0 systemd[1]: Started libpod-conmon-66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065.scope.
Oct 02 12:55:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.82235758 +0000 UTC m=+0.191535443 container init 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.830521042 +0000 UTC m=+0.199698885 container start 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.83393757 +0000 UTC m=+0.203115413 container attach 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:55:41 compute-0 mystifying_tharp[359052]: 167 167
Oct 02 12:55:41 compute-0 systemd[1]: libpod-66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065.scope: Deactivated successfully.
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.83739349 +0000 UTC m=+0.206571353 container died 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 12:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-411067a1f89be40d0f95494abc389b102d8928ccce10b34cb6a87e972675cf77-merged.mount: Deactivated successfully.
Oct 02 12:55:41 compute-0 podman[359036]: 2025-10-02 12:55:41.911033882 +0000 UTC m=+0.280211725 container remove 66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 12:55:41 compute-0 systemd[1]: libpod-conmon-66ef8f1d128904cdf8975a1343f1d2ade04d9bd72f8d01f0d2af80b5f8faa065.scope: Deactivated successfully.
Oct 02 12:55:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 133 op/s
Oct 02 12:55:42 compute-0 podman[359076]: 2025-10-02 12:55:42.094900794 +0000 UTC m=+0.039793594 container create d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:55:42 compute-0 systemd[1]: Started libpod-conmon-d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd.scope.
Oct 02 12:55:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c42c411a8490c7e16750a4efc49c32caa2d97bb5339622ee7270363b8b3400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c42c411a8490c7e16750a4efc49c32caa2d97bb5339622ee7270363b8b3400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c42c411a8490c7e16750a4efc49c32caa2d97bb5339622ee7270363b8b3400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c42c411a8490c7e16750a4efc49c32caa2d97bb5339622ee7270363b8b3400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:42 compute-0 podman[359076]: 2025-10-02 12:55:42.075916952 +0000 UTC m=+0.020809772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:55:42 compute-0 podman[359076]: 2025-10-02 12:55:42.208804021 +0000 UTC m=+0.153696851 container init d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 12:55:42 compute-0 podman[359076]: 2025-10-02 12:55:42.216228664 +0000 UTC m=+0.161121464 container start d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 12:55:42 compute-0 podman[359076]: 2025-10-02 12:55:42.258906211 +0000 UTC m=+0.203799041 container attach d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:55:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:42.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:42 compute-0 nova_compute[256940]: 2025-10-02 12:55:42.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:42 compute-0 nova_compute[256940]: 2025-10-02 12:55:42.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1353320155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]: {
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:         "osd_id": 1,
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:         "type": "bluestore"
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]:     }
Oct 02 12:55:43 compute-0 flamboyant_franklin[359092]: }
Oct 02 12:55:43 compute-0 systemd[1]: libpod-d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd.scope: Deactivated successfully.
Oct 02 12:55:43 compute-0 conmon[359092]: conmon d569f183116c945063cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd.scope/container/memory.events
Oct 02 12:55:43 compute-0 podman[359076]: 2025-10-02 12:55:43.063574308 +0000 UTC m=+1.008467108 container died d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c42c411a8490c7e16750a4efc49c32caa2d97bb5339622ee7270363b8b3400-merged.mount: Deactivated successfully.
Oct 02 12:55:43 compute-0 podman[359076]: 2025-10-02 12:55:43.211760775 +0000 UTC m=+1.156653565 container remove d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 12:55:43 compute-0 systemd[1]: libpod-conmon-d569f183116c945063cbff818d9fc2a47f973a9591f63c32ee3e09bff25c36fd.scope: Deactivated successfully.
Oct 02 12:55:43 compute-0 sudo[358971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:55:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:55:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 17 KiB/s wr, 64 op/s
Oct 02 12:55:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 28881897-779e-483c-8cd3-5a2e59530d87 does not exist
Oct 02 12:55:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6f182e83-6909-44da-bd3c-5ef6fa46e210 does not exist
Oct 02 12:55:43 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c6a62b97-9839-4a65-a84f-e36fadf854e1 does not exist
Oct 02 12:55:44 compute-0 sudo[359126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:44 compute-0 sudo[359126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:44 compute-0 sudo[359126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:44 compute-0 sudo[359163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:55:44 compute-0 podman[359150]: 2025-10-02 12:55:44.173031197 +0000 UTC m=+0.078323454 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:44 compute-0 sudo[359163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:44 compute-0 sudo[359163]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:44 compute-0 ceph-mon[73668]: pgmap v2471: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 133 op/s
Oct 02 12:55:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:44 compute-0 podman[359151]: 2025-10-02 12:55:44.217764278 +0000 UTC m=+0.122282655 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:45 compute-0 ceph-mon[73668]: pgmap v2472: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 17 KiB/s wr, 64 op/s
Oct 02 12:55:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 18 KiB/s wr, 64 op/s
Oct 02 12:55:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:46.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:46 compute-0 sudo[359225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:46 compute-0 sudo[359225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:46 compute-0 sudo[359225]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:46 compute-0 sudo[359250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:46 compute-0 sudo[359250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:46 compute-0 sudo[359250]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:47.574 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:47.575 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:55:47 compute-0 ceph-mon[73668]: pgmap v2473: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 18 KiB/s wr, 64 op/s
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.703 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.704 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:47 compute-0 nova_compute[256940]: 2025-10-02 12:55:47.723 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:55:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 12:55:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 255 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 76 KiB/s wr, 45 op/s
Oct 02 12:55:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.037 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.038 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.050 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.050 2 INFO nova.compute.claims [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:55:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.186 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:48.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609859162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.608 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.614 2 DEBUG nova.compute.provider_tree [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.629 2 DEBUG nova.scheduler.client.report [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.655 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.656 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:55:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/609859162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.704 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.705 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.727 2 INFO nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.744 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.895 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.897 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.898 2 INFO nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Creating image(s)
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.921 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.947 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.971 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:48 compute-0 nova_compute[256940]: 2025-10-02 12:55:48.975 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.043 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.044 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.045 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.045 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.069 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.073 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 74e69165-006c-41de-82af-bc44cba0a843_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:49 compute-0 nova_compute[256940]: 2025-10-02 12:55:49.103 2 DEBUG nova.policy [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00be63ea13c84e3d9419078865524099', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:49 compute-0 ceph-mon[73668]: pgmap v2474: 305 pgs: 305 active+clean; 255 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 76 KiB/s wr, 45 op/s
Oct 02 12:55:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/959533747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/153925789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 24 op/s
Oct 02 12:55:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:50 compute-0 ceph-mon[73668]: pgmap v2475: 305 pgs: 305 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 24 op/s
Oct 02 12:55:51 compute-0 nova_compute[256940]: 2025-10-02 12:55:51.265 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Successfully created port: 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:51 compute-0 nova_compute[256940]: 2025-10-02 12:55:51.820 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 74e69165-006c-41de-82af-bc44cba0a843_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:51 compute-0 nova_compute[256940]: 2025-10-02 12:55:51.922 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] resizing rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 48 op/s
Oct 02 12:55:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.580 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Successfully updated port: 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.610 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.610 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.611 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.732 2 DEBUG nova.compute.manager [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-changed-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.733 2 DEBUG nova.compute.manager [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Refreshing instance network info cache due to event network-changed-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.733 2 DEBUG oslo_concurrency.lockutils [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:52 compute-0 nova_compute[256940]: 2025-10-02 12:55:52.875 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:55:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.313 2 DEBUG nova.objects.instance [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'migration_context' on Instance uuid 74e69165-006c-41de-82af-bc44cba0a843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.345 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.345 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Ensure instance console log exists: /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.346 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.346 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:53 compute-0 nova_compute[256940]: 2025-10-02 12:55:53.346 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:53 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:53.577 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:53 compute-0 ceph-mon[73668]: pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 48 op/s
Oct 02 12:55:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 12:55:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:55:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.717 2 DEBUG nova.network.neutron [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Updating instance_info_cache with network_info: [{"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.751 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.752 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Instance network_info: |[{"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.752 2 DEBUG oslo_concurrency.lockutils [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.752 2 DEBUG nova.network.neutron [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Refreshing network info cache for port 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.756 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Start _get_guest_xml network_info=[{"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.761 2 WARNING nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.766 2 DEBUG nova.virt.libvirt.host [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.766 2 DEBUG nova.virt.libvirt.host [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.773 2 DEBUG nova.virt.libvirt.host [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.774 2 DEBUG nova.virt.libvirt.host [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.775 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.775 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.775 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.775 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.776 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.776 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.776 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.776 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.776 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.777 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.777 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.777 2 DEBUG nova.virt.hardware [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:54 compute-0 nova_compute[256940]: 2025-10-02 12:55:54.779 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535558062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.223 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.251 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.256 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:55 compute-0 ceph-mon[73668]: pgmap v2477: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 12:55:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336472609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.725 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.727 2 DEBUG nova.virt.libvirt.vif [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-972017235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-972017235',id=152,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-4cwplwm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:48Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=74e69165-006c-41de-82af-bc44cba0a843,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.727 2 DEBUG nova.network.os_vif_util [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.728 2 DEBUG nova.network.os_vif_util [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.729 2 DEBUG nova.objects.instance [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74e69165-006c-41de-82af-bc44cba0a843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.744 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <uuid>74e69165-006c-41de-82af-bc44cba0a843</uuid>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <name>instance-00000098</name>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-972017235</nova:name>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:55:54</nova:creationTime>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <nova:port uuid="1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad">
Oct 02 12:55:55 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <system>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="serial">74e69165-006c-41de-82af-bc44cba0a843</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="uuid">74e69165-006c-41de-82af-bc44cba0a843</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </system>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <os>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </os>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <features>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </features>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74e69165-006c-41de-82af-bc44cba0a843_disk">
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </source>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/74e69165-006c-41de-82af-bc44cba0a843_disk.config">
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </source>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:55:55 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:ae:91:25"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <target dev="tap1b58c8c0-ff"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/console.log" append="off"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <video>
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </video>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:55:55 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:55:55 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:55:55 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:55:55 compute-0 nova_compute[256940]: </domain>
Oct 02 12:55:55 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.745 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Preparing to wait for external event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.745 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.746 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.746 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.747 2 DEBUG nova.virt.libvirt.vif [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-972017235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-972017235',id=152,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-4cwplwm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:48Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=74e69165-006c-41de-82af-bc44cba0a843,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.747 2 DEBUG nova.network.os_vif_util [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.747 2 DEBUG nova.network.os_vif_util [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.748 2 DEBUG os_vif [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.753 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b58c8c0-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.753 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b58c8c0-ff, col_values=(('external_ids', {'iface-id': '1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:91:25', 'vm-uuid': '74e69165-006c-41de-82af-bc44cba0a843'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:55 compute-0 NetworkManager[44981]: <info>  [1759409755.7557] manager: (tap1b58c8c0-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.766 2 INFO os_vif [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff')
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.826 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.827 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.827 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:ae:91:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.827 2 INFO nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Using config drive
Oct 02 12:55:55 compute-0 nova_compute[256940]: 2025-10-02 12:55:55.854 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:55:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:56.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/535558062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2336472609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.649 2 INFO nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Creating config drive at /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.654 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpftmkw5dr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.788 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpftmkw5dr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.828 2 DEBUG nova.storage.rbd_utils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 74e69165-006c-41de-82af-bc44cba0a843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.835 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config 74e69165-006c-41de-82af-bc44cba0a843_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.936 2 DEBUG nova.network.neutron [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Updated VIF entry in instance network info cache for port 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.937 2 DEBUG nova.network.neutron [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Updating instance_info_cache with network_info: [{"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:56 compute-0 nova_compute[256940]: 2025-10-02 12:55:56.958 2 DEBUG oslo_concurrency.lockutils [req-842aff3e-c597-436f-90b7-9c7ccf7a199b req-089e9da7-290e-4dd4-bd47-a38337aff020 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74e69165-006c-41de-82af-bc44cba0a843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.069 2 DEBUG oslo_concurrency.processutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config 74e69165-006c-41de-82af-bc44cba0a843_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.069 2 INFO nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Deleting local config drive /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843/disk.config because it was imported into RBD.
Oct 02 12:55:57 compute-0 NetworkManager[44981]: <info>  [1759409757.1319] manager: (tap1b58c8c0-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Oct 02 12:55:57 compute-0 kernel: tap1b58c8c0-ff: entered promiscuous mode
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 ovn_controller[148123]: 2025-10-02T12:55:57Z|00748|binding|INFO|Claiming lport 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad for this chassis.
Oct 02 12:55:57 compute-0 ovn_controller[148123]: 2025-10-02T12:55:57Z|00749|binding|INFO|1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad: Claiming fa:16:3e:ae:91:25 10.100.0.12
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.144 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:91:25 10.100.0.12'], port_security=['fa:16:3e:ae:91:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '74e69165-006c-41de-82af-bc44cba0a843', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.145 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.147 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.166 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbc4ed8-84d4-4500-a6dc-e92a534acb74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 systemd-udevd[359604]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:57 compute-0 ovn_controller[148123]: 2025-10-02T12:55:57Z|00750|binding|INFO|Setting lport 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad ovn-installed in OVS
Oct 02 12:55:57 compute-0 ovn_controller[148123]: 2025-10-02T12:55:57Z|00751|binding|INFO|Setting lport 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad up in Southbound
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 systemd-machined[210927]: New machine qemu-79-instance-00000098.
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 NetworkManager[44981]: <info>  [1759409757.1844] device (tap1b58c8c0-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:57 compute-0 NetworkManager[44981]: <info>  [1759409757.1853] device (tap1b58c8c0-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:57 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000098.
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.203 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ff34b101-1430-457e-bd24-42c99304413f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.207 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[528510bc-5380-4469-9a7c-152eec64f231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.236 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a65e7293-57d9-4730-8718-1dc118fbba11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.255 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26cf4c04-b1ef-48b5-a793-6caa9774f296]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752111, 'reachable_time': 18694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359616, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.274 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4cab2467-8b49-4bbe-b035-ee2c931c2c33]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752124, 'tstamp': 752124}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359618, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752127, 'tstamp': 752127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359618, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.275 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.278 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.278 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.278 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:55:57.279 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.490 2 DEBUG nova.compute.manager [req-9df4fe27-d8cf-4508-b56e-937ad53c6f2b req-8cfd7c64-6c04-4383-8a9a-a7bef8555075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.490 2 DEBUG oslo_concurrency.lockutils [req-9df4fe27-d8cf-4508-b56e-937ad53c6f2b req-8cfd7c64-6c04-4383-8a9a-a7bef8555075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.490 2 DEBUG oslo_concurrency.lockutils [req-9df4fe27-d8cf-4508-b56e-937ad53c6f2b req-8cfd7c64-6c04-4383-8a9a-a7bef8555075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.491 2 DEBUG oslo_concurrency.lockutils [req-9df4fe27-d8cf-4508-b56e-937ad53c6f2b req-8cfd7c64-6c04-4383-8a9a-a7bef8555075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:57 compute-0 nova_compute[256940]: 2025-10-02 12:55:57.491 2 DEBUG nova.compute.manager [req-9df4fe27-d8cf-4508-b56e-937ad53c6f2b req-8cfd7c64-6c04-4383-8a9a-a7bef8555075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Processing event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:57 compute-0 ceph-mon[73668]: pgmap v2478: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:55:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3254957633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 02 12:55:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:58.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.278 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.280 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409758.2795475, 74e69165-006c-41de-82af-bc44cba0a843 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.280 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] VM Started (Lifecycle Event)
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.283 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.287 2 INFO nova.virt.libvirt.driver [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Instance spawned successfully.
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.287 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:55:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:55:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:58.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.304 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.310 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.313 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.314 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.314 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.314 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.315 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.315 2 DEBUG nova.virt.libvirt.driver [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.355 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.355 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409758.279776, 74e69165-006c-41de-82af-bc44cba0a843 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.355 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] VM Paused (Lifecycle Event)
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.375 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.380 2 INFO nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Took 9.48 seconds to spawn the instance on the hypervisor.
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.381 2 DEBUG nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.382 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409758.2823553, 74e69165-006c-41de-82af-bc44cba0a843 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.382 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] VM Resumed (Lifecycle Event)
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.420 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.422 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.456 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:55:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.476 2 INFO nova.compute.manager [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Took 10.48 seconds to build instance.
Oct 02 12:55:58 compute-0 nova_compute[256940]: 2025-10-02 12:55:58.507 2 DEBUG oslo_concurrency.lockutils [None req-ed7d8c82-64f2-4c0e-88c9-fc61ee0232ba 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 NetworkManager[44981]: <info>  [1759409759.0855] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct 02 12:55:59 compute-0 NetworkManager[44981]: <info>  [1759409759.0861] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 ovn_controller[148123]: 2025-10-02T12:55:59Z|00752|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:59 compute-0 ovn_controller[148123]: 2025-10-02T12:55:59Z|00753|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-0 podman[359663]: 2025-10-02 12:55:59.414011703 +0000 UTC m=+0.077806121 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:55:59 compute-0 podman[359664]: 2025-10-02 12:55:59.452138122 +0000 UTC m=+0.111609888 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.601 2 DEBUG nova.compute.manager [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.601 2 DEBUG oslo_concurrency.lockutils [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.601 2 DEBUG oslo_concurrency.lockutils [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.602 2 DEBUG oslo_concurrency.lockutils [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.602 2 DEBUG nova.compute.manager [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] No waiting events found dispatching network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:59 compute-0 nova_compute[256940]: 2025-10-02 12:55:59.602 2 WARNING nova.compute.manager [req-6cfe6837-d902-4827-8464-b25e5fcacf67 req-aeaa77c1-86ee-44c4-83f5-43f936ed1cf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received unexpected event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad for instance with vm_state active and task_state None.
Oct 02 12:55:59 compute-0 ceph-mon[73668]: pgmap v2479: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 02 12:55:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 359 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Oct 02 12:56:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:00 compute-0 nova_compute[256940]: 2025-10-02 12:56:00.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:00 compute-0 ceph-mon[73668]: pgmap v2480: 305 pgs: 305 active+clean; 359 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Oct 02 12:56:00 compute-0 nova_compute[256940]: 2025-10-02 12:56:00.968 2 DEBUG nova.compute.manager [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:01 compute-0 nova_compute[256940]: 2025-10-02 12:56:01.039 2 INFO nova.compute.manager [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] instance snapshotting
Oct 02 12:56:01 compute-0 nova_compute[256940]: 2025-10-02 12:56:01.349 2 INFO nova.virt.libvirt.driver [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Beginning live snapshot process
Oct 02 12:56:01 compute-0 nova_compute[256940]: 2025-10-02 12:56:01.505 2 DEBUG nova.virt.libvirt.imagebackend [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:56:01 compute-0 nova_compute[256940]: 2025-10-02 12:56:01.726 2 DEBUG nova.storage.rbd_utils [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] creating snapshot(1d3915064f5549fd8b7eabcc3f5d0160) on rbd image(74e69165-006c-41de-82af-bc44cba0a843_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:56:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 203 op/s
Oct 02 12:56:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Oct 02 12:56:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Oct 02 12:56:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Oct 02 12:56:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-0 nova_compute[256940]: 2025-10-02 12:56:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:03 compute-0 ceph-mon[73668]: pgmap v2481: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 203 op/s
Oct 02 12:56:03 compute-0 ceph-mon[73668]: osdmap e343: 3 total, 3 up, 3 in
Oct 02 12:56:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3350021409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:03 compute-0 nova_compute[256940]: 2025-10-02 12:56:03.462 2 DEBUG nova.storage.rbd_utils [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] cloning vms/74e69165-006c-41de-82af-bc44cba0a843_disk@1d3915064f5549fd8b7eabcc3f5d0160 to images/dbd87d84-17a6-4523-8be2-18de440d9003 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:56:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 12:56:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:04.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1541807916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3883695521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:56:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:56:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:56:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:56:05 compute-0 ceph-mon[73668]: pgmap v2483: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 12:56:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:56:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:56:05 compute-0 nova_compute[256940]: 2025-10-02 12:56:05.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 12:56:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:06.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:06.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:06 compute-0 sudo[359795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:06 compute-0 sudo[359795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:06 compute-0 sudo[359795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:07 compute-0 sudo[359820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:07 compute-0 sudo[359820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:07 compute-0 sudo[359820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:07 compute-0 nova_compute[256940]: 2025-10-02 12:56:07.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:07 compute-0 nova_compute[256940]: 2025-10-02 12:56:07.714 2 DEBUG nova.storage.rbd_utils [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] flattening images/dbd87d84-17a6-4523-8be2-18de440d9003 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:56:07 compute-0 ceph-mon[73668]: pgmap v2484: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 12:56:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 173 op/s
Oct 02 12:56:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:08 compute-0 ceph-mon[73668]: pgmap v2485: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 173 op/s
Oct 02 12:56:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 158 op/s
Oct 02 12:56:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:10 compute-0 nova_compute[256940]: 2025-10-02 12:56:10.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:10.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2440780115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3013568106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1041890516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-0 nova_compute[256940]: 2025-10-02 12:56:10.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:11 compute-0 ceph-mon[73668]: pgmap v2486: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 158 op/s
Oct 02 12:56:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4169217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.3 MiB/s wr, 181 op/s
Oct 02 12:56:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:12 compute-0 nova_compute[256940]: 2025-10-02 12:56:12.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:13 compute-0 ceph-mon[73668]: pgmap v2487: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.3 MiB/s wr, 181 op/s
Oct 02 12:56:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1941349339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 02 12:56:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:14 compute-0 nova_compute[256940]: 2025-10-02 12:56:14.253 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-0 nova_compute[256940]: 2025-10-02 12:56:14.256 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-0 nova_compute[256940]: 2025-10-02 12:56:14.257 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:56:14 compute-0 nova_compute[256940]: 2025-10-02 12:56:14.257 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:14 compute-0 podman[359885]: 2025-10-02 12:56:14.455079731 +0000 UTC m=+0.111090624 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 12:56:14 compute-0 podman[359886]: 2025-10-02 12:56:14.462027852 +0000 UTC m=+0.115628553 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 12:56:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3594326844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.304 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.305 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.305 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.305 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.306 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275587219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.792 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.897 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.898 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.901 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.901 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.904 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 nova_compute[256940]: 2025-10-02 12:56:15.904 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 516 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.119 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.120 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3725MB free_disk=20.78989028930664GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.121 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.121 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:16.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.222 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e22a7944-9200-4204-a219-3f7bd720667b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.224 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance a9564d79-5032-4cde-aff3-447400d7dc2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.224 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 74e69165-006c-41de-82af-bc44cba0a843 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.225 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.225 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.243 2 DEBUG nova.storage.rbd_utils [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] removing snapshot(1d3915064f5549fd8b7eabcc3f5d0160) on rbd image(74e69165-006c-41de-82af-bc44cba0a843_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:56:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:16.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.327 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:56:16 compute-0 ceph-mon[73668]: pgmap v2488: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 02 12:56:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4275587219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.395 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.395 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.408 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.430 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:56:16 compute-0 nova_compute[256940]: 2025-10-02 12:56:16.498 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274072784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.001 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.007 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.030 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.063 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.064 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:17 compute-0 nova_compute[256940]: 2025-10-02 12:56:17.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-0 ceph-mon[73668]: pgmap v2489: 305 pgs: 305 active+clean; 516 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Oct 02 12:56:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1274072784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.4 MiB/s wr, 244 op/s
Oct 02 12:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:18 compute-0 ovn_controller[148123]: 2025-10-02T12:56:18Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:91:25 10.100.0.12
Oct 02 12:56:18 compute-0 ovn_controller[148123]: 2025-10-02T12:56:18Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:91:25 10.100.0.12
Oct 02 12:56:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:18.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Oct 02 12:56:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Oct 02 12:56:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Oct 02 12:56:19 compute-0 nova_compute[256940]: 2025-10-02 12:56:19.022 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:19 compute-0 nova_compute[256940]: 2025-10-02 12:56:19.023 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:19 compute-0 nova_compute[256940]: 2025-10-02 12:56:19.024 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:19 compute-0 nova_compute[256940]: 2025-10-02 12:56:19.294 2 DEBUG nova.storage.rbd_utils [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] creating snapshot(snap) on rbd image(dbd87d84-17a6-4523-8be2-18de440d9003) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:56:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Oct 02 12:56:19 compute-0 ceph-mon[73668]: pgmap v2490: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.4 MiB/s wr, 244 op/s
Oct 02 12:56:19 compute-0 ceph-mon[73668]: osdmap e344: 3 total, 3 up, 3 in
Oct 02 12:56:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Oct 02 12:56:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Oct 02 12:56:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Oct 02 12:56:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:20.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:20 compute-0 nova_compute[256940]: 2025-10-02 12:56:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:20 compute-0 ceph-mon[73668]: osdmap e345: 3 total, 3 up, 3 in
Oct 02 12:56:21 compute-0 nova_compute[256940]: 2025-10-02 12:56:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 264 op/s
Oct 02 12:56:22 compute-0 ceph-mon[73668]: pgmap v2493: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Oct 02 12:56:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:22.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:22.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-0 nova_compute[256940]: 2025-10-02 12:56:22.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Oct 02 12:56:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Oct 02 12:56:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Oct 02 12:56:23 compute-0 ceph-mon[73668]: pgmap v2494: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 264 op/s
Oct 02 12:56:23 compute-0 ceph-mon[73668]: osdmap e346: 3 total, 3 up, 3 in
Oct 02 12:56:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 830 KiB/s wr, 136 op/s
Oct 02 12:56:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:24.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:25 compute-0 nova_compute[256940]: 2025-10-02 12:56:25.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:25 compute-0 nova_compute[256940]: 2025-10-02 12:56:25.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:56:25 compute-0 ceph-mon[73668]: pgmap v2496: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 830 KiB/s wr, 136 op/s
Oct 02 12:56:25 compute-0 nova_compute[256940]: 2025-10-02 12:56:25.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 464 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct 02 12:56:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:26.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:26.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:26 compute-0 nova_compute[256940]: 2025-10-02 12:56:26.348 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:26 compute-0 nova_compute[256940]: 2025-10-02 12:56:26.348 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:26 compute-0 nova_compute[256940]: 2025-10-02 12:56:26.349 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:56:26 compute-0 nova_compute[256940]: 2025-10-02 12:56:26.435 2 INFO nova.virt.libvirt.driver [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Snapshot image upload complete
Oct 02 12:56:26 compute-0 nova_compute[256940]: 2025-10-02 12:56:26.436 2 INFO nova.compute.manager [None req-a90cca29-e095-4423-a28d-66b6511deff0 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Took 25.39 seconds to snapshot the instance on the hypervisor.
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:26.493 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:26.494 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:26.494 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:27 compute-0 sudo[360001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:27 compute-0 sudo[360001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:27 compute-0 sudo[360001]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:27 compute-0 sudo[360026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:27 compute-0 sudo[360026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:27 compute-0 sudo[360026]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:27 compute-0 nova_compute[256940]: 2025-10-02 12:56:27.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:27 compute-0 nova_compute[256940]: 2025-10-02 12:56:27.887 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [{"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:27 compute-0 nova_compute[256940]: 2025-10-02 12:56:27.910 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-a9564d79-5032-4cde-aff3-447400d7dc2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:27 compute-0 nova_compute[256940]: 2025-10-02 12:56:27.911 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:56:27 compute-0 ceph-mon[73668]: pgmap v2497: 305 pgs: 305 active+clean; 464 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct 02 12:56:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1580477687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3334793604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Oct 02 12:56:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:28.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.381 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.382 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.383 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.383 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.383 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.384 2 INFO nova.compute.manager [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Terminating instance
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.386 2 DEBUG nova.compute.manager [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:56:28
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log']
Oct 02 12:56:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:56:28 compute-0 nova_compute[256940]: 2025-10-02 12:56:28.905 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:29 compute-0 kernel: tap20aff22c-83 (unregistering): left promiscuous mode
Oct 02 12:56:29 compute-0 NetworkManager[44981]: <info>  [1759409789.4123] device (tap20aff22c-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00754|binding|INFO|Releasing lport 20aff22c-836c-41ba-b60f-462a819f7822 from this chassis (sb_readonly=0)
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00755|binding|INFO|Setting lport 20aff22c-836c-41ba-b60f-462a819f7822 down in Southbound
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00756|binding|INFO|Removing iface tap20aff22c-83 ovn-installed in OVS
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.439 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:d0:16 10.100.0.8'], port_security=['fa:16:3e:49:d0:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a9564d79-5032-4cde-aff3-447400d7dc2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9fda5bf-9101-4cc5-8c4b-2704f6c0a12e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20aff22c-836c-41ba-b60f-462a819f7822) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.440 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20aff22c-836c-41ba-b60f-462a819f7822 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 unbound from our chassis
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.442 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.444 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c1990645-b4e2-47d6-ad8e-b2edb08022aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.445 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace which is not needed anymore
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 02 12:56:29 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000095.scope: Consumed 16.538s CPU time.
Oct 02 12:56:29 compute-0 systemd-machined[210927]: Machine qemu-78-instance-00000095 terminated.
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:56:29 compute-0 podman[360054]: 2025-10-02 12:56:29.547819537 +0000 UTC m=+0.074368831 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:56:29 compute-0 podman[360056]: 2025-10-02 12:56:29.550184639 +0000 UTC m=+0.072490363 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:56:29 compute-0 ceph-mon[73668]: pgmap v2498: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [NOTICE]   (358104) : haproxy version is 2.8.14-c23fe91
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [NOTICE]   (358104) : path to executable is /usr/sbin/haproxy
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [WARNING]  (358104) : Exiting Master process...
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [WARNING]  (358104) : Exiting Master process...
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [ALERT]    (358104) : Current worker (358106) exited with code 143 (Terminated)
Oct 02 12:56:29 compute-0 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[358100]: [WARNING]  (358104) : All workers exited. Exiting... (0)
Oct 02 12:56:29 compute-0 systemd[1]: libpod-181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf.scope: Deactivated successfully.
Oct 02 12:56:29 compute-0 podman[360106]: 2025-10-02 12:56:29.589723525 +0000 UTC m=+0.044385403 container died 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:29 compute-0 kernel: tap20aff22c-83: entered promiscuous mode
Oct 02 12:56:29 compute-0 systemd-udevd[360083]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:29 compute-0 NetworkManager[44981]: <info>  [1759409789.6050] manager: (tap20aff22c-83): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00757|binding|INFO|Claiming lport 20aff22c-836c-41ba-b60f-462a819f7822 for this chassis.
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00758|binding|INFO|20aff22c-836c-41ba-b60f-462a819f7822: Claiming fa:16:3e:49:d0:16 10.100.0.8
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 kernel: tap20aff22c-83 (unregistering): left promiscuous mode
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00759|binding|INFO|Setting lport 20aff22c-836c-41ba-b60f-462a819f7822 ovn-installed in OVS
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00760|if_status|INFO|Dropped 5 log messages in last 611 seconds (most recently, 611 seconds ago) due to excessive rate
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00761|if_status|INFO|Not setting lport 20aff22c-836c-41ba-b60f-462a819f7822 down as sb is readonly
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: hostname: compute-0
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.639 2 INFO nova.virt.libvirt.driver [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Instance destroyed successfully.
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.640 2 DEBUG nova.objects.instance [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid a9564d79-5032-4cde-aff3-447400d7dc2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf-userdata-shm.mount: Deactivated successfully.
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca1148ce710efd271df90f5cfe29ada769102eba9b1ca98508e41744916da34-merged.mount: Deactivated successfully.
Oct 02 12:56:29 compute-0 virtnodedevd[257869]: ethtool ioctl error on tap20aff22c-83: No such device
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.688 2 DEBUG nova.compute.manager [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-unplugged-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.688 2 DEBUG oslo_concurrency.lockutils [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.688 2 DEBUG oslo_concurrency.lockutils [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:29 compute-0 ovn_controller[148123]: 2025-10-02T12:56:29Z|00762|binding|INFO|Releasing lport 20aff22c-836c-41ba-b60f-462a819f7822 from this chassis (sb_readonly=0)
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.690 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:d0:16 10.100.0.8'], port_security=['fa:16:3e:49:d0:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a9564d79-5032-4cde-aff3-447400d7dc2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9fda5bf-9101-4cc5-8c4b-2704f6c0a12e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20aff22c-836c-41ba-b60f-462a819f7822) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.689 2 DEBUG oslo_concurrency.lockutils [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.689 2 DEBUG nova.compute.manager [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] No waiting events found dispatching network-vif-unplugged-20aff22c-836c-41ba-b60f-462a819f7822 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.689 2 DEBUG nova.compute.manager [req-76c320a6-cb93-4a96-a5dc-19e460299126 req-228a6b08-7f05-413e-bf46-e7b0e9e7ccb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-unplugged-20aff22c-836c-41ba-b60f-462a819f7822 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.700 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:d0:16 10.100.0.8'], port_security=['fa:16:3e:49:d0:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a9564d79-5032-4cde-aff3-447400d7dc2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9fda5bf-9101-4cc5-8c4b-2704f6c0a12e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20aff22c-836c-41ba-b60f-462a819f7822) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.704 2 DEBUG nova.virt.libvirt.vif [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1284261477',display_name='tempest-TestNetworkBasicOps-server-1284261477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1284261477',id=149,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElQstkE41+GOd6moF9K3IstLgWEjClG13CyAdWSb2M73noXL86Vl5KH68ZUi95kvcmAkcIhDh++GVbB3a0pYM7imf/5hyDotrkezzAQqt6QG9gmL3EnE0E4J4p0Bwq+iw==',key_name='tempest-TestNetworkBasicOps-422991809',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-fqh11rud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:10Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=a9564d79-5032-4cde-aff3-447400d7dc2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.704 2 DEBUG nova.network.os_vif_util [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "20aff22c-836c-41ba-b60f-462a819f7822", "address": "fa:16:3e:49:d0:16", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20aff22c-83", "ovs_interfaceid": "20aff22c-836c-41ba-b60f-462a819f7822", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.705 2 DEBUG nova.network.os_vif_util [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.706 2 DEBUG os_vif [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20aff22c-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.720 2 INFO os_vif [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:d0:16,bridge_name='br-int',has_traffic_filtering=True,id=20aff22c-836c-41ba-b60f-462a819f7822,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20aff22c-83')
Oct 02 12:56:29 compute-0 podman[360106]: 2025-10-02 12:56:29.72203812 +0000 UTC m=+0.176699998 container cleanup 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:56:29 compute-0 systemd[1]: libpod-conmon-181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf.scope: Deactivated successfully.
Oct 02 12:56:29 compute-0 podman[360170]: 2025-10-02 12:56:29.840239868 +0000 UTC m=+0.090670265 container remove 181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.846 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c3903a1d-5339-4f24-bd4a-8cdd37a77b95]: (4, ('Thu Oct  2 12:56:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf)\n181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf\nThu Oct  2 12:56:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf)\n181add9f06fc7104fe1510664277350f6e0d7fcc7248eff738d3384e22ca72bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.847 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7452e024-7afa-4920-846a-81ee7dd3c4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.848 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 kernel: tap24ea8f37-70: left promiscuous mode
Oct 02 12:56:29 compute-0 nova_compute[256940]: 2025-10-02 12:56:29.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.866 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[29e5b0cb-a835-4c6d-8464-3fa9be26eb66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.899 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5313fc6a-894c-46c3-a3b4-0795ceed7be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.900 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[89d9148b-b26b-41be-ac9a-a5f2ec5718c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.917 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5af8d4ef-e612-48e5-a01a-22573f4dbe59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760459, 'reachable_time': 20290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360196, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d24ea8f37\x2d7508\x2d4c75\x2dae14\x2de4cc7b9f8e97.mount: Deactivated successfully.
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.920 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.920 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa573d6-33f4-4301-a186-3473b88c535f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.921 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20aff22c-836c-41ba-b60f-462a819f7822 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 unbound from our chassis
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.922 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.923 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[94870f7c-03b9-4544-a2b3-65b47305463d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.923 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20aff22c-836c-41ba-b60f-462a819f7822 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 unbound from our chassis
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.925 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:29.926 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[879d2ea5-8ac6-49ec-bc32-36a57d5c1e38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 267 op/s
Oct 02 12:56:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:30.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:30 compute-0 ovn_controller[148123]: 2025-10-02T12:56:30Z|00763|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:56:30 compute-0 nova_compute[256940]: 2025-10-02 12:56:30.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:30 compute-0 ovn_controller[148123]: 2025-10-02T12:56:30Z|00764|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:56:30 compute-0 nova_compute[256940]: 2025-10-02 12:56:30.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:31 compute-0 ceph-mon[73668]: pgmap v2499: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 267 op/s
Oct 02 12:56:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 206 op/s
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.109 2 DEBUG nova.compute.manager [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.109 2 DEBUG oslo_concurrency.lockutils [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.110 2 DEBUG oslo_concurrency.lockutils [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.110 2 DEBUG oslo_concurrency.lockutils [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.110 2 DEBUG nova.compute.manager [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] No waiting events found dispatching network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.111 2 WARNING nova.compute.manager [req-c800df7f-5c85-4bfc-ba20-3896cccf419e req-0558f6f3-45af-4ebc-be63-af13e339c4e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received unexpected event network-vif-plugged-20aff22c-836c-41ba-b60f-462a819f7822 for instance with vm_state active and task_state deleting.
Oct 02 12:56:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:32.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:32.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:32 compute-0 nova_compute[256940]: 2025-10-02 12:56:32.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1910028212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:33 compute-0 ceph-mon[73668]: pgmap v2500: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 206 op/s
Oct 02 12:56:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 188 op/s
Oct 02 12:56:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:34.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:34 compute-0 nova_compute[256940]: 2025-10-02 12:56:34.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.270 2 INFO nova.virt.libvirt.driver [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Deleting instance files /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f_del
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.271 2 INFO nova.virt.libvirt.driver [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Deletion of /var/lib/nova/instances/a9564d79-5032-4cde-aff3-447400d7dc2f_del complete
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.392 2 INFO nova.compute.manager [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Took 7.01 seconds to destroy the instance on the hypervisor.
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.393 2 DEBUG oslo.service.loopingcall [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.393 2 DEBUG nova.compute.manager [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:56:35 compute-0 nova_compute[256940]: 2025-10-02 12:56:35.393 2 DEBUG nova.network.neutron [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:56:35 compute-0 ceph-mon[73668]: pgmap v2501: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 188 op/s
Oct 02 12:56:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Oct 02 12:56:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:36.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:36.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:36 compute-0 nova_compute[256940]: 2025-10-02 12:56:36.719 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating tmpfile /var/lib/nova/instances/tmppyu_z2_l to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:56:36 compute-0 ceph-mon[73668]: pgmap v2502: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Oct 02 12:56:36 compute-0 nova_compute[256940]: 2025-10-02 12:56:36.982 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.001 2 DEBUG nova.network.neutron [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.030 2 INFO nova.compute.manager [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Took 1.64 seconds to deallocate network for instance.
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.076 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.076 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.092 2 DEBUG nova.compute.manager [req-aadbea04-d7b5-4359-bfdf-669d068d2346 req-57eb6730-b954-46af-b3bc-f2e3a3aa3566 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Received event network-vif-deleted-20aff22c-836c-41ba-b60f-462a819f7822 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.213 2 DEBUG oslo_concurrency.processutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869001422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.713 2 DEBUG oslo_concurrency.processutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.720 2 DEBUG nova.compute.provider_tree [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.740 2 DEBUG nova.scheduler.client.report [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.795 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.866 2 INFO nova.scheduler.client.report [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance a9564d79-5032-4cde-aff3-447400d7dc2f
Oct 02 12:56:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3869001422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:37 compute-0 nova_compute[256940]: 2025-10-02 12:56:37.949 2 DEBUG oslo_concurrency.lockutils [None req-ae0a09b3-5936-4a3f-aefd-ae6ea4f0a9f8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "a9564d79-5032-4cde-aff3-447400d7dc2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:56:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.059039) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798059125, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1983, "num_deletes": 261, "total_data_size": 3259024, "memory_usage": 3299536, "flush_reason": "Manual Compaction"}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798078222, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 3206826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53740, "largest_seqno": 55721, "table_properties": {"data_size": 3197927, "index_size": 5457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 3179726, "raw_average_value_size": 3382, "num_data_blocks": 236, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409625, "oldest_key_time": 1759409625, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 19245 microseconds, and 7454 cpu microseconds.
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.078285) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 3206826 bytes OK
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.078316) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.079750) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.079770) EVENT_LOG_v1 {"time_micros": 1759409798079763, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.079797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3250762, prev total WAL file size 3250762, number of live WAL files 2.
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.081157) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303133' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(3131KB)], [119(10MB)]
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798081223, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 13949924, "oldest_snapshot_seqno": -1}
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.113 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.142 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.143 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.143 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8560 keys, 13798438 bytes, temperature: kUnknown
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798155615, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 13798438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13739578, "index_size": 36348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 221324, "raw_average_key_size": 25, "raw_value_size": 13585848, "raw_average_value_size": 1587, "num_data_blocks": 1431, "num_entries": 8560, "num_filter_entries": 8560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.156034) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13798438 bytes
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.157351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 185.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.2 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 9103, records dropped: 543 output_compression: NoCompression
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.157383) EVENT_LOG_v1 {"time_micros": 1759409798157368, "job": 72, "event": "compaction_finished", "compaction_time_micros": 74489, "compaction_time_cpu_micros": 38113, "output_level": 6, "num_output_files": 1, "total_output_size": 13798438, "num_input_records": 9103, "num_output_records": 8560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798158567, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798162402, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.080948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.162460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.162467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.162469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.162472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:38.162474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:38.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:56:38 compute-0 nova_compute[256940]: 2025-10-02 12:56:38.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:56:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:39 compute-0 ceph-mon[73668]: pgmap v2503: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:56:39 compute-0 nova_compute[256940]: 2025-10-02 12:56:39.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 43 op/s
Oct 02 12:56:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:40.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.232 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.253 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.256 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.257 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating instance directory: /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.257 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Ensure instance console log exists: /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.258 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.259 2 DEBUG nova.virt.libvirt.vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:11Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.259 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.260 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.261 2 DEBUG os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6237bc28-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6237bc28-d7, col_values=(('external_ids', {'iface-id': '6237bc28-d790-4861-976b-cda2e8dc93a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:c9:23', 'vm-uuid': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-0 NetworkManager[44981]: <info>  [1759409800.2688] manager: (tap6237bc28-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.276 2 INFO os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.277 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:56:40 compute-0 nova_compute[256940]: 2025-10-02 12:56:40.277 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:56:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065116889423971714 of space, bias 1.0, pg target 1.9535066827191514 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:56:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:56:41 compute-0 ceph-mon[73668]: pgmap v2504: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 43 op/s
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.417 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.418 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:56:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 12:56:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 12:56:41 compute-0 kernel: tap6237bc28-d7: entered promiscuous mode
Oct 02 12:56:41 compute-0 NetworkManager[44981]: <info>  [1759409801.6983] manager: (tap6237bc28-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:41 compute-0 ovn_controller[148123]: 2025-10-02T12:56:41Z|00765|binding|INFO|Claiming lport 6237bc28-d790-4861-976b-cda2e8dc93a9 for this additional chassis.
Oct 02 12:56:41 compute-0 ovn_controller[148123]: 2025-10-02T12:56:41Z|00766|binding|INFO|6237bc28-d790-4861-976b-cda2e8dc93a9: Claiming fa:16:3e:f2:c9:23 10.100.0.13
Oct 02 12:56:41 compute-0 NetworkManager[44981]: <info>  [1759409801.7110] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:41 compute-0 NetworkManager[44981]: <info>  [1759409801.7120] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 02 12:56:41 compute-0 systemd-udevd[360258]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:41 compute-0 systemd-machined[210927]: New machine qemu-80-instance-00000099.
Oct 02 12:56:41 compute-0 NetworkManager[44981]: <info>  [1759409801.7466] device (tap6237bc28-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:56:41 compute-0 NetworkManager[44981]: <info>  [1759409801.7478] device (tap6237bc28-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:56:41 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000099.
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:41 compute-0 ovn_controller[148123]: 2025-10-02T12:56:41Z|00767|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:41 compute-0 ovn_controller[148123]: 2025-10-02T12:56:41Z|00768|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 ovn-installed in OVS
Oct 02 12:56:41 compute-0 nova_compute[256940]: 2025-10-02 12:56:41.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 68 op/s
Oct 02 12:56:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:42.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:42 compute-0 nova_compute[256940]: 2025-10-02 12:56:42.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:43 compute-0 ovn_controller[148123]: 2025-10-02T12:56:43Z|00769|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.464 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409803.4640758, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.464 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Started (Lifecycle Event)
Oct 02 12:56:43 compute-0 ceph-mon[73668]: pgmap v2505: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 68 op/s
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.486 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.954 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409803.954255, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.955 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Resumed (Lifecycle Event)
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.983 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:43 compute-0 nova_compute[256940]: 2025-10-02 12:56:43.986 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 66 op/s
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.008 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:56:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:44.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-0 sudo[360311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:44 compute-0 sudo[360311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-0 sudo[360311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-0 sudo[360348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:44 compute-0 sudo[360348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-0 sudo[360348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.639 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409789.6375155, a9564d79-5032-4cde-aff3-447400d7dc2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.639 2 INFO nova.compute.manager [-] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] VM Stopped (Lifecycle Event)
Oct 02 12:56:44 compute-0 podman[360335]: 2025-10-02 12:56:44.644042149 +0000 UTC m=+0.071074156 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:56:44 compute-0 nova_compute[256940]: 2025-10-02 12:56:44.671 2 DEBUG nova.compute.manager [None req-f45ee85d-d58f-4ba8-9745-4fda4e25427e - - - - - -] [instance: a9564d79-5032-4cde-aff3-447400d7dc2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:44 compute-0 podman[360336]: 2025-10-02 12:56:44.672927219 +0000 UTC m=+0.099733630 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:56:44 compute-0 sudo[360395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:44 compute-0 sudo[360395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-0 sudo[360395]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-0 sudo[360427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:56:44 compute-0 sudo[360427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:45 compute-0 ovn_controller[148123]: 2025-10-02T12:56:45Z|00770|binding|INFO|Claiming lport 6237bc28-d790-4861-976b-cda2e8dc93a9 for this chassis.
Oct 02 12:56:45 compute-0 ovn_controller[148123]: 2025-10-02T12:56:45Z|00771|binding|INFO|6237bc28-d790-4861-976b-cda2e8dc93a9: Claiming fa:16:3e:f2:c9:23 10.100.0.13
Oct 02 12:56:45 compute-0 ovn_controller[148123]: 2025-10-02T12:56:45Z|00772|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 up in Southbound
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.158 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.159 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d bound to our chassis
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.160 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.173 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[97fa187d-5bee-44fe-9920-27b9db3b04b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.174 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5f30d280-51 in ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.176 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5f30d280-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.176 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fabf96b3-13e5-453e-bdb3-d7b7fadfd015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.177 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7d553d11-e474-46f7-93ea-b3e87a2f6134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.189 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[6184f9d0-7747-44c5-9213-5265890beb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.216 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5083df-cecb-423b-9282-b53657170753]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 sudo[360427]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.248 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fe988a32-17ae-4a1c-bca6-f69995247204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.284 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[75cca002-d1c5-4f7e-a3ee-8c458b07952e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 NetworkManager[44981]: <info>  [1759409805.2860] manager: (tap5f30d280-50): new Veth device (/org/freedesktop/NetworkManager/Devices/343)
Oct 02 12:56:45 compute-0 systemd-udevd[360490]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.313 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Post operation of migration started
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.319 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5e59398a-9a11-4392-b3d2-447cf8037144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.322 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[94a22c11-f413-4ae6-b459-77d0597078c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:56:45 compute-0 NetworkManager[44981]: <info>  [1759409805.3570] device (tap5f30d280-50): carrier: link connected
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.362 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e07f5f88-bcc3-4f3f-bef2-ed2aba5a8258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.381 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[072d51a7-3121-4e3d-86f7-6fb21e0f11aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770084, 'reachable_time': 30225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360509, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6773e535-8f76-4e07-9d51-0583068b0ec5 does not exist
Oct 02 12:56:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c5290717-86a8-4985-9dfa-0531dcc47225 does not exist
Oct 02 12:56:45 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 57089e4f-a2bd-496b-80de-63f708d7c6de does not exist
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:56:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.397 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4af02a-7f31-4c45-88c6-542deaea8e58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:61fc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770084, 'tstamp': 770084}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360510, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.415 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[725cf133-50e9-43fd-9d6a-5120e701c8ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770084, 'reachable_time': 30225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360512, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.456 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0af6aeeb-d80c-4c5d-900d-be46ca59dfe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 sudo[360511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:45 compute-0 sudo[360511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:45 compute-0 sudo[360511]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.522 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8facf7d6-4004-4aa7-ba43-4700477b988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.525 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.525 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.526 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f30d280-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:45 compute-0 NetworkManager[44981]: <info>  [1759409805.5284] manager: (tap5f30d280-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Oct 02 12:56:45 compute-0 kernel: tap5f30d280-50: entered promiscuous mode
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.530 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5f30d280-50, col_values=(('external_ids', {'iface-id': 'ffbc5ae9-0886-4744-96e3-66a615b2f014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 ovn_controller[148123]: 2025-10-02T12:56:45Z|00773|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 sudo[360541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:45 compute-0 sudo[360541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:45 compute-0 sudo[360541]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.549 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.550 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[98bdb45f-a109-4303-a7ef-c73c19a47d31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.551 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:56:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:45.552 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'env', 'PROCESS_TAG=haproxy-5f30d280-519f-404a-a0ab-55bcf121986d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5f30d280-519f-404a-a0ab-55bcf121986d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:56:45 compute-0 ceph-mon[73668]: pgmap v2506: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 66 op/s
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1756742258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:56:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:45 compute-0 sudo[360569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:45 compute-0 sudo[360569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:45 compute-0 sudo[360569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.624 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.624 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:45 compute-0 nova_compute[256940]: 2025-10-02 12:56:45.625 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:45 compute-0 sudo[360597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:56:45 compute-0 sudo[360597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:45 compute-0 podman[360666]: 2025-10-02 12:56:45.967261327 +0000 UTC m=+0.109550725 container create d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:56:45 compute-0 podman[360666]: 2025-10-02 12:56:45.879636472 +0000 UTC m=+0.021925890 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:56:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.024537214 +0000 UTC m=+0.071649671 container create e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:56:46 compute-0 systemd[1]: Started libpod-conmon-d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0.scope.
Oct 02 12:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:45.980728286 +0000 UTC m=+0.027840773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339a42788013b32dd5f0202ce83aa14dc95bb981854c7d809cd335f8371f680a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 systemd[1]: Started libpod-conmon-e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63.scope.
Oct 02 12:56:46 compute-0 podman[360666]: 2025-10-02 12:56:46.09645172 +0000 UTC m=+0.238741148 container init d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:56:46 compute-0 podman[360666]: 2025-10-02 12:56:46.105465874 +0000 UTC m=+0.247755272 container start d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:46 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [NOTICE]   (360721) : New worker (360723) forked
Oct 02 12:56:46 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [NOTICE]   (360721) : Loading success.
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.134979941 +0000 UTC m=+0.182092418 container init e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.143556043 +0000 UTC m=+0.190668500 container start e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.148671686 +0000 UTC m=+0.195784163 container attach e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:46 compute-0 systemd[1]: libpod-e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63.scope: Deactivated successfully.
Oct 02 12:56:46 compute-0 confident_burnell[360717]: 167 167
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.152741092 +0000 UTC m=+0.199853549 container died e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:46.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3752b5c10a002dd6e57a7931ccc3cac891b8af911e1543e448fd97cd07d5bb9-merged.mount: Deactivated successfully.
Oct 02 12:56:46 compute-0 podman[360696]: 2025-10-02 12:56:46.316640256 +0000 UTC m=+0.363752703 container remove e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:56:46 compute-0 systemd[1]: libpod-conmon-e7a05481186946e96fda7e5d18fb43fb09f69681c411a1122dfaaf5f4a6e0b63.scope: Deactivated successfully.
Oct 02 12:56:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:46 compute-0 podman[360755]: 2025-10-02 12:56:46.514249285 +0000 UTC m=+0.062094592 container create b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:46 compute-0 podman[360755]: 2025-10-02 12:56:46.475008777 +0000 UTC m=+0.022854104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:46 compute-0 systemd[1]: Started libpod-conmon-b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3.scope.
Oct 02 12:56:46 compute-0 nova_compute[256940]: 2025-10-02 12:56:46.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2894943659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:46 compute-0 podman[360755]: 2025-10-02 12:56:46.672650057 +0000 UTC m=+0.220495364 container init b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:46 compute-0 podman[360755]: 2025-10-02 12:56:46.67935112 +0000 UTC m=+0.227196427 container start b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:46 compute-0 podman[360755]: 2025-10-02 12:56:46.685595262 +0000 UTC m=+0.233440569 container attach b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:56:47 compute-0 sudo[360778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:47 compute-0 sudo[360778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 sudo[360778]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 sudo[360803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:47 compute-0 sudo[360803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 sudo[360803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:47 compute-0 awesome_pasteur[360772]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:56:47 compute-0 awesome_pasteur[360772]: --> relative data size: 1.0
Oct 02 12:56:47 compute-0 awesome_pasteur[360772]: --> All data devices are unavailable
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.556 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:47 compute-0 systemd[1]: libpod-b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3.scope: Deactivated successfully.
Oct 02 12:56:47 compute-0 podman[360755]: 2025-10-02 12:56:47.566662093 +0000 UTC m=+1.114507430 container died b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5288a275c124687117aea9ff01922426d2a3b33cfcac5b04dcf3fab65c68847f-merged.mount: Deactivated successfully.
Oct 02 12:56:47 compute-0 podman[360755]: 2025-10-02 12:56:47.625136521 +0000 UTC m=+1.172981828 container remove b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:56:47 compute-0 systemd[1]: libpod-conmon-b5780ff4b315344976fc048cc62291ad18cae053bab297c53b99f0d3d67d81f3.scope: Deactivated successfully.
Oct 02 12:56:47 compute-0 sudo[360597]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 ceph-mon[73668]: pgmap v2507: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.682 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.703 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.703 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.703 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:47 compute-0 nova_compute[256940]: 2025-10-02 12:56:47.707 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:56:47 compute-0 virtqemud[257589]: Domain id=80 name='instance-00000099' uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736 is tainted: custom-monitor
Oct 02 12:56:47 compute-0 sudo[360851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:47 compute-0 sudo[360851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 sudo[360851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 sudo[360876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:47 compute-0 sudo[360876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 sudo[360876]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 sudo[360901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:47 compute-0 sudo[360901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 sudo[360901]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:47 compute-0 sudo[360926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:56:47 compute-0 sudo[360926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:56:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.21339053 +0000 UTC m=+0.034535647 container create 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:56:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:48.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:48 compute-0 systemd[1]: Started libpod-conmon-3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706.scope.
Oct 02 12:56:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.19911546 +0000 UTC m=+0.020260607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.301439896 +0000 UTC m=+0.122585023 container init 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.308249283 +0000 UTC m=+0.129394400 container start 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.312060422 +0000 UTC m=+0.133205539 container attach 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:56:48 compute-0 cool_shtern[361009]: 167 167
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.313427587 +0000 UTC m=+0.134572704 container died 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 12:56:48 compute-0 systemd[1]: libpod-3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706.scope: Deactivated successfully.
Oct 02 12:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea35789b8983b1ccc31a9229ca0406a624712c06070282dcbc27877203bc0c30-merged.mount: Deactivated successfully.
Oct 02 12:56:48 compute-0 podman[360993]: 2025-10-02 12:56:48.350060478 +0000 UTC m=+0.171205595 container remove 3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 12:56:48 compute-0 systemd[1]: libpod-conmon-3e260ab23545ed24b3b8fad70ce299407d24881ca0c59d682c0e5c6d6cd69706.scope: Deactivated successfully.
Oct 02 12:56:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:48 compute-0 podman[361033]: 2025-10-02 12:56:48.557355469 +0000 UTC m=+0.046361674 container create 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:56:48 compute-0 systemd[1]: Started libpod-conmon-1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957.scope.
Oct 02 12:56:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f0b757fb09bc0768f36b99d1fde98b78ce20e205dd749b7b6fc45d40704b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f0b757fb09bc0768f36b99d1fde98b78ce20e205dd749b7b6fc45d40704b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f0b757fb09bc0768f36b99d1fde98b78ce20e205dd749b7b6fc45d40704b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f0b757fb09bc0768f36b99d1fde98b78ce20e205dd749b7b6fc45d40704b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:48 compute-0 podman[361033]: 2025-10-02 12:56:48.535263846 +0000 UTC m=+0.024270071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:48 compute-0 podman[361033]: 2025-10-02 12:56:48.64445172 +0000 UTC m=+0.133457925 container init 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:48 compute-0 podman[361033]: 2025-10-02 12:56:48.652225822 +0000 UTC m=+0.141232027 container start 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:56:48 compute-0 podman[361033]: 2025-10-02 12:56:48.660114516 +0000 UTC m=+0.149120741 container attach 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:56:48 compute-0 nova_compute[256940]: 2025-10-02 12:56:48.715 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:56:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2370856319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]: {
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:     "1": [
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:         {
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "devices": [
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "/dev/loop3"
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             ],
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "lv_name": "ceph_lv0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "lv_size": "7511998464",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "name": "ceph_lv0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "tags": {
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.cluster_name": "ceph",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.crush_device_class": "",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.encrypted": "0",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.osd_id": "1",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.type": "block",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:                 "ceph.vdo": "0"
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             },
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "type": "block",
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:             "vg_name": "ceph_vg0"
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:         }
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]:     ]
Oct 02 12:56:49 compute-0 unruffled_chatterjee[361049]: }
Oct 02 12:56:49 compute-0 systemd[1]: libpod-1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957.scope: Deactivated successfully.
Oct 02 12:56:49 compute-0 podman[361033]: 2025-10-02 12:56:49.439867187 +0000 UTC m=+0.928873402 container died 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95f0b757fb09bc0768f36b99d1fde98b78ce20e205dd749b7b6fc45d40704b3-merged.mount: Deactivated successfully.
Oct 02 12:56:49 compute-0 podman[361033]: 2025-10-02 12:56:49.495652985 +0000 UTC m=+0.984659190 container remove 1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 12:56:49 compute-0 systemd[1]: libpod-conmon-1c347581a5ba47e28153bf8770ba736a5a833f6971e7d00421f3db5830cbb957.scope: Deactivated successfully.
Oct 02 12:56:49 compute-0 sudo[360926]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:49 compute-0 sudo[361073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:49 compute-0 sudo[361073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:49 compute-0 sudo[361073]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:49 compute-0 sudo[361098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:49 compute-0 sudo[361098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:49 compute-0 sudo[361098]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:49 compute-0 nova_compute[256940]: 2025-10-02 12:56:49.723 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:56:49 compute-0 nova_compute[256940]: 2025-10-02 12:56:49.729 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:49 compute-0 sudo[361123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:49 compute-0 sudo[361123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:49 compute-0 ceph-mon[73668]: pgmap v2508: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:56:49 compute-0 sudo[361123]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:49 compute-0 nova_compute[256940]: 2025-10-02 12:56:49.747 2 DEBUG nova.objects.instance [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:56:49 compute-0 sudo[361148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:56:49 compute-0 sudo[361148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.134810136 +0000 UTC m=+0.048143860 container create bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:50 compute-0 systemd[1]: Started libpod-conmon-bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685.scope.
Oct 02 12:56:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.206933658 +0000 UTC m=+0.120267402 container init bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.116332737 +0000 UTC m=+0.029666491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.213761346 +0000 UTC m=+0.127095070 container start bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.216725583 +0000 UTC m=+0.130059307 container attach bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:56:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:50.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:50 compute-0 objective_keller[361232]: 167 167
Oct 02 12:56:50 compute-0 systemd[1]: libpod-bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685.scope: Deactivated successfully.
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.220017408 +0000 UTC m=+0.133351132 container died bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 12:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-829bfc19029b9163994194c397522b0d2c3e1bb610461fa94d6908341177e1cb-merged.mount: Deactivated successfully.
Oct 02 12:56:50 compute-0 podman[361216]: 2025-10-02 12:56:50.253741203 +0000 UTC m=+0.167074927 container remove bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:50 compute-0 systemd[1]: libpod-conmon-bff7b7de4ecb417b5fdd673470193fa92a9a798e3ad6410b0d4e7a62357c5685.scope: Deactivated successfully.
Oct 02 12:56:50 compute-0 nova_compute[256940]: 2025-10-02 12:56:50.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:50.350 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:50.351 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:56:50 compute-0 nova_compute[256940]: 2025-10-02 12:56:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:50 compute-0 podman[361256]: 2025-10-02 12:56:50.43048103 +0000 UTC m=+0.038444159 container create a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:56:50 compute-0 systemd[1]: Started libpod-conmon-a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b.scope.
Oct 02 12:56:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b687b3c113292722a69867bde45a5095c025f82a01c9a5fa5ff1340ce88da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b687b3c113292722a69867bde45a5095c025f82a01c9a5fa5ff1340ce88da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b687b3c113292722a69867bde45a5095c025f82a01c9a5fa5ff1340ce88da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b687b3c113292722a69867bde45a5095c025f82a01c9a5fa5ff1340ce88da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:50 compute-0 podman[361256]: 2025-10-02 12:56:50.501948995 +0000 UTC m=+0.109912154 container init a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:56:50 compute-0 podman[361256]: 2025-10-02 12:56:50.50866202 +0000 UTC m=+0.116625149 container start a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 12:56:50 compute-0 podman[361256]: 2025-10-02 12:56:50.415091481 +0000 UTC m=+0.023054630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:56:50 compute-0 podman[361256]: 2025-10-02 12:56:50.51253223 +0000 UTC m=+0.120495379 container attach a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 12:56:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1101605346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:51 compute-0 charming_moser[361273]: {
Oct 02 12:56:51 compute-0 charming_moser[361273]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:56:51 compute-0 charming_moser[361273]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:56:51 compute-0 charming_moser[361273]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:56:51 compute-0 charming_moser[361273]:         "osd_id": 1,
Oct 02 12:56:51 compute-0 charming_moser[361273]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:56:51 compute-0 charming_moser[361273]:         "type": "bluestore"
Oct 02 12:56:51 compute-0 charming_moser[361273]:     }
Oct 02 12:56:51 compute-0 charming_moser[361273]: }
Oct 02 12:56:51 compute-0 systemd[1]: libpod-a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b.scope: Deactivated successfully.
Oct 02 12:56:51 compute-0 podman[361256]: 2025-10-02 12:56:51.362777151 +0000 UTC m=+0.970740290 container died a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82b687b3c113292722a69867bde45a5095c025f82a01c9a5fa5ff1340ce88da-merged.mount: Deactivated successfully.
Oct 02 12:56:51 compute-0 podman[361256]: 2025-10-02 12:56:51.418819005 +0000 UTC m=+1.026782124 container remove a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:56:51 compute-0 systemd[1]: libpod-conmon-a2b8b89cd526404cec0a767c6296c0f569f6720b86e96a704d2d9d5a0b5cb54b.scope: Deactivated successfully.
Oct 02 12:56:51 compute-0 sudo[361148]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:56:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:56:51 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 25987dfc-a460-40d8-9e97-7230bbda3b54 does not exist
Oct 02 12:56:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 14105151-3f09-4a1e-9704-98dff19293cd does not exist
Oct 02 12:56:51 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 54c31556-e6c0-4859-adea-3da8dacd8d09 does not exist
Oct 02 12:56:51 compute-0 sudo[361309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:51 compute-0 sudo[361309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:51 compute-0 sudo[361309]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:51 compute-0 sudo[361334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:56:51 compute-0 sudo[361334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:51 compute-0 sudo[361334]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:51 compute-0 ceph-mon[73668]: pgmap v2509: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:56:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1856637000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 12:56:52 compute-0 nova_compute[256940]: 2025-10-02 12:56:52.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:52.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:52 compute-0 nova_compute[256940]: 2025-10-02 12:56:52.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:53 compute-0 ceph-mon[73668]: pgmap v2510: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 12:56:53 compute-0 nova_compute[256940]: 2025-10-02 12:56:53.898 2 INFO nova.compute.manager [None req-9c1b3e4c-3297-4cee-8444-a76c8ed656cf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Get console output
Oct 02 12:56:53 compute-0 nova_compute[256940]: 2025-10-02 12:56:53.903 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:56:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 12:56:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/233049042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1654015826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:56:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:56:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.838 2 DEBUG nova.compute.manager [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.839 2 DEBUG nova.compute.manager [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.839 2 DEBUG oslo_concurrency.lockutils [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.839 2 DEBUG oslo_concurrency.lockutils [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.839 2 DEBUG nova.network.neutron [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.858 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.858 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.858 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.858 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.859 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.859 2 INFO nova.compute.manager [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Terminating instance
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.860 2 DEBUG nova.compute.manager [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:56:54 compute-0 kernel: tap6237bc28-d7 (unregistering): left promiscuous mode
Oct 02 12:56:54 compute-0 NetworkManager[44981]: <info>  [1759409814.9091] device (tap6237bc28-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:56:54 compute-0 ovn_controller[148123]: 2025-10-02T12:56:54Z|00774|binding|INFO|Releasing lport 6237bc28-d790-4861-976b-cda2e8dc93a9 from this chassis (sb_readonly=0)
Oct 02 12:56:54 compute-0 ovn_controller[148123]: 2025-10-02T12:56:54Z|00775|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 down in Southbound
Oct 02 12:56:54 compute-0 ovn_controller[148123]: 2025-10-02T12:56:54Z|00776|binding|INFO|Removing iface tap6237bc28-d7 ovn-installed in OVS
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:54.932 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:54.933 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d unbound from our chassis
Oct 02 12:56:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:54.934 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f30d280-519f-404a-a0ab-55bcf121986d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:54.937 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[51162471-8b15-405d-a639-8d439894e652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:54.937 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace which is not needed anymore
Oct 02 12:56:54 compute-0 nova_compute[256940]: 2025-10-02 12:56:54.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 02 12:56:54 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000099.scope: Consumed 2.340s CPU time.
Oct 02 12:56:54 compute-0 systemd-machined[210927]: Machine qemu-80-instance-00000099 terminated.
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [NOTICE]   (360721) : haproxy version is 2.8.14-c23fe91
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [NOTICE]   (360721) : path to executable is /usr/sbin/haproxy
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [WARNING]  (360721) : Exiting Master process...
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [WARNING]  (360721) : Exiting Master process...
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [ALERT]    (360721) : Current worker (360723) exited with code 143 (Terminated)
Oct 02 12:56:55 compute-0 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[360711]: [WARNING]  (360721) : All workers exited. Exiting... (0)
Oct 02 12:56:55 compute-0 systemd[1]: libpod-d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0.scope: Deactivated successfully.
Oct 02 12:56:55 compute-0 podman[361383]: 2025-10-02 12:56:55.087929416 +0000 UTC m=+0.046321463 container died d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.115 2 INFO nova.virt.libvirt.driver [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance destroyed successfully.
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.116 2 DEBUG nova.objects.instance [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0-userdata-shm.mount: Deactivated successfully.
Oct 02 12:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-339a42788013b32dd5f0202ce83aa14dc95bb981854c7d809cd335f8371f680a-merged.mount: Deactivated successfully.
Oct 02 12:56:55 compute-0 podman[361383]: 2025-10-02 12:56:55.155599943 +0000 UTC m=+0.113991970 container cleanup d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:56:55 compute-0 systemd[1]: libpod-conmon-d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0.scope: Deactivated successfully.
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.204 2 DEBUG nova.virt.libvirt.vif [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:49Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.205 2 DEBUG nova.network.os_vif_util [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.206 2 DEBUG nova.network.os_vif_util [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.206 2 DEBUG os_vif [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6237bc28-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.214 2 INFO os_vif [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')
Oct 02 12:56:55 compute-0 ceph-mon[73668]: pgmap v2511: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 12:56:55 compute-0 podman[361426]: 2025-10-02 12:56:55.281905951 +0000 UTC m=+0.099471543 container remove d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.291 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b99bd5d0-f241-4196-95ab-863fb3dda866]: (4, ('Thu Oct  2 12:56:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0)\nd6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0\nThu Oct  2 12:56:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (d6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0)\nd6e21cf551e25bb82c13e71263fccda6cf4d983b0aabde9e96a42b2dd782e4b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.294 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d65ca60f-3add-402f-9e29-f9723073d21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.295 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:55 compute-0 kernel: tap5f30d280-50: left promiscuous mode
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.329 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd1938f-dff0-42f2-a51d-0a105ce05604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.355 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d669afc-a0f6-46e7-baa9-53b39dd56304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.357 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[31549dd1-f046-459b-a3b8-879a772944bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.377 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e22ea6c1-0557-46c0-a7d4-79e4510b7f38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770072, 'reachable_time': 38565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361459, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d5f30d280\x2d519f\x2d404a\x2da0ab\x2d55bcf121986d.mount: Deactivated successfully.
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.381 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:56:55 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:55.382 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[14e7f6aa-5e3c-4410-90ea-5f587c99e58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.772 2 INFO nova.virt.libvirt.driver [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deleting instance files /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.773 2 INFO nova.virt.libvirt.driver [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deletion of /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del complete
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.845 2 INFO nova.compute.manager [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.845 2 DEBUG oslo.service.loopingcall [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.846 2 DEBUG nova.compute.manager [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:56:55 compute-0 nova_compute[256940]: 2025-10-02 12:56:55.846 2 DEBUG nova.network.neutron [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:56:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 02 12:56:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.471 2 DEBUG nova.network.neutron [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.472 2 DEBUG nova.network.neutron [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.501 2 DEBUG oslo_concurrency.lockutils [req-29026904-1e50-4a91-9ce2-7dc6509e6f2c req-c3c8e04c-c6fd-4f3f-be80-410e8f24f13d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.782 2 DEBUG nova.network.neutron [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.809 2 INFO nova.compute.manager [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 0.96 seconds to deallocate network for instance.
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.859 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.859 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.864 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.927 2 INFO nova.scheduler.client.report [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance f6c0a66d-64f1-484a-ae4e-ece25fddf736
Oct 02 12:56:56 compute-0 nova_compute[256940]: 2025-10-02 12:56:56.930 2 DEBUG nova.compute.manager [req-89746ce4-9242-4a7a-938f-5b7a023d005e req-c435bcba-17e4-43e0-8381-2a7eacd732c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-deleted-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:57 compute-0 nova_compute[256940]: 2025-10-02 12:56:57.007 2 DEBUG oslo_concurrency.lockutils [None req-7226d8f7-25f6-4bfe-b5f4-7f4be879a887 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:57 compute-0 ceph-mon[73668]: pgmap v2512: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 02 12:56:57 compute-0 nova_compute[256940]: 2025-10-02 12:56:57.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 12:56:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:56:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:56:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:56:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:56:59.352 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:59 compute-0 ceph-mon[73668]: pgmap v2513: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.507 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.508 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.527 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.612 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.613 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.621 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.622 2 INFO nova.compute.claims [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.634407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819634479, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 474, "num_deletes": 251, "total_data_size": 432063, "memory_usage": 441928, "flush_reason": "Manual Compaction"}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819640238, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 427620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55722, "largest_seqno": 56195, "table_properties": {"data_size": 424910, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6701, "raw_average_key_size": 19, "raw_value_size": 419428, "raw_average_value_size": 1205, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409798, "oldest_key_time": 1759409798, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 5904 microseconds, and 2935 cpu microseconds.
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.640307) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 427620 bytes OK
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.640342) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.641928) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.641950) EVENT_LOG_v1 {"time_micros": 1759409819641942, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.641980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 429262, prev total WAL file size 429262, number of live WAL files 2.
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642563) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(417KB)], [122(13MB)]
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819642620, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14226058, "oldest_snapshot_seqno": -1}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8394 keys, 12292311 bytes, temperature: kUnknown
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819710650, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12292311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12235917, "index_size": 34330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20997, "raw_key_size": 218587, "raw_average_key_size": 26, "raw_value_size": 12086303, "raw_average_value_size": 1439, "num_data_blocks": 1340, "num_entries": 8394, "num_filter_entries": 8394, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.711055) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12292311 bytes
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.712329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.7 rd, 180.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(62.0) write-amplify(28.7) OK, records in: 8908, records dropped: 514 output_compression: NoCompression
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.712356) EVENT_LOG_v1 {"time_micros": 1759409819712343, "job": 74, "event": "compaction_finished", "compaction_time_micros": 68162, "compaction_time_cpu_micros": 28097, "output_level": 6, "num_output_files": 1, "total_output_size": 12292311, "num_input_records": 8908, "num_output_records": 8394, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819712643, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819715424, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.715536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.715544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.715546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.715547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-12:56:59.715549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:59 compute-0 nova_compute[256940]: 2025-10-02 12:56:59.762 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 02 12:57:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554919007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.182 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.188 2 DEBUG nova.compute.provider_tree [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.205 2 DEBUG nova.scheduler.client.report [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:00.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.233 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.234 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.278 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.278 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.300 2 INFO nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.322 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:57:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:00.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:00 compute-0 podman[361486]: 2025-10-02 12:57:00.388951248 +0000 UTC m=+0.061070797 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.399 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.401 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.402 2 INFO nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Creating image(s)
Oct 02 12:57:00 compute-0 podman[361485]: 2025-10-02 12:57:00.410982049 +0000 UTC m=+0.075202053 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.439 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.468 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.493 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.497 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.576 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.577 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.577 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.578 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.605 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.609 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d09efb55-ff68-4671-b89f-a35231b739e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3554919007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.729 2 DEBUG nova.policy [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:57:00 compute-0 ovn_controller[148123]: 2025-10-02T12:57:00Z|00777|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:57:00 compute-0 nova_compute[256940]: 2025-10-02 12:57:00.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:01 compute-0 ovn_controller[148123]: 2025-10-02T12:57:01Z|00778|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.531 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d09efb55-ff68-4671-b89f-a35231b739e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.922s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.601 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.704 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Successfully created port: e71909c9-67aa-432f-923c-787d02eb9db3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:01 compute-0 ceph-mon[73668]: pgmap v2514: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.816 2 DEBUG nova.objects.instance [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.838 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.838 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Ensure instance console log exists: /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.838 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.839 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:01 compute-0 nova_compute[256940]: 2025-10-02 12:57:01.839 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:57:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:02.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:02 compute-0 nova_compute[256940]: 2025-10-02 12:57:02.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:02 compute-0 nova_compute[256940]: 2025-10-02 12:57:02.869 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Successfully updated port: e71909c9-67aa-432f-923c-787d02eb9db3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.084 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.084 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.084 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.236 2 DEBUG nova.compute.manager [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.236 2 DEBUG nova.compute.manager [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing instance network info cache due to event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:03 compute-0 nova_compute[256940]: 2025-10-02 12:57:03.237 2 DEBUG oslo_concurrency.lockutils [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:03 compute-0 ceph-mon[73668]: pgmap v2515: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:57:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Oct 02 12:57:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.370 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:04.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.395 2 WARNING nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.396 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid e22a7944-9200-4204-a219-3f7bd720667b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.396 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid 74e69165-006c-41de-82af-bc44cba0a843 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.396 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid d09efb55-ff68-4671-b89f-a35231b739e2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "e22a7944-9200-4204-a219-3f7bd720667b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.398 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "74e69165-006c-41de-82af-bc44cba0a843" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.398 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.430 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "74e69165-006c-41de-82af-bc44cba0a843" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:04 compute-0 nova_compute[256940]: 2025-10-02 12:57:04.431 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "e22a7944-9200-4204-a219-3f7bd720667b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:05 compute-0 ceph-mon[73668]: pgmap v2516: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Oct 02 12:57:05 compute-0 nova_compute[256940]: 2025-10-02 12:57:05.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:05 compute-0 nova_compute[256940]: 2025-10-02 12:57:05.321 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:57:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Oct 02 12:57:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:57:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:57:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:07 compute-0 sudo[361696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:07 compute-0 sudo[361696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:07 compute-0 sudo[361696]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:07 compute-0 sudo[361721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:07 compute-0 sudo[361721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:07 compute-0 sudo[361721]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:07 compute-0 ceph-mon[73668]: pgmap v2517: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Oct 02 12:57:07 compute-0 nova_compute[256940]: 2025-10-02 12:57:07.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Oct 02 12:57:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.349 2 DEBUG nova.network.neutron [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.457 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.457 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Instance network_info: |[{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.458 2 DEBUG oslo_concurrency.lockutils [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.458 2 DEBUG nova.network.neutron [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.460 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Start _get_guest_xml network_info=[{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.464 2 WARNING nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.470 2 DEBUG nova.virt.libvirt.host [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.471 2 DEBUG nova.virt.libvirt.host [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.477 2 DEBUG nova.virt.libvirt.host [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.478 2 DEBUG nova.virt.libvirt.host [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.480 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.481 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.481 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.482 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.482 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.482 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.482 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.483 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.483 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.483 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.483 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.483 2 DEBUG nova.virt.hardware [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.486 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051056186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.927 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.955 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:08 compute-0 nova_compute[256940]: 2025-10-02 12:57:08.958 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432920824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.435 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.436 2 DEBUG nova.virt.libvirt.vif [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:00Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.437 2 DEBUG nova.network.os_vif_util [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.438 2 DEBUG nova.network.os_vif_util [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.439 2 DEBUG nova.objects.instance [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:09 compute-0 ceph-mon[73668]: pgmap v2518: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Oct 02 12:57:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4051056186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/432920824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.727 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <uuid>d09efb55-ff68-4671-b89f-a35231b739e2</uuid>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <name>instance-0000009d</name>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:57:08</nova:creationTime>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:57:09 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <system>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="serial">d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="uuid">d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </system>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <os>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </os>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <features>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </features>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk">
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk.config">
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:dd:9c:44"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <target dev="tape71909c9-67"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log" append="off"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <video>
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </video>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:57:09 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:57:09 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:57:09 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:57:09 compute-0 nova_compute[256940]: </domain>
Oct 02 12:57:09 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.729 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Preparing to wait for external event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.729 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.729 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.729 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.730 2 DEBUG nova.virt.libvirt.vif [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:00Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.730 2 DEBUG nova.network.os_vif_util [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.731 2 DEBUG nova.network.os_vif_util [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.731 2 DEBUG os_vif [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape71909c9-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape71909c9-67, col_values=(('external_ids', {'iface-id': 'e71909c9-67aa-432f-923c-787d02eb9db3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:9c:44', 'vm-uuid': 'd09efb55-ff68-4671-b89f-a35231b739e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:09 compute-0 NetworkManager[44981]: <info>  [1759409829.7406] manager: (tape71909c9-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.748 2 INFO os_vif [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67')
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.846 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.847 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.847 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:dd:9c:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.847 2 INFO nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Using config drive
Oct 02 12:57:09 compute-0 nova_compute[256940]: 2025-10-02 12:57:09.872 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 469 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 185 op/s
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.109 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409815.1070087, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.109 2 INFO nova.compute.manager [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Stopped (Lifecycle Event)
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.137 2 DEBUG nova.compute.manager [None req-65eadf57-47ca-4878-a323-e83f14ec7703 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:10.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.239 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.391 2 INFO nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Creating config drive at /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.398 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplvy67qea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 12:57:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:10.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.547 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplvy67qea" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.581 2 DEBUG nova.storage.rbd_utils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d09efb55-ff68-4671-b89f-a35231b739e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:10 compute-0 nova_compute[256940]: 2025-10-02 12:57:10.585 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config d09efb55-ff68-4671-b89f-a35231b739e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4262556935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.652 2 DEBUG oslo_concurrency.processutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config d09efb55-ff68-4671-b89f-a35231b739e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.653 2 INFO nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Deleting local config drive /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/disk.config because it was imported into RBD.
Oct 02 12:57:11 compute-0 kernel: tape71909c9-67: entered promiscuous mode
Oct 02 12:57:11 compute-0 NetworkManager[44981]: <info>  [1759409831.7207] manager: (tape71909c9-67): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct 02 12:57:11 compute-0 ovn_controller[148123]: 2025-10-02T12:57:11Z|00779|binding|INFO|Claiming lport e71909c9-67aa-432f-923c-787d02eb9db3 for this chassis.
Oct 02 12:57:11 compute-0 ovn_controller[148123]: 2025-10-02T12:57:11Z|00780|binding|INFO|e71909c9-67aa-432f-923c-787d02eb9db3: Claiming fa:16:3e:dd:9c:44 10.100.0.9
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:11 compute-0 systemd-udevd[361881]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:11 compute-0 NetworkManager[44981]: <info>  [1759409831.7688] device (tape71909c9-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:11 compute-0 NetworkManager[44981]: <info>  [1759409831.7715] device (tape71909c9-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.772 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:9c:44 10.100.0.9'], port_security=['fa:16:3e:dd:9c:44 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd09efb55-ff68-4671-b89f-a35231b739e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8cc2c0e-6346-43da-b946-82bae1683dfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d3e502-6055-4397-819e-b5bef1e88e2e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e71909c9-67aa-432f-923c-787d02eb9db3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.774 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e71909c9-67aa-432f-923c-787d02eb9db3 in datapath 1e4b6d1b-55de-4f6c-bff6-fc56357cf40e bound to our chassis
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.779 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e4b6d1b-55de-4f6c-bff6-fc56357cf40e
Oct 02 12:57:11 compute-0 systemd-machined[210927]: New machine qemu-81-instance-0000009d.
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.792 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d376dd-73a0-4ab7-a5c5-f355feb83f4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.793 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e4b6d1b-51 in ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.795 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e4b6d1b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.795 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a76f34d-0235-4b42-ae0b-96b337edc756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.796 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fabba52b-97fa-494b-81b4-3023e57f38b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:11 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-0000009d.
Oct 02 12:57:11 compute-0 ovn_controller[148123]: 2025-10-02T12:57:11Z|00781|binding|INFO|Setting lport e71909c9-67aa-432f-923c-787d02eb9db3 ovn-installed in OVS
Oct 02 12:57:11 compute-0 ovn_controller[148123]: 2025-10-02T12:57:11Z|00782|binding|INFO|Setting lport e71909c9-67aa-432f-923c-787d02eb9db3 up in Southbound
Oct 02 12:57:11 compute-0 nova_compute[256940]: 2025-10-02 12:57:11.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.809 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[56054458-e96b-400e-be32-254049513b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.826 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1adc4d45-0467-4921-865e-f97462fd94ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.856 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[98318b09-173e-42d7-b1e7-06c2701d5480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.860 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2837018a-6093-48b1-a74b-d4c7e64a55a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 NetworkManager[44981]: <info>  [1759409831.8620] manager: (tap1e4b6d1b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.894 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[af9c12dd-133f-47fc-9436-809802f909e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.898 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[478df3e2-24fb-48eb-8016-d165b7b4050a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 NetworkManager[44981]: <info>  [1759409831.9304] device (tap1e4b6d1b-50): carrier: link connected
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.938 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f45399-8453-4c2e-b93c-510246023f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.957 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93aea32b-ee31-492f-aee6-2f34ba4552a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e4b6d1b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772741, 'reachable_time': 35539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361917, 'error': None, 'target': 'ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:11 compute-0 ceph-mon[73668]: pgmap v2519: 305 pgs: 305 active+clean; 469 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 185 op/s
Oct 02 12:57:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2546884810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:11.976 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[73301972-1115-49a5-935f-b1912ca00893]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:b3b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772741, 'tstamp': 772741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361918, 'error': None, 'target': 'ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.004 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfbdd3d-44e0-45b4-87c3-dac64231c90a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e4b6d1b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772741, 'reachable_time': 35539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361919, 'error': None, 'target': 'ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.0 MiB/s wr, 176 op/s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.043 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f98c98-e8a1-4133-9afa-18cea98f168a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.230 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[13b19d5b-1abf-42de-8f99-1b166809c972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.231 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4b6d1b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.231 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.232 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e4b6d1b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:12 compute-0 NetworkManager[44981]: <info>  [1759409832.2342] manager: (tap1e4b6d1b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Oct 02 12:57:12 compute-0 kernel: tap1e4b6d1b-50: entered promiscuous mode
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.238 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e4b6d1b-50, col_values=(('external_ids', {'iface-id': 'd4ed0cb9-c5ef-468c-8d04-1cebdb7dab47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:12 compute-0 ovn_controller[148123]: 2025-10-02T12:57:12Z|00783|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.255 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e4b6d1b-55de-4f6c-bff6-fc56357cf40e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e4b6d1b-55de-4f6c-bff6-fc56357cf40e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.256 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9b665f-42d3-4cad-b3f7-0dbb933ae2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.257 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/1e4b6d1b-55de-4f6c-bff6-fc56357cf40e.pid.haproxy
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 1e4b6d1b-55de-4f6c-bff6-fc56357cf40e
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:57:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:12.257 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'env', 'PROCESS_TAG=haproxy-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e4b6d1b-55de-4f6c-bff6-fc56357cf40e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:57:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:12 compute-0 podman[361993]: 2025-10-02 12:57:12.647217537 +0000 UTC m=+0.060257495 container create e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:57:12 compute-0 systemd[1]: Started libpod-conmon-e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52.scope.
Oct 02 12:57:12 compute-0 podman[361993]: 2025-10-02 12:57:12.614660702 +0000 UTC m=+0.027700690 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:57:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5be618cc70338bf2e99dd1619af4f94c8bab968df2d028651913844b0bbdb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:12 compute-0 podman[361993]: 2025-10-02 12:57:12.75866257 +0000 UTC m=+0.171702548 container init e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:57:12 compute-0 podman[361993]: 2025-10-02 12:57:12.76558317 +0000 UTC m=+0.178623128 container start e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:57:12 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [NOTICE]   (362013) : New worker (362015) forked
Oct 02 12:57:12 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [NOTICE]   (362013) : Loading success.
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.882 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409832.881686, d09efb55-ff68-4671-b89f-a35231b739e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.882 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] VM Started (Lifecycle Event)
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.918 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.922 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409832.882573, d09efb55-ff68-4671-b89f-a35231b739e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.923 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] VM Paused (Lifecycle Event)
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.979 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:12 compute-0 nova_compute[256940]: 2025-10-02 12:57:12.982 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.032 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:13 compute-0 ceph-mon[73668]: pgmap v2520: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.0 MiB/s wr, 176 op/s
Oct 02 12:57:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1790729380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.095 2 DEBUG nova.network.neutron [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated VIF entry in instance network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.096 2 DEBUG nova.network.neutron [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.130 2 DEBUG oslo_concurrency.lockutils [req-b2f4c9a0-cde6-4c0e-8723-4c8d01f822cc req-794a3aa8-3524-4204-a9ad-d55c065cf771 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.287 2 DEBUG nova.compute.manager [req-76d0ec11-2077-40a6-98c1-24a5508446d4 req-40363eba-5837-45a7-9e87-c631a8aa63a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.288 2 DEBUG oslo_concurrency.lockutils [req-76d0ec11-2077-40a6-98c1-24a5508446d4 req-40363eba-5837-45a7-9e87-c631a8aa63a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.288 2 DEBUG oslo_concurrency.lockutils [req-76d0ec11-2077-40a6-98c1-24a5508446d4 req-40363eba-5837-45a7-9e87-c631a8aa63a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.288 2 DEBUG oslo_concurrency.lockutils [req-76d0ec11-2077-40a6-98c1-24a5508446d4 req-40363eba-5837-45a7-9e87-c631a8aa63a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.289 2 DEBUG nova.compute.manager [req-76d0ec11-2077-40a6-98c1-24a5508446d4 req-40363eba-5837-45a7-9e87-c631a8aa63a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Processing event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.289 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.293 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.294 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409833.29339, d09efb55-ff68-4671-b89f-a35231b739e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.294 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] VM Resumed (Lifecycle Event)
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.299 2 INFO nova.virt.libvirt.driver [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Instance spawned successfully.
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.299 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.329 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.334 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.337 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.337 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.338 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.338 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.339 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.339 2 DEBUG nova.virt.libvirt.driver [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.371 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.452 2 INFO nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Took 13.05 seconds to spawn the instance on the hypervisor.
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.453 2 DEBUG nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.541 2 INFO nova.compute.manager [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Took 13.95 seconds to build instance.
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.570 2 DEBUG oslo_concurrency.lockutils [None req-47d04a20-992a-4164-8260-be305acba833 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.571 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.571 2 INFO nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:13 compute-0 nova_compute[256940]: 2025-10-02 12:57:13.572 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880338796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.715 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1667620999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4205440800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2894204180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-0 podman[362048]: 2025-10-02 12:57:14.816134827 +0000 UTC m=+0.054773473 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.827 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.827 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.830 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.831 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.834 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 nova_compute[256940]: 2025-10-02 12:57:14.835 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:14 compute-0 podman[362049]: 2025-10-02 12:57:14.843312973 +0000 UTC m=+0.080422109 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.065 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.067 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3738MB free_disk=20.831363677978516GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.067 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.067 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.139 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e22a7944-9200-4204-a219-3f7bd720667b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.139 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 74e69165-006c-41de-82af-bc44cba0a843 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.140 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance d09efb55-ff68-4671-b89f-a35231b739e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.140 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.140 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.206 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.388 2 DEBUG nova.compute.manager [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.389 2 DEBUG oslo_concurrency.lockutils [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.389 2 DEBUG oslo_concurrency.lockutils [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.389 2 DEBUG oslo_concurrency.lockutils [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.389 2 DEBUG nova.compute.manager [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.390 2 WARNING nova.compute.manager [req-8c340776-ba4d-44b8-ab94-eef69c2615c9 req-70ef7e11-8c5f-4e4b-a772-09c707e4520a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 for instance with vm_state active and task_state None.
Oct 02 12:57:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993617063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.664 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.670 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.700 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.891 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:57:15 compute-0 nova_compute[256940]: 2025-10-02 12:57:15.892 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:15 compute-0 ceph-mon[73668]: pgmap v2521: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Oct 02 12:57:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3880338796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3993617063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 02 12:57:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 12:57:16 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 02 12:57:16 compute-0 nova_compute[256940]: 2025-10-02 12:57:16.892 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:16 compute-0 nova_compute[256940]: 2025-10-02 12:57:16.893 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:16 compute-0 nova_compute[256940]: 2025-10-02 12:57:16.893 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:16 compute-0 nova_compute[256940]: 2025-10-02 12:57:16.893 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:57:17 compute-0 ceph-mon[73668]: pgmap v2522: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 02 12:57:17 compute-0 nova_compute[256940]: 2025-10-02 12:57:17.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:17 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 02 12:57:17 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 12:57:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Oct 02 12:57:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:18 compute-0 nova_compute[256940]: 2025-10-02 12:57:18.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:18 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 02 12:57:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:19 compute-0 NetworkManager[44981]: <info>  [1759409839.2325] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 02 12:57:19 compute-0 NetworkManager[44981]: <info>  [1759409839.2341] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct 02 12:57:19 compute-0 nova_compute[256940]: 2025-10-02 12:57:19.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:19 compute-0 nova_compute[256940]: 2025-10-02 12:57:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:19 compute-0 ovn_controller[148123]: 2025-10-02T12:57:19Z|00784|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:57:19 compute-0 ovn_controller[148123]: 2025-10-02T12:57:19Z|00785|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct 02 12:57:19 compute-0 nova_compute[256940]: 2025-10-02 12:57:19.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:19 compute-0 ceph-mon[73668]: pgmap v2523: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Oct 02 12:57:19 compute-0 nova_compute[256940]: 2025-10-02 12:57:19.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 200 op/s
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1416982405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/12721875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.964 2 DEBUG nova.compute.manager [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.964 2 DEBUG nova.compute.manager [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing instance network info cache due to event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.964 2 DEBUG oslo_concurrency.lockutils [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.964 2 DEBUG oslo_concurrency.lockutils [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:20 compute-0 nova_compute[256940]: 2025-10-02 12:57:20.965 2 DEBUG nova.network.neutron [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:21 compute-0 ceph-mon[73668]: pgmap v2524: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 200 op/s
Oct 02 12:57:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3284991121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 346 op/s
Oct 02 12:57:22 compute-0 nova_compute[256940]: 2025-10-02 12:57:22.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:22 compute-0 nova_compute[256940]: 2025-10-02 12:57:22.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:22 compute-0 ceph-mon[73668]: pgmap v2525: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 346 op/s
Oct 02 12:57:23 compute-0 nova_compute[256940]: 2025-10-02 12:57:23.039 2 DEBUG nova.network.neutron [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated VIF entry in instance network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:23 compute-0 nova_compute[256940]: 2025-10-02 12:57:23.040 2 DEBUG nova.network.neutron [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:23 compute-0 nova_compute[256940]: 2025-10-02 12:57:23.200 2 DEBUG oslo_concurrency.lockutils [req-e30062e6-4fce-4dec-9bf5-502936044feb req-3f15e76b-76b4-4e34-868c-158f5a9e028f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Oct 02 12:57:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:24.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:24 compute-0 nova_compute[256940]: 2025-10-02 12:57:24.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:57:25 compute-0 ceph-mon[73668]: pgmap v2526: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.940 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.940 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.941 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:57:25 compute-0 nova_compute[256940]: 2025-10-02 12:57:25.941 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 360 op/s
Oct 02 12:57:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:26.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:26.494 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:26.495 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:26.496 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:26 compute-0 nova_compute[256940]: 2025-10-02 12:57:26.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:27 compute-0 nova_compute[256940]: 2025-10-02 12:57:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:27 compute-0 sudo[362122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:27 compute-0 sudo[362122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:27 compute-0 sudo[362122]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:27 compute-0 ceph-mon[73668]: pgmap v2527: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 360 op/s
Oct 02 12:57:27 compute-0 sudo[362147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:27 compute-0 sudo[362147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:27 compute-0 sudo[362147]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 554 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.7 MiB/s wr, 466 op/s
Oct 02 12:57:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:28.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:28 compute-0 ovn_controller[148123]: 2025-10-02T12:57:28Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:9c:44 10.100.0.9
Oct 02 12:57:28 compute-0 ovn_controller[148123]: 2025-10-02T12:57:28Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:9c:44 10.100.0.9
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:57:28
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control']
Oct 02 12:57:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:57:29 compute-0 nova_compute[256940]: 2025-10-02 12:57:29.191 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [{"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:29 compute-0 nova_compute[256940]: 2025-10-02 12:57:29.261 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e22a7944-9200-4204-a219-3f7bd720667b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:29 compute-0 nova_compute[256940]: 2025-10-02 12:57:29.262 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:57:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:57:29 compute-0 ceph-mon[73668]: pgmap v2528: 305 pgs: 305 active+clean; 554 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.7 MiB/s wr, 466 op/s
Oct 02 12:57:29 compute-0 nova_compute[256940]: 2025-10-02 12:57:29.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 02 12:57:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.308 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.309 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.328 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.402 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.402 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.408 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.409 2 INFO nova.compute.claims [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:57:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:30.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.591 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.676 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.676 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.694 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:57:30 compute-0 nova_compute[256940]: 2025-10-02 12:57:30.763 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181327579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.059 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.066 2 DEBUG nova.compute.provider_tree [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.079 2 DEBUG nova.scheduler.client.report [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.102 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.103 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.105 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.111 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.111 2 INFO nova.compute.claims [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.185 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.185 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.241 2 INFO nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.260 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:57:31 compute-0 podman[362196]: 2025-10-02 12:57:31.287462592 +0000 UTC m=+0.068692764 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:57:31 compute-0 podman[362197]: 2025-10-02 12:57:31.317127092 +0000 UTC m=+0.083700274 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.351 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.387 2 DEBUG nova.policy [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfe96a8fa48c4243b6262a0359f5b208', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.390 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.392 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.392 2 INFO nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Creating image(s)
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.419 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.449 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.477 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.482 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.557 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.557 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.558 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.558 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.581 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.587 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:31 compute-0 ceph-mon[73668]: pgmap v2529: 305 pgs: 305 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 02 12:57:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4181327579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/827417965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260972818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.813 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.819 2 DEBUG nova.compute.provider_tree [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.868 2 DEBUG nova.scheduler.client.report [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.936 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.937 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.985 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:57:31 compute-0 nova_compute[256940]: 2025-10-02 12:57:31.986 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.8 MiB/s wr, 463 op/s
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.016 2 INFO nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.037 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.084 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Successfully created port: 9adf2da2-7037-4b71-94a1-53519ef0db70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.133 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.134 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.134 2 INFO nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Creating image(s)
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.162 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.191 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.225 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.232 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:32.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.269 2 DEBUG nova.policy [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.306 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.307 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.307 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.307 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.335 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.338 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 536a7b9a-251e-4caf-9625-d7add1094a1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.832 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:32 compute-0 nova_compute[256940]: 2025-10-02 12:57:32.928 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] resizing rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:57:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2260972818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.565 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Successfully updated port: 9adf2da2-7037-4b71-94a1-53519ef0db70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.589 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.589 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.589 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.654 2 DEBUG nova.compute.manager [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-changed-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.654 2 DEBUG nova.compute.manager [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Refreshing instance network info cache due to event network-changed-9adf2da2-7037-4b71-94a1-53519ef0db70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.654 2 DEBUG oslo_concurrency.lockutils [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.751 2 DEBUG nova.objects.instance [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.769 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.769 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Ensure instance console log exists: /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.770 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.770 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.771 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:33 compute-0 nova_compute[256940]: 2025-10-02 12:57:33.913 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:57:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 304 op/s
Oct 02 12:57:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.026 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Successfully created port: 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:34 compute-0 ceph-mon[73668]: pgmap v2530: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.8 MiB/s wr, 463 op/s
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.064 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 536a7b9a-251e-4caf-9625-d7add1094a1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.213 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:57:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Oct 02 12:57:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Oct 02 12:57:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:34.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:34.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.736 2 DEBUG nova.network.neutron [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updating instance_info_cache with network_info: [{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.759 2 INFO nova.compute.manager [None req-c12e144a-e486-43d3-9003-f814385e0891 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Get console output
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.763 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.763 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance network_info: |[{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.764 2 DEBUG oslo_concurrency.lockutils [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.764 2 DEBUG nova.network.neutron [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Refreshing network info cache for port 9adf2da2-7037-4b71-94a1-53519ef0db70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.767 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start _get_guest_xml network_info=[{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.775 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.781 2 WARNING nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.823 2 DEBUG nova.virt.libvirt.host [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.824 2 DEBUG nova.virt.libvirt.host [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.830 2 DEBUG nova.objects.instance [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 536a7b9a-251e-4caf-9625-d7add1094a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.832 2 DEBUG nova.virt.libvirt.host [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.832 2 DEBUG nova.virt.libvirt.host [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.833 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.833 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.834 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.835 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.835 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.835 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.835 2 DEBUG nova.virt.hardware [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.837 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.877 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.878 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Ensure instance console log exists: /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.879 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.879 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:34 compute-0 nova_compute[256940]: 2025-10-02 12:57:34.879 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.166 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Successfully updated port: 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.184 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.184 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.184 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:35 compute-0 ceph-mon[73668]: pgmap v2531: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 304 op/s
Oct 02 12:57:35 compute-0 ceph-mon[73668]: osdmap e347: 3 total, 3 up, 3 in
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.270 2 DEBUG nova.compute.manager [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.271 2 DEBUG nova.compute.manager [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing instance network info cache due to event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.271 2 DEBUG oslo_concurrency.lockutils [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688898913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.341 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.367 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.371 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.407 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.410 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.411 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.411 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.411 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.412 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.413 2 INFO nova.compute.manager [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Terminating instance
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.414 2 DEBUG nova.compute.manager [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:57:35 compute-0 kernel: tap1b58c8c0-ff (unregistering): left promiscuous mode
Oct 02 12:57:35 compute-0 NetworkManager[44981]: <info>  [1759409855.4789] device (tap1b58c8c0-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_controller[148123]: 2025-10-02T12:57:35Z|00786|binding|INFO|Releasing lport 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad from this chassis (sb_readonly=0)
Oct 02 12:57:35 compute-0 ovn_controller[148123]: 2025-10-02T12:57:35Z|00787|binding|INFO|Setting lport 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad down in Southbound
Oct 02 12:57:35 compute-0 ovn_controller[148123]: 2025-10-02T12:57:35Z|00788|binding|INFO|Removing iface tap1b58c8c0-ff ovn-installed in OVS
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.498 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:91:25 10.100.0.12'], port_security=['fa:16:3e:ae:91:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '74e69165-006c-41de-82af-bc44cba0a843', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.500 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.504 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.524 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a2981dc1-719c-4c88-8ab2-976a9645169e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.554 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[155fccb0-b8ce-454f-b232-f0dcdf78999a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct 02 12:57:35 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000098.scope: Consumed 16.876s CPU time.
Oct 02 12:57:35 compute-0 systemd-machined[210927]: Machine qemu-79-instance-00000098 terminated.
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.560 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7c3894-15fb-4cbc-bd0f-190c4929eefb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.586 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[724e23cb-7d3d-4a58-b691-943da8f33fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.606 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5130dbe-7503-4f69-b766-274be399b568]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752111, 'reachable_time': 18694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362659, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.625 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c5380cae-d04c-4390-88f6-2e8228a25b38]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752124, 'tstamp': 752124}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362660, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752127, 'tstamp': 752127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362660, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.626 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.646 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.647 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.647 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:35.648 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.677 2 INFO nova.virt.libvirt.driver [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Instance destroyed successfully.
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.678 2 DEBUG nova.objects.instance [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid 74e69165-006c-41de-82af-bc44cba0a843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.695 2 DEBUG nova.virt.libvirt.vif [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-972017235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-972017235',id=152,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-4cwplwm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:26Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=74e69165-006c-41de-82af-bc44cba0a843,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.696 2 DEBUG nova.network.os_vif_util [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "address": "fa:16:3e:ae:91:25", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b58c8c0-ff", "ovs_interfaceid": "1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.697 2 DEBUG nova.network.os_vif_util [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.697 2 DEBUG os_vif [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b58c8c0-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.703 2 INFO os_vif [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:91:25,bridge_name='br-int',has_traffic_filtering=True,id=1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b58c8c0-ff')
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.807 2 DEBUG nova.compute.manager [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-unplugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.808 2 DEBUG oslo_concurrency.lockutils [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.808 2 DEBUG oslo_concurrency.lockutils [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.809 2 DEBUG oslo_concurrency.lockutils [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.809 2 DEBUG nova.compute.manager [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] No waiting events found dispatching network-vif-unplugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.809 2 DEBUG nova.compute.manager [req-92031d53-2b20-48dc-8bc2-5a1e30b81896 req-50c11af7-c2c6-4f8e-90a8-5a015ceb53f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-unplugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.846 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.847 2 DEBUG nova.virt.libvirt.vif [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1523338527',display_name='tempest-ServerRescueTestJSON-server-1523338527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1523338527',id=158,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-kabplxb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:31Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=42cbf332-5b16-48ac-b3c9-9a21a922b6f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.848 2 DEBUG nova.network.os_vif_util [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.848 2 DEBUG nova.network.os_vif_util [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.850 2 DEBUG nova.objects.instance [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.867 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <uuid>42cbf332-5b16-48ac-b3c9-9a21a922b6f9</uuid>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <name>instance-0000009e</name>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueTestJSON-server-1523338527</nova:name>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:57:34</nova:creationTime>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <nova:port uuid="9adf2da2-7037-4b71-94a1-53519ef0db70">
Oct 02 12:57:35 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <system>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="serial">42cbf332-5b16-48ac-b3c9-9a21a922b6f9</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="uuid">42cbf332-5b16-48ac-b3c9-9a21a922b6f9</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </system>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <os>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </os>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <features>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </features>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk">
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config">
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:35 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:da:07:2e"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <target dev="tap9adf2da2-70"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/console.log" append="off"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <video>
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </video>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:57:35 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:57:35 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:57:35 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:57:35 compute-0 nova_compute[256940]: </domain>
Oct 02 12:57:35 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.868 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Preparing to wait for external event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.869 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.869 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.869 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.870 2 DEBUG nova.virt.libvirt.vif [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1523338527',display_name='tempest-ServerRescueTestJSON-server-1523338527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1523338527',id=158,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-kabplxb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-
791200975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:31Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=42cbf332-5b16-48ac-b3c9-9a21a922b6f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.871 2 DEBUG nova.network.os_vif_util [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.871 2 DEBUG nova.network.os_vif_util [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.872 2 DEBUG os_vif [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.873 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.875 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9adf2da2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.875 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9adf2da2-70, col_values=(('external_ids', {'iface-id': '9adf2da2-7037-4b71-94a1-53519ef0db70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:07:2e', 'vm-uuid': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 NetworkManager[44981]: <info>  [1759409855.8778] manager: (tap9adf2da2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.882 2 INFO os_vif [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70')
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.942 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.942 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.942 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:da:07:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.943 2 INFO nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Using config drive
Oct 02 12:57:35 compute-0 nova_compute[256940]: 2025-10-02 12:57:35.967 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 633 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 376 op/s
Oct 02 12:57:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:36.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3688898913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1591168438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.380 2 INFO nova.virt.libvirt.driver [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Deleting instance files /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843_del
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.381 2 INFO nova.virt.libvirt.driver [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Deletion of /var/lib/nova/instances/74e69165-006c-41de-82af-bc44cba0a843_del complete
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.446 2 INFO nova.compute.manager [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Took 1.03 seconds to destroy the instance on the hypervisor.
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.446 2 DEBUG oslo.service.loopingcall [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.447 2 DEBUG nova.compute.manager [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.447 2 DEBUG nova.network.neutron [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:57:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.665 2 INFO nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Creating config drive at /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.674 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaio25sgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.714 2 DEBUG nova.network.neutron [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updated VIF entry in instance network info cache for port 9adf2da2-7037-4b71-94a1-53519ef0db70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.716 2 DEBUG nova.network.neutron [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updating instance_info_cache with network_info: [{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.737 2 DEBUG oslo_concurrency.lockutils [req-be61ab1d-faef-45af-89e5-7e9c3a2d6c08 req-22c3f780-5db1-4987-8899-47e505b41519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.817 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaio25sgf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.850 2 DEBUG nova.storage.rbd_utils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:36 compute-0 nova_compute[256940]: 2025-10-02 12:57:36.854 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:37 compute-0 ceph-mon[73668]: pgmap v2533: 305 pgs: 305 active+clean; 633 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 376 op/s
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.603 2 DEBUG nova.network.neutron [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updating instance_info_cache with network_info: [{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.629 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.630 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Instance network_info: |[{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.631 2 DEBUG oslo_concurrency.lockutils [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.631 2 DEBUG nova.network.neutron [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.636 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Start _get_guest_xml network_info=[{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.642 2 WARNING nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.649 2 DEBUG nova.virt.libvirt.host [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.650 2 DEBUG nova.virt.libvirt.host [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.664 2 DEBUG nova.virt.libvirt.host [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.664 2 DEBUG nova.virt.libvirt.host [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.665 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.666 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.666 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.666 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.667 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.667 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.667 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.668 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.668 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.668 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.668 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.669 2 DEBUG nova.virt.hardware [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.672 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.705 2 DEBUG nova.network.neutron [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.744 2 INFO nova.compute.manager [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Took 1.30 seconds to deallocate network for instance.
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.773 2 DEBUG nova.compute.manager [req-70e4c352-617d-4eef-8bd6-c5ed18544c15 req-20ffd921-dc5d-4cb0-908c-59630cb48063 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-deleted-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.845 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.846 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.870 2 DEBUG oslo_concurrency.processutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.871 2 INFO nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Deleting local config drive /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config because it was imported into RBD.
Oct 02 12:57:37 compute-0 kernel: tap9adf2da2-70: entered promiscuous mode
Oct 02 12:57:37 compute-0 systemd-udevd[362645]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:37 compute-0 ovn_controller[148123]: 2025-10-02T12:57:37Z|00789|binding|INFO|Claiming lport 9adf2da2-7037-4b71-94a1-53519ef0db70 for this chassis.
Oct 02 12:57:37 compute-0 ovn_controller[148123]: 2025-10-02T12:57:37Z|00790|binding|INFO|9adf2da2-7037-4b71-94a1-53519ef0db70: Claiming fa:16:3e:da:07:2e 10.100.0.14
Oct 02 12:57:37 compute-0 NetworkManager[44981]: <info>  [1759409857.9160] manager: (tap9adf2da2-70): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:37.923 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:37.925 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis
Oct 02 12:57:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:37.926 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:57:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:37.927 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[24591aa8-df80-4c3b-a565-c073eac274f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:37 compute-0 NetworkManager[44981]: <info>  [1759409857.9320] device (tap9adf2da2-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:37 compute-0 NetworkManager[44981]: <info>  [1759409857.9329] device (tap9adf2da2-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:37 compute-0 ovn_controller[148123]: 2025-10-02T12:57:37Z|00791|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 up in Southbound
Oct 02 12:57:37 compute-0 ovn_controller[148123]: 2025-10-02T12:57:37Z|00792|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 ovn-installed in OVS
Oct 02 12:57:37 compute-0 nova_compute[256940]: 2025-10-02 12:57:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:37 compute-0 systemd-machined[210927]: New machine qemu-82-instance-0000009e.
Oct 02 12:57:37 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-0000009e.
Oct 02 12:57:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.031 2 DEBUG oslo_concurrency.processutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010598752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.120 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.148 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.153 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:38.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834199112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.496 2 DEBUG oslo_concurrency.processutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.506 2 DEBUG nova.compute.provider_tree [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.526 2 DEBUG nova.scheduler.client.report [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.549 2 DEBUG nova.compute.manager [req-7c1fd829-62bd-4565-a1a4-1311a17dfe12 req-e36c63e1-50f5-4153-ab30-c245ac8d8a2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.550 2 DEBUG oslo_concurrency.lockutils [req-7c1fd829-62bd-4565-a1a4-1311a17dfe12 req-e36c63e1-50f5-4153-ab30-c245ac8d8a2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.551 2 DEBUG oslo_concurrency.lockutils [req-7c1fd829-62bd-4565-a1a4-1311a17dfe12 req-e36c63e1-50f5-4153-ab30-c245ac8d8a2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.552 2 DEBUG oslo_concurrency.lockutils [req-7c1fd829-62bd-4565-a1a4-1311a17dfe12 req-e36c63e1-50f5-4153-ab30-c245ac8d8a2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.552 2 DEBUG nova.compute.manager [req-7c1fd829-62bd-4565-a1a4-1311a17dfe12 req-e36c63e1-50f5-4153-ab30-c245ac8d8a2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Processing event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.559 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702702585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.591 2 INFO nova.scheduler.client.report [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Deleted allocations for instance 74e69165-006c-41de-82af-bc44cba0a843
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.603 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.605 2 DEBUG nova.virt.libvirt.vif [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-144315854',display_name='tempest-TestNetworkAdvancedServerOps-server-144315854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-144315854',id=159,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6z5UG9v+QPALMlduFTZXKsNHAzUmhi1vVAPQaqRh/Eb1m/Bh8U9Nz+GfWZmjfT42Y3dsNer57yhh6nNEihCii1sxD04kytNUaBs6FhiA+YLAdp8+IEl0xCF5jLoyktqg==',key_name='tempest-TestNetworkAdvancedServerOps-1673917750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hmeegdtc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:32Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=536a7b9a-251e-4caf-9625-d7add1094a1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.605 2 DEBUG nova.network.os_vif_util [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.606 2 DEBUG nova.network.os_vif_util [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.607 2 DEBUG nova.objects.instance [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 536a7b9a-251e-4caf-9625-d7add1094a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.628 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <uuid>536a7b9a-251e-4caf-9625-d7add1094a1e</uuid>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <name>instance-0000009f</name>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-144315854</nova:name>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:57:37</nova:creationTime>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <nova:port uuid="2906704f-c1f9-471b-a4c1-9d6f7eeaee99">
Oct 02 12:57:38 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <system>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="serial">536a7b9a-251e-4caf-9625-d7add1094a1e</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="uuid">536a7b9a-251e-4caf-9625-d7add1094a1e</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </system>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <os>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </os>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <features>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </features>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/536a7b9a-251e-4caf-9625-d7add1094a1e_disk">
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config">
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </source>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:57:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:08:b2:58"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <target dev="tap2906704f-c1"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/console.log" append="off"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <video>
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </video>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:57:38 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:57:38 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:57:38 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:57:38 compute-0 nova_compute[256940]: </domain>
Oct 02 12:57:38 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.628 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Preparing to wait for external event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.628 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.629 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.629 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.629 2 DEBUG nova.virt.libvirt.vif [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-144315854',display_name='tempest-TestNetworkAdvancedServerOps-server-144315854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-144315854',id=159,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6z5UG9v+QPALMlduFTZXKsNHAzUmhi1vVAPQaqRh/Eb1m/Bh8U9Nz+GfWZmjfT42Y3dsNer57yhh6nNEihCii1sxD04kytNUaBs6FhiA+YLAdp8+IEl0xCF5jLoyktqg==',key_name='tempest-TestNetworkAdvancedServerOps-1673917750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hmeegdtc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:32Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=536a7b9a-251e-4caf-9625-d7add1094a1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.629 2 DEBUG nova.network.os_vif_util [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.630 2 DEBUG nova.network.os_vif_util [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.630 2 DEBUG os_vif [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2906704f-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2906704f-c1, col_values=(('external_ids', {'iface-id': '2906704f-c1f9-471b-a4c1-9d6f7eeaee99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:b2:58', 'vm-uuid': '536a7b9a-251e-4caf-9625-d7add1094a1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-0 NetworkManager[44981]: <info>  [1759409858.6388] manager: (tap2906704f-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.645 2 INFO os_vif [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1')
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.685 2 DEBUG oslo_concurrency.lockutils [None req-8318fb1a-a072-45de-bf27-087f89d65ef1 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2010598752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.719 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.720 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.720 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:08:b2:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.720 2 INFO nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Using config drive
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.756 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.764 2 DEBUG nova.compute.manager [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.764 2 DEBUG oslo_concurrency.lockutils [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74e69165-006c-41de-82af-bc44cba0a843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.765 2 DEBUG oslo_concurrency.lockutils [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.765 2 DEBUG oslo_concurrency.lockutils [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74e69165-006c-41de-82af-bc44cba0a843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.765 2 DEBUG nova.compute.manager [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] No waiting events found dispatching network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:38 compute-0 nova_compute[256940]: 2025-10-02 12:57:38.765 2 WARNING nova.compute.manager [req-ebe92d2d-6105-4379-bc79-a3b7ad0ff025 req-6c415be4-96fa-4ce6-845d-83b5a5b323e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Received unexpected event network-vif-plugged-1b58c8c0-ffe5-4c04-9e63-a4ff9c50a6ad for instance with vm_state deleted and task_state None.
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.153 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409859.1530583, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.154 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Started (Lifecycle Event)
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.156 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.158 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.162 2 INFO nova.virt.libvirt.driver [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance spawned successfully.
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.162 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.178 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.182 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.185 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.185 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.186 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.186 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.186 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.187 2 DEBUG nova.virt.libvirt.driver [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.221 2 INFO nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Creating config drive at /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.225 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lp3o5x8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.261 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.261 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409859.1534152, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.262 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Paused (Lifecycle Event)
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.264 2 INFO nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Took 7.87 seconds to spawn the instance on the hypervisor.
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.265 2 DEBUG nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.296 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.299 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409859.1581607, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.299 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Resumed (Lifecycle Event)
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.329 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.332 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.340 2 INFO nova.compute.manager [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Took 8.96 seconds to build instance.
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.358 2 DEBUG oslo_concurrency.lockutils [None req-c9ee4738-fc63-4e45-a1b2-6e2a6ebea11b dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.366 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lp3o5x8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.397 2 DEBUG nova.storage.rbd_utils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.401 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config 536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.655 2 DEBUG nova.network.neutron [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updated VIF entry in instance network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.656 2 DEBUG nova.network.neutron [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updating instance_info_cache with network_info: [{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.678 2 DEBUG oslo_concurrency.lockutils [req-53160ccd-f320-4dfc-ba8b-04aefafc60d9 req-2ccfea31-9e35-4081-876a-20da1e820b72 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.756 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.757 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.757 2 DEBUG nova.objects.instance [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:39 compute-0 ceph-mon[73668]: pgmap v2534: 305 pgs: 305 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:57:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2834199112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2702702585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.984 2 DEBUG oslo_concurrency.processutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config 536a7b9a-251e-4caf-9625-d7add1094a1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:39 compute-0 nova_compute[256940]: 2025-10-02 12:57:39.985 2 INFO nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Deleting local config drive /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e/disk.config because it was imported into RBD.
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.3 MiB/s wr, 226 op/s
Oct 02 12:57:40 compute-0 kernel: tap2906704f-c1: entered promiscuous mode
Oct 02 12:57:40 compute-0 ovn_controller[148123]: 2025-10-02T12:57:40Z|00793|binding|INFO|Claiming lport 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 for this chassis.
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.0343] manager: (tap2906704f-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ovn_controller[148123]: 2025-10-02T12:57:40Z|00794|binding|INFO|2906704f-c1f9-471b-a4c1-9d6f7eeaee99: Claiming fa:16:3e:08:b2:58 10.100.0.14
Oct 02 12:57:40 compute-0 systemd-udevd[362903]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.041 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b2:58 10.100.0.14'], port_security=['fa:16:3e:08:b2:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '536a7b9a-251e-4caf-9625-d7add1094a1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '954e63ef-62ce-4a1c-978f-3dfc5cc86176', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5cef5ef-2ebc-4724-b755-6f8f78ea6768, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2906704f-c1f9-471b-a4c1-9d6f7eeaee99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.042 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 in datapath 10bd5c42-4cc6-4b0d-b045-612e6f3fee9a bound to our chassis
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.044 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10bd5c42-4cc6-4b0d-b045-612e6f3fee9a
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.0502] device (tap2906704f-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.0517] device (tap2906704f-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:40 compute-0 ovn_controller[148123]: 2025-10-02T12:57:40Z|00795|binding|INFO|Setting lport 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 ovn-installed in OVS
Oct 02 12:57:40 compute-0 ovn_controller[148123]: 2025-10-02T12:57:40Z|00796|binding|INFO|Setting lport 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 up in Southbound
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.062 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[79afdcd8-1fc0-4af7-a5dd-a579cee98da2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.063 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10bd5c42-41 in ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.065 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10bd5c42-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.065 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c385f040-028e-44a0-a3f0-8bdb4bfb6d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.066 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce54208-c33c-4ee9-86e5-b57aeabf1fc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 systemd-machined[210927]: New machine qemu-83-instance-0000009f.
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.079 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[dad380bc-4411-4de1-85a8-dc5592948d3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-0000009f.
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.098 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e0341972-e59b-4e57-823e-cfd6e279c782]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.137 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[79ad4ef3-9cd0-4c5b-81b9-a2013504eaf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 systemd-udevd[362978]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.1449] manager: (tap10bd5c42-40): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c68d3795-4aa1-4c52-b36a-5e89e078824a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.192 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fa834e1a-90f8-4edc-9ec4-f1f6956941e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.195 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[26d015d5-d4d1-4256-b440-1db6c052ede5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.2191] device (tap10bd5c42-40): carrier: link connected
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.225 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[39f0552f-83a2-474d-81a5-f3b64d931990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.243 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b99f52f4-e68f-4063-9658-b5b590810088]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10bd5c42-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:6a:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775570, 'reachable_time': 25347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363009, 'error': None, 'target': 'ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.263 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e1966e2e-a25d-472f-92df-c020d4da50bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:6a7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775570, 'tstamp': 775570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363010, 'error': None, 'target': 'ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:40.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.282 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bed13380-2438-4b0a-a9be-d5cc24d5459f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10bd5c42-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:6a:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775570, 'reachable_time': 25347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363011, 'error': None, 'target': 'ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.320 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7271bea4-493f-4520-8a49-8970a9be2279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.403 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9d97aa-63d4-4fd9-b8a6-10c05ca806a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.405 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10bd5c42-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.406 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.407 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10bd5c42-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:40 compute-0 kernel: tap10bd5c42-40: entered promiscuous mode
Oct 02 12:57:40 compute-0 NetworkManager[44981]: <info>  [1759409860.4100] manager: (tap10bd5c42-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.413 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10bd5c42-40, col_values=(('external_ids', {'iface-id': 'b9b521b9-f530-4704-8f5f-f7d4cbd1e547'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:40 compute-0 ovn_controller[148123]: 2025-10-02T12:57:40Z|00797|binding|INFO|Releasing lport b9b521b9-f530-4704-8f5f-f7d4cbd1e547 from this chassis (sb_readonly=0)
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.418 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10bd5c42-4cc6-4b0d-b045-612e6f3fee9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10bd5c42-4cc6-4b0d-b045-612e6f3fee9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.420 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[feee5d4c-a643-439a-b44c-693dca44ef4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.421 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/10bd5c42-4cc6-4b0d-b045-612e6f3fee9a.pid.haproxy
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 10bd5c42-4cc6-4b0d-b045-612e6f3fee9a
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:57:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:40.423 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'env', 'PROCESS_TAG=haproxy-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10bd5c42-4cc6-4b0d-b045-612e6f3fee9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:40.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009484997324585893 of space, bias 1.0, pg target 2.845499197375768 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6441557469058254 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8618352177554439 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:57:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.640 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.641 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.641 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.642 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.642 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.642 2 WARNING nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state None.
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.642 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.643 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.643 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.644 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.644 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Processing event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.660 2 DEBUG nova.objects.instance [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_requests' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.675 2 DEBUG nova.network.neutron [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.816 2 INFO nova.compute.manager [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Rescuing
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.816 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.816 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.816 2 DEBUG nova.network.neutron [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:40 compute-0 nova_compute[256940]: 2025-10-02 12:57:40.860 2 DEBUG nova.policy [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:57:40 compute-0 podman[363086]: 2025-10-02 12:57:40.873076109 +0000 UTC m=+0.085752557 container create e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:57:40 compute-0 podman[363086]: 2025-10-02 12:57:40.815173956 +0000 UTC m=+0.027850434 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:57:40 compute-0 systemd[1]: Started libpod-conmon-e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954.scope.
Oct 02 12:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc4292a4f97363af7abd6c4a565c552efda5cf2cde9d04311c74a7801df429/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:40 compute-0 podman[363086]: 2025-10-02 12:57:40.98289945 +0000 UTC m=+0.195575928 container init e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:57:40 compute-0 podman[363086]: 2025-10-02 12:57:40.988878705 +0000 UTC m=+0.201555153 container start e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:57:41 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [NOTICE]   (363105) : New worker (363107) forked
Oct 02 12:57:41 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [NOTICE]   (363105) : Loading success.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.122 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409861.1216652, 536a7b9a-251e-4caf-9625-d7add1094a1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.123 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Started (Lifecycle Event)
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.125 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:57:41 compute-0 ceph-mon[73668]: pgmap v2535: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.3 MiB/s wr, 226 op/s
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.143 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.152 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.156 2 INFO nova.virt.libvirt.driver [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Instance spawned successfully.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.157 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.160 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.184 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.184 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.185 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.185 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.186 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.186 2 DEBUG nova.virt.libvirt.driver [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.191 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.191 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409861.1219184, 536a7b9a-251e-4caf-9625-d7add1094a1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.191 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Paused (Lifecycle Event)
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.226 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.231 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409861.1295455, 536a7b9a-251e-4caf-9625-d7add1094a1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.231 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Resumed (Lifecycle Event)
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.256 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.259 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.271 2 INFO nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Took 9.14 seconds to spawn the instance on the hypervisor.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.271 2 DEBUG nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.282 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.327 2 INFO nova.compute.manager [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Took 10.58 seconds to build instance.
Oct 02 12:57:41 compute-0 nova_compute[256940]: 2025-10-02 12:57:41.353 2 DEBUG oslo_concurrency.lockutils [None req-ecd27b69-fa35-4d77-99e8-53f010349891 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 275 op/s
Oct 02 12:57:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:42.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:42 compute-0 nova_compute[256940]: 2025-10-02 12:57:42.350 2 DEBUG nova.network.neutron [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Successfully created port: e81302d5-61c1-4e2a-ba60-2692d1d9f8ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:57:42 compute-0 nova_compute[256940]: 2025-10-02 12:57:42.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:42.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:42 compute-0 nova_compute[256940]: 2025-10-02 12:57:42.537 2 DEBUG nova.network.neutron [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updating instance_info_cache with network_info: [{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:42 compute-0 nova_compute[256940]: 2025-10-02 12:57:42.573 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:42 compute-0 nova_compute[256940]: 2025-10-02 12:57:42.858 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.008 2 DEBUG nova.compute.manager [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.008 2 DEBUG oslo_concurrency.lockutils [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.009 2 DEBUG oslo_concurrency.lockutils [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.009 2 DEBUG oslo_concurrency.lockutils [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.009 2 DEBUG nova.compute.manager [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] No waiting events found dispatching network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.010 2 WARNING nova.compute.manager [req-f5478e6a-9266-439e-9ca0-4610c6f105bc req-19a5f0d5-56d9-4051-8db8-456f00b35c8d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received unexpected event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 for instance with vm_state active and task_state None.
Oct 02 12:57:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Oct 02 12:57:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Oct 02 12:57:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Oct 02 12:57:43 compute-0 ceph-mon[73668]: pgmap v2536: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 275 op/s
Oct 02 12:57:43 compute-0 ceph-mon[73668]: osdmap e348: 3 total, 3 up, 3 in
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.424 2 DEBUG nova.network.neutron [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Successfully updated port: e81302d5-61c1-4e2a-ba60-2692d1d9f8ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.444 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.445 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.445 2 DEBUG nova.network.neutron [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.515 2 DEBUG nova.compute.manager [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-changed-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.516 2 DEBUG nova.compute.manager [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing instance network info cache due to event network-changed-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.517 2 DEBUG oslo_concurrency.lockutils [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:43 compute-0 nova_compute[256940]: 2025-10-02 12:57:43.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 281 op/s
Oct 02 12:57:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:44.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:44 compute-0 nova_compute[256940]: 2025-10-02 12:57:44.745 2 DEBUG nova.compute.manager [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:44 compute-0 nova_compute[256940]: 2025-10-02 12:57:44.745 2 DEBUG nova.compute.manager [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing instance network info cache due to event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:57:44 compute-0 nova_compute[256940]: 2025-10-02 12:57:44.746 2 DEBUG oslo_concurrency.lockutils [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:44 compute-0 nova_compute[256940]: 2025-10-02 12:57:44.746 2 DEBUG oslo_concurrency.lockutils [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:44 compute-0 nova_compute[256940]: 2025-10-02 12:57:44.746 2 DEBUG nova.network.neutron [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:45 compute-0 ceph-mon[73668]: pgmap v2538: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 281 op/s
Oct 02 12:57:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3758325169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:45 compute-0 podman[363118]: 2025-10-02 12:57:45.434061661 +0000 UTC m=+0.099264308 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:57:45 compute-0 podman[363119]: 2025-10-02 12:57:45.451954335 +0000 UTC m=+0.113042875 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:57:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.7 MiB/s wr, 316 op/s
Oct 02 12:57:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.370 2 DEBUG nova.network.neutron [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.398 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.403 2 DEBUG oslo_concurrency.lockutils [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.403 2 DEBUG nova.network.neutron [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing network info cache for port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.407 2 DEBUG nova.virt.libvirt.vif [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.407 2 DEBUG nova.network.os_vif_util [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.408 2 DEBUG nova.network.os_vif_util [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.408 2 DEBUG os_vif [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape81302d5-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape81302d5-61, col_values=(('external_ids', {'iface-id': 'e81302d5-61c1-4e2a-ba60-2692d1d9f8ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:14:41', 'vm-uuid': 'd09efb55-ff68-4671-b89f-a35231b739e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.4167] manager: (tape81302d5-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.425 2 INFO os_vif [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61')
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.426 2 DEBUG nova.virt.libvirt.vif [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.427 2 DEBUG nova.network.os_vif_util [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.427 2 DEBUG nova.network.os_vif_util [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.430 2 DEBUG nova.virt.libvirt.guest [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:0e:14:41"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <target dev="tape81302d5-61"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]: </interface>
Oct 02 12:57:46 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:57:46 compute-0 kernel: tape81302d5-61: entered promiscuous mode
Oct 02 12:57:46 compute-0 ovn_controller[148123]: 2025-10-02T12:57:46Z|00798|binding|INFO|Claiming lport e81302d5-61c1-4e2a-ba60-2692d1d9f8ed for this chassis.
Oct 02 12:57:46 compute-0 ovn_controller[148123]: 2025-10-02T12:57:46Z|00799|binding|INFO|e81302d5-61c1-4e2a-ba60-2692d1d9f8ed: Claiming fa:16:3e:0e:14:41 10.100.0.22
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.4520] manager: (tape81302d5-61): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.460 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:14:41 10.100.0.22'], port_security=['fa:16:3e:0e:14:41 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'd09efb55-ff68-4671-b89f-a35231b739e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62f8ddde-a575-4c3a-bc2f-e3ff31baaf2d, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.461 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed in datapath bc72facc-29fc-4f60-8da4-b2b18aba70d2 bound to our chassis
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.462 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc72facc-29fc-4f60-8da4-b2b18aba70d2
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.477 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7ae303-3892-4cb0-8902-d6d6debeea21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.478 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc72facc-21 in ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.480 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc72facc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.480 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff803912-c3bf-42aa-afbf-3a697e29f942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.481 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[142fa482-b329-4cda-8658-364653da0b8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 systemd-udevd[363165]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.4981] device (tape81302d5-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.4993] device (tape81302d5-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.502 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[17853e42-79a0-4a67-a99b-0de06844e793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 ovn_controller[148123]: 2025-10-02T12:57:46Z|00800|binding|INFO|Setting lport e81302d5-61c1-4e2a-ba60-2692d1d9f8ed ovn-installed in OVS
Oct 02 12:57:46 compute-0 ovn_controller[148123]: 2025-10-02T12:57:46Z|00801|binding|INFO|Setting lport e81302d5-61c1-4e2a-ba60-2692d1d9f8ed up in Southbound
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.520 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[63533d3c-5332-421c-a183-5ceac0576117]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.557 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cd408854-37dc-451e-8c3a-aa66a76cf1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.557 2 DEBUG nova.network.neutron [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updated VIF entry in instance network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.557 2 DEBUG nova.network.neutron [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updating instance_info_cache with network_info: [{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.5639] manager: (tapbc72facc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.562 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[64116ca9-5f77-4ff2-a44a-f658d14c6a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.573 2 DEBUG oslo_concurrency.lockutils [req-f384ac60-ddc8-4949-99fc-59d4df858899 req-ff21169c-eec3-4904-8cad-f239839f8aeb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Oct 02 12:57:46 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.615 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[81fea616-361c-4fdc-8bd5-97b451e63219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.619 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bf41933f-95f7-490d-8528-ea10fb5eea26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.643 2 DEBUG nova.virt.libvirt.driver [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.643 2 DEBUG nova.virt.libvirt.driver [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.644 2 DEBUG nova.virt.libvirt.driver [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:dd:9c:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.644 2 DEBUG nova.virt.libvirt.driver [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:0e:14:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.6501] device (tapbc72facc-20): carrier: link connected
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.658 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8edf6d16-78f7-4961-82af-57da05d02344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.667 2 DEBUG nova.virt.libvirt.guest [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:57:46</nova:creationTime>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:57:46 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     <nova:port uuid="e81302d5-61c1-4e2a-ba60-2692d1d9f8ed">
Oct 02 12:57:46 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 02 12:57:46 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:57:46 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:57:46 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:57:46 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.680 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[441cd3fa-ed38-43a9-a188-0f6ada692d50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc72facc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b8:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776213, 'reachable_time': 44930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363192, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.693 2 DEBUG oslo_concurrency.lockutils [None req-689fbb6f-2fa1-47a6-9ddb-a790a2537d7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.705 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aff8ff41-7299-40fd-9352-1821557dffb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:b83f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776213, 'tstamp': 776213}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363194, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.717 2 DEBUG nova.compute.manager [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.718 2 DEBUG oslo_concurrency.lockutils [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.718 2 DEBUG oslo_concurrency.lockutils [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.718 2 DEBUG oslo_concurrency.lockutils [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.718 2 DEBUG nova.compute.manager [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.718 2 WARNING nova.compute.manager [req-c0fccf9a-4327-4c3c-8f81-154defbecd38 req-270e6e6f-b518-4f7f-a559-70fb8466eb61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed for instance with vm_state active and task_state None.
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.738 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1c394283-9f9f-47c4-8e5a-9642698f726e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc72facc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b8:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776213, 'reachable_time': 44930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363195, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.773 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1fb5b2-e16a-43ac-9334-244740804d5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f37a628f-ba98-4104-b42c-f6a1c5fdb749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 kernel: tapbc72facc-20: entered promiscuous mode
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.879 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72facc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.880 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.880 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc72facc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 NetworkManager[44981]: <info>  [1759409866.8853] manager: (tapbc72facc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.890 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc72facc-20, col_values=(('external_ids', {'iface-id': '00b1af69-0dd2-4e03-9090-dc7ccfcae6b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:46 compute-0 ovn_controller[148123]: 2025-10-02T12:57:46Z|00802|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.892 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.893 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b507b225-5de4-4e77-a81a-cbd0a5947f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.894 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: global
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-bc72facc-29fc-4f60-8da4-b2b18aba70d2
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID bc72facc-29fc-4f60-8da4-b2b18aba70d2
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:57:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:46.895 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'env', 'PROCESS_TAG=haproxy-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc72facc-29fc-4f60-8da4-b2b18aba70d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:57:46 compute-0 nova_compute[256940]: 2025-10-02 12:57:46.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 podman[363226]: 2025-10-02 12:57:47.263586591 +0000 UTC m=+0.057002261 container create b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:57:47 compute-0 systemd[1]: Started libpod-conmon-b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270.scope.
Oct 02 12:57:47 compute-0 podman[363226]: 2025-10-02 12:57:47.230477202 +0000 UTC m=+0.023892902 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:57:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79688e8ef41fef4b0dd9e7758ab2871ad01b8621c7bc73a55d1be4c8dce86a00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:47 compute-0 podman[363226]: 2025-10-02 12:57:47.35100474 +0000 UTC m=+0.144420430 container init b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:57:47 compute-0 podman[363226]: 2025-10-02 12:57:47.356794771 +0000 UTC m=+0.150210441 container start b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [NOTICE]   (363245) : New worker (363247) forked
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [NOTICE]   (363245) : Loading success.
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.468 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.468 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.469 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.469 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.469 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.470 2 INFO nova.compute.manager [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Terminating instance
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.471 2 DEBUG nova.compute.manager [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 kernel: tapb9f9a641-2c (unregistering): left promiscuous mode
Oct 02 12:57:47 compute-0 NetworkManager[44981]: <info>  [1759409867.5296] device (tapb9f9a641-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 ovn_controller[148123]: 2025-10-02T12:57:47Z|00803|binding|INFO|Releasing lport b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 from this chassis (sb_readonly=0)
Oct 02 12:57:47 compute-0 ovn_controller[148123]: 2025-10-02T12:57:47Z|00804|binding|INFO|Setting lport b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 down in Southbound
Oct 02 12:57:47 compute-0 ovn_controller[148123]: 2025-10-02T12:57:47Z|00805|binding|INFO|Removing iface tapb9f9a641-2c ovn-installed in OVS
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.554 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:37:22 10.100.0.3'], port_security=['fa:16:3e:28:37:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e22a7944-9200-4204-a219-3f7bd720667b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.555 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.557 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.558 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[413ff085-eea1-4b9d-8f7f-60c485d02079]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.558 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace which is not needed anymore
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 ceph-mon[73668]: pgmap v2539: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.7 MiB/s wr, 316 op/s
Oct 02 12:57:47 compute-0 ceph-mon[73668]: osdmap e349: 3 total, 3 up, 3 in
Oct 02 12:57:47 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 02 12:57:47 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000090.scope: Consumed 23.265s CPU time.
Oct 02 12:57:47 compute-0 systemd-machined[210927]: Machine qemu-76-instance-00000090 terminated.
Oct 02 12:57:47 compute-0 sudo[363263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:47 compute-0 sudo[363263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:47 compute-0 sudo[363263]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.708 2 INFO nova.virt.libvirt.driver [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Instance destroyed successfully.
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.709 2 DEBUG nova.objects.instance [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid e22a7944-9200-4204-a219-3f7bd720667b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [NOTICE]   (355555) : haproxy version is 2.8.14-c23fe91
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [NOTICE]   (355555) : path to executable is /usr/sbin/haproxy
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [WARNING]  (355555) : Exiting Master process...
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [WARNING]  (355555) : Exiting Master process...
Oct 02 12:57:47 compute-0 systemd[1]: libpod-266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46.scope: Deactivated successfully.
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.724 2 DEBUG nova.virt.libvirt.vif [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-96780390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-96780390',id=144,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:53:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-a7fscw8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:53:57Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=e22a7944-9200-4204-a219-3f7bd720667b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.725 2 DEBUG nova.network.os_vif_util [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "address": "fa:16:3e:28:37:22", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f9a641-2c", "ovs_interfaceid": "b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [ALERT]    (355555) : Current worker (355557) exited with code 143 (Terminated)
Oct 02 12:57:47 compute-0 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[355551]: [WARNING]  (355555) : All workers exited. Exiting... (0)
Oct 02 12:57:47 compute-0 conmon[355551]: conmon 266c894d4da3c2598fba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46.scope/container/memory.events
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.730 2 DEBUG nova.network.os_vif_util [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.730 2 DEBUG os_vif [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:57:47 compute-0 podman[363297]: 2025-10-02 12:57:47.731358822 +0000 UTC m=+0.074152366 container died 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9a641-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:57:47 compute-0 sudo[363307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.742 2 INFO os_vif [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:37:22,bridge_name='br-int',has_traffic_filtering=True,id=b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f9a641-2c')
Oct 02 12:57:47 compute-0 sudo[363307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:47 compute-0 sudo[363307]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46-userdata-shm.mount: Deactivated successfully.
Oct 02 12:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40ab198a6fd60c5f020d5c05aec05ec2fbaecaeed1f470ec6b5f6231fcd1e0d-merged.mount: Deactivated successfully.
Oct 02 12:57:47 compute-0 podman[363297]: 2025-10-02 12:57:47.787484439 +0000 UTC m=+0.130277983 container cleanup 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:57:47 compute-0 systemd[1]: libpod-conmon-266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46.scope: Deactivated successfully.
Oct 02 12:57:47 compute-0 podman[363380]: 2025-10-02 12:57:47.857691922 +0000 UTC m=+0.042210307 container remove 266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.872 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e8af4a2d-a6c5-4696-aed0-ba9fbd2327bb]: (4, ('Thu Oct  2 12:57:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46)\n266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46\nThu Oct  2 12:57:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46)\n266c894d4da3c2598fba5891fe40004ebf32d795d185f6a2e7197b5fb5350b46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.874 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[70e1a02b-0480-4d7f-858c-f4fd37a26de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.875 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 kernel: tap7b0ec11e-00: left promiscuous mode
Oct 02 12:57:47 compute-0 nova_compute[256940]: 2025-10-02 12:57:47.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.955 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8916f4ed-5c17-45dd-a5ca-0409f8eae1e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.974 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c0807bf6-6a2a-4678-970d-cc04f74ae81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.975 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9e195ccc-6810-4d9e-8182-f7f42138a6e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:47.997 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b3051810-525d-410a-a77c-cf9c999af87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752101, 'reachable_time': 29156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363398, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b0ec11e\x2d03f1\x2d4b98\x2dac7a\x2d50b364660bd2.mount: Deactivated successfully.
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:48.001 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:57:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:48.001 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[16df5d07-2c3b-4019-ba87-b16d32b6c15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 57 KiB/s wr, 257 op/s
Oct 02 12:57:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.113 2 DEBUG nova.network.neutron [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated VIF entry in instance network info cache for port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.113 2 DEBUG nova.network.neutron [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.156 2 DEBUG oslo_concurrency.lockutils [req-0797570a-e083-4aaa-99db-cd1c9a9af95a req-5a40596a-2db6-4717-93fa-8fb5b6b21943 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.408 2 INFO nova.virt.libvirt.driver [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Deleting instance files /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b_del
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.408 2 INFO nova.virt.libvirt.driver [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Deletion of /var/lib/nova/instances/e22a7944-9200-4204-a219-3f7bd720667b_del complete
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.482 2 INFO nova.compute.manager [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.482 2 DEBUG oslo.service.loopingcall [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.483 2 DEBUG nova.compute.manager [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.483 2 DEBUG nova.network.neutron [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:57:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:48.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.505 2 DEBUG nova.compute.manager [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-unplugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.505 2 DEBUG oslo_concurrency.lockutils [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.505 2 DEBUG oslo_concurrency.lockutils [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.505 2 DEBUG oslo_concurrency.lockutils [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.506 2 DEBUG nova.compute.manager [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] No waiting events found dispatching network-vif-unplugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.506 2 DEBUG nova.compute.manager [req-76c8a5b9-d35f-4351-841b-729137cb38b9 req-852a0939-4490-4934-aad8-ef65f57211f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-unplugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:57:48 compute-0 ovn_controller[148123]: 2025-10-02T12:57:48Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:14:41 10.100.0.22
Oct 02 12:57:48 compute-0 ovn_controller[148123]: 2025-10-02T12:57:48Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:14:41 10.100.0.22
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.831 2 DEBUG nova.compute.manager [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.832 2 DEBUG oslo_concurrency.lockutils [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.832 2 DEBUG oslo_concurrency.lockutils [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.832 2 DEBUG oslo_concurrency.lockutils [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.832 2 DEBUG nova.compute.manager [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:48 compute-0 nova_compute[256940]: 2025-10-02 12:57:48.833 2 WARNING nova.compute.manager [req-6cbecf3f-5ac8-4c07-8e3b-5b93fd498df1 req-111607af-beb1-4a0f-8352-ea2976c66fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed for instance with vm_state active and task_state None.
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.205 2 DEBUG nova.network.neutron [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.223 2 INFO nova.compute.manager [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Took 0.74 seconds to deallocate network for instance.
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.258 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.259 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.340 2 DEBUG oslo_concurrency.processutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:49 compute-0 ceph-mon[73668]: pgmap v2541: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 57 KiB/s wr, 257 op/s
Oct 02 12:57:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249706266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.819 2 DEBUG oslo_concurrency.processutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.825 2 DEBUG nova.compute.provider_tree [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.844 2 DEBUG nova.scheduler.client.report [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.869 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.898 2 INFO nova.scheduler.client.report [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Deleted allocations for instance e22a7944-9200-4204-a219-3f7bd720667b
Oct 02 12:57:49 compute-0 nova_compute[256940]: 2025-10-02 12:57:49.965 2 DEBUG oslo_concurrency.lockutils [None req-89ab0151-1653-4296-8a45-08e59f2212fd 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 38 KiB/s wr, 200 op/s
Oct 02 12:57:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:50.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:50.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.591 2 DEBUG nova.compute.manager [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.591 2 DEBUG oslo_concurrency.lockutils [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e22a7944-9200-4204-a219-3f7bd720667b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.591 2 DEBUG oslo_concurrency.lockutils [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.592 2 DEBUG oslo_concurrency.lockutils [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e22a7944-9200-4204-a219-3f7bd720667b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.592 2 DEBUG nova.compute.manager [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] No waiting events found dispatching network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.592 2 WARNING nova.compute.manager [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received unexpected event network-vif-plugged-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 for instance with vm_state deleted and task_state None.
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.592 2 DEBUG nova.compute.manager [req-9219ba72-fd0b-4677-9313-3c40f6de746d req-bfbfa1db-73ba-4260-94a1-4573e3ff7202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Received event network-vif-deleted-b9f9a641-2cf3-42c8-80ed-0f1dbc678ef9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1249706266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.675 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409855.67412, 74e69165-006c-41de-82af-bc44cba0a843 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.675 2 INFO nova.compute.manager [-] [instance: 74e69165-006c-41de-82af-bc44cba0a843] VM Stopped (Lifecycle Event)
Oct 02 12:57:50 compute-0 nova_compute[256940]: 2025-10-02 12:57:50.694 2 DEBUG nova.compute.manager [None req-17d839dd-2a39-41c2-bb1c-11b144675be1 - - - - - -] [instance: 74e69165-006c-41de-82af-bc44cba0a843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:51 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Oct 02 12:57:51 compute-0 ceph-mon[73668]: pgmap v2542: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 38 KiB/s wr, 200 op/s
Oct 02 12:57:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 807 KiB/s wr, 217 op/s
Oct 02 12:57:52 compute-0 sudo[363424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:52 compute-0 sudo[363424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-0 sudo[363424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-0 sudo[363449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:52 compute-0 sudo[363449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-0 sudo[363449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-0 sudo[363474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:52 compute-0 sudo[363474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-0 sudo[363474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:52.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:52 compute-0 sudo[363499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:57:52 compute-0 sudo[363499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-0 nova_compute[256940]: 2025-10-02 12:57:52.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:52.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:52.737 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:52 compute-0 nova_compute[256940]: 2025-10-02 12:57:52.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:52.740 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:57:52 compute-0 sudo[363499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:57:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:57:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:57:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:57:52 compute-0 nova_compute[256940]: 2025-10-02 12:57:52.952 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:57:52 compute-0 ovn_controller[148123]: 2025-10-02T12:57:52Z|00806|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct 02 12:57:52 compute-0 ovn_controller[148123]: 2025-10-02T12:57:52Z|00807|binding|INFO|Releasing lport b9b521b9-f530-4704-8f5f-f7d4cbd1e547 from this chassis (sb_readonly=0)
Oct 02 12:57:52 compute-0 ovn_controller[148123]: 2025-10-02T12:57:52Z|00808|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:57:53 compute-0 nova_compute[256940]: 2025-10-02 12:57:53.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Oct 02 12:57:53 compute-0 ceph-mon[73668]: pgmap v2543: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 807 KiB/s wr, 217 op/s
Oct 02 12:57:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:57:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 939d499b-1342-45dc-a25f-1315420f97d4 does not exist
Oct 02 12:57:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 92d993ce-05dc-4387-94ce-09c50ff8cf13 does not exist
Oct 02 12:57:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev af8e66aa-df2f-4180-957b-dd51e61ffe87 does not exist
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:57:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:57:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:57:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:53 compute-0 sudo[363556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:53 compute-0 sudo[363556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:53 compute-0 sudo[363556]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:53 compute-0 sudo[363581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:53 compute-0 sudo[363581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:53 compute-0 sudo[363581]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Oct 02 12:57:53 compute-0 sudo[363606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:53 compute-0 sudo[363606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:53 compute-0 sudo[363606]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:53 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Oct 02 12:57:53 compute-0 sudo[363631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:57:53 compute-0 sudo[363631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 862 KiB/s wr, 107 op/s
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.108707043 +0000 UTC m=+0.097058231 container create 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.037523885 +0000 UTC m=+0.025875053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:54 compute-0 systemd[1]: Started libpod-conmon-5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114.scope.
Oct 02 12:57:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.294989678 +0000 UTC m=+0.283340886 container init 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.303383706 +0000 UTC m=+0.291734874 container start 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:54 compute-0 hardcore_babbage[363713]: 167 167
Oct 02 12:57:54 compute-0 systemd[1]: libpod-5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114.scope: Deactivated successfully.
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.315562673 +0000 UTC m=+0.303913871 container attach 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.315904201 +0000 UTC m=+0.304255399 container died 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:54 compute-0 ceph-mon[73668]: osdmap e350: 3 total, 3 up, 3 in
Oct 02 12:57:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3185490209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-053d83fbc94284e6203c6113fd6478b8bee22abc02d7880d5aa39e009ac42698-merged.mount: Deactivated successfully.
Oct 02 12:57:54 compute-0 podman[363697]: 2025-10-02 12:57:54.415413204 +0000 UTC m=+0.403764382 container remove 5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:54 compute-0 systemd[1]: libpod-conmon-5641a8b942496ca16ba5faca8651a0f25986b06968d2d2ef184190e4169ca114.scope: Deactivated successfully.
Oct 02 12:57:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:54.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:54 compute-0 podman[363737]: 2025-10-02 12:57:54.643578397 +0000 UTC m=+0.071235640 container create 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:57:54 compute-0 podman[363737]: 2025-10-02 12:57:54.598209479 +0000 UTC m=+0.025866742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:54 compute-0 systemd[1]: Started libpod-conmon-89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b.scope.
Oct 02 12:57:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:54 compute-0 podman[363737]: 2025-10-02 12:57:54.764515365 +0000 UTC m=+0.192172608 container init 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:54 compute-0 podman[363737]: 2025-10-02 12:57:54.772478542 +0000 UTC m=+0.200135785 container start 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:54 compute-0 podman[363737]: 2025-10-02 12:57:54.781564158 +0000 UTC m=+0.209221401 container attach 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:57:55 compute-0 ceph-mon[73668]: pgmap v2545: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 862 KiB/s wr, 107 op/s
Oct 02 12:57:55 compute-0 zealous_hawking[363754]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:57:55 compute-0 zealous_hawking[363754]: --> relative data size: 1.0
Oct 02 12:57:55 compute-0 zealous_hawking[363754]: --> All data devices are unavailable
Oct 02 12:57:55 compute-0 systemd[1]: libpod-89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b.scope: Deactivated successfully.
Oct 02 12:57:55 compute-0 podman[363737]: 2025-10-02 12:57:55.886633573 +0000 UTC m=+1.314290836 container died 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:57:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.6 MiB/s wr, 131 op/s
Oct 02 12:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b393f797798d254268c58833488b687b0e8377689f05bd001becdf8d9d7986-merged.mount: Deactivated successfully.
Oct 02 12:57:56 compute-0 podman[363737]: 2025-10-02 12:57:56.221163477 +0000 UTC m=+1.648820720 container remove 89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hawking, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 12:57:56 compute-0 systemd[1]: libpod-conmon-89c65b79bf3d2222676f9024ce61c5ff8e74208095035fae251e7c5133d80f0b.scope: Deactivated successfully.
Oct 02 12:57:56 compute-0 sudo[363631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:57:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:57:56 compute-0 sudo[363783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:56 compute-0 sudo[363783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:56 compute-0 sudo[363783]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:56 compute-0 sudo[363808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:56 compute-0 sudo[363808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:56 compute-0 sudo[363808]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:56 compute-0 sudo[363833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:56 compute-0 sudo[363833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:56 compute-0 sudo[363833]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:56 compute-0 ovn_controller[148123]: 2025-10-02T12:57:56Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:b2:58 10.100.0.14
Oct 02 12:57:56 compute-0 ovn_controller[148123]: 2025-10-02T12:57:56Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:b2:58 10.100.0.14
Oct 02 12:57:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:56.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:56 compute-0 sudo[363858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:57:56 compute-0 sudo[363858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:56 compute-0 podman[363924]: 2025-10-02 12:57:56.880338947 +0000 UTC m=+0.082433721 container create 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:56 compute-0 podman[363924]: 2025-10-02 12:57:56.819765895 +0000 UTC m=+0.021860709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:56 compute-0 systemd[1]: Started libpod-conmon-5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b.scope.
Oct 02 12:57:56 compute-0 ceph-mon[73668]: pgmap v2546: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.6 MiB/s wr, 131 op/s
Oct 02 12:57:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:56 compute-0 podman[363924]: 2025-10-02 12:57:56.992964761 +0000 UTC m=+0.195059565 container init 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:57:57 compute-0 podman[363924]: 2025-10-02 12:57:57.004340766 +0000 UTC m=+0.206435550 container start 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:57 compute-0 busy_dubinsky[363940]: 167 167
Oct 02 12:57:57 compute-0 systemd[1]: libpod-5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b.scope: Deactivated successfully.
Oct 02 12:57:57 compute-0 conmon[363940]: conmon 5e2879a410ccc12368f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b.scope/container/memory.events
Oct 02 12:57:57 compute-0 podman[363924]: 2025-10-02 12:57:57.029339795 +0000 UTC m=+0.231434609 container attach 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:57:57 compute-0 podman[363924]: 2025-10-02 12:57:57.030746552 +0000 UTC m=+0.232841356 container died 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8a9e86363c0d0b966aaff174e78a4b16273f54234fe7b6478d8ce57c0df5ba-merged.mount: Deactivated successfully.
Oct 02 12:57:57 compute-0 podman[363924]: 2025-10-02 12:57:57.301160451 +0000 UTC m=+0.503255235 container remove 5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:57:57 compute-0 systemd[1]: libpod-conmon-5e2879a410ccc12368f11dbbaa590f97bac75faced39cc6cc93a7db41934e18b.scope: Deactivated successfully.
Oct 02 12:57:57 compute-0 nova_compute[256940]: 2025-10-02 12:57:57.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:57 compute-0 podman[363966]: 2025-10-02 12:57:57.543421909 +0000 UTC m=+0.050688037 container create b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:57:57 compute-0 systemd[1]: Started libpod-conmon-b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62.scope.
Oct 02 12:57:57 compute-0 podman[363966]: 2025-10-02 12:57:57.518763289 +0000 UTC m=+0.026029436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec565c8082a3e6bce618a5fcc2e9739a2c2893f9b398c833296b7cc84e8c160b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec565c8082a3e6bce618a5fcc2e9739a2c2893f9b398c833296b7cc84e8c160b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec565c8082a3e6bce618a5fcc2e9739a2c2893f9b398c833296b7cc84e8c160b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec565c8082a3e6bce618a5fcc2e9739a2c2893f9b398c833296b7cc84e8c160b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:57:57 compute-0 podman[363966]: 2025-10-02 12:57:57.656390562 +0000 UTC m=+0.163656699 container init b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:57 compute-0 podman[363966]: 2025-10-02 12:57:57.664447701 +0000 UTC m=+0.171713818 container start b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 12:57:57 compute-0 podman[363966]: 2025-10-02 12:57:57.67097118 +0000 UTC m=+0.178237287 container attach b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:57:57 compute-0 nova_compute[256940]: 2025-10-02 12:57:57.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 483 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Oct 02 12:57:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:58.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:58 compute-0 awesome_faraday[363983]: {
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:     "1": [
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:         {
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "devices": [
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "/dev/loop3"
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             ],
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "lv_name": "ceph_lv0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "lv_size": "7511998464",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "name": "ceph_lv0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "tags": {
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.cluster_name": "ceph",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.crush_device_class": "",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.encrypted": "0",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.osd_id": "1",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.type": "block",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:                 "ceph.vdo": "0"
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             },
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "type": "block",
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:             "vg_name": "ceph_vg0"
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:         }
Oct 02 12:57:58 compute-0 awesome_faraday[363983]:     ]
Oct 02 12:57:58 compute-0 awesome_faraday[363983]: }
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:57:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:57:58 compute-0 systemd[1]: libpod-b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62.scope: Deactivated successfully.
Oct 02 12:57:58 compute-0 conmon[363983]: conmon b98e3613b2a0519dde09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62.scope/container/memory.events
Oct 02 12:57:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:57:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:58.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:58 compute-0 podman[363992]: 2025-10-02 12:57:58.523191911 +0000 UTC m=+0.024830065 container died b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec565c8082a3e6bce618a5fcc2e9739a2c2893f9b398c833296b7cc84e8c160b-merged.mount: Deactivated successfully.
Oct 02 12:57:58 compute-0 podman[363992]: 2025-10-02 12:57:58.731433617 +0000 UTC m=+0.233071761 container remove b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:57:58 compute-0 systemd[1]: libpod-conmon-b98e3613b2a0519dde097982c63c6b9ad135e6855e66ad148bc4edcaf0489d62.scope: Deactivated successfully.
Oct 02 12:57:58 compute-0 sudo[363858]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:58 compute-0 sudo[364008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:58 compute-0 sudo[364008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:58 compute-0 sudo[364008]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:58 compute-0 sudo[364033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:58 compute-0 sudo[364033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:58 compute-0 sudo[364033]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:58 compute-0 sudo[364058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:58 compute-0 sudo[364058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:58 compute-0 sudo[364058]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:59 compute-0 sudo[364083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:57:59 compute-0 sudo[364083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:59 compute-0 ceph-mon[73668]: pgmap v2547: 305 pgs: 305 active+clean; 483 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.447549146 +0000 UTC m=+0.055424700 container create b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:57:59 compute-0 systemd[1]: Started libpod-conmon-b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3.scope.
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.416204182 +0000 UTC m=+0.024079776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:57:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.573012302 +0000 UTC m=+0.180887866 container init b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.582053977 +0000 UTC m=+0.189929521 container start b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:57:59 compute-0 sleepy_mclaren[364166]: 167 167
Oct 02 12:57:59 compute-0 systemd[1]: libpod-b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3.scope: Deactivated successfully.
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.609764686 +0000 UTC m=+0.217640250 container attach b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 12:57:59 compute-0 podman[364149]: 2025-10-02 12:57:59.610841504 +0000 UTC m=+0.218717038 container died b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:57:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:57:59.742 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f22281b844107c0126fb61b791e3bc56a50bc1fb0182ca2cbd4cf36c8a8ee01-merged.mount: Deactivated successfully.
Oct 02 12:58:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 788 KiB/s rd, 7.2 MiB/s wr, 204 op/s
Oct 02 12:58:00 compute-0 podman[364149]: 2025-10-02 12:58:00.162882474 +0000 UTC m=+0.770758018 container remove b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 12:58:00 compute-0 systemd[1]: libpod-conmon-b40b4bc1a09bff271eb8286e0d6724b76dfc1cd0501be86bbd03c343b59f08c3.scope: Deactivated successfully.
Oct 02 12:58:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4123427180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2241388146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:00 compute-0 podman[364193]: 2025-10-02 12:58:00.339638552 +0000 UTC m=+0.022055563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:58:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:00 compute-0 podman[364193]: 2025-10-02 12:58:00.518730531 +0000 UTC m=+0.201147522 container create 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:58:00 compute-0 systemd[1]: Started libpod-conmon-98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338.scope.
Oct 02 12:58:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3afc80d461c9536dbf7aad97540ce9442623eec917c44de1c6fb418ece634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3afc80d461c9536dbf7aad97540ce9442623eec917c44de1c6fb418ece634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3afc80d461c9536dbf7aad97540ce9442623eec917c44de1c6fb418ece634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3afc80d461c9536dbf7aad97540ce9442623eec917c44de1c6fb418ece634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:00 compute-0 podman[364193]: 2025-10-02 12:58:00.819629932 +0000 UTC m=+0.502046933 container init 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:58:00 compute-0 podman[364193]: 2025-10-02 12:58:00.828091501 +0000 UTC m=+0.510508492 container start 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 12:58:00 compute-0 podman[364193]: 2025-10-02 12:58:00.896519618 +0000 UTC m=+0.578936609 container attach 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 12:58:01 compute-0 ceph-mon[73668]: pgmap v2548: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 788 KiB/s rd, 7.2 MiB/s wr, 204 op/s
Oct 02 12:58:01 compute-0 podman[364215]: 2025-10-02 12:58:01.411243049 +0000 UTC m=+0.071223380 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:58:01 compute-0 podman[364235]: 2025-10-02 12:58:01.511932702 +0000 UTC m=+0.075734927 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd)
Oct 02 12:58:01 compute-0 bold_kirch[364209]: {
Oct 02 12:58:01 compute-0 bold_kirch[364209]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:58:01 compute-0 bold_kirch[364209]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:58:01 compute-0 bold_kirch[364209]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:58:01 compute-0 bold_kirch[364209]:         "osd_id": 1,
Oct 02 12:58:01 compute-0 bold_kirch[364209]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:58:01 compute-0 bold_kirch[364209]:         "type": "bluestore"
Oct 02 12:58:01 compute-0 bold_kirch[364209]:     }
Oct 02 12:58:01 compute-0 bold_kirch[364209]: }
Oct 02 12:58:01 compute-0 systemd[1]: libpod-98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338.scope: Deactivated successfully.
Oct 02 12:58:01 compute-0 conmon[364209]: conmon 98aab685de5cb650e7b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338.scope/container/memory.events
Oct 02 12:58:01 compute-0 podman[364274]: 2025-10-02 12:58:01.78382543 +0000 UTC m=+0.027008632 container died 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 12:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf3afc80d461c9536dbf7aad97540ce9442623eec917c44de1c6fb418ece634-merged.mount: Deactivated successfully.
Oct 02 12:58:01 compute-0 podman[364274]: 2025-10-02 12:58:01.940352992 +0000 UTC m=+0.183536164 container remove 98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 12:58:01 compute-0 systemd[1]: libpod-conmon-98aab685de5cb650e7b822bb870729f3d81c8e64ff561168411e8b4228539338.scope: Deactivated successfully.
Oct 02 12:58:01 compute-0 sudo[364083]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:58:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:58:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 766 KiB/s rd, 6.6 MiB/s wr, 182 op/s
Oct 02 12:58:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f63edb2e-635f-44d8-a2cb-ea196d65c3df does not exist
Oct 02 12:58:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8ba81815-b558-423e-b233-414cae4fae55 does not exist
Oct 02 12:58:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c0fadd13-9395-4f28-9c4e-26f80cdcd0af does not exist
Oct 02 12:58:02 compute-0 sudo[364289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:02 compute-0 sudo[364289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:02 compute-0 sudo[364289]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:02 compute-0 sudo[364314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:58:02 compute-0 sudo[364314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:02 compute-0 sudo[364314]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.313 2 INFO nova.compute.manager [None req-8e95a8ab-9787-484d-a133-526a2bbb75ab 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Get console output
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.319 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.576 2 INFO nova.compute.manager [None req-72058835-1d55-46ad-a117-317df91394ca 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Pausing
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.577 2 DEBUG nova.objects.instance [None req-72058835-1d55-46ad-a117-317df91394ca 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'flavor' on Instance uuid 536a7b9a-251e-4caf-9625-d7add1094a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.619 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409882.618777, 536a7b9a-251e-4caf-9625-d7add1094a1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.619 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Paused (Lifecycle Event)
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.621 2 DEBUG nova.compute.manager [None req-72058835-1d55-46ad-a117-317df91394ca 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.648 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.653 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.706 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409867.7033, e22a7944-9200-4204-a219-3f7bd720667b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.706 2 INFO nova.compute.manager [-] [instance: e22a7944-9200-4204-a219-3f7bd720667b] VM Stopped (Lifecycle Event)
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.708 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:02 compute-0 nova_compute[256940]: 2025-10-02 12:58:02.763 2 DEBUG nova.compute.manager [None req-3fd99fc7-e900-415e-85b5-734bbf985883 - - - - - -] [instance: e22a7944-9200-4204-a219-3f7bd720667b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:03 compute-0 ceph-mon[73668]: pgmap v2549: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 766 KiB/s rd, 6.6 MiB/s wr, 182 op/s
Oct 02 12:58:03 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:04 compute-0 nova_compute[256940]: 2025-10-02 12:58:04.005 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:58:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 6.3 MiB/s wr, 176 op/s
Oct 02 12:58:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:04.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:05 compute-0 ceph-mon[73668]: pgmap v2550: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 6.3 MiB/s wr, 176 op/s
Oct 02 12:58:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:58:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:58:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.5 MiB/s wr, 189 op/s
Oct 02 12:58:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:06.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:06 compute-0 kernel: tap9adf2da2-70 (unregistering): left promiscuous mode
Oct 02 12:58:06 compute-0 NetworkManager[44981]: <info>  [1759409886.4127] device (tap9adf2da2-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:06 compute-0 nova_compute[256940]: 2025-10-02 12:58:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:06 compute-0 nova_compute[256940]: 2025-10-02 12:58:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:06 compute-0 ovn_controller[148123]: 2025-10-02T12:58:06Z|00809|binding|INFO|Releasing lport 9adf2da2-7037-4b71-94a1-53519ef0db70 from this chassis (sb_readonly=0)
Oct 02 12:58:06 compute-0 ovn_controller[148123]: 2025-10-02T12:58:06Z|00810|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 down in Southbound
Oct 02 12:58:06 compute-0 ovn_controller[148123]: 2025-10-02T12:58:06Z|00811|binding|INFO|Removing iface tap9adf2da2-70 ovn-installed in OVS
Oct 02 12:58:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:06.431 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:06.432 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis
Oct 02 12:58:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:06.435 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:06.436 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bd21f6-f5a1-48be-969e-8e34873ec930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:06 compute-0 nova_compute[256940]: 2025-10-02 12:58:06.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:06 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Oct 02 12:58:06 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000009e.scope: Consumed 14.795s CPU time.
Oct 02 12:58:06 compute-0 systemd-machined[210927]: Machine qemu-82-instance-0000009e terminated.
Oct 02 12:58:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.023 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance shutdown successfully after 24 seconds.
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.028 2 INFO nova.virt.libvirt.driver [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance destroyed successfully.
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.028 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.044 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Attempting rescue
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.045 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.048 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.049 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Creating image(s)
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.078 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.082 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.120 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.146 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.151 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.225 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.227 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.227 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.228 2 DEBUG oslo_concurrency.lockutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.256 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.259 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:07 compute-0 ceph-mon[73668]: pgmap v2551: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.5 MiB/s wr, 189 op/s
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:07 compute-0 sudo[364454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:07 compute-0 sudo[364454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:07 compute-0 sudo[364454]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.866 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.867 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.888 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:58:07 compute-0 sudo[364479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.889 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start _get_guest_xml network_info=[{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:da:07:2e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.889 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:07 compute-0 sudo[364479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:07 compute-0 sudo[364479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.904 2 WARNING nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.910 2 DEBUG nova.virt.libvirt.host [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.911 2 DEBUG nova.virt.libvirt.host [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.913 2 DEBUG nova.virt.libvirt.host [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.914 2 DEBUG nova.virt.libvirt.host [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.915 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.915 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.915 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.915 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.916 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.917 2 DEBUG nova.virt.hardware [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.917 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:07 compute-0 nova_compute[256940]: 2025-10-02 12:58:07.930 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 194 op/s
Oct 02 12:58:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:58:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:08.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:58:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161162007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.399 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.401 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:08.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.531 2 DEBUG nova.compute.manager [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.531 2 DEBUG oslo_concurrency.lockutils [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.531 2 DEBUG oslo_concurrency.lockutils [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.532 2 DEBUG oslo_concurrency.lockutils [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.532 2 DEBUG nova.compute.manager [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.532 2 WARNING nova.compute.manager [req-00723e56-4632-4d25-a338-1b7d78c40588 req-5826441e-1776-4cdf-b2c2-5f1e1fdd8eea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state rescuing.
Oct 02 12:58:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/161162007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913417872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.863 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:08 compute-0 nova_compute[256940]: 2025-10-02 12:58:08.865 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/931723888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.299 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.301 2 DEBUG nova.virt.libvirt.vif [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1523338527',display_name='tempest-ServerRescueTestJSON-server-1523338527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1523338527',id=158,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-kabplxb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:39Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=42cbf332-5b16-48ac-b3c9-9a21a922b6f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:da:07:2e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.301 2 DEBUG nova.network.os_vif_util [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:da:07:2e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.302 2 DEBUG nova.network.os_vif_util [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.303 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.318 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <uuid>42cbf332-5b16-48ac-b3c9-9a21a922b6f9</uuid>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <name>instance-0000009e</name>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueTestJSON-server-1523338527</nova:name>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:58:07</nova:creationTime>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <nova:port uuid="9adf2da2-7037-4b71-94a1-53519ef0db70">
Oct 02 12:58:09 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <system>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="serial">42cbf332-5b16-48ac-b3c9-9a21a922b6f9</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="uuid">42cbf332-5b16-48ac-b3c9-9a21a922b6f9</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </system>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <os>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </os>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <features>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </features>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.rescue">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config.rescue">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </source>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:58:09 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:da:07:2e"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <target dev="tap9adf2da2-70"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/console.log" append="off"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <video>
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </video>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:58:09 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:58:09 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:58:09 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:58:09 compute-0 nova_compute[256940]: </domain>
Oct 02 12:58:09 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.325 2 INFO nova.virt.libvirt.driver [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance destroyed successfully.
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.393 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.394 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.394 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.394 2 DEBUG nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:da:07:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.394 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Using config drive
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.417 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.436 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.464 2 DEBUG nova.objects.instance [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'keypairs' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.576 2 INFO nova.compute.manager [None req-1bcc440b-a38c-4b05-b91d-27f2e822e583 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Get console output
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.580 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:58:09 compute-0 ceph-mon[73668]: pgmap v2552: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 194 op/s
Oct 02 12:58:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/913417872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/931723888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.748 2 INFO nova.compute.manager [None req-20fe604e-29a7-4961-a0bb-688035c849ce 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Unpausing
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.749 2 DEBUG nova.objects.instance [None req-20fe604e-29a7-4961-a0bb-688035c849ce 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'flavor' on Instance uuid 536a7b9a-251e-4caf-9625-d7add1094a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.782 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409889.7815523, 536a7b9a-251e-4caf-9625-d7add1094a1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.783 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Resumed (Lifecycle Event)
Oct 02 12:58:09 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.786 2 DEBUG nova.virt.libvirt.guest [None req-20fe604e-29a7-4961-a0bb-688035c849ce 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.787 2 DEBUG nova.compute.manager [None req-20fe604e-29a7-4961-a0bb-688035c849ce 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.831 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.834 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:09 compute-0 nova_compute[256940]: 2025-10-02 12:58:09.865 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.001 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Creating config drive at /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.005 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprwblx18b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.147 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprwblx18b" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.179 2 DEBUG nova.storage.rbd_utils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.182 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.220 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:10.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.664 2 DEBUG nova.compute.manager [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.665 2 DEBUG oslo_concurrency.lockutils [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.665 2 DEBUG oslo_concurrency.lockutils [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.666 2 DEBUG oslo_concurrency.lockutils [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.666 2 DEBUG nova.compute.manager [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.666 2 WARNING nova.compute.manager [req-01eeb835-5407-4d44-8854-5acbf18cabf7 req-a4257900-8e79-43c2-bed7-4d5a961fcc40 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state rescuing.
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.763 2 INFO nova.compute.manager [None req-ddada5e2-8647-4c72-af86-01936f0b99d1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Get console output
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.772 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.870 2 DEBUG oslo_concurrency.processutils [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue 42cbf332-5b16-48ac-b3c9-9a21a922b6f9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.871 2 INFO nova.virt.libvirt.driver [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Deleting local config drive /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9/disk.config.rescue because it was imported into RBD.
Oct 02 12:58:10 compute-0 kernel: tap9adf2da2-70: entered promiscuous mode
Oct 02 12:58:10 compute-0 NetworkManager[44981]: <info>  [1759409890.9260] manager: (tap9adf2da2-70): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00812|binding|INFO|Claiming lport 9adf2da2-7037-4b71-94a1-53519ef0db70 for this chassis.
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00813|binding|INFO|9adf2da2-7037-4b71-94a1-53519ef0db70: Claiming fa:16:3e:da:07:2e 10.100.0.14
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:10.935 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:10.937 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:10.938 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00814|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00815|binding|INFO|Releasing lport b9b521b9-f530-4704-8f5f-f7d4cbd1e547 from this chassis (sb_readonly=0)
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00816|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:58:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:10.939 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0bafba-86cf-453e-87c6-b33b4e128f46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00817|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 ovn-installed in OVS
Oct 02 12:58:10 compute-0 ovn_controller[148123]: 2025-10-02T12:58:10Z|00818|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 up in Southbound
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 nova_compute[256940]: 2025-10-02 12:58:10.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:10 compute-0 systemd-udevd[364643]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:58:10 compute-0 systemd-machined[210927]: New machine qemu-84-instance-0000009e.
Oct 02 12:58:10 compute-0 NetworkManager[44981]: <info>  [1759409890.9744] device (tap9adf2da2-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:58:10 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-0000009e.
Oct 02 12:58:10 compute-0 NetworkManager[44981]: <info>  [1759409890.9753] device (tap9adf2da2-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.565 2 DEBUG nova.compute.manager [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.567 2 DEBUG nova.compute.manager [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing instance network info cache due to event network-changed-2906704f-c1f9-471b-a4c1-9d6f7eeaee99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.567 2 DEBUG oslo_concurrency.lockutils [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.568 2 DEBUG oslo_concurrency.lockutils [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.568 2 DEBUG nova.network.neutron [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Refreshing network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.631 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.632 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.632 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.632 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.632 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.634 2 INFO nova.compute.manager [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Terminating instance
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.635 2 DEBUG nova.compute.manager [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:58:11 compute-0 kernel: tap2906704f-c1 (unregistering): left promiscuous mode
Oct 02 12:58:11 compute-0 NetworkManager[44981]: <info>  [1759409891.6851] device (tap2906704f-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 ovn_controller[148123]: 2025-10-02T12:58:11Z|00819|binding|INFO|Releasing lport 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 from this chassis (sb_readonly=0)
Oct 02 12:58:11 compute-0 ovn_controller[148123]: 2025-10-02T12:58:11Z|00820|binding|INFO|Setting lport 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 down in Southbound
Oct 02 12:58:11 compute-0 ovn_controller[148123]: 2025-10-02T12:58:11Z|00821|binding|INFO|Removing iface tap2906704f-c1 ovn-installed in OVS
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:11.701 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b2:58 10.100.0.14'], port_security=['fa:16:3e:08:b2:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '536a7b9a-251e-4caf-9625-d7add1094a1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '954e63ef-62ce-4a1c-978f-3dfc5cc86176', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5cef5ef-2ebc-4724-b755-6f8f78ea6768, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2906704f-c1f9-471b-a4c1-9d6f7eeaee99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:11.702 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99 in datapath 10bd5c42-4cc6-4b0d-b045-612e6f3fee9a unbound from our chassis
Oct 02 12:58:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:11.703 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:58:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:11.711 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e96befee-418d-411b-90ea-ac3fc8d6a084]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:11.711 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a namespace which is not needed anymore
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Oct 02 12:58:11 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000009f.scope: Consumed 13.837s CPU time.
Oct 02 12:58:11 compute-0 systemd-machined[210927]: Machine qemu-83-instance-0000009f terminated.
Oct 02 12:58:11 compute-0 ceph-mon[73668]: pgmap v2553: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:58:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4122840404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.924 2 INFO nova.virt.libvirt.driver [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Instance destroyed successfully.
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.924 2 DEBUG nova.objects.instance [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid 536a7b9a-251e-4caf-9625-d7add1094a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.952 2 DEBUG nova.compute.manager [None req-f177f6f9-636d-40a0-a823-cf9321d9b3bb dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.953 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.953 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409891.9497957, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.953 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Resumed (Lifecycle Event)
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.955 2 DEBUG nova.virt.libvirt.vif [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-144315854',display_name='tempest-TestNetworkAdvancedServerOps-server-144315854',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-144315854',id=159,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6z5UG9v+QPALMlduFTZXKsNHAzUmhi1vVAPQaqRh/Eb1m/Bh8U9Nz+GfWZmjfT42Y3dsNer57yhh6nNEihCii1sxD04kytNUaBs6FhiA+YLAdp8+IEl0xCF5jLoyktqg==',key_name='tempest-TestNetworkAdvancedServerOps-1673917750',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hmeegdtc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:09Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=536a7b9a-251e-4caf-9625-d7add1094a1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.955 2 DEBUG nova.network.os_vif_util [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.956 2 DEBUG nova.network.os_vif_util [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.956 2 DEBUG os_vif [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2906704f-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.964 2 INFO os_vif [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:b2:58,bridge_name='br-int',has_traffic_filtering=True,id=2906704f-c1f9-471b-a4c1-9d6f7eeaee99,network=Network(10bd5c42-4cc6-4b0d-b045-612e6f3fee9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2906704f-c1')
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [NOTICE]   (363105) : haproxy version is 2.8.14-c23fe91
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [NOTICE]   (363105) : path to executable is /usr/sbin/haproxy
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [WARNING]  (363105) : Exiting Master process...
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [WARNING]  (363105) : Exiting Master process...
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [ALERT]    (363105) : Current worker (363107) exited with code 143 (Terminated)
Oct 02 12:58:11 compute-0 neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a[363101]: [WARNING]  (363105) : All workers exited. Exiting... (0)
Oct 02 12:58:11 compute-0 systemd[1]: libpod-e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954.scope: Deactivated successfully.
Oct 02 12:58:11 compute-0 nova_compute[256940]: 2025-10-02 12:58:11.992 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:11 compute-0 podman[364733]: 2025-10-02 12:58:11.994053642 +0000 UTC m=+0.178820173 container died e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.005 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 105 op/s
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.074 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.075 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409891.9499285, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.075 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Started (Lifecycle Event)
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.094 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.098 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcc4292a4f97363af7abd6c4a565c552efda5cf2cde9d04311c74a7801df429-merged.mount: Deactivated successfully.
Oct 02 12:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954-userdata-shm.mount: Deactivated successfully.
Oct 02 12:58:12 compute-0 podman[364733]: 2025-10-02 12:58:12.157544716 +0000 UTC m=+0.342311247 container cleanup e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:58:12 compute-0 systemd[1]: libpod-conmon-e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954.scope: Deactivated successfully.
Oct 02 12:58:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:12 compute-0 podman[364790]: 2025-10-02 12:58:12.39927583 +0000 UTC m=+0.214183160 container remove e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.405 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12a0d786-ee18-4cf1-a6e0-7990b926ceac]: (4, ('Thu Oct  2 12:58:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a (e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954)\ne7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954\nThu Oct  2 12:58:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a (e7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954)\ne7f636851f6964342c452b586cc06207d9bacf3e079b9f20d21be9e245c60954\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.407 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba89ca1-9b0d-4a7b-be59-0a914c950f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.408 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10bd5c42-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:12 compute-0 kernel: tap10bd5c42-40: left promiscuous mode
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.417 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[608babd9-d29c-47f7-85b5-a1cad48dd303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.451 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[617eb503-0f32-4dd2-8e3e-97a5adec75a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.453 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[96f50197-c164-437a-bed8-11c0a2d3d620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.473 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3b321c-1c00-474c-af08-7668ed042dbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775561, 'reachable_time': 36694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364806, 'error': None, 'target': 'ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.476 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10bd5c42-4cc6-4b0d-b045-612e6f3fee9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:58:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:12.476 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b153243b-de8a-44f3-b4b0-99ecbb81d2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d10bd5c42\x2d4cc6\x2d4b0d\x2db045\x2d612e6f3fee9a.mount: Deactivated successfully.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:12.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.764 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.766 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.766 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.767 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.768 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.768 2 WARNING nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state rescued and task_state None.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.769 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.769 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.770 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.770 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.770 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.771 2 WARNING nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state rescued and task_state None.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.771 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-unplugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.771 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.771 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.771 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.772 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] No waiting events found dispatching network-vif-unplugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.772 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-unplugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.772 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.772 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.773 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.773 2 DEBUG oslo_concurrency.lockutils [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.773 2 DEBUG nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] No waiting events found dispatching network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.773 2 WARNING nova.compute.manager [req-154890c6-7ef4-4a54-acb6-886b41a3d13a req-419f8a08-fcff-4e79-9a26-12b796942ab1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received unexpected event network-vif-plugged-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 for instance with vm_state active and task_state deleting.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.932 2 INFO nova.virt.libvirt.driver [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Deleting instance files /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e_del
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.933 2 INFO nova.virt.libvirt.driver [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Deletion of /var/lib/nova/instances/536a7b9a-251e-4caf-9625-d7add1094a1e_del complete
Oct 02 12:58:12 compute-0 ceph-mon[73668]: pgmap v2554: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 105 op/s
Oct 02 12:58:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1559841655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.980 2 INFO nova.compute.manager [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Took 1.34 seconds to destroy the instance on the hypervisor.
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.980 2 DEBUG oslo.service.loopingcall [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.981 2 DEBUG nova.compute.manager [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:58:12 compute-0 nova_compute[256940]: 2025-10-02 12:58:12.981 2 DEBUG nova.network.neutron [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:58:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:13 compute-0 nova_compute[256940]: 2025-10-02 12:58:13.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:13 compute-0 nova_compute[256940]: 2025-10-02 12:58:13.601 2 INFO nova.compute.manager [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Unrescuing
Oct 02 12:58:13 compute-0 nova_compute[256940]: 2025-10-02 12:58:13.601 2 DEBUG oslo_concurrency.lockutils [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:13 compute-0 nova_compute[256940]: 2025-10-02 12:58:13.601 2 DEBUG oslo_concurrency.lockutils [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:13 compute-0 nova_compute[256940]: 2025-10-02 12:58:13.601 2 DEBUG nova.network.neutron [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:58:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/815310072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3666288749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.016 2 DEBUG nova.network.neutron [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updated VIF entry in instance network info cache for port 2906704f-c1f9-471b-a4c1-9d6f7eeaee99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.017 2 DEBUG nova.network.neutron [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updating instance_info_cache with network_info: [{"id": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "address": "fa:16:3e:08:b2:58", "network": {"id": "10bd5c42-4cc6-4b0d-b045-612e6f3fee9a", "bridge": "br-int", "label": "tempest-network-smoke--7240349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2906704f-c1", "ovs_interfaceid": "2906704f-c1f9-471b-a4c1-9d6f7eeaee99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.046 2 DEBUG oslo_concurrency.lockutils [req-48e0a071-75c1-4f22-bbeb-06e9a235c689 req-d0f190b1-2f56-4f84-b7fe-7bab84b3b7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-536a7b9a-251e-4caf-9625-d7add1094a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.235 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.235 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.507 2 DEBUG nova.network.neutron [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:14.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.549 2 INFO nova.compute.manager [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Took 1.57 seconds to deallocate network for instance.
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.601 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.601 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.605 2 DEBUG nova.compute.manager [req-fbaa3882-0ae1-4c2f-955a-f92ca353e1c2 req-18afb1fb-f6e8-4a8d-8289-45cf23a26104 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Received event network-vif-deleted-2906704f-c1f9-471b-a4c1-9d6f7eeaee99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.705 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.736 2 DEBUG oslo_concurrency.processutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.806 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.806 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.810 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.811 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.811 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.981 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.982 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3835MB free_disk=20.743385314941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:58:14 compute-0 nova_compute[256940]: 2025-10-02 12:58:14.982 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987287269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.202 2 DEBUG oslo_concurrency.processutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.206 2 DEBUG nova.compute.provider_tree [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.220 2 DEBUG nova.scheduler.client.report [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.244 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.395 2 INFO nova.scheduler.client.report [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance 536a7b9a-251e-4caf-9625-d7add1094a1e
Oct 02 12:58:15 compute-0 ceph-mon[73668]: pgmap v2555: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:58:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2625827684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.464 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance d09efb55-ff68-4671-b89f-a35231b739e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.464 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.464 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.464 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.497 2 DEBUG oslo_concurrency.lockutils [None req-3d79150b-8a7b-4062-82d4-9af2f07f0b35 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "536a7b9a-251e-4caf-9625-d7add1094a1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:15 compute-0 nova_compute[256940]: 2025-10-02 12:58:15.536 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 540 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 166 op/s
Oct 02 12:58:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273215587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.078 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.085 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.102 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.124 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.124 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:16.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:16 compute-0 podman[364876]: 2025-10-02 12:58:16.407125503 +0000 UTC m=+0.074551736 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:58:16 compute-0 podman[364877]: 2025-10-02 12:58:16.452257685 +0000 UTC m=+0.116965397 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:58:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3987287269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/273215587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.632 2 DEBUG nova.network.neutron [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updating instance_info_cache with network_info: [{"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.647 2 DEBUG oslo_concurrency.lockutils [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-42cbf332-5b16-48ac-b3c9-9a21a922b6f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.648 2 DEBUG nova.objects.instance [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'flavor' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:16 compute-0 kernel: tap9adf2da2-70 (unregistering): left promiscuous mode
Oct 02 12:58:16 compute-0 NetworkManager[44981]: <info>  [1759409896.7244] device (tap9adf2da2-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:16 compute-0 ovn_controller[148123]: 2025-10-02T12:58:16Z|00822|binding|INFO|Releasing lport 9adf2da2-7037-4b71-94a1-53519ef0db70 from this chassis (sb_readonly=0)
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:16 compute-0 ovn_controller[148123]: 2025-10-02T12:58:16Z|00823|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 down in Southbound
Oct 02 12:58:16 compute-0 ovn_controller[148123]: 2025-10-02T12:58:16Z|00824|binding|INFO|Removing iface tap9adf2da2-70 ovn-installed in OVS
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:16.800 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:16.802 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis
Oct 02 12:58:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:16.803 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:16.805 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e48190-8aec-47f1-91b0-fcde6f9e0597]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:16 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Oct 02 12:58:16 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009e.scope: Consumed 5.694s CPU time.
Oct 02 12:58:16 compute-0 systemd-machined[210927]: Machine qemu-84-instance-0000009e terminated.
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.926 2 INFO nova.virt.libvirt.driver [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance destroyed successfully.
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.928 2 DEBUG nova.objects.instance [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:16 compute-0 nova_compute[256940]: 2025-10-02 12:58:16.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 kernel: tap9adf2da2-70: entered promiscuous mode
Oct 02 12:58:17 compute-0 NetworkManager[44981]: <info>  [1759409897.0582] manager: (tap9adf2da2-70): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Oct 02 12:58:17 compute-0 systemd-udevd[364924]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 ovn_controller[148123]: 2025-10-02T12:58:17Z|00825|binding|INFO|Claiming lport 9adf2da2-7037-4b71-94a1-53519ef0db70 for this chassis.
Oct 02 12:58:17 compute-0 ovn_controller[148123]: 2025-10-02T12:58:17Z|00826|binding|INFO|9adf2da2-7037-4b71-94a1-53519ef0db70: Claiming fa:16:3e:da:07:2e 10.100.0.14
Oct 02 12:58:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:17.071 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:17.073 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis
Oct 02 12:58:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:17.075 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:17 compute-0 NetworkManager[44981]: <info>  [1759409897.0779] device (tap9adf2da2-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:58:17 compute-0 NetworkManager[44981]: <info>  [1759409897.0788] device (tap9adf2da2-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:58:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:17.076 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0f695d-6813-4a67-8d37-8b5f6adf7d60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 ovn_controller[148123]: 2025-10-02T12:58:17Z|00827|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 ovn-installed in OVS
Oct 02 12:58:17 compute-0 ovn_controller[148123]: 2025-10-02T12:58:17Z|00828|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 up in Southbound
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 systemd-machined[210927]: New machine qemu-85-instance-0000009e.
Oct 02 12:58:17 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-0000009e.
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.125 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.125 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:17 compute-0 ceph-mon[73668]: pgmap v2556: 305 pgs: 305 active+clean; 540 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 166 op/s
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.726 2 DEBUG nova.compute.manager [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.726 2 DEBUG oslo_concurrency.lockutils [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.726 2 DEBUG oslo_concurrency.lockutils [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.727 2 DEBUG oslo_concurrency.lockutils [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.727 2 DEBUG nova.compute.manager [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:17 compute-0 nova_compute[256940]: 2025-10-02 12:58:17.727 2 WARNING nova.compute.manager [req-83c193e1-980b-4a59-90d6-5cee45c09302 req-f0c4f544-76bd-46f6-a659-9b675a1a5117 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:58:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.056 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.057 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409898.0561457, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.057 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Resumed (Lifecycle Event)
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.098 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.102 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.121 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.121 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409898.0571613, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.121 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Started (Lifecycle Event)
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.149 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.153 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.179 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:58:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:18.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:18 compute-0 nova_compute[256940]: 2025-10-02 12:58:18.546 2 DEBUG nova.compute.manager [None req-1cce646f-5b2e-4ce1-9773-2d3285c7c96e dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:58:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:58:19 compute-0 ceph-mon[73668]: pgmap v2557: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.856 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.858 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.859 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.859 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.859 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.860 2 WARNING nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state None.
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.860 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.861 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.861 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.862 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.862 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.863 2 WARNING nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state None.
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.863 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.864 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.864 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.865 2 DEBUG oslo_concurrency.lockutils [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.865 2 DEBUG nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:19 compute-0 nova_compute[256940]: 2025-10-02 12:58:19.866 2 WARNING nova.compute.manager [req-aa7c469c-7fbe-4747-8de9-14537cb98f57 req-bf422a8a-f859-47f1-9726-cfb4f5d0dbde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state None.
Oct 02 12:58:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 521 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Oct 02 12:58:20 compute-0 ovn_controller[148123]: 2025-10-02T12:58:20Z|00829|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct 02 12:58:20 compute-0 ovn_controller[148123]: 2025-10-02T12:58:20Z|00830|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.106 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.107 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.107 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.108 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.108 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.110 2 INFO nova.compute.manager [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Terminating instance
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.111 2 DEBUG nova.compute.manager [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 kernel: tap9adf2da2-70 (unregistering): left promiscuous mode
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:20 compute-0 NetworkManager[44981]: <info>  [1759409900.2124] device (tap9adf2da2-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 ovn_controller[148123]: 2025-10-02T12:58:20Z|00831|binding|INFO|Releasing lport 9adf2da2-7037-4b71-94a1-53519ef0db70 from this chassis (sb_readonly=0)
Oct 02 12:58:20 compute-0 ovn_controller[148123]: 2025-10-02T12:58:20Z|00832|binding|INFO|Setting lport 9adf2da2-7037-4b71-94a1-53519ef0db70 down in Southbound
Oct 02 12:58:20 compute-0 ovn_controller[148123]: 2025-10-02T12:58:20Z|00833|binding|INFO|Removing iface tap9adf2da2-70 ovn-installed in OVS
Oct 02 12:58:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:20.233 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:07:2e 10.100.0.14'], port_security=['fa:16:3e:da:07:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '42cbf332-5b16-48ac-b3c9-9a21a922b6f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9adf2da2-7037-4b71-94a1-53519ef0db70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:20.235 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9adf2da2-7037-4b71-94a1-53519ef0db70 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis
Oct 02 12:58:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:20.236 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:20.238 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2372cde3-c947-4df9-825e-af7320b84c42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Oct 02 12:58:20 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009e.scope: Consumed 3.119s CPU time.
Oct 02 12:58:20 compute-0 systemd-machined[210927]: Machine qemu-85-instance-0000009e terminated.
Oct 02 12:58:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.370 2 INFO nova.virt.libvirt.driver [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Instance destroyed successfully.
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.371 2 DEBUG nova.objects.instance [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.388 2 DEBUG nova.virt.libvirt.vif [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1523338527',display_name='tempest-ServerRescueTestJSON-server-1523338527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1523338527',id=158,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-kabplxb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:18Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=42cbf332-5b16-48ac-b3c9-9a21a922b6f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.388 2 DEBUG nova.network.os_vif_util [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "9adf2da2-7037-4b71-94a1-53519ef0db70", "address": "fa:16:3e:da:07:2e", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9adf2da2-70", "ovs_interfaceid": "9adf2da2-7037-4b71-94a1-53519ef0db70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.389 2 DEBUG nova.network.os_vif_util [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.389 2 DEBUG os_vif [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9adf2da2-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:20 compute-0 nova_compute[256940]: 2025-10-02 12:58:20.400 2 INFO os_vif [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:07:2e,bridge_name='br-int',has_traffic_filtering=True,id=9adf2da2-7037-4b71-94a1-53519ef0db70,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9adf2da2-70')
Oct 02 12:58:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.154 2 INFO nova.virt.libvirt.driver [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Deleting instance files /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_del
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.156 2 INFO nova.virt.libvirt.driver [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Deletion of /var/lib/nova/instances/42cbf332-5b16-48ac-b3c9-9a21a922b6f9_del complete
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.223 2 INFO nova.compute.manager [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Took 1.11 seconds to destroy the instance on the hypervisor.
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.224 2 DEBUG oslo.service.loopingcall [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.224 2 DEBUG nova.compute.manager [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:58:21 compute-0 nova_compute[256940]: 2025-10-02 12:58:21.224 2 DEBUG nova.network.neutron [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:58:21 compute-0 ceph-mon[73668]: pgmap v2558: 305 pgs: 305 active+clean; 521 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Oct 02 12:58:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 286 op/s
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.090 2 DEBUG nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.090 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.091 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.092 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.092 2 DEBUG nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.092 2 DEBUG nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-unplugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.093 2 DEBUG nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.093 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.094 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.094 2 DEBUG oslo_concurrency.lockutils [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.094 2 DEBUG nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] No waiting events found dispatching network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.095 2 WARNING nova.compute.manager [req-a59d15d1-956f-4e2c-8ba6-8a59aea49a57 req-e57c5958-789b-4219-89dc-16a6b7512f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received unexpected event network-vif-plugged-9adf2da2-7037-4b71-94a1-53519ef0db70 for instance with vm_state active and task_state deleting.
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.443 2 DEBUG nova.network.neutron [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.474 2 INFO nova.compute.manager [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Took 1.25 seconds to deallocate network for instance.
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.534 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.535 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.577 2 DEBUG nova.compute.manager [req-6003a7f2-0de1-47b6-a1a3-f7e9e757fa5e req-0d76c37d-935e-4972-a31a-9a5f62acc21f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Received event network-vif-deleted-9adf2da2-7037-4b71-94a1-53519ef0db70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:22 compute-0 nova_compute[256940]: 2025-10-02 12:58:22.627 2 DEBUG oslo_concurrency.processutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852028571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.113 2 DEBUG oslo_concurrency.processutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.119 2 DEBUG nova.compute.provider_tree [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.135 2 DEBUG nova.scheduler.client.report [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.156 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.177 2 INFO nova.scheduler.client.report [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Deleted allocations for instance 42cbf332-5b16-48ac-b3c9-9a21a922b6f9
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:23 compute-0 nova_compute[256940]: 2025-10-02 12:58:23.232 2 DEBUG oslo_concurrency.lockutils [None req-48aae531-87b9-4540-8830-375ba505309a dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "42cbf332-5b16-48ac-b3c9-9a21a922b6f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:24 compute-0 ceph-mon[73668]: pgmap v2559: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 286 op/s
Oct 02 12:58:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2852028571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1139249947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:24 compute-0 nova_compute[256940]: 2025-10-02 12:58:24.115 2 DEBUG nova.compute.manager [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-changed-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:24 compute-0 nova_compute[256940]: 2025-10-02 12:58:24.115 2 DEBUG nova.compute.manager [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing instance network info cache due to event network-changed-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:58:24 compute-0 nova_compute[256940]: 2025-10-02 12:58:24.116 2 DEBUG oslo_concurrency.lockutils [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:24 compute-0 nova_compute[256940]: 2025-10-02 12:58:24.116 2 DEBUG oslo_concurrency.lockutils [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:24 compute-0 nova_compute[256940]: 2025-10-02 12:58:24.116 2 DEBUG nova.network.neutron [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing network info cache for port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:58:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:24.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:25 compute-0 ceph-mon[73668]: pgmap v2560: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:25 compute-0 nova_compute[256940]: 2025-10-02 12:58:25.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 287 op/s
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:58:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.456 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.483 2 DEBUG nova.network.neutron [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated VIF entry in instance network info cache for port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.484 2 DEBUG nova.network.neutron [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:26.496 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:26.496 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:26.497 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.515 2 DEBUG oslo_concurrency.lockutils [req-330a309f-4060-4d57-9282-340e2d0fa8c4 req-3d265081-d27d-4b3e-912a-d3c2aa88b45b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.517 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.517 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:58:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:26.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.923 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409891.9221547, 536a7b9a-251e-4caf-9625-d7add1094a1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.924 2 INFO nova.compute.manager [-] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] VM Stopped (Lifecycle Event)
Oct 02 12:58:26 compute-0 nova_compute[256940]: 2025-10-02 12:58:26.953 2 DEBUG nova.compute.manager [None req-5aedc126-a082-4006-bf8b-1264efd462aa - - - - - -] [instance: 536a7b9a-251e-4caf-9625-d7add1094a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:27 compute-0 nova_compute[256940]: 2025-10-02 12:58:27.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:27 compute-0 ceph-mon[73668]: pgmap v2561: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 287 op/s
Oct 02 12:58:27 compute-0 nova_compute[256940]: 2025-10-02 12:58:27.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:27 compute-0 sudo[365084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:27 compute-0 sudo[365084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:27 compute-0 sudo[365084]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 380 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:28 compute-0 sudo[365109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:28 compute-0 sudo[365109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:28 compute-0 sudo[365109]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:58:28
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr']
Oct 02 12:58:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:58:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:58:29 compute-0 nova_compute[256940]: 2025-10-02 12:58:29.758 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:29 compute-0 nova_compute[256940]: 2025-10-02 12:58:29.778 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:29 compute-0 nova_compute[256940]: 2025-10-02 12:58:29.778 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:58:29 compute-0 ceph-mon[73668]: pgmap v2562: 305 pgs: 305 active+clean; 380 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Oct 02 12:58:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:30.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:30 compute-0 nova_compute[256940]: 2025-10-02 12:58:30.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:30 compute-0 nova_compute[256940]: 2025-10-02 12:58:30.773 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/697967858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:30 compute-0 ceph-mon[73668]: pgmap v2563: 305 pgs: 305 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Oct 02 12:58:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1978523503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 02 12:58:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1911091351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:32 compute-0 podman[365137]: 2025-10-02 12:58:32.397167645 +0000 UTC m=+0.068980302 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:58:32 compute-0 podman[365136]: 2025-10-02 12:58:32.432639125 +0000 UTC m=+0.105035927 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:58:32 compute-0 nova_compute[256940]: 2025-10-02 12:58:32.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:33 compute-0 ceph-mon[73668]: pgmap v2564: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 02 12:58:33 compute-0 nova_compute[256940]: 2025-10-02 12:58:33.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 02 12:58:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:35 compute-0 nova_compute[256940]: 2025-10-02 12:58:35.367 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409900.365472, 42cbf332-5b16-48ac-b3c9-9a21a922b6f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:35 compute-0 nova_compute[256940]: 2025-10-02 12:58:35.367 2 INFO nova.compute.manager [-] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] VM Stopped (Lifecycle Event)
Oct 02 12:58:35 compute-0 nova_compute[256940]: 2025-10-02 12:58:35.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:35 compute-0 nova_compute[256940]: 2025-10-02 12:58:35.412 2 DEBUG nova.compute.manager [None req-6668ff2b-696c-40e2-8225-f6d45430d643 - - - - - -] [instance: 42cbf332-5b16-48ac-b3c9-9a21a922b6f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:35 compute-0 ceph-mon[73668]: pgmap v2565: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 02 12:58:35 compute-0 ovn_controller[148123]: 2025-10-02T12:58:35Z|00834|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct 02 12:58:35 compute-0 ovn_controller[148123]: 2025-10-02T12:58:35Z|00835|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:58:35 compute-0 nova_compute[256940]: 2025-10-02 12:58:35.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 1.8 MiB/s wr, 130 op/s
Oct 02 12:58:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:37 compute-0 nova_compute[256940]: 2025-10-02 12:58:37.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:37 compute-0 ceph-mon[73668]: pgmap v2566: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 1.8 MiB/s wr, 130 op/s
Oct 02 12:58:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4207214228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 12:58:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:39 compute-0 ceph-mon[73668]: pgmap v2567: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:58:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:40 compute-0 nova_compute[256940]: 2025-10-02 12:58:40.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005732874441652708 of space, bias 1.0, pg target 1.7198623324958124 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:58:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:58:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:40.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:41 compute-0 ceph-mon[73668]: pgmap v2568: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:58:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:58:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:42 compute-0 nova_compute[256940]: 2025-10-02 12:58:42.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:43 compute-0 ceph-mon[73668]: pgmap v2569: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:58:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2366804538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1199214120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 12:58:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 12:58:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:45 compute-0 nova_compute[256940]: 2025-10-02 12:58:45.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-0 ceph-mon[73668]: pgmap v2570: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 12:58:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:58:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:46 compute-0 nova_compute[256940]: 2025-10-02 12:58:46.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:46.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:46 compute-0 ceph-mon[73668]: pgmap v2571: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:58:47 compute-0 podman[365187]: 2025-10-02 12:58:47.39113934 +0000 UTC m=+0.062087843 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:58:47 compute-0 podman[365188]: 2025-10-02 12:58:47.423946361 +0000 UTC m=+0.089620927 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:58:47 compute-0 nova_compute[256940]: 2025-10-02 12:58:47.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 128 op/s
Oct 02 12:58:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:48 compute-0 sudo[365232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:48 compute-0 sudo[365232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:48 compute-0 sudo[365232]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:48 compute-0 sudo[365257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:48 compute-0 sudo[365257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:48 compute-0 sudo[365257]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:48.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:48.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:49 compute-0 ceph-mon[73668]: pgmap v2572: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 128 op/s
Oct 02 12:58:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Oct 02 12:58:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:50 compute-0 nova_compute[256940]: 2025-10-02 12:58:50.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:50 compute-0 nova_compute[256940]: 2025-10-02 12:58:50.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:51 compute-0 ceph-mon[73668]: pgmap v2573: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Oct 02 12:58:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Oct 02 12:58:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:52 compute-0 nova_compute[256940]: 2025-10-02 12:58:52.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:53 compute-0 ceph-mon[73668]: pgmap v2574: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Oct 02 12:58:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 02 12:58:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:55 compute-0 ceph-mon[73668]: pgmap v2575: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.580 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.581 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.617 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.704 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.705 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.712 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.712 2 INFO nova.compute.claims [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Claim successful on node compute-0.ctlplane.example.com
Oct 02 12:58:55 compute-0 nova_compute[256940]: 2025-10-02 12:58:55.876 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:58:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:56.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519288926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:56 compute-0 nova_compute[256940]: 2025-10-02 12:58:56.378 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:56 compute-0 nova_compute[256940]: 2025-10-02 12:58:56.385 2 DEBUG nova.compute.provider_tree [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:56.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:57 compute-0 ceph-mon[73668]: pgmap v2576: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:58:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/519288926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.415 2 DEBUG nova.scheduler.client.report [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.520 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.520 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.665 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.666 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.707 2 INFO nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.785 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.979 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.980 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:58:57 compute-0 nova_compute[256940]: 2025-10-02 12:58:57.980 2 INFO nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Creating image(s)
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.018 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.048 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.083 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.087 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.159 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.160 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.161 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.161 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.189 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.194 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e77ed6db-0772-42cd-80a1-109f22d73463_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:58:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.538 2 DEBUG nova.policy [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c573c08a89345349ee60a0f1fd80d32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e75bb30d28cd465fb9e94a4b8bc63349', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:58:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:58:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:58:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.618 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e77ed6db-0772-42cd-80a1-109f22d73463_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.715 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] resizing rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.869 2 DEBUG nova.objects.instance [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'migration_context' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.893 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.894 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Ensure instance console log exists: /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.894 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.895 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:58 compute-0 nova_compute[256940]: 2025-10-02 12:58:58.895 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:59.313 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:59 compute-0 nova_compute[256940]: 2025-10-02 12:58:59.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:58:59.314 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:58:59 compute-0 ceph-mon[73668]: pgmap v2577: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 12:58:59 compute-0 nova_compute[256940]: 2025-10-02 12:58:59.503 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Successfully created port: 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:59:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 433 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 120 op/s
Oct 02 12:59:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:00 compute-0 nova_compute[256940]: 2025-10-02 12:59:00.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:00.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:01.316 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:01 compute-0 ceph-mon[73668]: pgmap v2578: 305 pgs: 305 active+clean; 433 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 120 op/s
Oct 02 12:59:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.359 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Successfully updated port: 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:59:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.390 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.391 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.391 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.494 2 DEBUG nova.compute.manager [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.494 2 DEBUG nova.compute.manager [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing instance network info cache due to event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.495 2 DEBUG oslo_concurrency.lockutils [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:02 compute-0 nova_compute[256940]: 2025-10-02 12:59:02.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:02 compute-0 sudo[365477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:02 compute-0 sudo[365477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-0 sudo[365477]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-0 sudo[365514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:02 compute-0 sudo[365514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-0 podman[365501]: 2025-10-02 12:59:02.593541604 +0000 UTC m=+0.063665454 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:59:02 compute-0 sudo[365514]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-0 podman[365502]: 2025-10-02 12:59:02.599087658 +0000 UTC m=+0.065716507 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:02 compute-0 sudo[365562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:02 compute-0 sudo[365562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-0 sudo[365562]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-0 sudo[365588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:59:02 compute-0 sudo[365588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-0 ceph-mon[73668]: pgmap v2579: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:59:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 12:59:02 compute-0 sudo[365588]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:59:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 12:59:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:59:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:03 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:03 compute-0 sudo[365632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:03 compute-0 sudo[365632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-0 sudo[365632]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-0 sudo[365657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:03 compute-0 sudo[365657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-0 sudo[365657]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-0 sudo[365682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:03 compute-0 sudo[365682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-0 sudo[365682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-0 sudo[365707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:59:03 compute-0 sudo[365707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-0 nova_compute[256940]: 2025-10-02 12:59:03.475 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:59:03 compute-0 sudo[365707]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a06cd32f-1f7e-44af-8ad1-7b93df6681cc does not exist
Oct 02 12:59:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2e6d18a9-37ed-46bd-86f1-bc8b831a5d96 does not exist
Oct 02 12:59:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4ad4e340-9aab-4c1c-a4dc-1ef371ed02fc does not exist
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 12:59:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:04 compute-0 sudo[365762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:04 compute-0 sudo[365762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:04 compute-0 sudo[365762]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:59:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-0 sudo[365787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:04 compute-0 sudo[365787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:04 compute-0 sudo[365787]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:04 compute-0 sudo[365812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:04 compute-0 sudo[365812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:04 compute-0 sudo[365812]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:04 compute-0 sudo[365837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 12:59:04 compute-0 sudo[365837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.640695762 +0000 UTC m=+0.091392893 container create bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.578893288 +0000 UTC m=+0.029590439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:04 compute-0 systemd[1]: Started libpod-conmon-bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44.scope.
Oct 02 12:59:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.853566398 +0000 UTC m=+0.304263539 container init bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.863085015 +0000 UTC m=+0.313782136 container start bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.867362196 +0000 UTC m=+0.318059347 container attach bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 12:59:04 compute-0 hardcore_clarke[365920]: 167 167
Oct 02 12:59:04 compute-0 systemd[1]: libpod-bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44.scope: Deactivated successfully.
Oct 02 12:59:04 compute-0 conmon[365920]: conmon bf823b3e13e21ae32f58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44.scope/container/memory.events
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.873552387 +0000 UTC m=+0.324249508 container died bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 12:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a00d145ebd8200ad54cb3e91bf2a47019cee61860e8a9e7ac196c2c9a708be79-merged.mount: Deactivated successfully.
Oct 02 12:59:04 compute-0 podman[365903]: 2025-10-02 12:59:04.92066705 +0000 UTC m=+0.371364171 container remove bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:59:04 compute-0 systemd[1]: libpod-conmon-bf823b3e13e21ae32f5890eb191e5425ccdecaac33cd76714cd1ab7ebb963c44.scope: Deactivated successfully.
Oct 02 12:59:05 compute-0 podman[365944]: 2025-10-02 12:59:05.113791283 +0000 UTC m=+0.051972720 container create bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:59:05 compute-0 ceph-mon[73668]: pgmap v2580: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:05 compute-0 systemd[1]: Started libpod-conmon-bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7.scope.
Oct 02 12:59:05 compute-0 podman[365944]: 2025-10-02 12:59:05.091603957 +0000 UTC m=+0.029785424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:05 compute-0 podman[365944]: 2025-10-02 12:59:05.222751171 +0000 UTC m=+0.160932608 container init bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 12:59:05 compute-0 podman[365944]: 2025-10-02 12:59:05.232922525 +0000 UTC m=+0.171103962 container start bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 02 12:59:05 compute-0 podman[365944]: 2025-10-02 12:59:05.238758607 +0000 UTC m=+0.176940044 container attach bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 12:59:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:59:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:59:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:05 compute-0 nova_compute[256940]: 2025-10-02 12:59:05.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:59:06 compute-0 boring_fermi[365960]: --> passed data devices: 0 physical, 1 LVM
Oct 02 12:59:06 compute-0 boring_fermi[365960]: --> relative data size: 1.0
Oct 02 12:59:06 compute-0 boring_fermi[365960]: --> All data devices are unavailable
Oct 02 12:59:06 compute-0 systemd[1]: libpod-bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7.scope: Deactivated successfully.
Oct 02 12:59:06 compute-0 podman[365975]: 2025-10-02 12:59:06.2162758 +0000 UTC m=+0.022688091 container died bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:59:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:06.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.501 2 DEBUG nova.network.neutron [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.551 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.551 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance network_info: |[{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.552 2 DEBUG oslo_concurrency.lockutils [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.552 2 DEBUG nova.network.neutron [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.555 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start _get_guest_xml network_info=[{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.561 2 WARNING nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.572 2 DEBUG nova.virt.libvirt.host [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.573 2 DEBUG nova.virt.libvirt.host [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.580 2 DEBUG nova.virt.libvirt.host [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.581 2 DEBUG nova.virt.libvirt.host [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.583 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.583 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.585 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.585 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.586 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.586 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.586 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.586 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.587 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.587 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.587 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.588 2 DEBUG nova.virt.hardware [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:59:06 compute-0 nova_compute[256940]: 2025-10-02 12:59:06.591 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-91836dd95799748a0558162db88ddd7eb9bef069590e5cc529f9783a574fe049-merged.mount: Deactivated successfully.
Oct 02 12:59:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444165708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.100 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.131 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.135 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:07 compute-0 podman[365975]: 2025-10-02 12:59:07.45906259 +0000 UTC m=+1.265474871 container remove bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:07 compute-0 systemd[1]: libpod-conmon-bee47ca5d9e64eb72bb5cc2ad8da66ff1511758be21825e2b8fc0b9f48dc04b7.scope: Deactivated successfully.
Oct 02 12:59:07 compute-0 sudo[365837]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:07 compute-0 sudo[366051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862035785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:07 compute-0 sudo[366051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:07 compute-0 sudo[366051]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.620 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.622 2 DEBUG nova.virt.libvirt.vif [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-41587939',display_name='tempest-ServerRescueTestJSONUnderV235-server-41587939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-41587939',id=163,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e75bb30d28cd465fb9e94a4b8bc63349',ramdisk_id='',reservation_id='r-e1uxwewv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-422989214',owner_user_name='tempest-ServerRescueTestJSONUnderV235-422989214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:57Z,user_data=None,user_id='1c573c08a89345349ee60a0f1fd80d32',uuid=e77ed6db-0772-42cd-80a1-109f22d73463,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.623 2 DEBUG nova.network.os_vif_util [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converting VIF {"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.624 2 DEBUG nova.network.os_vif_util [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:07 compute-0 sudo[366078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.625 2 DEBUG nova.objects.instance [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'pci_devices' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:07 compute-0 sudo[366078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:07 compute-0 sudo[366078]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.655 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <uuid>e77ed6db-0772-42cd-80a1-109f22d73463</uuid>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <name>instance-000000a3</name>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-41587939</nova:name>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:59:06</nova:creationTime>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:user uuid="1c573c08a89345349ee60a0f1fd80d32">tempest-ServerRescueTestJSONUnderV235-422989214-project-member</nova:user>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:project uuid="e75bb30d28cd465fb9e94a4b8bc63349">tempest-ServerRescueTestJSONUnderV235-422989214</nova:project>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <nova:port uuid="20d2ae3a-8dc1-4f85-9b5c-39501552d110">
Oct 02 12:59:07 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="serial">e77ed6db-0772-42cd-80a1-109f22d73463</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="uuid">e77ed6db-0772-42cd-80a1-109f22d73463</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e77ed6db-0772-42cd-80a1-109f22d73463_disk">
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e77ed6db-0772-42cd-80a1-109f22d73463_disk.config">
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:59:07 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:83:e3:e5"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <target dev="tap20d2ae3a-8d"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/console.log" append="off"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:59:07 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:59:07 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:07 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:07 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:07 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.657 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Preparing to wait for external event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.657 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.657 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.657 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.658 2 DEBUG nova.virt.libvirt.vif [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-41587939',display_name='tempest-ServerRescueTestJSONUnderV235-server-41587939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-41587939',id=163,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e75bb30d28cd465fb9e94a4b8bc63349',ramdisk_id='',reservation_id='r-e1uxwewv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-422989214',owner_user_name='tempest-ServerRescueTestJSONUnderV235-422989214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:57Z,user_data=None,user_id='1c573c08a89345349ee60a0f1fd80d32',uuid=e77ed6db-0772-42cd-80a1-109f22d73463,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.658 2 DEBUG nova.network.os_vif_util [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converting VIF {"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.659 2 DEBUG nova.network.os_vif_util [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.659 2 DEBUG os_vif [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20d2ae3a-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20d2ae3a-8d, col_values=(('external_ids', {'iface-id': '20d2ae3a-8dc1-4f85-9b5c-39501552d110', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:e3:e5', 'vm-uuid': 'e77ed6db-0772-42cd-80a1-109f22d73463'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:07 compute-0 NetworkManager[44981]: <info>  [1759409947.6671] manager: (tap20d2ae3a-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.673 2 INFO os_vif [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d')
Oct 02 12:59:07 compute-0 sudo[366103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:07 compute-0 sudo[366103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:07 compute-0 sudo[366103]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:07 compute-0 sudo[366131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 12:59:07 compute-0 sudo[366131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.902 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.902 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.902 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No VIF found with MAC fa:16:3e:83:e3:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.903 2 INFO nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Using config drive
Oct 02 12:59:07 compute-0 nova_compute[256940]: 2025-10-02 12:59:07.937 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:08 compute-0 ceph-mon[73668]: pgmap v2581: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:59:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/444165708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3862035785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:08 compute-0 podman[366216]: 2025-10-02 12:59:08.083083518 +0000 UTC m=+0.026713525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:08 compute-0 podman[366216]: 2025-10-02 12:59:08.241601462 +0000 UTC m=+0.185231449 container create 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:08 compute-0 sudo[366230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:08 compute-0 sudo[366230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:08 compute-0 sudo[366230]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:08 compute-0 sudo[366255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:08 compute-0 sudo[366255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:08 compute-0 sudo[366255]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:08.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:08 compute-0 systemd[1]: Started libpod-conmon-3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7.scope.
Oct 02 12:59:08 compute-0 nova_compute[256940]: 2025-10-02 12:59:08.405 2 INFO nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Creating config drive at /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config
Oct 02 12:59:08 compute-0 nova_compute[256940]: 2025-10-02 12:59:08.415 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_uau54dk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:08 compute-0 nova_compute[256940]: 2025-10-02 12:59:08.562 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_uau54dk" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:08 compute-0 nova_compute[256940]: 2025-10-02 12:59:08.596 2 DEBUG nova.storage.rbd_utils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:08 compute-0 nova_compute[256940]: 2025-10-02 12:59:08.601 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config e77ed6db-0772-42cd-80a1-109f22d73463_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:08.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:08 compute-0 podman[366216]: 2025-10-02 12:59:08.92955051 +0000 UTC m=+0.873180537 container init 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:08 compute-0 podman[366216]: 2025-10-02 12:59:08.946162981 +0000 UTC m=+0.889792978 container start 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 12:59:08 compute-0 lucid_jepsen[366282]: 167 167
Oct 02 12:59:08 compute-0 systemd[1]: libpod-3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7.scope: Deactivated successfully.
Oct 02 12:59:09 compute-0 podman[366216]: 2025-10-02 12:59:09.075359915 +0000 UTC m=+1.018989892 container attach 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 12:59:09 compute-0 podman[366216]: 2025-10-02 12:59:09.076742781 +0000 UTC m=+1.020372798 container died 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.153 2 DEBUG oslo_concurrency.processutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config e77ed6db-0772-42cd-80a1-109f22d73463_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.154 2 INFO nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Deleting local config drive /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config because it was imported into RBD.
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.168 2 DEBUG nova.network.neutron [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated VIF entry in instance network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.168 2 DEBUG nova.network.neutron [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:09 compute-0 ceph-mon[73668]: pgmap v2582: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.211 2 DEBUG oslo_concurrency.lockutils [req-c2bbe517-c2ed-439e-910c-7a5ac57f17d5 req-b492ea93-11e8-4cc0-9f55-b4d7f9891556 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:09 compute-0 kernel: tap20d2ae3a-8d: entered promiscuous mode
Oct 02 12:59:09 compute-0 NetworkManager[44981]: <info>  [1759409949.2171] manager: (tap20d2ae3a-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Oct 02 12:59:09 compute-0 ovn_controller[148123]: 2025-10-02T12:59:09Z|00836|binding|INFO|Claiming lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 for this chassis.
Oct 02 12:59:09 compute-0 ovn_controller[148123]: 2025-10-02T12:59:09Z|00837|binding|INFO|20d2ae3a-8dc1-4f85-9b5c-39501552d110: Claiming fa:16:3e:83:e3:e5 10.100.0.8
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:09.233 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e3:e5 10.100.0.8'], port_security=['fa:16:3e:83:e3:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e77ed6db-0772-42cd-80a1-109f22d73463', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a033bef-73b5-46d2-bb85-453a685531c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e75bb30d28cd465fb9e94a4b8bc63349', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54ba420b-e3c4-44fc-a606-834ea00cf363', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daa96cfc-9b86-40be-a533-ba3c6259bb29, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20d2ae3a-8dc1-4f85-9b5c-39501552d110) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:09.234 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 in datapath 4a033bef-73b5-46d2-bb85-453a685531c4 bound to our chassis
Oct 02 12:59:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:09.235 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a033bef-73b5-46d2-bb85-453a685531c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:59:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:09.236 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fab821b0-1685-4c9c-82ae-84f0bc099beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:09 compute-0 ovn_controller[148123]: 2025-10-02T12:59:09Z|00838|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 ovn-installed in OVS
Oct 02 12:59:09 compute-0 ovn_controller[148123]: 2025-10-02T12:59:09Z|00839|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 up in Southbound
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:09 compute-0 systemd-udevd[366355]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:09 compute-0 systemd-machined[210927]: New machine qemu-86-instance-000000a3.
Oct 02 12:59:09 compute-0 NetworkManager[44981]: <info>  [1759409949.2683] device (tap20d2ae3a-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:59:09 compute-0 NetworkManager[44981]: <info>  [1759409949.2698] device (tap20d2ae3a-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:59:09 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-000000a3.
Oct 02 12:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b257d8dd58f572b9636b886f46779d326e2fb0a64c5fdd6a05672b486ea73dc-merged.mount: Deactivated successfully.
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.541 2 DEBUG nova.compute.manager [req-47aa1f6e-4d4f-46bb-a69e-c27007c32c42 req-d0fe7f3e-a18c-4ab2-a73c-fcd0812cac3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.543 2 DEBUG oslo_concurrency.lockutils [req-47aa1f6e-4d4f-46bb-a69e-c27007c32c42 req-d0fe7f3e-a18c-4ab2-a73c-fcd0812cac3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.544 2 DEBUG oslo_concurrency.lockutils [req-47aa1f6e-4d4f-46bb-a69e-c27007c32c42 req-d0fe7f3e-a18c-4ab2-a73c-fcd0812cac3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.544 2 DEBUG oslo_concurrency.lockutils [req-47aa1f6e-4d4f-46bb-a69e-c27007c32c42 req-d0fe7f3e-a18c-4ab2-a73c-fcd0812cac3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:09 compute-0 nova_compute[256940]: 2025-10-02 12:59:09.545 2 DEBUG nova.compute.manager [req-47aa1f6e-4d4f-46bb-a69e-c27007c32c42 req-d0fe7f3e-a18c-4ab2-a73c-fcd0812cac3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Processing event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:59:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 02 12:59:10 compute-0 podman[366216]: 2025-10-02 12:59:10.1611815 +0000 UTC m=+2.104811477 container remove 3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:59:10 compute-0 systemd[1]: libpod-conmon-3ac1efa1eb2ef817b7e630c95fedf719a3faee666816cb4f7fec4de039098df7.scope: Deactivated successfully.
Oct 02 12:59:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:10 compute-0 podman[366413]: 2025-10-02 12:59:10.354344044 +0000 UTC m=+0.028085260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.475 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409950.4744442, e77ed6db-0772-42cd-80a1-109f22d73463 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.475 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Started (Lifecycle Event)
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.480 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.484 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.487 2 INFO nova.virt.libvirt.driver [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance spawned successfully.
Oct 02 12:59:10 compute-0 nova_compute[256940]: 2025-10-02 12:59:10.487 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:59:10 compute-0 podman[366413]: 2025-10-02 12:59:10.551255745 +0000 UTC m=+0.224996911 container create acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 12:59:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:10 compute-0 systemd[1]: Started libpod-conmon-acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2.scope.
Oct 02 12:59:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4596f976c44588a08a44594a6f27b7ba3a6d7f2d4d8da906938ae6f038b5cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4596f976c44588a08a44594a6f27b7ba3a6d7f2d4d8da906938ae6f038b5cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4596f976c44588a08a44594a6f27b7ba3a6d7f2d4d8da906938ae6f038b5cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4596f976c44588a08a44594a6f27b7ba3a6d7f2d4d8da906938ae6f038b5cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:10 compute-0 podman[366413]: 2025-10-02 12:59:10.746313649 +0000 UTC m=+0.420054875 container init acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:59:10 compute-0 podman[366413]: 2025-10-02 12:59:10.758165276 +0000 UTC m=+0.431906442 container start acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 12:59:10 compute-0 podman[366413]: 2025-10-02 12:59:10.763145605 +0000 UTC m=+0.436886801 container attach acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:59:10 compute-0 ceph-mon[73668]: pgmap v2583: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 02 12:59:11 compute-0 goofy_burnell[366430]: {
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:     "1": [
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:         {
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "devices": [
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "/dev/loop3"
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             ],
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "lv_name": "ceph_lv0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "lv_size": "7511998464",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "name": "ceph_lv0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "tags": {
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.cluster_name": "ceph",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.crush_device_class": "",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.encrypted": "0",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.osd_id": "1",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.type": "block",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:                 "ceph.vdo": "0"
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             },
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "type": "block",
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:             "vg_name": "ceph_vg0"
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:         }
Oct 02 12:59:11 compute-0 goofy_burnell[366430]:     ]
Oct 02 12:59:11 compute-0 goofy_burnell[366430]: }
Oct 02 12:59:11 compute-0 systemd[1]: libpod-acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2.scope: Deactivated successfully.
Oct 02 12:59:11 compute-0 podman[366439]: 2025-10-02 12:59:11.64552701 +0000 UTC m=+0.025187455 container died acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 12:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b4596f976c44588a08a44594a6f27b7ba3a6d7f2d4d8da906938ae6f038b5cc-merged.mount: Deactivated successfully.
Oct 02 12:59:11 compute-0 podman[366439]: 2025-10-02 12:59:11.701692688 +0000 UTC m=+0.081353113 container remove acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 12:59:11 compute-0 systemd[1]: libpod-conmon-acdaee75ffb859ee2279ecf225123b62efb42db71e8bfb9e5ca9c4d6bb0d46a2.scope: Deactivated successfully.
Oct 02 12:59:11 compute-0 sudo[366131]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:11 compute-0 sudo[366454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:11 compute-0 sudo[366454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:11 compute-0 sudo[366454]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:11 compute-0 sudo[366479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:11 compute-0 sudo[366479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:11 compute-0 sudo[366479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:11 compute-0 sudo[366504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:11 compute-0 sudo[366504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:11 compute-0 sudo[366504]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:12 compute-0 sudo[366529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 12:59:12 compute-0 sudo[366529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 911 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.096 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.113 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.118 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.118 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.119 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.119 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.120 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.121 2 DEBUG nova.virt.libvirt.driver [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.154 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.157 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409950.4747717, e77ed6db-0772-42cd-80a1-109f22d73463 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.158 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Paused (Lifecycle Event)
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.200 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.204 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409950.4834242, e77ed6db-0772-42cd-80a1-109f22d73463 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.205 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Resumed (Lifecycle Event)
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.235 2 DEBUG nova.compute.manager [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.236 2 DEBUG oslo_concurrency.lockutils [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.236 2 DEBUG oslo_concurrency.lockutils [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.237 2 DEBUG oslo_concurrency.lockutils [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.237 2 DEBUG nova.compute.manager [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.237 2 WARNING nova.compute.manager [req-26cd109f-6591-4f79-91ba-edd89e6a6ae8 req-92146ca8-4bbf-4349-99df-8ca1c0daafb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state building and task_state spawning.
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.239 2 INFO nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Took 14.26 seconds to spawn the instance on the hypervisor.
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.240 2 DEBUG nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.249 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.256 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.348530679 +0000 UTC m=+0.038318726 container create f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 12:59:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:12 compute-0 systemd[1]: Started libpod-conmon-f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51.scope.
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.390 2 INFO nova.compute.manager [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Took 16.72 seconds to build instance.
Oct 02 12:59:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.332000569 +0000 UTC m=+0.021788636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.430603529 +0000 UTC m=+0.120391606 container init f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.438914435 +0000 UTC m=+0.128702482 container start f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.438 2 DEBUG oslo_concurrency.lockutils [None req-a492b49f-93d9-4020-88e3-e01c77f359c7 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.442440356 +0000 UTC m=+0.132228403 container attach f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:59:12 compute-0 pedantic_agnesi[366615]: 167 167
Oct 02 12:59:12 compute-0 systemd[1]: libpod-f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51.scope: Deactivated successfully.
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.445472525 +0000 UTC m=+0.135260572 container died f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f32e29212d287602a68a4ffbbe094d0f701194d5b583e6d8c87a4e081224aa5-merged.mount: Deactivated successfully.
Oct 02 12:59:12 compute-0 podman[366599]: 2025-10-02 12:59:12.483348998 +0000 UTC m=+0.173137045 container remove f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:59:12 compute-0 systemd[1]: libpod-conmon-f57cb72738de6751d16945d6d84848c5bfc5639a467ab9ab5f8ceb23842e7a51.scope: Deactivated successfully.
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:12 compute-0 podman[366642]: 2025-10-02 12:59:12.66525623 +0000 UTC m=+0.041816966 container create 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 12:59:12 compute-0 nova_compute[256940]: 2025-10-02 12:59:12.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-0 systemd[1]: Started libpod-conmon-01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd.scope.
Oct 02 12:59:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 12:59:12 compute-0 podman[366642]: 2025-10-02 12:59:12.648058444 +0000 UTC m=+0.024619290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1457f6f03c77302f998bd4f9b98bb43d57172bebc959dd87af55c0078f1f584d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1457f6f03c77302f998bd4f9b98bb43d57172bebc959dd87af55c0078f1f584d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1457f6f03c77302f998bd4f9b98bb43d57172bebc959dd87af55c0078f1f584d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1457f6f03c77302f998bd4f9b98bb43d57172bebc959dd87af55c0078f1f584d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:12 compute-0 podman[366642]: 2025-10-02 12:59:12.754691002 +0000 UTC m=+0.131251758 container init 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:59:12 compute-0 podman[366642]: 2025-10-02 12:59:12.767361391 +0000 UTC m=+0.143922127 container start 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 12:59:12 compute-0 podman[366642]: 2025-10-02 12:59:12.770525253 +0000 UTC m=+0.147086009 container attach 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 12:59:13 compute-0 ceph-mon[73668]: pgmap v2584: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 911 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:59:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3992647069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:13 compute-0 sshd-session[366554]: Invalid user admin from 78.128.112.74 port 48974
Oct 02 12:59:13 compute-0 sweet_ride[366659]: {
Oct 02 12:59:13 compute-0 sweet_ride[366659]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 12:59:13 compute-0 sweet_ride[366659]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 12:59:13 compute-0 sweet_ride[366659]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 12:59:13 compute-0 sweet_ride[366659]:         "osd_id": 1,
Oct 02 12:59:13 compute-0 sweet_ride[366659]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 12:59:13 compute-0 sweet_ride[366659]:         "type": "bluestore"
Oct 02 12:59:13 compute-0 sweet_ride[366659]:     }
Oct 02 12:59:13 compute-0 sweet_ride[366659]: }
Oct 02 12:59:13 compute-0 systemd[1]: libpod-01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd.scope: Deactivated successfully.
Oct 02 12:59:13 compute-0 podman[366642]: 2025-10-02 12:59:13.645905665 +0000 UTC m=+1.022466401 container died 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 12:59:13 compute-0 sshd-session[366554]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 12:59:13 compute-0 sshd-session[366554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74
Oct 02 12:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1457f6f03c77302f998bd4f9b98bb43d57172bebc959dd87af55c0078f1f584d-merged.mount: Deactivated successfully.
Oct 02 12:59:13 compute-0 podman[366642]: 2025-10-02 12:59:13.783910257 +0000 UTC m=+1.160470993 container remove 01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 12:59:13 compute-0 systemd[1]: libpod-conmon-01bcb65a65b5673ebda1ccafc4f107cc63ebbf9906c7c4c60e279030745ededd.scope: Deactivated successfully.
Oct 02 12:59:13 compute-0 sudo[366529]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 12:59:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 12:59:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5b9c3736-bf09-4bca-981d-3d9abe91027c does not exist
Oct 02 12:59:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev dba54611-a7b0-49d9-b729-4fa2885428aa does not exist
Oct 02 12:59:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 028c2073-aca3-49f2-b69a-0a8b8254ffec does not exist
Oct 02 12:59:13 compute-0 sudo[366691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:13 compute-0 sudo[366691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:13 compute-0 sudo[366691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:13 compute-0 sudo[366716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:59:13 compute-0 sudo[366716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:13 compute-0 sudo[366716]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 752 KiB/s rd, 45 KiB/s wr, 37 op/s
Oct 02 12:59:14 compute-0 nova_compute[256940]: 2025-10-02 12:59:14.064 2 INFO nova.compute.manager [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Rescuing
Oct 02 12:59:14 compute-0 nova_compute[256940]: 2025-10-02 12:59:14.065 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:14 compute-0 nova_compute[256940]: 2025-10-02 12:59:14.065 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:14 compute-0 nova_compute[256940]: 2025-10-02 12:59:14.065 2 DEBUG nova.network.neutron [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:59:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1424863242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/779617749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:14 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3718952746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:15 compute-0 ceph-mon[73668]: pgmap v2585: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 752 KiB/s rd, 45 KiB/s wr, 37 op/s
Oct 02 12:59:15 compute-0 sshd-session[366554]: Failed password for invalid user admin from 78.128.112.74 port 48974 ssh2
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.526 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.526 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.527 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.527 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.527 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.905 2 DEBUG nova.network.neutron [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.934 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169731568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:15 compute-0 nova_compute[256940]: 2025-10-02 12:59:15.989 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:16 compute-0 sshd-session[366554]: Connection closed by invalid user admin 78.128.112.74 port 48974 [preauth]
Oct 02 12:59:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 81 op/s
Oct 02 12:59:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1857352639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3169731568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/505408071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.336 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.338 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.338 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.342 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.342 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.572 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.574 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3809MB free_disk=20.784870147705078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.574 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.574 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.943 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance d09efb55-ff68-4671-b89f-a35231b739e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.945 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e77ed6db-0772-42cd-80a1-109f22d73463 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.945 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:59:16 compute-0 nova_compute[256940]: 2025-10-02 12:59:16.945 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.032 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:17 compute-0 ceph-mon[73668]: pgmap v2586: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 81 op/s
Oct 02 12:59:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232803464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.461 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.467 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.542 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.625 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.627 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:17 compute-0 nova_compute[256940]: 2025-10-02 12:59:17.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 50 KiB/s wr, 154 op/s
Oct 02 12:59:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/232803464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:18 compute-0 podman[366788]: 2025-10-02 12:59:18.393257064 +0000 UTC m=+0.059586497 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 12:59:18 compute-0 podman[366789]: 2025-10-02 12:59:18.427938515 +0000 UTC m=+0.094255698 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:59:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:19 compute-0 ceph-mon[73668]: pgmap v2587: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 50 KiB/s wr, 154 op/s
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.583 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.584 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.627 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.628 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.649 2 DEBUG nova.objects.instance [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.686 2 DEBUG nova.virt.libvirt.vif [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.687 2 DEBUG nova.network.os_vif_util [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.687 2 DEBUG nova.network.os_vif_util [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.690 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.693 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.694 2 DEBUG nova.virt.libvirt.driver [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Attempting to detach device tape81302d5-61 from instance d09efb55-ff68-4671-b89f-a35231b739e2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.695 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:0e:14:41"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <target dev="tape81302d5-61"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </interface>
Oct 02 12:59:19 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.705 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.708 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface>not found in domain: <domain type='kvm' id='81'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <name>instance-0000009d</name>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <uuid>d09efb55-ff68-4671-b89f-a35231b739e2</uuid>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:57:46</nova:creationTime>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:port uuid="e81302d5-61c1-4e2a-ba60-2692d1d9f8ed">
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='serial'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='uuid'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk' index='2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk.config' index='1'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:dd:9c:44'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='tape71909c9-67'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:0e:14:41'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='tape81302d5-61'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='net1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </target>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </console>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c814,c933</label>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c814,c933</imagelabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:19 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.721 2 INFO nova.virt.libvirt.driver [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tape81302d5-61 from instance d09efb55-ff68-4671-b89f-a35231b739e2 from the persistent domain config.
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.721 2 DEBUG nova.virt.libvirt.driver [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] (1/8): Attempting to detach device tape81302d5-61 with device alias net1 from instance d09efb55-ff68-4671-b89f-a35231b739e2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.721 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <mac address="fa:16:3e:0e:14:41"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <model type="virtio"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <mtu size="1442"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <target dev="tape81302d5-61"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </interface>
Oct 02 12:59:19 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:59:19 compute-0 kernel: tape81302d5-61 (unregistering): left promiscuous mode
Oct 02 12:59:19 compute-0 NetworkManager[44981]: <info>  [1759409959.8158] device (tape81302d5-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:19 compute-0 ovn_controller[148123]: 2025-10-02T12:59:19Z|00840|binding|INFO|Releasing lport e81302d5-61c1-4e2a-ba60-2692d1d9f8ed from this chassis (sb_readonly=0)
Oct 02 12:59:19 compute-0 ovn_controller[148123]: 2025-10-02T12:59:19Z|00841|binding|INFO|Setting lport e81302d5-61c1-4e2a-ba60-2692d1d9f8ed down in Southbound
Oct 02 12:59:19 compute-0 ovn_controller[148123]: 2025-10-02T12:59:19Z|00842|binding|INFO|Removing iface tape81302d5-61 ovn-installed in OVS
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.866 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759409959.8658085, d09efb55-ff68-4671-b89f-a35231b739e2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:59:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:19.870 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:14:41 10.100.0.22', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'd09efb55-ff68-4671-b89f-a35231b739e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62f8ddde-a575-4c3a-bc2f-e3ff31baaf2d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:19.871 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed in datapath bc72facc-29fc-4f60-8da4-b2b18aba70d2 unbound from our chassis
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.870 2 DEBUG nova.virt.libvirt.driver [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Start waiting for the detach event from libvirt for device tape81302d5-61 with device alias net1 for instance d09efb55-ff68-4671-b89f-a35231b739e2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.871 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:19.872 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc72facc-29fc-4f60-8da4-b2b18aba70d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:59:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:19.874 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48f3ed04-f767-472e-9e53-fd7fb1162609]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:19.874 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 namespace which is not needed anymore
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.877 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface>not found in domain: <domain type='kvm' id='81'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <name>instance-0000009d</name>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <uuid>d09efb55-ff68-4671-b89f-a35231b739e2</uuid>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:57:46</nova:creationTime>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:port uuid="e81302d5-61c1-4e2a-ba60-2692d1d9f8ed">
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='serial'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='uuid'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk' index='2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk.config' index='1'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:dd:9c:44'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target dev='tape71909c9-67'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       </target>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </console>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c814,c933</label>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c814,c933</imagelabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:19 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.877 2 INFO nova.virt.libvirt.driver [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tape81302d5-61 from instance d09efb55-ff68-4671-b89f-a35231b739e2 from the live domain config.
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.883 2 DEBUG nova.virt.libvirt.vif [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.883 2 DEBUG nova.network.os_vif_util [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.884 2 DEBUG nova.network.os_vif_util [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.884 2 DEBUG os_vif [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape81302d5-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.895 2 INFO os_vif [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61')
Oct 02 12:59:19 compute-0 nova_compute[256940]: 2025-10-02 12:59:19.896 2 DEBUG nova.virt.libvirt.guest [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:59:19</nova:creationTime>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:19 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:19 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:19 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:19 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:19 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:59:20 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [NOTICE]   (363245) : haproxy version is 2.8.14-c23fe91
Oct 02 12:59:20 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [NOTICE]   (363245) : path to executable is /usr/sbin/haproxy
Oct 02 12:59:20 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [ALERT]    (363245) : Current worker (363247) exited with code 143 (Terminated)
Oct 02 12:59:20 compute-0 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[363241]: [WARNING]  (363245) : All workers exited. Exiting... (0)
Oct 02 12:59:20 compute-0 systemd[1]: libpod-b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270.scope: Deactivated successfully.
Oct 02 12:59:20 compute-0 podman[366858]: 2025-10-02 12:59:20.009680683 +0000 UTC m=+0.047077533 container died b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270-userdata-shm.mount: Deactivated successfully.
Oct 02 12:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-79688e8ef41fef4b0dd9e7758ab2871ad01b8621c7bc73a55d1be4c8dce86a00-merged.mount: Deactivated successfully.
Oct 02 12:59:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 54 KiB/s wr, 181 op/s
Oct 02 12:59:20 compute-0 podman[366858]: 2025-10-02 12:59:20.074317201 +0000 UTC m=+0.111714051 container cleanup b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:59:20 compute-0 systemd[1]: libpod-conmon-b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270.scope: Deactivated successfully.
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.160 2 DEBUG nova.compute.manager [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-unplugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.160 2 DEBUG oslo_concurrency.lockutils [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.161 2 DEBUG oslo_concurrency.lockutils [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.161 2 DEBUG oslo_concurrency.lockutils [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.161 2 DEBUG nova.compute.manager [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-unplugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.162 2 WARNING nova.compute.manager [req-c8c3e57d-1676-4c49-a8cb-161b095c0521 req-773868c2-6f44-4591-b601-45f5aaa869c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-unplugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed for instance with vm_state active and task_state None.
Oct 02 12:59:20 compute-0 podman[366884]: 2025-10-02 12:59:20.163424534 +0000 UTC m=+0.063953121 container remove b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.174 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ad30ec0f-899e-4f19-a636-1a5266857b3c]: (4, ('Thu Oct  2 12:59:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 (b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270)\nb9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270\nThu Oct  2 12:59:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 (b9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270)\nb9269de78676a557be7a29599d77f36c0422e6aa945bcc0448af651f650e7270\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.175 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3193558c-e57f-4471-8602-b5e1abed7bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72facc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-0 kernel: tapbc72facc-20: left promiscuous mode
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.199 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[04911c49-98de-408e-a648-64b7e0fed83f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.229 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[22df45ab-5280-44cd-bfcd-e738151e3af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.230 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c5cac1-dc78-47cf-a6f7-3455552e8d9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.246 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[50011ad3-2059-42ae-9b93-5a81914af66d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776203, 'reachable_time': 15822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366900, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.248 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:59:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:20.249 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c09c52f7-b7f3-4401-ad25-791d931f4906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:20 compute-0 systemd[1]: run-netns-ovnmeta\x2dbc72facc\x2d29fc\x2d4f60\x2d8da4\x2db2b18aba70d2.mount: Deactivated successfully.
Oct 02 12:59:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.647 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.647 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.647 2 DEBUG nova.network.neutron [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:59:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:59:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.764 2 DEBUG nova.compute.manager [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-deleted-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.764 2 INFO nova.compute.manager [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Neutron deleted interface e81302d5-61c1-4e2a-ba60-2692d1d9f8ed; detaching it from the instance and deleting it from the info cache
Oct 02 12:59:20 compute-0 nova_compute[256940]: 2025-10-02 12:59:20.765 2 DEBUG nova.network.neutron [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:21 compute-0 nova_compute[256940]: 2025-10-02 12:59:21.112 2 DEBUG nova.objects.instance [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:21 compute-0 nova_compute[256940]: 2025-10-02 12:59:21.153 2 DEBUG nova.objects.instance [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:21 compute-0 nova_compute[256940]: 2025-10-02 12:59:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:21 compute-0 ceph-mon[73668]: pgmap v2588: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 54 KiB/s wr, 181 op/s
Oct 02 12:59:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 47 KiB/s wr, 175 op/s
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.391 2 DEBUG nova.compute.manager [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.392 2 DEBUG oslo_concurrency.lockutils [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.392 2 DEBUG oslo_concurrency.lockutils [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.393 2 DEBUG oslo_concurrency.lockutils [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.393 2 DEBUG nova.compute.manager [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.393 2 WARNING nova.compute.manager [req-18710519-7ba8-4a4b-b7cd-f4d04b2d1abc req-cfe6b7bc-0b84-426c-94d0-1ddc33441568 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-plugged-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed for instance with vm_state active and task_state None.
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.413 2 DEBUG nova.virt.libvirt.vif [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.413 2 DEBUG nova.network.os_vif_util [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.414 2 DEBUG nova.network.os_vif_util [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.418 2 DEBUG nova.virt.libvirt.guest [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.422 2 DEBUG nova.virt.libvirt.guest [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface>not found in domain: <domain type='kvm' id='81'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <name>instance-0000009d</name>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <uuid>d09efb55-ff68-4671-b89f-a35231b739e2</uuid>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:59:19</nova:creationTime>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='serial'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='uuid'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk' index='2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk.config' index='1'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:dd:9c:44'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='tape71909c9-67'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </target>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </console>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c814,c933</label>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c814,c933</imagelabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:22 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.425 2 DEBUG nova.virt.libvirt.guest [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.431 2 DEBUG nova.virt.libvirt.guest [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0e:14:41"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape81302d5-61"/></interface>not found in domain: <domain type='kvm' id='81'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <name>instance-0000009d</name>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <uuid>d09efb55-ff68-4671-b89f-a35231b739e2</uuid>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:59:19</nova:creationTime>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <memory unit='KiB'>131072</memory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <resource>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <partition>/machine</partition>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </resource>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <sysinfo type='smbios'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='serial'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='uuid'>d09efb55-ff68-4671-b89f-a35231b739e2</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <boot dev='hd'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <smbios mode='sysinfo'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <vmcoreinfo state='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='x2apic'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <feature policy='require' name='vme'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <clock offset='utc'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <timer name='hpet' present='no'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_reboot>restart</on_reboot>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <on_crash>destroy</on_crash>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <disk type='network' device='disk'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk' index='2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='vda' bus='virtio'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='virtio-disk0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <disk type='network' device='cdrom'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <auth username='openstack'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source protocol='rbd' name='vms/d09efb55-ff68-4671-b89f-a35231b739e2_disk.config' index='1'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='sda' bus='sata'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <readonly/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='sata0-0-0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pcie.0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='1' port='0x10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='2' port='0x11'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='3' port='0x12'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='4' port='0x13'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='5' port='0x14'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='6' port='0x15'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='7' port='0x16'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='8' port='0x17'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.8'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='9' port='0x18'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.9'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='10' port='0x19'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='11' port='0x1a'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.11'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='12' port='0x1b'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.12'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='13' port='0x1c'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.13'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='14' port='0x1d'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.14'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='15' port='0x1e'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.15'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='16' port='0x1f'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.16'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='17' port='0x20'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.17'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='18' port='0x21'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.18'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='19' port='0x22'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.19'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='20' port='0x23'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.20'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='21' port='0x24'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.21'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='22' port='0x25'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.22'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='23' port='0x26'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.23'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='24' port='0x27'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.24'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-root-port'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target chassis='25' port='0x28'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.25'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model name='pcie-pci-bridge'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='pci.26'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='usb'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <controller type='sata' index='0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='ide'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </controller>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <interface type='ethernet'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <mac address='fa:16:3e:dd:9c:44'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target dev='tape71909c9-67'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model type='virtio'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <mtu size='1442'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='net0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <serial type='pty'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target type='isa-serial' port='0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:         <model name='isa-serial'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       </target>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <source path='/dev/pts/1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <log file='/var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2/console.log' append='off'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <target type='serial' port='0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='serial0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </console>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='tablet' bus='usb'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='mouse' bus='ps2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input1'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <input type='keyboard' bus='ps2'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='input2'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </input>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <listen type='address' address='::0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </graphics>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <audio id='1' type='none'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='video0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <watchdog model='itco' action='reset'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='watchdog0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </watchdog>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <memballoon model='virtio'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <stats period='10'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='balloon0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <rng model='virtio'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <alias name='rng0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <label>system_u:system_r:svirt_t:s0:c814,c933</label>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c814,c933</imagelabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <label>+107:+107</label>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </seclabel>
Oct 02 12:59:22 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:22 compute-0 nova_compute[256940]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.432 2 WARNING nova.virt.libvirt.driver [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Detaching interface fa:16:3e:0e:14:41 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape81302d5-61' not found.
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.433 2 DEBUG nova.virt.libvirt.vif [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.433 2 DEBUG nova.network.os_vif_util [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "address": "fa:16:3e:0e:14:41", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape81302d5-61", "ovs_interfaceid": "e81302d5-61c1-4e2a-ba60-2692d1d9f8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.434 2 DEBUG nova.network.os_vif_util [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.434 2 DEBUG os_vif [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape81302d5-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.439 2 INFO os_vif [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:14:41,bridge_name='br-int',has_traffic_filtering=True,id=e81302d5-61c1-4e2a-ba60-2692d1d9f8ed,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape81302d5-61')
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.440 2 DEBUG nova.virt.libvirt.guest [req-0e9e7e64-0564-41f0-bdbe-41c07cd4be1f req-cdde5cda-9a1e-475c-a9dd-75aaaefaa4b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:name>tempest-TestNetworkBasicOps-server-754576892</nova:name>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:creationTime>2025-10-02 12:59:22</nova:creationTime>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:flavor name="m1.nano">
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:memory>128</nova:memory>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:disk>1</nova:disk>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:swap>0</nova:swap>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:flavor>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:owner>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   <nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     <nova:port uuid="e71909c9-67aa-432f-923c-787d02eb9db3">
Oct 02 12:59:22 compute-0 nova_compute[256940]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:59:22 compute-0 nova_compute[256940]:     </nova:port>
Oct 02 12:59:22 compute-0 nova_compute[256940]:   </nova:ports>
Oct 02 12:59:22 compute-0 nova_compute[256940]: </nova:instance>
Oct 02 12:59:22 compute-0 nova_compute[256940]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:59:22 compute-0 nova_compute[256940]: 2025-10-02 12:59:22.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:23 compute-0 ceph-mon[73668]: pgmap v2589: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 47 KiB/s wr, 175 op/s
Oct 02 12:59:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 23 KiB/s wr, 144 op/s
Oct 02 12:59:24 compute-0 ovn_controller[148123]: 2025-10-02T12:59:24Z|00843|binding|INFO|Releasing lport d4ed0cb9-c5ef-468c-8d04-1cebdb7dab47 from this chassis (sb_readonly=0)
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:24.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.551 2 INFO nova.network.neutron [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Port e81302d5-61c1-4e2a-ba60-2692d1d9f8ed from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.556 2 DEBUG nova.network.neutron [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.592 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.641 2 DEBUG oslo_concurrency.lockutils [None req-27aba7e0-8fd2-4ec9-8335-0d4cae359f6f 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d09efb55-ff68-4671-b89f-a35231b739e2-e81302d5-61c1-4e2a-ba60-2692d1d9f8ed" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1884512294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:24 compute-0 nova_compute[256940]: 2025-10-02 12:59:24.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:25 compute-0 ceph-mon[73668]: pgmap v2590: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 23 KiB/s wr, 144 op/s
Oct 02 12:59:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2065133054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.0 MiB/s wr, 172 op/s
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.332 2 DEBUG nova.compute.manager [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.333 2 DEBUG nova.compute.manager [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing instance network info cache due to event network-changed-e71909c9-67aa-432f-923c-787d02eb9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.333 2 DEBUG oslo_concurrency.lockutils [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.333 2 DEBUG oslo_concurrency.lockutils [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.334 2 DEBUG nova.network.neutron [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Refreshing network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.378 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:59:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 12:59:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.423 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.423 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.424 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.424 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.424 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.425 2 INFO nova.compute.manager [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Terminating instance
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.426 2 DEBUG nova.compute.manager [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:59:26 compute-0 kernel: tape71909c9-67 (unregistering): left promiscuous mode
Oct 02 12:59:26 compute-0 NetworkManager[44981]: <info>  [1759409966.4865] device (tape71909c9-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:26 compute-0 ovn_controller[148123]: 2025-10-02T12:59:26Z|00844|binding|INFO|Releasing lport e71909c9-67aa-432f-923c-787d02eb9db3 from this chassis (sb_readonly=0)
Oct 02 12:59:26 compute-0 ovn_controller[148123]: 2025-10-02T12:59:26Z|00845|binding|INFO|Setting lport e71909c9-67aa-432f-923c-787d02eb9db3 down in Southbound
Oct 02 12:59:26 compute-0 ovn_controller[148123]: 2025-10-02T12:59:26Z|00846|binding|INFO|Removing iface tape71909c9-67 ovn-installed in OVS
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.496 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.496 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.497 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.501 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:9c:44 10.100.0.9'], port_security=['fa:16:3e:dd:9c:44 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd09efb55-ff68-4671-b89f-a35231b739e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8cc2c0e-6346-43da-b946-82bae1683dfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d3e502-6055-4397-819e-b5bef1e88e2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=e71909c9-67aa-432f-923c-787d02eb9db3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.502 158104 INFO neutron.agent.ovn.metadata.agent [-] Port e71909c9-67aa-432f-923c-787d02eb9db3 in datapath 1e4b6d1b-55de-4f6c-bff6-fc56357cf40e unbound from our chassis
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.504 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.505 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[930704ba-b582-43e5-af16-3c793dd8efde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.505 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e namespace which is not needed anymore
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:26 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Oct 02 12:59:26 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d0000009d.scope: Consumed 19.984s CPU time.
Oct 02 12:59:26 compute-0 systemd-machined[210927]: Machine qemu-81-instance-0000009d terminated.
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [NOTICE]   (362013) : haproxy version is 2.8.14-c23fe91
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [NOTICE]   (362013) : path to executable is /usr/sbin/haproxy
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [WARNING]  (362013) : Exiting Master process...
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [WARNING]  (362013) : Exiting Master process...
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [ALERT]    (362013) : Current worker (362015) exited with code 143 (Terminated)
Oct 02 12:59:26 compute-0 neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e[362009]: [WARNING]  (362013) : All workers exited. Exiting... (0)
Oct 02 12:59:26 compute-0 systemd[1]: libpod-e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52.scope: Deactivated successfully.
Oct 02 12:59:26 compute-0 conmon[362009]: conmon e0cffdd8f4227da687c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52.scope/container/memory.events
Oct 02 12:59:26 compute-0 podman[366927]: 2025-10-02 12:59:26.652623867 +0000 UTC m=+0.052728040 container died e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:59:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.666 2 INFO nova.virt.libvirt.driver [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Instance destroyed successfully.
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.666 2 DEBUG nova.objects.instance [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid d09efb55-ff68-4671-b89f-a35231b739e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52-userdata-shm.mount: Deactivated successfully.
Oct 02 12:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb5be618cc70338bf2e99dd1619af4f94c8bab968df2d028651913844b0bbdb1-merged.mount: Deactivated successfully.
Oct 02 12:59:26 compute-0 podman[366927]: 2025-10-02 12:59:26.713373874 +0000 UTC m=+0.113478047 container cleanup e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:59:26 compute-0 systemd[1]: libpod-conmon-e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52.scope: Deactivated successfully.
Oct 02 12:59:26 compute-0 podman[366969]: 2025-10-02 12:59:26.782870418 +0000 UTC m=+0.044064485 container remove e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.789 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf9df5a-797f-49de-9f6c-67db338241c8]: (4, ('Thu Oct  2 12:59:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e (e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52)\ne0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52\nThu Oct  2 12:59:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e (e0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52)\ne0cffdd8f4227da687c2deed9d43d077f24198b01e95ce79f1a083e0ba2adc52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.792 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[939f1ced-f67f-4c5d-808b-893e1c53b11c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.793 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4b6d1b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:26 compute-0 kernel: tap1e4b6d1b-50: left promiscuous mode
Oct 02 12:59:26 compute-0 nova_compute[256940]: 2025-10-02 12:59:26.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.817 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0ff142-35ec-4d86-b24a-e9edd417f88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.847 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c298395e-cd92-4156-bee2-855e1a963ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.849 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c22f18-b781-4e82-9f3c-7a44672ff392]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.867 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8384f40f-3872-4351-9ade-0a6adf5cc534]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772733, 'reachable_time': 39798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366988, 'error': None, 'target': 'ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d1e4b6d1b\x2d55de\x2d4f6c\x2dbff6\x2dfc56357cf40e.mount: Deactivated successfully.
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.871 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e4b6d1b-55de-4f6c-bff6-fc56357cf40e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:59:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:26.871 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2263d741-5fd7-43c2-b275-db958a2406d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.020 2 DEBUG nova.compute.manager [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-unplugged-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.021 2 DEBUG oslo_concurrency.lockutils [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.021 2 DEBUG oslo_concurrency.lockutils [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.021 2 DEBUG oslo_concurrency.lockutils [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.021 2 DEBUG nova.compute.manager [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-unplugged-e71909c9-67aa-432f-923c-787d02eb9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.022 2 DEBUG nova.compute.manager [req-b2b42c53-823a-49b5-a33c-8c0f91c78b26 req-30a7da89-db60-48fb-b820-b10d7fce0ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-unplugged-e71909c9-67aa-432f-923c-787d02eb9db3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.045 2 DEBUG nova.virt.libvirt.vif [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-754576892',display_name='tempest-TestNetworkBasicOps-server-754576892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-754576892',id=157,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKXrHwXfd3ny6Yo5QZkQcJhxUujE5uGVZrzu/18x8ohkk3G1DyK2+D9CFoUOajtjCYHQGdh/HwQuSbhEZTKcAh7BO4DQ3+q+xRZnyxJB8Y89K3g+QexwAyIJSAo8BX87A==',key_name='tempest-TestNetworkBasicOps-712788156',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-7zrig75q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:13Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d09efb55-ff68-4671-b89f-a35231b739e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.045 2 DEBUG nova.network.os_vif_util [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.046 2 DEBUG nova.network.os_vif_util [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.046 2 DEBUG os_vif [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71909c9-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.053 2 INFO os_vif [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=e71909c9-67aa-432f-923c-787d02eb9db3,network=Network(1e4b6d1b-55de-4f6c-bff6-fc56357cf40e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71909c9-67')
Oct 02 12:59:27 compute-0 nova_compute[256940]: 2025-10-02 12:59:27.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:27 compute-0 ceph-mon[73668]: pgmap v2591: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.0 MiB/s wr, 172 op/s
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 436 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:59:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.249 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.249 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.250 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.250 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.250 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:28 compute-0 sudo[367008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:28 compute-0 sudo[367008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 sudo[367008]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:28 compute-0 sudo[367033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:28 compute-0 sudo[367033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:28 compute-0 sudo[367033]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:28.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:28 compute-0 kernel: tap20d2ae3a-8d (unregistering): left promiscuous mode
Oct 02 12:59:28 compute-0 NetworkManager[44981]: <info>  [1759409968.7852] device (tap20d2ae3a-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:28 compute-0 ovn_controller[148123]: 2025-10-02T12:59:28Z|00847|binding|INFO|Releasing lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 from this chassis (sb_readonly=0)
Oct 02 12:59:28 compute-0 ovn_controller[148123]: 2025-10-02T12:59:28Z|00848|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 down in Southbound
Oct 02 12:59:28 compute-0 ovn_controller[148123]: 2025-10-02T12:59:28Z|00849|binding|INFO|Removing iface tap20d2ae3a-8d ovn-installed in OVS
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_12:59:28
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images']
Oct 02 12:59:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:28 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Oct 02 12:59:28 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a3.scope: Consumed 14.899s CPU time.
Oct 02 12:59:28 compute-0 systemd-machined[210927]: Machine qemu-86-instance-000000a3 terminated.
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.990 2 DEBUG nova.network.neutron [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updated VIF entry in instance network info cache for port e71909c9-67aa-432f-923c-787d02eb9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:28 compute-0 nova_compute[256940]: 2025-10-02 12:59:28.991 2 DEBUG nova.network.neutron [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [{"id": "e71909c9-67aa-432f-923c-787d02eb9db3", "address": "fa:16:3e:dd:9c:44", "network": {"id": "1e4b6d1b-55de-4f6c-bff6-fc56357cf40e", "bridge": "br-int", "label": "tempest-network-smoke--2076520359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71909c9-67", "ovs_interfaceid": "e71909c9-67aa-432f-923c-787d02eb9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:29 compute-0 NetworkManager[44981]: <info>  [1759409969.0695] manager: (tap20d2ae3a-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Oct 02 12:59:29 compute-0 ceph-mon[73668]: pgmap v2592: 305 pgs: 305 active+clean; 436 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.392 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance shutdown successfully after 13 seconds.
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.399 2 INFO nova.virt.libvirt.driver [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance destroyed successfully.
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.400 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'numa_topology' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 12:59:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.640 2 INFO nova.virt.libvirt.driver [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Deleting instance files /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2_del
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.641 2 INFO nova.virt.libvirt.driver [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Deletion of /var/lib/nova/instances/d09efb55-ff68-4671-b89f-a35231b739e2_del complete
Oct 02 12:59:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:29.719 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e3:e5 10.100.0.8'], port_security=['fa:16:3e:83:e3:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e77ed6db-0772-42cd-80a1-109f22d73463', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a033bef-73b5-46d2-bb85-453a685531c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e75bb30d28cd465fb9e94a4b8bc63349', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54ba420b-e3c4-44fc-a606-834ea00cf363', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daa96cfc-9b86-40be-a533-ba3c6259bb29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20d2ae3a-8dc1-4f85-9b5c-39501552d110) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:29.720 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 in datapath 4a033bef-73b5-46d2-bb85-453a685531c4 unbound from our chassis
Oct 02 12:59:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:29.721 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a033bef-73b5-46d2-bb85-453a685531c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:59:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:29.722 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4c656590-3dd1-4fb7-88b1-47ae46038f66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.725 2 DEBUG nova.compute.manager [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.726 2 DEBUG oslo_concurrency.lockutils [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.726 2 DEBUG oslo_concurrency.lockutils [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.726 2 DEBUG oslo_concurrency.lockutils [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.727 2 DEBUG nova.compute.manager [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] No waiting events found dispatching network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.727 2 WARNING nova.compute.manager [req-ef0c4608-1918-490d-96e3-87874e1b4d1e req-deb4944a-0f55-4c33-84a2-0448c8c80f6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received unexpected event network-vif-plugged-e71909c9-67aa-432f-923c-787d02eb9db3 for instance with vm_state active and task_state deleting.
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.822 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Attempting rescue
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.823 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.825 2 DEBUG oslo_concurrency.lockutils [req-4d088027-1b81-42db-bc2f-c6d93ba8e123 req-e485559d-9503-477d-9e89-28915c43281a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d09efb55-ff68-4671-b89f-a35231b739e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.826 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.827 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Creating image(s)
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.850 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.855 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.864 2 INFO nova.compute.manager [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Took 3.44 seconds to destroy the instance on the hypervisor.
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.864 2 DEBUG oslo.service.loopingcall [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.865 2 DEBUG nova.compute.manager [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.866 2 DEBUG nova.network.neutron [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.906 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.938 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:29 compute-0 nova_compute[256940]: 2025-10-02 12:59:29.941 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.027 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.028 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.030 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.030 2 DEBUG oslo_concurrency.lockutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.063 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 393 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 192 op/s
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.067 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:30.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.898 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.900 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'migration_context' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.921 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.922 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start _get_guest_xml network_info=[{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "vif_mac": "fa:16:3e:83:e3:e5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.923 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'resources' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.943 2 WARNING nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.950 2 DEBUG nova.virt.libvirt.host [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.951 2 DEBUG nova.virt.libvirt.host [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.954 2 DEBUG nova.virt.libvirt.host [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.954 2 DEBUG nova.virt.libvirt.host [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.955 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.955 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.956 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.956 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.956 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.956 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.957 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.957 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.957 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.957 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.958 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.958 2 DEBUG nova.virt.hardware [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.958 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:30 compute-0 nova_compute[256940]: 2025-10-02 12:59:30.990 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:31 compute-0 ceph-mon[73668]: pgmap v2593: 305 pgs: 305 active+clean; 393 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 192 op/s
Oct 02 12:59:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857272195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.492 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.494 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.630 2 DEBUG nova.compute.manager [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.631 2 DEBUG oslo_concurrency.lockutils [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.631 2 DEBUG oslo_concurrency.lockutils [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.631 2 DEBUG oslo_concurrency.lockutils [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.632 2 DEBUG nova.compute.manager [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.632 2 WARNING nova.compute.manager [req-208263d6-564a-4a34-a430-6cd5929aaa2d req-87e030e2-62e7-486d-8ae5-7de1741092b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state active and task_state rescuing.
Oct 02 12:59:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163633002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.979 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:31 compute-0 nova_compute[256940]: 2025-10-02 12:59:31.980 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.366 2 DEBUG nova.network.neutron [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.396 2 INFO nova.compute.manager [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Took 2.53 seconds to deallocate network for instance.
Oct 02 12:59:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000465293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.464 2 DEBUG nova.compute.manager [req-2afc81c8-f372-4228-a4e2-7a6676dd38cd req-68796b0c-d4ac-4ac1-b00a-e3293d7f114b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Received event network-vif-deleted-e71909c9-67aa-432f-923c-787d02eb9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/857272195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3163633002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.465 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.466 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.467 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.468 2 DEBUG nova.virt.libvirt.vif [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-41587939',display_name='tempest-ServerRescueTestJSONUnderV235-server-41587939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-41587939',id=163,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e75bb30d28cd465fb9e94a4b8bc63349',ramdisk_id='',reservation_id='r-e1uxwewv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-422989214',owner_user_name='tempest-ServerRescueTestJSONUnderV235-422989214-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:59:12Z,user_data=None,user_id='1c573c08a89345349ee60a0f1fd80d32',uuid=e77ed6db-0772-42cd-80a1-109f22d73463,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "vif_mac": "fa:16:3e:83:e3:e5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.468 2 DEBUG nova.network.os_vif_util [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converting VIF {"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "vif_mac": "fa:16:3e:83:e3:e5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.469 2 DEBUG nova.network.os_vif_util [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.470 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'pci_devices' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.493 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <uuid>e77ed6db-0772-42cd-80a1-109f22d73463</uuid>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <name>instance-000000a3</name>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <metadata>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-41587939</nova:name>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 12:59:30</nova:creationTime>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:user uuid="1c573c08a89345349ee60a0f1fd80d32">tempest-ServerRescueTestJSONUnderV235-422989214-project-member</nova:user>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:project uuid="e75bb30d28cd465fb9e94a4b8bc63349">tempest-ServerRescueTestJSONUnderV235-422989214</nova:project>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <nova:port uuid="20d2ae3a-8dc1-4f85-9b5c-39501552d110">
Oct 02 12:59:32 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </metadata>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <system>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="serial">e77ed6db-0772-42cd-80a1-109f22d73463</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="uuid">e77ed6db-0772-42cd-80a1-109f22d73463</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </system>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <os>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </os>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <features>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <apic/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </features>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </clock>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </cpu>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   <devices>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e77ed6db-0772-42cd-80a1-109f22d73463_disk.rescue">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e77ed6db-0772-42cd-80a1-109f22d73463_disk">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e77ed6db-0772-42cd-80a1-109f22d73463_disk.config.rescue">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </source>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 12:59:32 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       </auth>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </disk>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:83:e3:e5"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <target dev="tap20d2ae3a-8d"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </interface>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/console.log" append="off"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </serial>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <video>
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </video>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </rng>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 12:59:32 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 12:59:32 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 12:59:32 compute-0 nova_compute[256940]:   </devices>
Oct 02 12:59:32 compute-0 nova_compute[256940]: </domain>
Oct 02 12:59:32 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.499 2 INFO nova.virt.libvirt.driver [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance destroyed successfully.
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.606 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.607 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.607 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.607 2 DEBUG nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] No VIF found with MAC fa:16:3e:83:e3:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.608 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Using config drive
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.630 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.639 2 DEBUG oslo_concurrency.processutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:32.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.709 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:32 compute-0 nova_compute[256940]: 2025-10-02 12:59:32.750 2 DEBUG nova.objects.instance [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'keypairs' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591579695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.094 2 DEBUG oslo_concurrency.processutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.102 2 DEBUG nova.compute.provider_tree [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.127 2 DEBUG nova.scheduler.client.report [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.155 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.206 2 INFO nova.scheduler.client.report [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance d09efb55-ff68-4671-b89f-a35231b739e2
Oct 02 12:59:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.300 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.308 2 DEBUG oslo_concurrency.lockutils [None req-929946d6-bd90-4ad9-9076-9e5c8aad55f1 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d09efb55-ff68-4671-b89f-a35231b739e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.322 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.323 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:59:33 compute-0 podman[367279]: 2025-10-02 12:59:33.40109268 +0000 UTC m=+0.062183775 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:59:33 compute-0 podman[367278]: 2025-10-02 12:59:33.401182622 +0000 UTC m=+0.066827475 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.449 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Creating config drive at /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.455 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxler_z10 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:33 compute-0 ceph-mon[73668]: pgmap v2594: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Oct 02 12:59:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2000465293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3591579695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.595 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxler_z10" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.624 2 DEBUG nova.storage.rbd_utils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] rbd image e77ed6db-0772-42cd-80a1-109f22d73463_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:59:33 compute-0 nova_compute[256940]: 2025-10-02 12:59:33.629 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue e77ed6db-0772-42cd-80a1-109f22d73463_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:59:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.634 2 DEBUG nova.compute.manager [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.635 2 DEBUG oslo_concurrency.lockutils [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.636 2 DEBUG oslo_concurrency.lockutils [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.636 2 DEBUG oslo_concurrency.lockutils [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.636 2 DEBUG nova.compute.manager [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:34 compute-0 nova_compute[256940]: 2025-10-02 12:59:34.637 2 WARNING nova.compute.manager [req-9f39b7e1-27d8-4e02-96a8-b447a2f614a1 req-9f6b37a6-899f-4d2e-abc2-35a3890e4844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state active and task_state rescuing.
Oct 02 12:59:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:34.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:35 compute-0 ceph-mon[73668]: pgmap v2595: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:59:35 compute-0 nova_compute[256940]: 2025-10-02 12:59:35.073 2 DEBUG oslo_concurrency.processutils [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue e77ed6db-0772-42cd-80a1-109f22d73463_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:35 compute-0 nova_compute[256940]: 2025-10-02 12:59:35.074 2 INFO nova.virt.libvirt.driver [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Deleting local config drive /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463/disk.config.rescue because it was imported into RBD.
Oct 02 12:59:35 compute-0 kernel: tap20d2ae3a-8d: entered promiscuous mode
Oct 02 12:59:35 compute-0 NetworkManager[44981]: <info>  [1759409975.1355] manager: (tap20d2ae3a-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Oct 02 12:59:35 compute-0 ovn_controller[148123]: 2025-10-02T12:59:35Z|00850|binding|INFO|Claiming lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 for this chassis.
Oct 02 12:59:35 compute-0 ovn_controller[148123]: 2025-10-02T12:59:35Z|00851|binding|INFO|20d2ae3a-8dc1-4f85-9b5c-39501552d110: Claiming fa:16:3e:83:e3:e5 10.100.0.8
Oct 02 12:59:35 compute-0 nova_compute[256940]: 2025-10-02 12:59:35.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:35.142 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e3:e5 10.100.0.8'], port_security=['fa:16:3e:83:e3:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e77ed6db-0772-42cd-80a1-109f22d73463', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a033bef-73b5-46d2-bb85-453a685531c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e75bb30d28cd465fb9e94a4b8bc63349', 'neutron:revision_number': '5', 'neutron:security_group_ids': '54ba420b-e3c4-44fc-a606-834ea00cf363', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daa96cfc-9b86-40be-a533-ba3c6259bb29, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20d2ae3a-8dc1-4f85-9b5c-39501552d110) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:35.144 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 in datapath 4a033bef-73b5-46d2-bb85-453a685531c4 bound to our chassis
Oct 02 12:59:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:35.146 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a033bef-73b5-46d2-bb85-453a685531c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:59:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 12:59:35.147 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[da0f5755-566c-4eb9-849b-3251ab64bca0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:35 compute-0 ovn_controller[148123]: 2025-10-02T12:59:35Z|00852|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 up in Southbound
Oct 02 12:59:35 compute-0 ovn_controller[148123]: 2025-10-02T12:59:35Z|00853|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 ovn-installed in OVS
Oct 02 12:59:35 compute-0 nova_compute[256940]: 2025-10-02 12:59:35.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-0 nova_compute[256940]: 2025-10-02 12:59:35.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-0 systemd-udevd[367372]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:35 compute-0 systemd-machined[210927]: New machine qemu-87-instance-000000a3.
Oct 02 12:59:35 compute-0 NetworkManager[44981]: <info>  [1759409975.1848] device (tap20d2ae3a-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:59:35 compute-0 NetworkManager[44981]: <info>  [1759409975.1859] device (tap20d2ae3a-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:59:35 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-000000a3.
Oct 02 12:59:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 235 op/s
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.391 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for e77ed6db-0772-42cd-80a1-109f22d73463 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.393 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409976.3903356, e77ed6db-0772-42cd-80a1-109f22d73463 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.394 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Resumed (Lifecycle Event)
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.401 2 DEBUG nova.compute.manager [None req-c2c741b8-9a20-49a8-b9ad-b60f97299ba5 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.448 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.452 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.506 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.507 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759409976.391083, e77ed6db-0772-42cd-80a1-109f22d73463 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.507 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Started (Lifecycle Event)
Oct 02 12:59:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.741 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.744 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.848 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.848 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.849 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.849 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.849 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.850 2 WARNING nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state rescued and task_state None.
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.850 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.850 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.851 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.851 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.851 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:36 compute-0 nova_compute[256940]: 2025-10-02 12:59:36.852 2 WARNING nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state rescued and task_state None.
Oct 02 12:59:37 compute-0 nova_compute[256940]: 2025-10-02 12:59:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:37 compute-0 ceph-mon[73668]: pgmap v2596: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 235 op/s
Oct 02 12:59:37 compute-0 nova_compute[256940]: 2025-10-02 12:59:37.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 228 op/s
Oct 02 12:59:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:38.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:38.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:39 compute-0 ceph-mon[73668]: pgmap v2597: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 228 op/s
Oct 02 12:59:39 compute-0 nova_compute[256940]: 2025-10-02 12:59:39.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:39 compute-0 nova_compute[256940]: 2025-10-02 12:59:39.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 370 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Oct 02 12:59:40 compute-0 nova_compute[256940]: 2025-10-02 12:59:40.335 2 DEBUG nova.compute.manager [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:40 compute-0 nova_compute[256940]: 2025-10-02 12:59:40.336 2 DEBUG nova.compute.manager [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing instance network info cache due to event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:40 compute-0 nova_compute[256940]: 2025-10-02 12:59:40.336 2 DEBUG oslo_concurrency.lockutils [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:40 compute-0 nova_compute[256940]: 2025-10-02 12:59:40.336 2 DEBUG oslo_concurrency.lockutils [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:40 compute-0 nova_compute[256940]: 2025-10-02 12:59:40.336 2 DEBUG nova.network.neutron [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005778494614275079 of space, bias 1.0, pg target 1.7335483842825237 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 12:59:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 12:59:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:40.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:41 compute-0 nova_compute[256940]: 2025-10-02 12:59:41.661 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409966.6600895, d09efb55-ff68-4671-b89f-a35231b739e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:41 compute-0 nova_compute[256940]: 2025-10-02 12:59:41.662 2 INFO nova.compute.manager [-] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] VM Stopped (Lifecycle Event)
Oct 02 12:59:41 compute-0 ceph-mon[73668]: pgmap v2598: 305 pgs: 305 active+clean; 370 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Oct 02 12:59:41 compute-0 nova_compute[256940]: 2025-10-02 12:59:41.701 2 DEBUG nova.compute.manager [None req-c5e35646-702f-40ec-ba88-2f11cdd37cc4 - - - - - -] [instance: d09efb55-ff68-4671-b89f-a35231b739e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:42 compute-0 nova_compute[256940]: 2025-10-02 12:59:42.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:59:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:42 compute-0 nova_compute[256940]: 2025-10-02 12:59:42.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:42.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:42 compute-0 nova_compute[256940]: 2025-10-02 12:59:42.971 2 DEBUG nova.network.neutron [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated VIF entry in instance network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:42 compute-0 nova_compute[256940]: 2025-10-02 12:59:42.972 2 DEBUG nova.network.neutron [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:43 compute-0 nova_compute[256940]: 2025-10-02 12:59:43.026 2 DEBUG oslo_concurrency.lockutils [req-fe431bda-544f-4a71-8286-387794071dfc req-7a3698ca-bb25-48f3-92f2-a530692d199d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:43 compute-0 ceph-mon[73668]: pgmap v2599: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:59:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 552 KiB/s wr, 152 op/s
Oct 02 12:59:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3512397272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:44 compute-0 nova_compute[256940]: 2025-10-02 12:59:44.598 2 DEBUG nova.compute.manager [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:44 compute-0 nova_compute[256940]: 2025-10-02 12:59:44.599 2 DEBUG nova.compute.manager [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing instance network info cache due to event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:44 compute-0 nova_compute[256940]: 2025-10-02 12:59:44.599 2 DEBUG oslo_concurrency.lockutils [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:44 compute-0 nova_compute[256940]: 2025-10-02 12:59:44.599 2 DEBUG oslo_concurrency.lockutils [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:44 compute-0 nova_compute[256940]: 2025-10-02 12:59:44.599 2 DEBUG nova.network.neutron [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:45 compute-0 ceph-mon[73668]: pgmap v2600: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 552 KiB/s wr, 152 op/s
Oct 02 12:59:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 561 KiB/s wr, 152 op/s
Oct 02 12:59:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:46.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:47 compute-0 nova_compute[256940]: 2025-10-02 12:59:47.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:47 compute-0 nova_compute[256940]: 2025-10-02 12:59:47.185 2 DEBUG nova.network.neutron [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated VIF entry in instance network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:47 compute-0 nova_compute[256940]: 2025-10-02 12:59:47.186 2 DEBUG nova.network.neutron [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:47 compute-0 nova_compute[256940]: 2025-10-02 12:59:47.291 2 DEBUG oslo_concurrency.lockutils [req-fa386015-87cd-4fee-954c-f14e7946893d req-1df63294-284c-4f56-a6e7-8be9e77b68fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:47 compute-0 ceph-mon[73668]: pgmap v2601: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 561 KiB/s wr, 152 op/s
Oct 02 12:59:47 compute-0 nova_compute[256940]: 2025-10-02 12:59:47.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct 02 12:59:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:48 compute-0 sudo[367448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:48 compute-0 sudo[367448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:48 compute-0 sudo[367448]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:48 compute-0 sudo[367485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:48 compute-0 sudo[367485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:48 compute-0 sudo[367485]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:48.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:48 compute-0 podman[367472]: 2025-10-02 12:59:48.692752514 +0000 UTC m=+0.068883590 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:59:48 compute-0 podman[367473]: 2025-10-02 12:59:48.721244153 +0000 UTC m=+0.098460807 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:49 compute-0 NetworkManager[44981]: <info>  [1759409989.0598] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct 02 12:59:49 compute-0 NetworkManager[44981]: <info>  [1759409989.0604] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.431 2 DEBUG nova.compute.manager [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.432 2 DEBUG nova.compute.manager [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing instance network info cache due to event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.433 2 DEBUG oslo_concurrency.lockutils [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.434 2 DEBUG oslo_concurrency.lockutils [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:49 compute-0 nova_compute[256940]: 2025-10-02 12:59:49.434 2 DEBUG nova.network.neutron [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:49 compute-0 ceph-mon[73668]: pgmap v2602: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct 02 12:59:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 172 op/s
Oct 02 12:59:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:50.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:50.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:51 compute-0 ceph-mon[73668]: pgmap v2603: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 172 op/s
Oct 02 12:59:52 compute-0 nova_compute[256940]: 2025-10-02 12:59:52.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 32 KiB/s wr, 150 op/s
Oct 02 12:59:52 compute-0 nova_compute[256940]: 2025-10-02 12:59:52.410 2 DEBUG nova.network.neutron [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated VIF entry in instance network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:52 compute-0 nova_compute[256940]: 2025-10-02 12:59:52.410 2 DEBUG nova.network.neutron [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:52 compute-0 nova_compute[256940]: 2025-10-02 12:59:52.439 2 DEBUG oslo_concurrency.lockutils [req-db662956-310b-4990-a6c9-fa60dfaaff37 req-fc2e0d05-b9ca-48c8-b06e-e400f83716c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:52 compute-0 nova_compute[256940]: 2025-10-02 12:59:52.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:52.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:53 compute-0 ceph-mon[73668]: pgmap v2604: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 32 KiB/s wr, 150 op/s
Oct 02 12:59:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:53 compute-0 nova_compute[256940]: 2025-10-02 12:59:53.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 22 KiB/s wr, 64 op/s
Oct 02 12:59:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:55 compute-0 nova_compute[256940]: 2025-10-02 12:59:55.684 2 DEBUG nova.compute.manager [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:55 compute-0 nova_compute[256940]: 2025-10-02 12:59:55.684 2 DEBUG nova.compute.manager [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing instance network info cache due to event network-changed-20d2ae3a-8dc1-4f85-9b5c-39501552d110. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:55 compute-0 nova_compute[256940]: 2025-10-02 12:59:55.685 2 DEBUG oslo_concurrency.lockutils [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:55 compute-0 nova_compute[256940]: 2025-10-02 12:59:55.685 2 DEBUG oslo_concurrency.lockutils [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:55 compute-0 nova_compute[256940]: 2025-10-02 12:59:55.685 2 DEBUG nova.network.neutron [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Refreshing network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 24 KiB/s wr, 64 op/s
Oct 02 12:59:56 compute-0 ceph-mon[73668]: pgmap v2605: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 22 KiB/s wr, 64 op/s
Oct 02 12:59:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:56.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 12:59:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 12:59:57 compute-0 nova_compute[256940]: 2025-10-02 12:59:57.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:57 compute-0 ceph-mon[73668]: pgmap v2606: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 24 KiB/s wr, 64 op/s
Oct 02 12:59:57 compute-0 nova_compute[256940]: 2025-10-02 12:59:57.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 25 KiB/s wr, 64 op/s
Oct 02 12:59:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 12:59:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:58.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 12:59:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 12:59:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 12:59:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:58 compute-0 nova_compute[256940]: 2025-10-02 12:59:58.923 2 DEBUG nova.network.neutron [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updated VIF entry in instance network info cache for port 20d2ae3a-8dc1-4f85-9b5c-39501552d110. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:58 compute-0 nova_compute[256940]: 2025-10-02 12:59:58.923 2 DEBUG nova.network.neutron [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [{"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:58 compute-0 nova_compute[256940]: 2025-10-02 12:59:58.942 2 DEBUG oslo_concurrency.lockutils [req-3334f1da-9a89-4805-b333-171bdeba0677 req-1e1b8806-5ee9-4c20-ab8e-b8717aa27b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e77ed6db-0772-42cd-80a1-109f22d73463" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:59 compute-0 ceph-mon[73668]: pgmap v2607: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 25 KiB/s wr, 64 op/s
Oct 02 13:00:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:00:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 28 KiB/s wr, 64 op/s
Oct 02 13:00:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:00.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:00.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.108 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.109 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.594 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.594 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.595 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.595 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.595 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.597 2 INFO nova.compute.manager [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Terminating instance
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.598 2 DEBUG nova.compute.manager [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:00:01 compute-0 kernel: tap20d2ae3a-8d (unregistering): left promiscuous mode
Oct 02 13:00:01 compute-0 NetworkManager[44981]: <info>  [1759410001.7780] device (tap20d2ae3a-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:00:01 compute-0 ovn_controller[148123]: 2025-10-02T13:00:01Z|00854|binding|INFO|Releasing lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 from this chassis (sb_readonly=0)
Oct 02 13:00:01 compute-0 ovn_controller[148123]: 2025-10-02T13:00:01Z|00855|binding|INFO|Setting lport 20d2ae3a-8dc1-4f85-9b5c-39501552d110 down in Southbound
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-0 ovn_controller[148123]: 2025-10-02T13:00:01Z|00856|binding|INFO|Removing iface tap20d2ae3a-8d ovn-installed in OVS
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.796 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e3:e5 10.100.0.8'], port_security=['fa:16:3e:83:e3:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e77ed6db-0772-42cd-80a1-109f22d73463', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a033bef-73b5-46d2-bb85-453a685531c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e75bb30d28cd465fb9e94a4b8bc63349', 'neutron:revision_number': '8', 'neutron:security_group_ids': '54ba420b-e3c4-44fc-a606-834ea00cf363', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daa96cfc-9b86-40be-a533-ba3c6259bb29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=20d2ae3a-8dc1-4f85-9b5c-39501552d110) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.797 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 20d2ae3a-8dc1-4f85-9b5c-39501552d110 in datapath 4a033bef-73b5-46d2-bb85-453a685531c4 unbound from our chassis
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.798 158104 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a033bef-73b5-46d2-bb85-453a685531c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 13:00:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:01.799 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5117c3-94fd-4946-bbde-32829798a594]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:01 compute-0 ceph-mon[73668]: pgmap v2608: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 28 KiB/s wr, 64 op/s
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Oct 02 13:00:01 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a3.scope: Consumed 14.833s CPU time.
Oct 02 13:00:01 compute-0 systemd-machined[210927]: Machine qemu-87-instance-000000a3 terminated.
Oct 02 13:00:01 compute-0 nova_compute[256940]: 2025-10-02 13:00:01.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.042 2 INFO nova.virt.libvirt.driver [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Instance destroyed successfully.
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.043 2 DEBUG nova.objects.instance [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lazy-loading 'resources' on Instance uuid e77ed6db-0772-42cd-80a1-109f22d73463 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.060 2 DEBUG nova.virt.libvirt.vif [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-41587939',display_name='tempest-ServerRescueTestJSONUnderV235-server-41587939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-41587939',id=163,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:59:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e75bb30d28cd465fb9e94a4b8bc63349',ramdisk_id='',reservation_id='r-e1uxwewv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-422989214',owner_user_name='tempest-ServerRescueTestJSONUnderV235-422989214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:36Z,user_data=None,user_id='1c573c08a89345349ee60a0f1fd80d32',uuid=e77ed6db-0772-42cd-80a1-109f22d73463,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.060 2 DEBUG nova.network.os_vif_util [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converting VIF {"id": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "address": "fa:16:3e:83:e3:e5", "network": {"id": "4a033bef-73b5-46d2-bb85-453a685531c4", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-269532699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e75bb30d28cd465fb9e94a4b8bc63349", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20d2ae3a-8d", "ovs_interfaceid": "20d2ae3a-8dc1-4f85-9b5c-39501552d110", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.061 2 DEBUG nova.network.os_vif_util [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.061 2 DEBUG os_vif [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20d2ae3a-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.069 2 INFO os_vif [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:e3:e5,bridge_name='br-int',has_traffic_filtering=True,id=20d2ae3a-8dc1-4f85-9b5c-39501552d110,network=Network(4a033bef-73b5-46d2-bb85-453a685531c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20d2ae3a-8d')
Oct 02 13:00:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 24 KiB/s wr, 22 op/s
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.319 2 DEBUG nova.compute.manager [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.320 2 DEBUG oslo_concurrency.lockutils [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.320 2 DEBUG oslo_concurrency.lockutils [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.320 2 DEBUG oslo_concurrency.lockutils [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.320 2 DEBUG nova.compute.manager [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.321 2 DEBUG nova.compute.manager [req-d1a33d61-1fa6-42a1-a392-1249d836a784 req-c6375c54-c721-427d-860c-ba4966115dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-unplugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:00:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:02.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:02 compute-0 nova_compute[256940]: 2025-10-02 13:00:02.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:02.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:03 compute-0 ceph-mon[73668]: pgmap v2609: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 24 KiB/s wr, 22 op/s
Oct 02 13:00:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 02 13:00:04 compute-0 podman[367585]: 2025-10-02 13:00:04.385310561 +0000 UTC m=+0.056380235 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:04 compute-0 podman[367586]: 2025-10-02 13:00:04.385338782 +0000 UTC m=+0.056655672 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.426 2 DEBUG nova.compute.manager [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.426 2 DEBUG oslo_concurrency.lockutils [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.427 2 DEBUG oslo_concurrency.lockutils [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.427 2 DEBUG oslo_concurrency.lockutils [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.427 2 DEBUG nova.compute.manager [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] No waiting events found dispatching network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:04 compute-0 nova_compute[256940]: 2025-10-02 13:00:04.427 2 WARNING nova.compute.manager [req-6bd5aa6b-9ac9-4a58-bafd-be0a94812480 req-f045399f-a9a6-4bad-a462-f941122ea30d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received unexpected event network-vif-plugged-20d2ae3a-8dc1-4f85-9b5c-39501552d110 for instance with vm_state rescued and task_state deleting.
Oct 02 13:00:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:04.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:05 compute-0 ceph-mon[73668]: pgmap v2610: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.109 2 INFO nova.virt.libvirt.driver [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Deleting instance files /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463_del
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.110 2 INFO nova.virt.libvirt.driver [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Deletion of /var/lib/nova/instances/e77ed6db-0772-42cd-80a1-109f22d73463_del complete
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.158 2 INFO nova.compute.manager [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Took 3.56 seconds to destroy the instance on the hypervisor.
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.159 2 DEBUG oslo.service.loopingcall [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.159 2 DEBUG nova.compute.manager [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:00:05 compute-0 nova_compute[256940]: 2025-10-02 13:00:05.159 2 DEBUG nova.network.neutron [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:00:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 300 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 25 KiB/s wr, 21 op/s
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.149 2 DEBUG nova.network.neutron [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.188 2 INFO nova.compute.manager [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Took 1.03 seconds to deallocate network for instance.
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.231 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.232 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.287 2 DEBUG oslo_concurrency.processutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.617 2 DEBUG nova.compute.manager [req-21f472a4-fa02-4955-94cf-2096c801fec7 req-0332adbf-cdec-462b-9e93-e21352d6cb99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Received event network-vif-deleted-20d2ae3a-8dc1-4f85-9b5c-39501552d110 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:06.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931536052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.759 2 DEBUG oslo_concurrency.processutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.763 2 DEBUG nova.compute.provider_tree [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.786 2 DEBUG nova.scheduler.client.report [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.813 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.838 2 INFO nova.scheduler.client.report [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Deleted allocations for instance e77ed6db-0772-42cd-80a1-109f22d73463
Oct 02 13:00:06 compute-0 nova_compute[256940]: 2025-10-02 13:00:06.916 2 DEBUG oslo_concurrency.lockutils [None req-493eb3d4-57fd-4d59-b7f4-3b380668f009 1c573c08a89345349ee60a0f1fd80d32 e75bb30d28cd465fb9e94a4b8bc63349 - - default default] Lock "e77ed6db-0772-42cd-80a1-109f22d73463" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:07 compute-0 nova_compute[256940]: 2025-10-02 13:00:07.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:07 compute-0 ceph-mon[73668]: pgmap v2611: 305 pgs: 305 active+clean; 300 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 25 KiB/s wr, 21 op/s
Oct 02 13:00:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1931536052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:07 compute-0 nova_compute[256940]: 2025-10-02 13:00:07.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:07 compute-0 nova_compute[256940]: 2025-10-02 13:00:07.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct 02 13:00:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1204448448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:08.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:08.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:08 compute-0 sudo[367647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:08 compute-0 sudo[367647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:08 compute-0 sudo[367647]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:08 compute-0 sudo[367672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:08 compute-0 sudo[367672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:08 compute-0 sudo[367672]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:09.111 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:09 compute-0 ceph-mon[73668]: pgmap v2612: 305 pgs: 305 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct 02 13:00:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 681 KiB/s wr, 69 op/s
Oct 02 13:00:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1424016671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:10.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.501 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.502 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:11 compute-0 ceph-mon[73668]: pgmap v2613: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 681 KiB/s wr, 69 op/s
Oct 02 13:00:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/406634081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.549 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.631 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.632 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.638 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.638 2 INFO nova.compute.claims [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.782 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:11 compute-0 nova_compute[256940]: 2025-10-02 13:00:11.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 13:00:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149400941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.224 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.229 2 DEBUG nova.compute.provider_tree [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.265 2 DEBUG nova.scheduler.client.report [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.290 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.291 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.344 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.346 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.406 2 INFO nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.432 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:00:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3162448280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3149400941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.670 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.671 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.672 2 INFO nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating image(s)
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.696 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.721 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.745 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.749 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.781 2 DEBUG nova.policy [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.822 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.822 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.823 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.823 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.853 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-0 nova_compute[256940]: 2025-10-02 13:00:12.856 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:13 compute-0 ceph-mon[73668]: pgmap v2614: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 13:00:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/376091757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/604262924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2870876322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.687 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.766 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.883 2 DEBUG nova.objects.instance [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.908 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.909 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Ensure instance console log exists: /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.909 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.910 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:13 compute-0 nova_compute[256940]: 2025-10-02 13:00:13.910 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:14 compute-0 nova_compute[256940]: 2025-10-02 13:00:14.041 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Successfully created port: 72654905-093f-4292-82c9-4cb6696b256c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:00:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 13:00:14 compute-0 sudo[367888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:14 compute-0 sudo[367888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-0 sudo[367888]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:14 compute-0 sudo[367913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:14 compute-0 sudo[367913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-0 sudo[367913]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-0 sudo[367938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:14 compute-0 sudo[367938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-0 sudo[367938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-0 sudo[367963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:00:14 compute-0 sudo[367963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:14.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:00:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:00:15 compute-0 sudo[367963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-0 nova_compute[256940]: 2025-10-02 13:00:15.523 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Successfully updated port: 72654905-093f-4292-82c9-4cb6696b256c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:00:15 compute-0 nova_compute[256940]: 2025-10-02 13:00:15.538 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:15 compute-0 nova_compute[256940]: 2025-10-02 13:00:15.538 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:15 compute-0 nova_compute[256940]: 2025-10-02 13:00:15.538 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:00:15 compute-0 ceph-mon[73668]: pgmap v2615: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 13:00:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-0 nova_compute[256940]: 2025-10-02 13:00:15.842 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9513cf86-c035-465a-9501-8331873855e2 does not exist
Oct 02 13:00:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6956d100-3f7b-4a21-9d7f-67af84dfb7a6 does not exist
Oct 02 13:00:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b8d44117-d40f-4cd6-a021-875fa2cd760f does not exist
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:00:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:00:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:16 compute-0 sudo[368020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:16 compute-0 sudo[368020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:16 compute-0 sudo[368020]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:16 compute-0 sudo[368045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:16 compute-0 sudo[368045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:16 compute-0 sudo[368045]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 2.7 MiB/s wr, 118 op/s
Oct 02 13:00:16 compute-0 sudo[368070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:16 compute-0 sudo[368070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:16 compute-0 sudo[368070]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:16 compute-0 sudo[368095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:00:16 compute-0 sudo[368095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:00:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:16.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.541847364 +0000 UTC m=+0.065550193 container create 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.498235672 +0000 UTC m=+0.021938521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:16 compute-0 systemd[1]: Started libpod-conmon-6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a.scope.
Oct 02 13:00:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.66807604 +0000 UTC m=+0.191778889 container init 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.677281979 +0000 UTC m=+0.200984798 container start 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:00:16 compute-0 pedantic_robinson[368176]: 167 167
Oct 02 13:00:16 compute-0 systemd[1]: libpod-6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a.scope: Deactivated successfully.
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.713358076 +0000 UTC m=+0.237060915 container attach 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.714337431 +0000 UTC m=+0.238040260 container died 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3382480637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-eda341ae94c235c869e315d6421518f08bd1c32a6751f76c468a6aa1ae69799b-merged.mount: Deactivated successfully.
Oct 02 13:00:16 compute-0 podman[368159]: 2025-10-02 13:00:16.865271659 +0000 UTC m=+0.388974488 container remove 6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:00:16 compute-0 systemd[1]: libpod-conmon-6c69380a81e95a1ebcdfa8f87764610d192a465ade71b37a995a15201a2f892a.scope: Deactivated successfully.
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.963 2 DEBUG nova.network.neutron [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.993 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.993 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance network_info: |[{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:00:16 compute-0 nova_compute[256940]: 2025-10-02 13:00:16.996 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start _get_guest_xml network_info=[{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.000 2 WARNING nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.008 2 DEBUG nova.virt.libvirt.host [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.008 2 DEBUG nova.virt.libvirt.host [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.013 2 DEBUG nova.virt.libvirt.host [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.013 2 DEBUG nova.virt.libvirt.host [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.015 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.015 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.016 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.016 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.016 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.017 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.017 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.017 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.017 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.018 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.018 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.018 2 DEBUG nova.virt.hardware [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.020 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:17 compute-0 podman[368201]: 2025-10-02 13:00:17.037613423 +0000 UTC m=+0.044402864 container create 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.054 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410002.0404959, e77ed6db-0772-42cd-80a1-109f22d73463 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.055 2 INFO nova.compute.manager [-] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] VM Stopped (Lifecycle Event)
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.058 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-changed-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.058 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing instance network info cache due to event network-changed-72654905-093f-4292-82c9-4cb6696b256c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.058 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.059 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.059 2 DEBUG nova.network.neutron [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing network info cache for port 72654905-093f-4292-82c9-4cb6696b256c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.082 2 DEBUG nova.compute.manager [None req-c087ea47-3942-4aa4-abf2-02015b57a86b - - - - - -] [instance: e77ed6db-0772-42cd-80a1-109f22d73463] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:17 compute-0 podman[368201]: 2025-10-02 13:00:17.01669027 +0000 UTC m=+0.023479701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:17 compute-0 systemd[1]: Started libpod-conmon-45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406.scope.
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:17 compute-0 podman[368201]: 2025-10-02 13:00:17.179227929 +0000 UTC m=+0.186017340 container init 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:17 compute-0 podman[368201]: 2025-10-02 13:00:17.186069076 +0000 UTC m=+0.192858477 container start 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:17 compute-0 podman[368201]: 2025-10-02 13:00:17.19122083 +0000 UTC m=+0.198010241 container attach 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.242 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700135540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.497 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.522 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.526 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727126200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.732 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:17 compute-0 ceph-mon[73668]: pgmap v2616: 305 pgs: 305 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 2.7 MiB/s wr, 118 op/s
Oct 02 13:00:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1652227038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2700135540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1727126200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.921 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.924 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4183MB free_disk=20.910995483398438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.924 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.925 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424886843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.945 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.946 2 DEBUG nova.virt.libvirt.vif [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:12Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.946 2 DEBUG nova.network.os_vif_util [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.947 2 DEBUG nova.network.os_vif_util [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.948 2 DEBUG nova.objects.instance [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.974 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <uuid>e6e7e32c-195f-48e7-a85c-451d3c0e6df6</uuid>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <name>instance-000000a5</name>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:17 compute-0 condescending_ishizaka[368218]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-619697043</nova:name>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:00:17</nova:creationTime>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:17 compute-0 condescending_ishizaka[368218]: --> relative data size: 1.0
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:00:17 compute-0 condescending_ishizaka[368218]: --> All data devices are unavailable
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <nova:port uuid="72654905-093f-4292-82c9-4cb6696b256c">
Oct 02 13:00:17 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <system>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="serial">e6e7e32c-195f-48e7-a85c-451d3c0e6df6</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="uuid">e6e7e32c-195f-48e7-a85c-451d3c0e6df6</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </system>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <os>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </os>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <features>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </features>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk">
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config">
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:17 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:6b:ce:35"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <target dev="tap72654905-09"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/console.log" append="off"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <video>
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </video>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:00:17 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:00:17 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:00:17 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:00:17 compute-0 nova_compute[256940]: </domain>
Oct 02 13:00:17 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.976 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Preparing to wait for external event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.976 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.976 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.977 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.977 2 DEBUG nova.virt.libvirt.vif [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:12Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.977 2 DEBUG nova.network.os_vif_util [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.978 2 DEBUG nova.network.os_vif_util [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.978 2 DEBUG os_vif [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72654905-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72654905-09, col_values=(('external_ids', {'iface-id': '72654905-093f-4292-82c9-4cb6696b256c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:ce:35', 'vm-uuid': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:17 compute-0 NetworkManager[44981]: <info>  [1759410017.9922] manager: (tap72654905-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-0 nova_compute[256940]: 2025-10-02 13:00:17.999 2 INFO os_vif [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09')
Oct 02 13:00:18 compute-0 systemd[1]: libpod-45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406.scope: Deactivated successfully.
Oct 02 13:00:18 compute-0 podman[368201]: 2025-10-02 13:00:18.01027684 +0000 UTC m=+1.017066241 container died 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.018 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance e6e7e32c-195f-48e7-a85c-451d3c0e6df6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.018 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.019 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b823d62fd574315503a64f254fd44757f596524499e717ef226f7c8f6e2a878e-merged.mount: Deactivated successfully.
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.051 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.051 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.052 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:6b:ce:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.052 2 INFO nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Using config drive
Oct 02 13:00:18 compute-0 podman[368201]: 2025-10-02 13:00:18.075938644 +0000 UTC m=+1.082728035 container remove 45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.079 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 13:00:18 compute-0 systemd[1]: libpod-conmon-45c5c90f4a2a4d457d199e6822a3c363e62672697120642a34d1f0c51baa2406.scope: Deactivated successfully.
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.092 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:18 compute-0 sudo[368095]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:18 compute-0 sudo[368349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:18 compute-0 sudo[368349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:18 compute-0 sudo[368349]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:18 compute-0 sudo[368392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:18 compute-0 sudo[368392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:18 compute-0 sudo[368392]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.321 2 DEBUG nova.network.neutron [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updated VIF entry in instance network info cache for port 72654905-093f-4292-82c9-4cb6696b256c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.321 2 DEBUG nova.network.neutron [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:18 compute-0 sudo[368418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:18 compute-0 sudo[368418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:18 compute-0 sudo[368418]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.342 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:18 compute-0 sudo[368443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:00:18 compute-0 sudo[368443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:18.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898661570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.530 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.536 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.560 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.576 2 INFO nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating config drive at /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.581 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1pv5ox3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.609 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.610 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.716 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1pv5ox3" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.73340031 +0000 UTC m=+0.040485922 container create 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:00:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:18.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.746 2 DEBUG nova.storage.rbd_utils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:18 compute-0 nova_compute[256940]: 2025-10-02 13:00:18.759 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:18 compute-0 systemd[1]: Started libpod-conmon-3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35.scope.
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.714957152 +0000 UTC m=+0.022042784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.85358023 +0000 UTC m=+0.160665862 container init 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:00:18 compute-0 podman[368546]: 2025-10-02 13:00:18.859602746 +0000 UTC m=+0.092062840 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.862796039 +0000 UTC m=+0.169881651 container start 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:18 compute-0 priceless_beaver[368574]: 167 167
Oct 02 13:00:18 compute-0 systemd[1]: libpod-3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35.scope: Deactivated successfully.
Oct 02 13:00:18 compute-0 conmon[368574]: conmon 3e9756ccf50fd5d5b4d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35.scope/container/memory.events
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.875284733 +0000 UTC m=+0.182370345 container attach 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.876011822 +0000 UTC m=+0.183097434 container died 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:00:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/424886843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/898661570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-0 podman[368551]: 2025-10-02 13:00:18.907724235 +0000 UTC m=+0.139525702 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-818933961d3eb9a595fc8ecad3b2483a7d9f936e48016e749fe022d035114b52-merged.mount: Deactivated successfully.
Oct 02 13:00:18 compute-0 podman[368515]: 2025-10-02 13:00:18.955502025 +0000 UTC m=+0.262587637 container remove 3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:00:18 compute-0 systemd[1]: libpod-conmon-3e9756ccf50fd5d5b4d3de3b0538598f5bdcc5c0ea444ecb28627bec53cbbb35.scope: Deactivated successfully.
Oct 02 13:00:19 compute-0 podman[368637]: 2025-10-02 13:00:19.113384733 +0000 UTC m=+0.038224123 container create 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 13:00:19 compute-0 systemd[1]: Started libpod-conmon-37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f.scope.
Oct 02 13:00:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14284e30576c26f928ad53bb60d192a4c7dc302c010f696c4eb2524423b05330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14284e30576c26f928ad53bb60d192a4c7dc302c010f696c4eb2524423b05330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14284e30576c26f928ad53bb60d192a4c7dc302c010f696c4eb2524423b05330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14284e30576c26f928ad53bb60d192a4c7dc302c010f696c4eb2524423b05330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:19 compute-0 podman[368637]: 2025-10-02 13:00:19.190630518 +0000 UTC m=+0.115469988 container init 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:00:19 compute-0 podman[368637]: 2025-10-02 13:00:19.09594442 +0000 UTC m=+0.020783840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:19 compute-0 podman[368637]: 2025-10-02 13:00:19.200044432 +0000 UTC m=+0.124883822 container start 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:00:19 compute-0 podman[368637]: 2025-10-02 13:00:19.204463867 +0000 UTC m=+0.129303397 container attach 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.573 2 DEBUG oslo_concurrency.processutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.574 2 INFO nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deleting local config drive /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config because it was imported into RBD.
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.608 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:19 compute-0 kernel: tap72654905-09: entered promiscuous mode
Oct 02 13:00:19 compute-0 NetworkManager[44981]: <info>  [1759410019.6392] manager: (tap72654905-09): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-0 ovn_controller[148123]: 2025-10-02T13:00:19Z|00857|binding|INFO|Claiming lport 72654905-093f-4292-82c9-4cb6696b256c for this chassis.
Oct 02 13:00:19 compute-0 ovn_controller[148123]: 2025-10-02T13:00:19Z|00858|binding|INFO|72654905-093f-4292-82c9-4cb6696b256c: Claiming fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.653 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:ce:35 10.100.0.5'], port_security=['fa:16:3e:6b:ce:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e559ab6-00b0-44d0-8057-d4820b5e6a73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=283932d0-a067-4834-a84b-a5f86e206cac, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=72654905-093f-4292-82c9-4cb6696b256c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.655 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 72654905-093f-4292-82c9-4cb6696b256c in datapath 4b83ab01-387d-4df9-9d3b-a5032705a6b9 bound to our chassis
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.657 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:19 compute-0 systemd-udevd[368669]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.671 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7f06b6ef-b4f7-4339-875a-f24f80e6b6a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.672 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b83ab01-31 in ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.674 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b83ab01-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.674 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5d17e6e9-4147-4588-bbd8-caeaba30cffe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.675 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4d86ce10-2225-4146-8279-abe33c9efbb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 NetworkManager[44981]: <info>  [1759410019.6886] device (tap72654905-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:19 compute-0 NetworkManager[44981]: <info>  [1759410019.6897] device (tap72654905-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.688 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[23818564-be03-41c1-8e3a-3a9aadd34fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 systemd-machined[210927]: New machine qemu-88-instance-000000a5.
Oct 02 13:00:19 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-000000a5.
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.715 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c77cf0-07c3-46d9-8446-bfd749215678]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-0 ovn_controller[148123]: 2025-10-02T13:00:19Z|00859|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c ovn-installed in OVS
Oct 02 13:00:19 compute-0 ovn_controller[148123]: 2025-10-02T13:00:19Z|00860|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c up in Southbound
Oct 02 13:00:19 compute-0 nova_compute[256940]: 2025-10-02 13:00:19.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.757 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6edb1f6b-bcfe-48ab-8a0e-622909804079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 NetworkManager[44981]: <info>  [1759410019.7646] manager: (tap4b83ab01-30): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.763 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d8244eb4-2210-42b8-9494-485f7d2dae4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.799 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb1eedd-b5c1-48cd-a289-abe86e4564c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.802 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2f51af-1335-4747-9dea-48a15ae11b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 NetworkManager[44981]: <info>  [1759410019.8323] device (tap4b83ab01-30): carrier: link connected
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.846 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2557bffd-00cf-4bd7-adf3-65938375f787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.863 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ffde3bd1-b1c9-4d50-b86a-e76dc543eebe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b83ab01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:7e:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791531, 'reachable_time': 26430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368704, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.878 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3373422d-fddb-4ff5-99ee-3178f52a90e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:7e79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 791531, 'tstamp': 791531}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368705, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.900 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd2d73a-429b-4b97-916a-c12a34cceb12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b83ab01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:7e:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791531, 'reachable_time': 26430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368706, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-0 ceph-mon[73668]: pgmap v2617: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 13:00:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:19.936 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f170844-f652-418c-b89b-043cde62fa3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.000 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[515415d6-cb78-41e3-ad07-bb7fc33ed358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.002 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b83ab01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.002 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.003 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b83ab01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:20 compute-0 nova_compute[256940]: 2025-10-02 13:00:20.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 kernel: tap4b83ab01-30: entered promiscuous mode
Oct 02 13:00:20 compute-0 NetworkManager[44981]: <info>  [1759410020.0055] manager: (tap4b83ab01-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Oct 02 13:00:20 compute-0 nova_compute[256940]: 2025-10-02 13:00:20.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.010 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b83ab01-30, col_values=(('external_ids', {'iface-id': '7ff13fe7-9b08-492c-9bb6-2f8a97feca8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:20 compute-0 nova_compute[256940]: 2025-10-02 13:00:20.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 ovn_controller[148123]: 2025-10-02T13:00:20Z|00861|binding|INFO|Releasing lport 7ff13fe7-9b08-492c-9bb6-2f8a97feca8e from this chassis (sb_readonly=0)
Oct 02 13:00:20 compute-0 zen_saha[368653]: {
Oct 02 13:00:20 compute-0 zen_saha[368653]:     "1": [
Oct 02 13:00:20 compute-0 zen_saha[368653]:         {
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "devices": [
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "/dev/loop3"
Oct 02 13:00:20 compute-0 zen_saha[368653]:             ],
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "lv_name": "ceph_lv0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "lv_size": "7511998464",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "name": "ceph_lv0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "tags": {
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.cluster_name": "ceph",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.crush_device_class": "",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.encrypted": "0",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.osd_id": "1",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.type": "block",
Oct 02 13:00:20 compute-0 zen_saha[368653]:                 "ceph.vdo": "0"
Oct 02 13:00:20 compute-0 zen_saha[368653]:             },
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "type": "block",
Oct 02 13:00:20 compute-0 zen_saha[368653]:             "vg_name": "ceph_vg0"
Oct 02 13:00:20 compute-0 zen_saha[368653]:         }
Oct 02 13:00:20 compute-0 zen_saha[368653]:     ]
Oct 02 13:00:20 compute-0 zen_saha[368653]: }
Oct 02 13:00:20 compute-0 nova_compute[256940]: 2025-10-02 13:00:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.029 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.030 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa1a65a-cc76-4225-9b68-0cb9a70bbeb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.031 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:00:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:20.033 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'env', 'PROCESS_TAG=haproxy-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b83ab01-387d-4df9-9d3b-a5032705a6b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:00:20 compute-0 systemd[1]: libpod-37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f.scope: Deactivated successfully.
Oct 02 13:00:20 compute-0 podman[368637]: 2025-10-02 13:00:20.051605157 +0000 UTC m=+0.976444547 container died 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:00:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 13:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-14284e30576c26f928ad53bb60d192a4c7dc302c010f696c4eb2524423b05330-merged.mount: Deactivated successfully.
Oct 02 13:00:20 compute-0 podman[368637]: 2025-10-02 13:00:20.132360973 +0000 UTC m=+1.057200363 container remove 37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:00:20 compute-0 systemd[1]: libpod-conmon-37d60e51b330004187698a0634fd3caaeb5df6984b4099f4cee6f1f51139ee6f.scope: Deactivated successfully.
Oct 02 13:00:20 compute-0 sudo[368443]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 sudo[368731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:20 compute-0 sudo[368731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 sudo[368731]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 sudo[368756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:20 compute-0 sudo[368756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 sudo[368756]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 sudo[368782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:20 compute-0 sudo[368782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 sudo[368782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:20 compute-0 sudo[368822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:00:20 compute-0 sudo[368822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:20.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:20 compute-0 podman[368840]: 2025-10-02 13:00:20.426720154 +0000 UTC m=+0.024342333 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:00:20 compute-0 podman[368840]: 2025-10-02 13:00:20.664039604 +0000 UTC m=+0.261661763 container create bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:00:20 compute-0 systemd[1]: Started libpod-conmon-bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e.scope.
Oct 02 13:00:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:20.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1ccce1323e8814199792998ed0c7daaa713f105904bbb1acd421cc243598c27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:20 compute-0 podman[368840]: 2025-10-02 13:00:20.758069535 +0000 UTC m=+0.355691714 container init bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:00:20 compute-0 podman[368840]: 2025-10-02 13:00:20.763862605 +0000 UTC m=+0.361484754 container start bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:00:20 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [NOTICE]   (368939) : New worker (368942) forked
Oct 02 13:00:20 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [NOTICE]   (368939) : Loading success.
Oct 02 13:00:20 compute-0 podman[368964]: 2025-10-02 13:00:20.90274717 +0000 UTC m=+0.042193676 container create d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:20 compute-0 ceph-mon[73668]: pgmap v2618: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 13:00:20 compute-0 systemd[1]: Started libpod-conmon-d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525.scope.
Oct 02 13:00:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:20 compute-0 podman[368964]: 2025-10-02 13:00:20.883528171 +0000 UTC m=+0.022974697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:20 compute-0 podman[368964]: 2025-10-02 13:00:20.996150845 +0000 UTC m=+0.135597381 container init d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:00:21 compute-0 podman[368964]: 2025-10-02 13:00:21.003734121 +0000 UTC m=+0.143180637 container start d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:00:21 compute-0 hungry_cerf[368980]: 167 167
Oct 02 13:00:21 compute-0 systemd[1]: libpod-d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525.scope: Deactivated successfully.
Oct 02 13:00:21 compute-0 podman[368964]: 2025-10-02 13:00:21.010669932 +0000 UTC m=+0.150116458 container attach d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:00:21 compute-0 podman[368964]: 2025-10-02 13:00:21.011599746 +0000 UTC m=+0.151046282 container died d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1873be5fc7d60068b23564ebcadaea84e5666d2be4bcfbaacee3e7310278be26-merged.mount: Deactivated successfully.
Oct 02 13:00:21 compute-0 podman[368964]: 2025-10-02 13:00:21.066722367 +0000 UTC m=+0.206168873 container remove d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:00:21 compute-0 systemd[1]: libpod-conmon-d32feb8a213bc63d6283ee530ebacfb06dff1ee2b2ee3139c2f5a3b1dc10d525.scope: Deactivated successfully.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.106 2 DEBUG nova.compute.manager [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.109 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.109 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.109 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.110 2 DEBUG nova.compute.manager [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Processing event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.110 2 DEBUG nova.compute.manager [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.110 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.110 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.111 2 DEBUG oslo_concurrency.lockutils [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.111 2 DEBUG nova.compute.manager [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] No waiting events found dispatching network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.111 2 WARNING nova.compute.manager [req-5cd56e71-9e0e-405d-a36d-999246c67bdd req-35c6cc52-6400-4ca2-a921-ef3a99a538cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received unexpected event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c for instance with vm_state building and task_state spawning.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.217 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.219 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410021.2181559, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.221 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Started (Lifecycle Event)
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.225 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.231 2 INFO nova.virt.libvirt.driver [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance spawned successfully.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.231 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.252 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.258 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.261 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.262 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.262 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.263 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.263 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.263 2 DEBUG nova.virt.libvirt.driver [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.292 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.293 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410021.2186823, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.293 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Paused (Lifecycle Event)
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.315 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.319 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410021.2244706, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.319 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Resumed (Lifecycle Event)
Oct 02 13:00:21 compute-0 podman[369003]: 2025-10-02 13:00:21.232252853 +0000 UTC m=+0.031030966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.330 2 INFO nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Took 8.66 seconds to spawn the instance on the hypervisor.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.331 2 DEBUG nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.341 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.344 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:21 compute-0 podman[369003]: 2025-10-02 13:00:21.362368201 +0000 UTC m=+0.161146294 container create 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.366 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.397 2 INFO nova.compute.manager [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Took 9.78 seconds to build instance.
Oct 02 13:00:21 compute-0 nova_compute[256940]: 2025-10-02 13:00:21.414 2 DEBUG oslo_concurrency.lockutils [None req-8f2a1974-5e8e-4832-8241-cbf125fbfacb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:21 compute-0 systemd[1]: Started libpod-conmon-1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466.scope.
Oct 02 13:00:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f8954bd1d03a909c0298382c75cd824470cf4acf220c56351ef01d64b35a3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f8954bd1d03a909c0298382c75cd824470cf4acf220c56351ef01d64b35a3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f8954bd1d03a909c0298382c75cd824470cf4acf220c56351ef01d64b35a3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f8954bd1d03a909c0298382c75cd824470cf4acf220c56351ef01d64b35a3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:21 compute-0 podman[369003]: 2025-10-02 13:00:21.656352691 +0000 UTC m=+0.455130814 container init 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:00:21 compute-0 podman[369003]: 2025-10-02 13:00:21.665533399 +0000 UTC m=+0.464311492 container start 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:00:21 compute-0 podman[369003]: 2025-10-02 13:00:21.980745301 +0000 UTC m=+0.779523414 container attach 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:00:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct 02 13:00:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:22 compute-0 recursing_joliot[369019]: {
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:         "osd_id": 1,
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:         "type": "bluestore"
Oct 02 13:00:22 compute-0 recursing_joliot[369019]:     }
Oct 02 13:00:22 compute-0 recursing_joliot[369019]: }
Oct 02 13:00:22 compute-0 systemd[1]: libpod-1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466.scope: Deactivated successfully.
Oct 02 13:00:22 compute-0 conmon[369019]: conmon 1c4faa7cceb10450e030 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466.scope/container/memory.events
Oct 02 13:00:22 compute-0 podman[369003]: 2025-10-02 13:00:22.558606301 +0000 UTC m=+1.357384384 container died 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:00:22 compute-0 nova_compute[256940]: 2025-10-02 13:00:22.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-04f8954bd1d03a909c0298382c75cd824470cf4acf220c56351ef01d64b35a3d-merged.mount: Deactivated successfully.
Oct 02 13:00:22 compute-0 podman[369003]: 2025-10-02 13:00:22.725870273 +0000 UTC m=+1.524648356 container remove 1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:00:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:22 compute-0 systemd[1]: libpod-conmon-1c4faa7cceb10450e0306465644a25bd83a51de9564dbd140dc22d8a08159466.scope: Deactivated successfully.
Oct 02 13:00:22 compute-0 sudo[368822]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:00:22 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:00:22 compute-0 nova_compute[256940]: 2025-10-02 13:00:22.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2209b044-9533-453b-ba8e-cf903161cb17 does not exist
Oct 02 13:00:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fcd129d1-7a19-4df6-92b5-c1ef1e1ac465 does not exist
Oct 02 13:00:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cd64434a-5be7-4e8e-a021-f834e8671016 does not exist
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.292116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023292195, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2170, "num_deletes": 253, "total_data_size": 3783096, "memory_usage": 3847696, "flush_reason": "Manual Compaction"}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 02 13:00:23 compute-0 sudo[369053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:23 compute-0 sudo[369053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023322921, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 3704781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56196, "largest_seqno": 58365, "table_properties": {"data_size": 3695028, "index_size": 6119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20943, "raw_average_key_size": 20, "raw_value_size": 3675280, "raw_average_value_size": 3638, "num_data_blocks": 265, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409820, "oldest_key_time": 1759409820, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 30856 microseconds, and 7749 cpu microseconds.
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:23 compute-0 sudo[369053]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.322979) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 3704781 bytes OK
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.323000) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.330563) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.330604) EVENT_LOG_v1 {"time_micros": 1759410023330595, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.330630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3774205, prev total WAL file size 3774205, number of live WAL files 2.
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(3617KB)], [125(11MB)]
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023331834, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 15997092, "oldest_snapshot_seqno": -1}
Oct 02 13:00:23 compute-0 sudo[369078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:00:23 compute-0 sudo[369078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:23 compute-0 sudo[369078]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8878 keys, 14031453 bytes, temperature: kUnknown
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023444311, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 14031453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970579, "index_size": 37593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229603, "raw_average_key_size": 25, "raw_value_size": 13811427, "raw_average_value_size": 1555, "num_data_blocks": 1474, "num_entries": 8878, "num_filter_entries": 8878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.444720) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14031453 bytes
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.450857) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.0 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 9404, records dropped: 526 output_compression: NoCompression
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.450911) EVENT_LOG_v1 {"time_micros": 1759410023450891, "job": 76, "event": "compaction_finished", "compaction_time_micros": 112633, "compaction_time_cpu_micros": 30706, "output_level": 6, "num_output_files": 1, "total_output_size": 14031453, "num_input_records": 9404, "num_output_records": 8878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023451737, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023453705, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.453755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.453761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.453763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.453765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:23.453767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:23 compute-0 ceph-mon[73668]: pgmap v2619: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct 02 13:00:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3281194222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 13:00:24 compute-0 nova_compute[256940]: 2025-10-02 13:00:24.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:24.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:24 compute-0 ceph-mon[73668]: pgmap v2620: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:25 compute-0 NetworkManager[44981]: <info>  [1759410025.0198] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 02 13:00:25 compute-0 NetworkManager[44981]: <info>  [1759410025.0209] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:25 compute-0 ovn_controller[148123]: 2025-10-02T13:00:25Z|00862|binding|INFO|Releasing lport 7ff13fe7-9b08-492c-9bb6-2f8a97feca8e from this chassis (sb_readonly=0)
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.379 2 DEBUG nova.compute.manager [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-changed-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.380 2 DEBUG nova.compute.manager [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing instance network info cache due to event network-changed-72654905-093f-4292-82c9-4cb6696b256c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.380 2 DEBUG oslo_concurrency.lockutils [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.380 2 DEBUG oslo_concurrency.lockutils [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:25 compute-0 nova_compute[256940]: 2025-10-02 13:00:25.381 2 DEBUG nova.network.neutron [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing network info cache for port 72654905-093f-4292-82c9-4cb6696b256c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 02 13:00:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:26.497 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:26.497 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:26.498 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:26.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:27 compute-0 nova_compute[256940]: 2025-10-02 13:00:27.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:27 compute-0 ceph-mon[73668]: pgmap v2621: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 02 13:00:27 compute-0 nova_compute[256940]: 2025-10-02 13:00:27.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:27 compute-0 nova_compute[256940]: 2025-10-02 13:00:27.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 906 KiB/s wr, 279 op/s
Oct 02 13:00:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:28.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.670465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028670501, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 302, "num_deletes": 251, "total_data_size": 114001, "memory_usage": 120080, "flush_reason": "Manual Compaction"}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028673661, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 112858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58366, "largest_seqno": 58667, "table_properties": {"data_size": 110882, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5645, "raw_average_key_size": 20, "raw_value_size": 106928, "raw_average_value_size": 384, "num_data_blocks": 9, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410023, "oldest_key_time": 1759410023, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 3239 microseconds, and 1028 cpu microseconds.
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.673703) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 112858 bytes OK
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.673723) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675745) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675762) EVENT_LOG_v1 {"time_micros": 1759410028675757, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 111816, prev total WAL file size 111816, number of live WAL files 2.
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(110KB)], [128(13MB)]
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028676207, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 14144311, "oldest_snapshot_seqno": -1}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8646 keys, 10298492 bytes, temperature: kUnknown
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028741644, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10298492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10243962, "index_size": 31825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 224988, "raw_average_key_size": 26, "raw_value_size": 10093626, "raw_average_value_size": 1167, "num_data_blocks": 1232, "num_entries": 8646, "num_filter_entries": 8646, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.741851) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10298492 bytes
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.744375) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.0 rd, 157.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(216.6) write-amplify(91.3) OK, records in: 9156, records dropped: 510 output_compression: NoCompression
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.744394) EVENT_LOG_v1 {"time_micros": 1759410028744386, "job": 78, "event": "compaction_finished", "compaction_time_micros": 65490, "compaction_time_cpu_micros": 36253, "output_level": 6, "num_output_files": 1, "total_output_size": 10298492, "num_input_records": 9156, "num_output_records": 8646, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028744529, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028746583, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.746619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.746624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.746626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.746627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:00:28.746629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:28.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:00:28
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images']
Oct 02 13:00:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:00:28 compute-0 sudo[369107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:28 compute-0 sudo[369107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:28 compute-0 sudo[369107]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:28 compute-0 sudo[369132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:28 compute-0 sudo[369132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:28 compute-0 sudo[369132]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.415 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:00:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:00:29 compute-0 ceph-mon[73668]: pgmap v2622: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 906 KiB/s wr, 279 op/s
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.956 2 DEBUG nova.network.neutron [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updated VIF entry in instance network info cache for port 72654905-093f-4292-82c9-4cb6696b256c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:29 compute-0 nova_compute[256940]: 2025-10-02 13:00:29.956 2 DEBUG nova.network.neutron [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:30 compute-0 nova_compute[256940]: 2025-10-02 13:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-db78ae0b-aad1-4c31-afcc-907c4f28b87d req-32086c15-9bd3-4b45-a73c-20a49c224806 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:30 compute-0 nova_compute[256940]: 2025-10-02 13:00:30.005 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:30 compute-0 nova_compute[256940]: 2025-10-02 13:00:30.006 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:00:30 compute-0 nova_compute[256940]: 2025-10-02 13:00:30.006 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 170 op/s
Oct 02 13:00:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:30.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:31 compute-0 ceph-mon[73668]: pgmap v2623: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 170 op/s
Oct 02 13:00:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 160 op/s
Oct 02 13:00:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4223990827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:32 compute-0 nova_compute[256940]: 2025-10-02 13:00:32.443 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:32 compute-0 nova_compute[256940]: 2025-10-02 13:00:32.460 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:32 compute-0 nova_compute[256940]: 2025-10-02 13:00:32.462 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:00:32 compute-0 nova_compute[256940]: 2025-10-02 13:00:32.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:32.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:32 compute-0 nova_compute[256940]: 2025-10-02 13:00:32.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:33 compute-0 ceph-mon[73668]: pgmap v2624: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 160 op/s
Oct 02 13:00:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 132 op/s
Oct 02 13:00:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/855805887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:34 compute-0 ovn_controller[148123]: 2025-10-02T13:00:34Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:00:34 compute-0 ovn_controller[148123]: 2025-10-02T13:00:34Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:00:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:34.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:35 compute-0 podman[369160]: 2025-10-02 13:00:35.401203924 +0000 UTC m=+0.056498157 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:00:35 compute-0 podman[369161]: 2025-10-02 13:00:35.418560155 +0000 UTC m=+0.063691515 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:00:35 compute-0 ceph-mon[73668]: pgmap v2625: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 132 op/s
Oct 02 13:00:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 193 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 169 op/s
Oct 02 13:00:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/707540778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4257415503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4036831251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:36.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:37 compute-0 nova_compute[256940]: 2025-10-02 13:00:37.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:37 compute-0 ceph-mon[73668]: pgmap v2626: 305 pgs: 305 active+clean; 193 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 169 op/s
Oct 02 13:00:38 compute-0 nova_compute[256940]: 2025-10-02 13:00:38.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 277 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 170 op/s
Oct 02 13:00:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:38.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:38.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:39 compute-0 ceph-mon[73668]: pgmap v2627: 305 pgs: 305 active+clean; 277 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 170 op/s
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 02 13:00:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:40.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004146891866741113 of space, bias 1.0, pg target 1.2440675600223339 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:00:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:00:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1953748865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3874696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:41 compute-0 ceph-mon[73668]: pgmap v2628: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 02 13:00:41 compute-0 nova_compute[256940]: 2025-10-02 13:00:41.868 2 INFO nova.compute.manager [None req-d45e4779-d2ea-4105-9463-9ec234887281 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Get console output
Oct 02 13:00:41 compute-0 nova_compute[256940]: 2025-10-02 13:00:41.875 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:00:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct 02 13:00:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:42 compute-0 nova_compute[256940]: 2025-10-02 13:00:42.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:42 compute-0 nova_compute[256940]: 2025-10-02 13:00:42.932 2 INFO nova.compute.manager [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Rebuilding instance
Oct 02 13:00:42 compute-0 nova_compute[256940]: 2025-10-02 13:00:42.972 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:42 compute-0 nova_compute[256940]: 2025-10-02 13:00:42.972 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:42 compute-0 nova_compute[256940]: 2025-10-02 13:00:42.993 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.074 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.074 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.082 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.083 2 INFO nova.compute.claims [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.173 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.216 2 DEBUG nova.compute.manager [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.297 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.333 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_requests' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.347 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.359 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.374 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.386 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.391 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:00:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892820477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.779 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.785 2 DEBUG nova.compute.provider_tree [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.807 2 DEBUG nova.scheduler.client.report [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:43 compute-0 ceph-mon[73668]: pgmap v2629: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct 02 13:00:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1892820477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.833 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.834 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.906 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.906 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.948 2 INFO nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:00:43 compute-0 nova_compute[256940]: 2025-10-02 13:00:43.975 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:00:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 203 op/s
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.105 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.106 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.107 2 INFO nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Creating image(s)
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.132 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.163 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.194 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.198 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.269 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.270 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.271 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.272 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.295 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.299 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.555 2 DEBUG nova.policy [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57a1608ca1fc4bef8b6bc6ad68be3999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:00:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:44.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.782 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.864 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] resizing rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:44 compute-0 nova_compute[256940]: 2025-10-02 13:00:44.994 2 DEBUG nova.objects.instance [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'migration_context' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.012 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.012 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Ensure instance console log exists: /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.013 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.013 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.013 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:45 compute-0 kernel: tap72654905-09 (unregistering): left promiscuous mode
Oct 02 13:00:45 compute-0 NetworkManager[44981]: <info>  [1759410045.6569] device (tap72654905-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:00:45 compute-0 ovn_controller[148123]: 2025-10-02T13:00:45Z|00863|binding|INFO|Releasing lport 72654905-093f-4292-82c9-4cb6696b256c from this chassis (sb_readonly=0)
Oct 02 13:00:45 compute-0 ovn_controller[148123]: 2025-10-02T13:00:45Z|00864|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c down in Southbound
Oct 02 13:00:45 compute-0 ovn_controller[148123]: 2025-10-02T13:00:45Z|00865|binding|INFO|Removing iface tap72654905-09 ovn-installed in OVS
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.674 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:ce:35 10.100.0.5'], port_security=['fa:16:3e:6b:ce:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e559ab6-00b0-44d0-8057-d4820b5e6a73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=283932d0-a067-4834-a84b-a5f86e206cac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=72654905-093f-4292-82c9-4cb6696b256c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.675 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 72654905-093f-4292-82c9-4cb6696b256c in datapath 4b83ab01-387d-4df9-9d3b-a5032705a6b9 unbound from our chassis
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.677 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b83ab01-387d-4df9-9d3b-a5032705a6b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.678 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c87902a7-9ba1-4a50-aaab-4fa6539b3847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.679 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 namespace which is not needed anymore
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.687 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Successfully created port: 8823918b-f620-4ff5-8094-450aa259ea57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Oct 02 13:00:45 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a5.scope: Consumed 15.133s CPU time.
Oct 02 13:00:45 compute-0 systemd-machined[210927]: Machine qemu-88-instance-000000a5 terminated.
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [NOTICE]   (368939) : haproxy version is 2.8.14-c23fe91
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [NOTICE]   (368939) : path to executable is /usr/sbin/haproxy
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [WARNING]  (368939) : Exiting Master process...
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [WARNING]  (368939) : Exiting Master process...
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [ALERT]    (368939) : Current worker (368942) exited with code 143 (Terminated)
Oct 02 13:00:45 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[368935]: [WARNING]  (368939) : All workers exited. Exiting... (0)
Oct 02 13:00:45 compute-0 systemd[1]: libpod-bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e.scope: Deactivated successfully.
Oct 02 13:00:45 compute-0 podman[369417]: 2025-10-02 13:00:45.809866558 +0000 UTC m=+0.044185778 container died bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e-userdata-shm.mount: Deactivated successfully.
Oct 02 13:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1ccce1323e8814199792998ed0c7daaa713f105904bbb1acd421cc243598c27-merged.mount: Deactivated successfully.
Oct 02 13:00:45 compute-0 ceph-mon[73668]: pgmap v2630: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 203 op/s
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.883 2 DEBUG nova.compute.manager [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-unplugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.883 2 DEBUG oslo_concurrency.lockutils [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.883 2 DEBUG oslo_concurrency.lockutils [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.884 2 DEBUG oslo_concurrency.lockutils [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.884 2 DEBUG nova.compute.manager [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] No waiting events found dispatching network-vif-unplugged-72654905-093f-4292-82c9-4cb6696b256c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.884 2 WARNING nova.compute.manager [req-aaea9de3-e202-4b49-bf5f-b16bcd09cead req-cc979b6c-7fc8-45a5-aa4f-dfe044fe6d1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received unexpected event network-vif-unplugged-72654905-093f-4292-82c9-4cb6696b256c for instance with vm_state active and task_state rebuilding.
Oct 02 13:00:45 compute-0 podman[369417]: 2025-10-02 13:00:45.898284393 +0000 UTC m=+0.132603623 container cleanup bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 systemd[1]: libpod-conmon-bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e.scope: Deactivated successfully.
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 podman[369458]: 2025-10-02 13:00:45.973221809 +0000 UTC m=+0.050163064 container remove bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.979 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4e3354-76dc-4539-bc45-c9038c3906e4]: (4, ('Thu Oct  2 01:00:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 (bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e)\nbf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e\nThu Oct  2 01:00:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 (bf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e)\nbf46e9dc2e24dbbf9c55273be24355c1f20a71f8c9ca65f3a21f5cb980f18e4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.981 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ffbb87-c757-40fd-a2b0-058c58d10bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:45.982 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b83ab01-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:45 compute-0 nova_compute[256940]: 2025-10-02 13:00:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-0 kernel: tap4b83ab01-30: left promiscuous mode
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.003 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5c197d-7bec-4f3b-a55f-225517ba35d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.033 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[da2bf24e-bac7-4f23-86a6-a3e4028aa17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.035 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f908443d-b12c-4f91-b7b8-8c85177aaf23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.052 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[058cd651-e8a3-4aae-8fb9-44411616a056]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791523, 'reachable_time': 44065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369476, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.055 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:00:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:46.055 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0102fdf2-990d-4397-86bd-47f9a45d8669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b83ab01\x2d387d\x2d4df9\x2d9d3b\x2da5032705a6b9.mount: Deactivated successfully.
Oct 02 13:00:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 324 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.7 MiB/s wr, 277 op/s
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.410 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance shutdown successfully after 3 seconds.
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.416 2 INFO nova.virt.libvirt.driver [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance destroyed successfully.
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.422 2 INFO nova.virt.libvirt.driver [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance destroyed successfully.
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.423 2 DEBUG nova.virt.libvirt.vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:42Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.424 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.425 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.425 2 DEBUG os_vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72654905-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.435 2 INFO os_vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09')
Oct 02 13:00:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.860 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Successfully updated port: 8823918b-f620-4ff5-8094-450aa259ea57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.879 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.880 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquired lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.881 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:00:46 compute-0 ceph-mon[73668]: pgmap v2631: 305 pgs: 305 active+clean; 324 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.7 MiB/s wr, 277 op/s
Oct 02 13:00:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4136340579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.987 2 DEBUG nova.compute.manager [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-changed-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.988 2 DEBUG nova.compute.manager [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Refreshing instance network info cache due to event network-changed-8823918b-f620-4ff5-8094-450aa259ea57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:46 compute-0 nova_compute[256940]: 2025-10-02 13:00:46.988 2 DEBUG oslo_concurrency.lockutils [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.071 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.291 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deleting instance files /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_del
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.291 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deletion of /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_del complete
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.433 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.433 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating image(s)
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.462 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.489 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.517 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.522 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.600 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.601 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.602 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.602 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.632 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-0 nova_compute[256940]: 2025-10-02 13:00:47.635 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.125 2 DEBUG nova.network.neutron [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating instance_info_cache with network_info: [{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.153 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Releasing lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.154 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Instance network_info: |[{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.154 2 DEBUG oslo_concurrency.lockutils [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.155 2 DEBUG nova.network.neutron [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Refreshing network info cache for port 8823918b-f620-4ff5-8094-450aa259ea57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.158 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Start _get_guest_xml network_info=[{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.168 2 WARNING nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.181 2 DEBUG nova.virt.libvirt.host [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.182 2 DEBUG nova.virt.libvirt.host [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.185 2 DEBUG nova.virt.libvirt.host [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.186 2 DEBUG nova.virt.libvirt.host [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.187 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.188 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.188 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.188 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.189 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.189 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.189 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.189 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.190 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.190 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.190 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.190 2 DEBUG nova.virt.hardware [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.193 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.247 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.364 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.523 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.524 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Ensure instance console log exists: /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.525 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.525 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.526 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.528 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start _get_guest_xml network_info=[{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.533 2 WARNING nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.542 2 DEBUG nova.virt.libvirt.host [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.543 2 DEBUG nova.virt.libvirt.host [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.547 2 DEBUG nova.virt.libvirt.host [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.547 2 DEBUG nova.virt.libvirt.host [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.548 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.549 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.549 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.549 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.550 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.550 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.550 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.551 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.551 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.551 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.551 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.552 2 DEBUG nova.virt.hardware [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.552 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.572 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741611088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.663 2 DEBUG nova.compute.manager [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.664 2 DEBUG oslo_concurrency.lockutils [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.664 2 DEBUG oslo_concurrency.lockutils [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.664 2 DEBUG oslo_concurrency.lockutils [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.665 2 DEBUG nova.compute.manager [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] No waiting events found dispatching network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.665 2 WARNING nova.compute.manager [req-48a76c1b-5f5b-4dd3-8f47-2e4125fbfe63 req-2961ddb2-3d35-4593-8d77-7a436d499b68 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received unexpected event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c for instance with vm_state active and task_state rebuild_spawning.
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.666 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.692 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:48 compute-0 nova_compute[256940]: 2025-10-02 13:00:48.697 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:49 compute-0 sudo[369744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:49 compute-0 sudo[369744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:49 compute-0 sudo[369744]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42663333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.135 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:49 compute-0 sudo[369782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:49 compute-0 sudo[369782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:49 compute-0 sudo[369782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.172 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.177 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:49 compute-0 podman[369768]: 2025-10-02 13:00:49.181431116 +0000 UTC m=+0.091967998 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:49 compute-0 podman[369769]: 2025-10-02 13:00:49.188186571 +0000 UTC m=+0.098001125 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777311355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.213 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.215 2 DEBUG nova.virt.libvirt.vif [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-52682838',display_name='tempest-AttachVolumeTestJSON-server-52682838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-52682838',id=168,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxKjxbS2/n41eLxczAKqgqOwwMAWCD3i6muDntco5MFydZozeWGZc01YubrWmKvFS4WKfAPmxH3srKwd8xu8RpTq/RPKAQYAlKPc2QD1dPsvDBLB3WmutsaEtwSC/UDSg==',key_name='tempest-keypair-1484800686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-vbz8stz9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.216 2 DEBUG nova.network.os_vif_util [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.217 2 DEBUG nova.network.os_vif_util [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.218 2 DEBUG nova.objects.instance [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.247 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <uuid>c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44</uuid>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <name>instance-000000a8</name>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachVolumeTestJSON-server-52682838</nova:name>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:00:48</nova:creationTime>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:user uuid="57a1608ca1fc4bef8b6bc6ad68be3999">tempest-AttachVolumeTestJSON-68983480-project-member</nova:user>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:project uuid="0b15f29eb32d4c5cba98baa238cc12e1">tempest-AttachVolumeTestJSON-68983480</nova:project>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:port uuid="8823918b-f620-4ff5-8094-450aa259ea57">
Oct 02 13:00:49 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <system>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="serial">c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="uuid">c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </system>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <os>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </os>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <features>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </features>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:d6:21:a4"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="tap8823918b-f6"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/console.log" append="off"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <video>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </video>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:00:49 compute-0 nova_compute[256940]: </domain>
Oct 02 13:00:49 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.249 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Preparing to wait for external event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.249 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.249 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.250 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.250 2 DEBUG nova.virt.libvirt.vif [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-52682838',display_name='tempest-AttachVolumeTestJSON-server-52682838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-52682838',id=168,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxKjxbS2/n41eLxczAKqgqOwwMAWCD3i6muDntco5MFydZozeWGZc01YubrWmKvFS4WKfAPmxH3srKwd8xu8RpTq/RPKAQYAlKPc2QD1dPsvDBLB3WmutsaEtwSC/UDSg==',key_name='tempest-keypair-1484800686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-vbz8stz9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.250 2 DEBUG nova.network.os_vif_util [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.251 2 DEBUG nova.network.os_vif_util [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.251 2 DEBUG os_vif [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.253 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.253 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8823918b-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8823918b-f6, col_values=(('external_ids', {'iface-id': '8823918b-f620-4ff5-8094-450aa259ea57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:21:a4', 'vm-uuid': 'c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 ceph-mon[73668]: pgmap v2632: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Oct 02 13:00:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/741611088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/42663333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2777311355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 NetworkManager[44981]: <info>  [1759410049.2607] manager: (tap8823918b-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.267 2 INFO os_vif [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6')
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.341 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.341 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.342 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No VIF found with MAC fa:16:3e:d6:21:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.342 2 INFO nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Using config drive
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.369 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021418989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.668 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.669 2 DEBUG nova.virt.libvirt.vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:47Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.670 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.670 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.672 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <uuid>e6e7e32c-195f-48e7-a85c-451d3c0e6df6</uuid>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <name>instance-000000a5</name>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-619697043</nova:name>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:00:48</nova:creationTime>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <nova:port uuid="72654905-093f-4292-82c9-4cb6696b256c">
Oct 02 13:00:49 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <system>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="serial">e6e7e32c-195f-48e7-a85c-451d3c0e6df6</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="uuid">e6e7e32c-195f-48e7-a85c-451d3c0e6df6</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </system>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <os>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </os>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <features>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </features>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:00:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:6b:ce:35"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <target dev="tap72654905-09"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/console.log" append="off"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <video>
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </video>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:00:49 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:00:49 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:00:49 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:00:49 compute-0 nova_compute[256940]: </domain>
Oct 02 13:00:49 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.673 2 DEBUG nova.virt.libvirt.vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:47Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.674 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.674 2 DEBUG nova.network.os_vif_util [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.675 2 DEBUG os_vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72654905-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72654905-09, col_values=(('external_ids', {'iface-id': '72654905-093f-4292-82c9-4cb6696b256c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:ce:35', 'vm-uuid': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:49 compute-0 NetworkManager[44981]: <info>  [1759410049.6815] manager: (tap72654905-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.690 2 INFO os_vif [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09')
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.751 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.751 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.751 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:6b:ce:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.752 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Using config drive
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.790 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.815 2 INFO nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Creating config drive at /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.822 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1hq16j5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.854 2 DEBUG nova.network.neutron [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updated VIF entry in instance network info cache for port 8823918b-f620-4ff5-8094-450aa259ea57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.855 2 DEBUG nova.network.neutron [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating instance_info_cache with network_info: [{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.857 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.900 2 DEBUG oslo_concurrency.lockutils [req-368c62ec-450d-44af-8ca8-6e01ea0d77e2 req-6ed83207-fe09-4bb8-8569-7ee18a33ec99 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.909 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'keypairs' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.960 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1hq16j5" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.988 2 DEBUG nova.storage.rbd_utils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:49 compute-0 nova_compute[256940]: 2025-10-02 13:00:49.992 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.3 MiB/s wr, 241 op/s
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.216 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Creating config drive at /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.221 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_merj954 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4021418989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1634489610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.370 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_merj954" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.407 2 DEBUG nova.storage.rbd_utils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.412 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.617 2 DEBUG oslo_concurrency.processutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.618 2 INFO nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Deleting local config drive /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44/disk.config because it was imported into RBD.
Oct 02 13:00:50 compute-0 kernel: tap8823918b-f6: entered promiscuous mode
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.7058] manager: (tap8823918b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.710 2 DEBUG oslo_concurrency.processutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config e6e7e32c-195f-48e7-a85c-451d3c0e6df6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00866|binding|INFO|Claiming lport 8823918b-f620-4ff5-8094-450aa259ea57 for this chassis.
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00867|binding|INFO|8823918b-f620-4ff5-8094-450aa259ea57: Claiming fa:16:3e:d6:21:a4 10.100.0.7
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.719 2 INFO nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deleting local config drive /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6/disk.config because it was imported into RBD.
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00868|binding|INFO|Setting lport 8823918b-f620-4ff5-8094-450aa259ea57 ovn-installed in OVS
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00869|binding|INFO|Setting lport 8823918b-f620-4ff5-8094-450aa259ea57 up in Southbound
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.740 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:21:a4 10.100.0.7'], port_security=['fa:16:3e:d6:21:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e9550bd-e948-4226-9434-30da0c95df24', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8823918b-f620-4ff5-8094-450aa259ea57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.742 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8823918b-f620-4ff5-8094-450aa259ea57 in datapath cf287730-8b39-470a-9870-d19a70f15c4d bound to our chassis
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.744 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf287730-8b39-470a-9870-d19a70f15c4d
Oct 02 13:00:50 compute-0 systemd-udevd[370018]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:50 compute-0 systemd-machined[210927]: New machine qemu-89-instance-000000a8.
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.761 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[583c8c41-a3c1-43a8-a664-5e8467e39dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.762 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf287730-81 in ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.7653] device (tap8823918b-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.7660] device (tap8823918b-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.766 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf287730-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.767 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[381b59f8-f37f-4f22-b9a7-ea84ede0dd5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.768 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef43491-6317-40fa-9d84-ab8d64c0703b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-000000a8.
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.781 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[963f572f-194d-4254-87e8-2920b86827fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:50.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:50 compute-0 kernel: tap72654905-09: entered promiscuous mode
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.8013] manager: (tap72654905-09): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00870|binding|INFO|Claiming lport 72654905-093f-4292-82c9-4cb6696b256c for this chassis.
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00871|binding|INFO|72654905-093f-4292-82c9-4cb6696b256c: Claiming fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.8149] device (tap72654905-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.810 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:ce:35 10.100.0.5'], port_security=['fa:16:3e:6b:ce:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7e559ab6-00b0-44d0-8057-d4820b5e6a73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=283932d0-a067-4834-a84b-a5f86e206cac, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=72654905-093f-4292-82c9-4cb6696b256c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.809 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e34c6187-2a43-407a-811d-c9822188b824]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.8160] device (tap72654905-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00872|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c ovn-installed in OVS
Oct 02 13:00:50 compute-0 ovn_controller[148123]: 2025-10-02T13:00:50Z|00873|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c up in Southbound
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 nova_compute[256940]: 2025-10-02 13:00:50.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.850 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7688ad-5c39-48d0-a981-a72ab8022883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.8574] manager: (tapcf287730-80): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.859 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b79b283d-218f-4764-8bc5-f0ac687befa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 systemd-machined[210927]: New machine qemu-90-instance-000000a5.
Oct 02 13:00:50 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-000000a5.
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.894 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6d17d794-4766-41a6-a906-f8955c147baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.897 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8a076d43-bd23-4bea-91af-3021e009be70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 NetworkManager[44981]: <info>  [1759410050.9264] device (tapcf287730-80): carrier: link connected
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.931 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b703b75c-ffbb-4ebd-bba5-1a492f3e427f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.956 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c87605a0-3cb8-4870-a6e5-07b76d9e014e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794641, 'reachable_time': 32317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370068, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.979 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5a5f3b-d760-4dd7-965a-56f6aa0fd43a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:e9a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794641, 'tstamp': 794641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370070, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:50.999 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d17f416-3e84-4f5d-82c6-30e7ddcad6ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794641, 'reachable_time': 32317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370078, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.037 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b664f94a-e71e-49bc-babe-9e747875cded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.085 2 DEBUG nova.compute.manager [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.086 2 DEBUG oslo_concurrency.lockutils [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.087 2 DEBUG oslo_concurrency.lockutils [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.087 2 DEBUG oslo_concurrency.lockutils [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.087 2 DEBUG nova.compute.manager [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] No waiting events found dispatching network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.087 2 WARNING nova.compute.manager [req-93d41a7a-a2db-4a0f-93ad-602034faad31 req-b4d92a5e-18b5-4127-8f8b-51653abbb37d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received unexpected event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c for instance with vm_state active and task_state rebuild_spawning.
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.111 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6497864a-1457-4e2b-95ba-749a0773d984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.112 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.113 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.113 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf287730-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-0 kernel: tapcf287730-80: entered promiscuous mode
Oct 02 13:00:51 compute-0 NetworkManager[44981]: <info>  [1759410051.1159] manager: (tapcf287730-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.123 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf287730-80, col_values=(('external_ids', {'iface-id': '9ae1fd94-b5f3-4333-9533-d979eb84ea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-0 ovn_controller[148123]: 2025-10-02T13:00:51Z|00874|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.128 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.130 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7d5eb4-52e1-40a1-8d84-275d726ccc3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.130 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-cf287730-8b39-470a-9870-d19a70f15c4d
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID cf287730-8b39-470a-9870-d19a70f15c4d
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:00:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:51.131 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'env', 'PROCESS_TAG=haproxy-cf287730-8b39-470a-9870-d19a70f15c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf287730-8b39-470a-9870-d19a70f15c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-0 ceph-mon[73668]: pgmap v2633: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.3 MiB/s wr, 241 op/s
Oct 02 13:00:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3463278456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1981855118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:51 compute-0 podman[370145]: 2025-10-02 13:00:51.509212319 +0000 UTC m=+0.026116279 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.657 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410051.6559985, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.658 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] VM Started (Lifecycle Event)
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.711 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.715 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410051.6571312, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.716 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] VM Paused (Lifecycle Event)
Oct 02 13:00:51 compute-0 podman[370145]: 2025-10-02 13:00:51.844703638 +0000 UTC m=+0.361607568 container create 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.854 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.858 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:51 compute-0 nova_compute[256940]: 2025-10-02 13:00:51.888 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:51 compute-0 systemd[1]: Started libpod-conmon-56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b.scope.
Oct 02 13:00:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184aef770af41ce9fc38804e81ef53b59abcf54eaaefd603fdaa27c54e310951/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:52 compute-0 podman[370145]: 2025-10-02 13:00:52.05973645 +0000 UTC m=+0.576640390 container init 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:00:52 compute-0 podman[370145]: 2025-10-02 13:00:52.065961761 +0000 UTC m=+0.582865691 container start 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:52 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [NOTICE]   (370207) : New worker (370209) forked
Oct 02 13:00:52 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [NOTICE]   (370207) : Loading success.
Oct 02 13:00:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 309 op/s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.174 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 72654905-093f-4292-82c9-4cb6696b256c in datapath 4b83ab01-387d-4df9-9d3b-a5032705a6b9 unbound from our chassis
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.176 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.193 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[344c3cf3-f101-4090-934f-9128ceaa648a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.194 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b83ab01-31 in ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.196 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b83ab01-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.196 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dd80269b-b88a-486c-848a-a8c552a87971]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.197 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1d7fb3-3632-4be0-acb6-c69150cccfef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.211 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[e25b44f5-2750-4ad6-bb50-6812f7f9a87c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.239 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a3090326-c9ef-4983-a118-9c3f3c32933e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.244 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for e6e7e32c-195f-48e7-a85c-451d3c0e6df6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.244 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410052.2440236, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.245 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Resumed (Lifecycle Event)
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.247 2 DEBUG nova.compute.manager [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.248 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.251 2 INFO nova.virt.libvirt.driver [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance spawned successfully.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.251 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.272 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.275 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[86d78f5f-72f6-46a5-af3b-675cc4a88d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.280 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.284 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78bffaee-2281-4796-b58c-268085a7176d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 NetworkManager[44981]: <info>  [1759410052.2848] manager: (tap4b83ab01-30): new Veth device (/org/freedesktop/NetworkManager/Devices/381)
Oct 02 13:00:52 compute-0 systemd-udevd[370048]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.289 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.290 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.291 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.291 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.291 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.292 2 DEBUG nova.virt.libvirt.driver [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.318 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[06bb46b3-dd5f-422a-81b9-ee45df6335b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.321 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a60e717b-dc72-4848-a3f4-03ad670bd383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 NetworkManager[44981]: <info>  [1759410052.3456] device (tap4b83ab01-30): carrier: link connected
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.352 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[235f95d4-7f94-48f7-807a-33a1afcd9387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.371 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf7d1bf-d9b9-444d-90a8-1a542b9d5659]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b83ab01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:7e:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794783, 'reachable_time': 20062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370228, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.388 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4df76e79-0fab-4cfc-879b-5ff54fb8bbf7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:7e79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794783, 'tstamp': 794783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370229, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.412 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cd32da8d-402e-4706-b45c-c5fde20fae49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b83ab01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:7e:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794783, 'reachable_time': 20062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370230, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.443 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.444 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410052.2473857, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.444 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Started (Lifecycle Event)
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.446 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1efbec52-6dff-4f08-82eb-1637059a94a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:52.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.517 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.522 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[30058458-7daa-4ed9-bf66-b2a3fcff0781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.523 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.524 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b83ab01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.524 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.525 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b83ab01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-0 NetworkManager[44981]: <info>  [1759410052.5283] manager: (tap4b83ab01-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct 02 13:00:52 compute-0 kernel: tap4b83ab01-30: entered promiscuous mode
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.532 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b83ab01-30, col_values=(('external_ids', {'iface-id': '7ff13fe7-9b08-492c-9bb6-2f8a97feca8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-0 ovn_controller[148123]: 2025-10-02T13:00:52Z|00875|binding|INFO|Releasing lport 7ff13fe7-9b08-492c-9bb6-2f8a97feca8e from this chassis (sb_readonly=0)
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.555 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.556 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9f88fb1d-7bbe-4081-b0d6-6bf3ec39409a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.559 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/4b83ab01-387d-4df9-9d3b-a5032705a6b9.pid.haproxy
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 4b83ab01-387d-4df9-9d3b-a5032705a6b9
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:00:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:00:52.560 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'env', 'PROCESS_TAG=haproxy-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b83ab01-387d-4df9-9d3b-a5032705a6b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.585 2 DEBUG nova.compute.manager [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.587 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.664 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.664 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.665 2 DEBUG nova.objects.instance [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.751 2 DEBUG oslo_concurrency.lockutils [None req-b91b871e-b07f-4168-9467-584c4b17605b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:52 compute-0 podman[370264]: 2025-10-02 13:00:52.960256665 +0000 UTC m=+0.074888075 container create 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.986 2 DEBUG nova.compute.manager [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.987 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.987 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.987 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.988 2 DEBUG nova.compute.manager [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Processing event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.988 2 DEBUG nova.compute.manager [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.988 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.988 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.988 2 DEBUG oslo_concurrency.lockutils [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.989 2 DEBUG nova.compute.manager [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] No waiting events found dispatching network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.989 2 WARNING nova.compute.manager [req-d6ad190e-5035-40a4-b43f-994b1db33841 req-dff67010-cc95-4570-bbbc-e22ebfa1766a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received unexpected event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 for instance with vm_state building and task_state spawning.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.989 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.993 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410052.9928923, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.993 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] VM Resumed (Lifecycle Event)
Oct 02 13:00:52 compute-0 systemd[1]: Started libpod-conmon-13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f.scope.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.994 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.999 2 INFO nova.virt.libvirt.driver [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Instance spawned successfully.
Oct 02 13:00:52 compute-0 nova_compute[256940]: 2025-10-02 13:00:52.999 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:53 compute-0 podman[370264]: 2025-10-02 13:00:52.910384961 +0000 UTC m=+0.025016401 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:00:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.019 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040fb006d917bc75b5b56d52234d0d29462010a80fd7076272deb5e57c25f8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.026 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.031 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.032 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.032 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.033 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.033 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.034 2 DEBUG nova.virt.libvirt.driver [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:53 compute-0 podman[370264]: 2025-10-02 13:00:53.042278494 +0000 UTC m=+0.156909924 container init 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:00:53 compute-0 podman[370264]: 2025-10-02 13:00:53.047713235 +0000 UTC m=+0.162344645 container start 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.049 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:53 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [NOTICE]   (370283) : New worker (370285) forked
Oct 02 13:00:53 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [NOTICE]   (370283) : Loading success.
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.096 2 INFO nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Took 8.99 seconds to spawn the instance on the hypervisor.
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.097 2 DEBUG nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.184 2 INFO nova.compute.manager [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Took 10.14 seconds to build instance.
Oct 02 13:00:53 compute-0 nova_compute[256940]: 2025-10-02 13:00:53.232 2 DEBUG oslo_concurrency.lockutils [None req-a9734187-495b-430b-8075-6d38a668d17a 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:53 compute-0 ceph-mon[73668]: pgmap v2634: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 309 op/s
Oct 02 13:00:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.0 MiB/s wr, 253 op/s
Oct 02 13:00:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.626 2 DEBUG nova.compute.manager [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.626 2 DEBUG oslo_concurrency.lockutils [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.626 2 DEBUG oslo_concurrency.lockutils [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.627 2 DEBUG oslo_concurrency.lockutils [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.627 2 DEBUG nova.compute.manager [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] No waiting events found dispatching network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.627 2 WARNING nova.compute.manager [req-21168404-d315-4cf2-ba06-11921dfb034c req-600e86f2-653e-4c5a-9659-9f7df6462d4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received unexpected event network-vif-plugged-72654905-093f-4292-82c9-4cb6696b256c for instance with vm_state active and task_state None.
Oct 02 13:00:54 compute-0 nova_compute[256940]: 2025-10-02 13:00:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:55 compute-0 ceph-mon[73668]: pgmap v2635: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.0 MiB/s wr, 253 op/s
Oct 02 13:00:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.5 MiB/s wr, 382 op/s
Oct 02 13:00:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:00:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:00:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:56.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:57 compute-0 nova_compute[256940]: 2025-10-02 13:00:57.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:57 compute-0 ceph-mon[73668]: pgmap v2636: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.5 MiB/s wr, 382 op/s
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.5 MiB/s wr, 366 op/s
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:00:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:00:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:58.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:00:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:00:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:58.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:00:59 compute-0 nova_compute[256940]: 2025-10-02 13:00:59.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:59 compute-0 ceph-mon[73668]: pgmap v2637: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.5 MiB/s wr, 366 op/s
Oct 02 13:01:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.7 MiB/s wr, 346 op/s
Oct 02 13:01:00 compute-0 nova_compute[256940]: 2025-10-02 13:01:00.203 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-changed-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:00 compute-0 nova_compute[256940]: 2025-10-02 13:01:00.204 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Refreshing instance network info cache due to event network-changed-8823918b-f620-4ff5-8094-450aa259ea57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:00 compute-0 nova_compute[256940]: 2025-10-02 13:01:00.204 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:00 compute-0 nova_compute[256940]: 2025-10-02 13:01:00.205 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:00 compute-0 nova_compute[256940]: 2025-10-02 13:01:00.205 2 DEBUG nova.network.neutron [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Refreshing network info cache for port 8823918b-f620-4ff5-8094-450aa259ea57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:00.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:01 compute-0 CROND[370299]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-0 run-parts[370302]: (/etc/cron.hourly) starting 0anacron
Oct 02 13:01:01 compute-0 run-parts[370308]: (/etc/cron.hourly) finished 0anacron
Oct 02 13:01:01 compute-0 CROND[370298]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-0 ceph-mon[73668]: pgmap v2638: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.7 MiB/s wr, 346 op/s
Oct 02 13:01:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.8 MiB/s wr, 372 op/s
Oct 02 13:01:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:02 compute-0 nova_compute[256940]: 2025-10-02 13:01:02.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:02.722 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:02.724 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:01:02 compute-0 nova_compute[256940]: 2025-10-02 13:01:02.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:02.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:03 compute-0 nova_compute[256940]: 2025-10-02 13:01:03.012 2 DEBUG nova.network.neutron [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updated VIF entry in instance network info cache for port 8823918b-f620-4ff5-8094-450aa259ea57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:01:03 compute-0 nova_compute[256940]: 2025-10-02 13:01:03.013 2 DEBUG nova.network.neutron [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating instance_info_cache with network_info: [{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:03 compute-0 nova_compute[256940]: 2025-10-02 13:01:03.035 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:03 compute-0 ceph-mon[73668]: pgmap v2639: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.8 MiB/s wr, 372 op/s
Oct 02 13:01:03 compute-0 ovn_controller[148123]: 2025-10-02T13:01:03Z|00876|binding|INFO|Releasing lport 7ff13fe7-9b08-492c-9bb6-2f8a97feca8e from this chassis (sb_readonly=0)
Oct 02 13:01:03 compute-0 ovn_controller[148123]: 2025-10-02T13:01:03Z|00877|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct 02 13:01:03 compute-0 nova_compute[256940]: 2025-10-02 13:01:03.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:03.727 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 506 KiB/s wr, 256 op/s
Oct 02 13:01:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2464258050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:04.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:04 compute-0 nova_compute[256940]: 2025-10-02 13:01:04.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:05 compute-0 ceph-mon[73668]: pgmap v2640: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 506 KiB/s wr, 256 op/s
Oct 02 13:01:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Oct 02 13:01:06 compute-0 podman[370312]: 2025-10-02 13:01:06.399637165 +0000 UTC m=+0.066833675 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 13:01:06 compute-0 podman[370311]: 2025-10-02 13:01:06.419725037 +0000 UTC m=+0.091189268 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:01:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:06 compute-0 ovn_controller[148123]: 2025-10-02T13:01:06Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:21:a4 10.100.0.7
Oct 02 13:01:06 compute-0 ovn_controller[148123]: 2025-10-02T13:01:06Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:21:a4 10.100.0.7
Oct 02 13:01:07 compute-0 ovn_controller[148123]: 2025-10-02T13:01:07Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:01:07 compute-0 ovn_controller[148123]: 2025-10-02T13:01:07Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:ce:35 10.100.0.5
Oct 02 13:01:07 compute-0 nova_compute[256940]: 2025-10-02 13:01:07.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:07 compute-0 ceph-mon[73668]: pgmap v2641: 305 pgs: 305 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Oct 02 13:01:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 324 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 13:01:08 compute-0 nova_compute[256940]: 2025-10-02 13:01:08.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:08.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:08.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:09 compute-0 ceph-mon[73668]: pgmap v2642: 305 pgs: 305 active+clean; 324 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 13:01:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1154594658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:09 compute-0 sudo[370353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:09 compute-0 sudo[370353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:09 compute-0 sudo[370353]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:09 compute-0 sudo[370378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:09 compute-0 sudo[370378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:09 compute-0 sudo[370378]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:09 compute-0 nova_compute[256940]: 2025-10-02 13:01:09.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 187 op/s
Oct 02 13:01:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:10.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:10.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:11 compute-0 ceph-mon[73668]: pgmap v2643: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 187 op/s
Oct 02 13:01:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Oct 02 13:01:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:12.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:12 compute-0 nova_compute[256940]: 2025-10-02 13:01:12.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1735928497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:13 compute-0 nova_compute[256940]: 2025-10-02 13:01:13.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:13 compute-0 nova_compute[256940]: 2025-10-02 13:01:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:13 compute-0 nova_compute[256940]: 2025-10-02 13:01:13.319 2 INFO nova.compute.manager [None req-89fff6ca-c87a-4bed-9a31-ba9da8dfffc4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Get console output
Oct 02 13:01:13 compute-0 nova_compute[256940]: 2025-10-02 13:01:13.325 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:01:13 compute-0 ceph-mon[73668]: pgmap v2644: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Oct 02 13:01:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1881902432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2969665086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 4.8 MiB/s wr, 160 op/s
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.198 2 DEBUG nova.compute.manager [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-changed-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.198 2 DEBUG nova.compute.manager [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing instance network info cache due to event network-changed-72654905-093f-4292-82c9-4cb6696b256c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.199 2 DEBUG oslo_concurrency.lockutils [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.199 2 DEBUG oslo_concurrency.lockutils [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.199 2 DEBUG nova.network.neutron [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Refreshing network info cache for port 72654905-093f-4292-82c9-4cb6696b256c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.264 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.265 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.265 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.266 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.266 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.267 2 INFO nova.compute.manager [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Terminating instance
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.268 2 DEBUG nova.compute.manager [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:01:14 compute-0 kernel: tap72654905-09 (unregistering): left promiscuous mode
Oct 02 13:01:14 compute-0 NetworkManager[44981]: <info>  [1759410074.3428] device (tap72654905-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:01:14 compute-0 ovn_controller[148123]: 2025-10-02T13:01:14Z|00878|binding|INFO|Releasing lport 72654905-093f-4292-82c9-4cb6696b256c from this chassis (sb_readonly=0)
Oct 02 13:01:14 compute-0 ovn_controller[148123]: 2025-10-02T13:01:14Z|00879|binding|INFO|Setting lport 72654905-093f-4292-82c9-4cb6696b256c down in Southbound
Oct 02 13:01:14 compute-0 ovn_controller[148123]: 2025-10-02T13:01:14Z|00880|binding|INFO|Removing iface tap72654905-09 ovn-installed in OVS
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.360 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:ce:35 10.100.0.5'], port_security=['fa:16:3e:6b:ce:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e6e7e32c-195f-48e7-a85c-451d3c0e6df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e559ab6-00b0-44d0-8057-d4820b5e6a73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=283932d0-a067-4834-a84b-a5f86e206cac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=72654905-093f-4292-82c9-4cb6696b256c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.361 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 72654905-093f-4292-82c9-4cb6696b256c in datapath 4b83ab01-387d-4df9-9d3b-a5032705a6b9 unbound from our chassis
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.363 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b83ab01-387d-4df9-9d3b-a5032705a6b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.364 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[171cf79f-9f48-444c-9219-a0f458cd6857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.365 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 namespace which is not needed anymore
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Oct 02 13:01:14 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a5.scope: Consumed 14.364s CPU time.
Oct 02 13:01:14 compute-0 systemd-machined[210927]: Machine qemu-90-instance-000000a5 terminated.
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.503 2 INFO nova.virt.libvirt.driver [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Instance destroyed successfully.
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.503 2 DEBUG nova.objects.instance [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid e6e7e32c-195f-48e7-a85c-451d3c0e6df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:14 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [NOTICE]   (370283) : haproxy version is 2.8.14-c23fe91
Oct 02 13:01:14 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [NOTICE]   (370283) : path to executable is /usr/sbin/haproxy
Oct 02 13:01:14 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [WARNING]  (370283) : Exiting Master process...
Oct 02 13:01:14 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [ALERT]    (370283) : Current worker (370285) exited with code 143 (Terminated)
Oct 02 13:01:14 compute-0 neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9[370279]: [WARNING]  (370283) : All workers exited. Exiting... (0)
Oct 02 13:01:14 compute-0 systemd[1]: libpod-13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f.scope: Deactivated successfully.
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.520 2 DEBUG nova.virt.libvirt.vif [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:00:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-619697043',display_name='tempest-TestNetworkAdvancedServerOps-server-619697043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-619697043',id=165,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKsObdt+XF+nXz0GiEzpxgfhD66IM9G5RUM3MkLbLpWKw8caQ7bkPyY7qo7NNpbjtQqOZyJGAUsYqFFo7AtudrngINJmOT7RB0S+rjZIUavy2PAWDfApKxcY4WPI92IWbQ==',key_name='tempest-TestNetworkAdvancedServerOps-1578338943',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hgxxt5am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:52Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e6e7e32c-195f-48e7-a85c-451d3c0e6df6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.521 2 DEBUG nova.network.os_vif_util [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.521 2 DEBUG nova.network.os_vif_util [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.522 2 DEBUG os_vif [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72654905-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 podman[370429]: 2025-10-02 13:01:14.526079018 +0000 UTC m=+0.070824959 container died 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:14.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.529 2 INFO os_vif [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:ce:35,bridge_name='br-int',has_traffic_filtering=True,id=72654905-093f-4292-82c9-4cb6696b256c,network=Network(4b83ab01-387d-4df9-9d3b-a5032705a6b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72654905-09')
Oct 02 13:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f-userdata-shm.mount: Deactivated successfully.
Oct 02 13:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2040fb006d917bc75b5b56d52234d0d29462010a80fd7076272deb5e57c25f8b-merged.mount: Deactivated successfully.
Oct 02 13:01:14 compute-0 podman[370429]: 2025-10-02 13:01:14.56658551 +0000 UTC m=+0.111331451 container cleanup 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:14 compute-0 systemd[1]: libpod-conmon-13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f.scope: Deactivated successfully.
Oct 02 13:01:14 compute-0 podman[370482]: 2025-10-02 13:01:14.635711244 +0000 UTC m=+0.046051256 container remove 13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.641 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[89462d0b-829e-4de9-9d60-b2dd80c0096a]: (4, ('Thu Oct  2 01:01:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 (13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f)\n13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f\nThu Oct  2 01:01:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 (13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f)\n13d0d05281e2faa891c55924a45aeb4579deb04b30b18b9cf07185921c8dd07f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.644 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5940a7-ffeb-4b48-a27d-e9f63f588cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.645 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b83ab01-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 kernel: tap4b83ab01-30: left promiscuous mode
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.667 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[713d7782-a14c-4977-af39-2d4c6094b700]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.693 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdbb61d-6f3a-4350-82e8-f18adbabb808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.694 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8156791a-ef76-4b96-a59d-4eb484ed3ac3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.711 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe59b20-b71c-4a5a-b805-7dc1f6d8581d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794775, 'reachable_time': 29646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370498, 'error': None, 'target': 'ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.713 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b83ab01-387d-4df9-9d3b-a5032705a6b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:01:14 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:14.713 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[0804035f-7dba-4050-b92e-857a12e171ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b83ab01\x2d387d\x2d4df9\x2d9d3b\x2da5032705a6b9.mount: Deactivated successfully.
Oct 02 13:01:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:14.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.995 2 INFO nova.virt.libvirt.driver [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deleting instance files /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_del
Oct 02 13:01:14 compute-0 nova_compute[256940]: 2025-10-02 13:01:14.995 2 INFO nova.virt.libvirt.driver [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deletion of /var/lib/nova/instances/e6e7e32c-195f-48e7-a85c-451d3c0e6df6_del complete
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.111 2 INFO nova.compute.manager [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.112 2 DEBUG oslo.service.loopingcall [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.112 2 DEBUG nova.compute.manager [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.112 2 DEBUG nova.network.neutron [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:01:15 compute-0 ceph-mon[73668]: pgmap v2645: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 4.8 MiB/s wr, 160 op/s
Oct 02 13:01:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/507289924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.922 2 DEBUG nova.network.neutron [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.943 2 INFO nova.compute.manager [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Took 0.83 seconds to deallocate network for instance.
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.996 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:15 compute-0 nova_compute[256940]: 2025-10-02 13:01:15.997 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 341 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 6.1 MiB/s wr, 199 op/s
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.224 2 DEBUG oslo_concurrency.processutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.419 2 DEBUG nova.compute.manager [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Received event network-vif-deleted-72654905-093f-4292-82c9-4cb6696b256c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.619 2 DEBUG nova.network.neutron [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updated VIF entry in instance network info cache for port 72654905-093f-4292-82c9-4cb6696b256c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.619 2 DEBUG nova.network.neutron [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Updating instance_info_cache with network_info: [{"id": "72654905-093f-4292-82c9-4cb6696b256c", "address": "fa:16:3e:6b:ce:35", "network": {"id": "4b83ab01-387d-4df9-9d3b-a5032705a6b9", "bridge": "br-int", "label": "tempest-network-smoke--983505299", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72654905-09", "ovs_interfaceid": "72654905-093f-4292-82c9-4cb6696b256c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.639 2 DEBUG oslo_concurrency.lockutils [req-dbdc0ff0-d713-4aeb-b6dd-0d4a5a124dff req-5d17949d-3a95-4896-9c80-945064a6d9fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e6e7e32c-195f-48e7-a85c-451d3c0e6df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2598328140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221705253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.704 2 DEBUG oslo_concurrency.processutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.710 2 DEBUG nova.compute.provider_tree [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.724 2 DEBUG nova.scheduler.client.report [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.765 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.798 2 INFO nova.scheduler.client.report [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance e6e7e32c-195f-48e7-a85c-451d3c0e6df6
Oct 02 13:01:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:16 compute-0 nova_compute[256940]: 2025-10-02 13:01:16.919 2 DEBUG oslo_concurrency.lockutils [None req-6d6b3bef-7879-4163-ac77-b3d321eae6c5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e6e7e32c-195f-48e7-a85c-451d3c0e6df6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.264 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.264 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.281 2 DEBUG nova.objects.instance [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.521 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:17 compute-0 ceph-mon[73668]: pgmap v2646: 305 pgs: 305 active+clean; 341 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 6.1 MiB/s wr, 199 op/s
Oct 02 13:01:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3221705253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1283251929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.899 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.900 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:17 compute-0 nova_compute[256940]: 2025-10-02 13:01:17.900 2 INFO nova.compute.manager [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attaching volume b91353e7-057c-43fc-be40-fbbc07072322 to /dev/vdb
Oct 02 13:01:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.5 MiB/s wr, 177 op/s
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.179 2 DEBUG os_brick.utils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.180 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.192 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.192 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[cca44c27-74c3-46ac-bc96-36b09818b8df]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.194 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.202 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.202 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[20e12436-b3e2-4ca7-a4e3-c101b40788fd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.204 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.211 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.211 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[04196549-5816-4c43-b952-e3418eeca325]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.212 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1e2f01-ce88-41ce-9a73-a9b1623cd339]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.212 2 DEBUG oslo_concurrency.processutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.245 2 DEBUG oslo_concurrency.processutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.247 2 DEBUG os_brick.initiator.connectors.lightos [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.248 2 DEBUG os_brick.utils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:01:18 compute-0 nova_compute[256940]: 2025-10-02 13:01:18.248 2 DEBUG nova.virt.block_device [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating existing volume attachment record: 22f3ad43-f6f6-494a-af76-5a8dc92a2077 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:01:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:18.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.104 2 DEBUG nova.objects.instance [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.127 2 DEBUG nova.virt.libvirt.driver [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attempting to attach volume b91353e7-057c-43fc-be40-fbbc07072322 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.129 2 DEBUG nova.virt.libvirt.guest [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:01:19 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b91353e7-057c-43fc-be40-fbbc07072322">
Oct 02 13:01:19 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:01:19 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:01:19 compute-0 nova_compute[256940]:   <serial>b91353e7-057c-43fc-be40-fbbc07072322</serial>
Oct 02 13:01:19 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:19 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.215 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.215 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.216 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.279 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.280 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.280 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.280 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.281 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:19 compute-0 ceph-mon[73668]: pgmap v2647: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.5 MiB/s wr, 177 op/s
Oct 02 13:01:19 compute-0 podman[370548]: 2025-10-02 13:01:19.397064866 +0000 UTC m=+0.059107265 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:01:19 compute-0 podman[370549]: 2025-10-02 13:01:19.481125718 +0000 UTC m=+0.142377067 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.869 2 DEBUG nova.virt.libvirt.driver [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.869 2 DEBUG nova.virt.libvirt.driver [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.870 2 DEBUG nova.virt.libvirt.driver [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.870 2 DEBUG nova.virt.libvirt.driver [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No VIF found with MAC fa:16:3e:d6:21:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:01:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634125951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:19 compute-0 nova_compute[256940]: 2025-10-02 13:01:19.918 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.035 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.035 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.035 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:01:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.207 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.209 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4039MB free_disk=20.87619400024414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.209 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.209 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.339 2 DEBUG oslo_concurrency.lockutils [None req-a70291ce-4af7-4a98-be23-cd586a5276da 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.404 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.404 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.405 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.425 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.467 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.468 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.490 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.529 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:01:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:20.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4245233163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3634125951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:20 compute-0 nova_compute[256940]: 2025-10-02 13:01:20.591 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:20.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:21 compute-0 ovn_controller[148123]: 2025-10-02T13:01:21Z|00881|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct 02 13:01:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258272080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.097 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.103 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.147 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.183 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:01:21 compute-0 nova_compute[256940]: 2025-10-02 13:01:21.183 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:21 compute-0 ceph-mon[73668]: pgmap v2648: 305 pgs: 305 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 13:01:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1258272080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Oct 02 13:01:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:22 compute-0 nova_compute[256940]: 2025-10-02 13:01:22.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:22.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2188608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:23 compute-0 sudo[370643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:23 compute-0 sudo[370643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-0 sudo[370643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-0 sudo[370668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:23 compute-0 sudo[370668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-0 sudo[370668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-0 sudo[370693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:23 compute-0 sudo[370693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-0 sudo[370693]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-0 sudo[370718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:01:23 compute-0 sudo[370718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:24 compute-0 ceph-mon[73668]: pgmap v2649: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Oct 02 13:01:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4026947805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 133 op/s
Oct 02 13:01:24 compute-0 nova_compute[256940]: 2025-10-02 13:01:24.229 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:24 compute-0 nova_compute[256940]: 2025-10-02 13:01:24.230 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:24 compute-0 nova_compute[256940]: 2025-10-02 13:01:24.279 2 DEBUG nova.objects.instance [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:24 compute-0 nova_compute[256940]: 2025-10-02 13:01:24.344 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:24 compute-0 sudo[370718]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:24 compute-0 nova_compute[256940]: 2025-10-02 13:01:24.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:24.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 41b5fa6e-50d3-4ece-a881-846f9e930165 does not exist
Oct 02 13:01:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2d7c0e88-74d1-4f72-99e3-2f8e1cac651c does not exist
Oct 02 13:01:24 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 710387a9-88f2-4130-959d-00aa3b87f895 does not exist
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:01:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:01:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:24.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:24 compute-0 sudo[370774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:24 compute-0 sudo[370774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:24 compute-0 sudo[370774]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:24 compute-0 sudo[370799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:24 compute-0 sudo[370799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:24 compute-0 sudo[370799]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:25 compute-0 sudo[370824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:25 compute-0 sudo[370824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:25 compute-0 sudo[370824]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:25 compute-0 sudo[370849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:01:25 compute-0 sudo[370849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:25 compute-0 ceph-mon[73668]: pgmap v2650: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 133 op/s
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:01:25 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:25 compute-0 nova_compute[256940]: 2025-10-02 13:01:25.180 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:25 compute-0 nova_compute[256940]: 2025-10-02 13:01:25.290 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:25 compute-0 nova_compute[256940]: 2025-10-02 13:01:25.291 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:25 compute-0 nova_compute[256940]: 2025-10-02 13:01:25.291 2 INFO nova.compute.manager [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attaching volume 04f00d16-4d54-4085-a9eb-bc69d5b5e61c to /dev/vdc
Oct 02 13:01:25 compute-0 podman[370916]: 2025-10-02 13:01:25.416141026 +0000 UTC m=+0.029227960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:25 compute-0 podman[370916]: 2025-10-02 13:01:25.636400904 +0000 UTC m=+0.249487818 container create a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:01:25 compute-0 systemd[1]: Started libpod-conmon-a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452.scope.
Oct 02 13:01:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:26 compute-0 podman[370916]: 2025-10-02 13:01:26.034399244 +0000 UTC m=+0.647486178 container init a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:01:26 compute-0 podman[370916]: 2025-10-02 13:01:26.043985533 +0000 UTC m=+0.657072447 container start a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:01:26 compute-0 cranky_hertz[370932]: 167 167
Oct 02 13:01:26 compute-0 systemd[1]: libpod-a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452.scope: Deactivated successfully.
Oct 02 13:01:26 compute-0 podman[370916]: 2025-10-02 13:01:26.07316338 +0000 UTC m=+0.686250324 container attach a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:26 compute-0 podman[370916]: 2025-10-02 13:01:26.073997402 +0000 UTC m=+0.687084316 container died a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.073 2 DEBUG os_brick.utils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.075 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.088 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.088 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[473cde93-4453-4761-804b-a364314c1a39]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.090 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.099 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.100 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[060365d6-1a7c-443d-a516-fa4b9de4b252]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.101 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.111 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.111 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[9f94b601-c4a7-4d76-931b-74637a639143]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.113 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[5b817630-06fc-4e45-a64f-18593ea5c2a1]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.114 2 DEBUG oslo_concurrency.processutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.147 2 DEBUG oslo_concurrency.processutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.151 2 DEBUG os_brick.initiator.connectors.lightos [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.151 2 DEBUG os_brick.initiator.connectors.lightos [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.151 2 DEBUG os_brick.initiator.connectors.lightos [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.152 2 DEBUG os_brick.utils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.152 2 DEBUG nova.virt.block_device [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating existing volume attachment record: 5c7368fc-feda-4e5b-bc50-cdc9391a7a63 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f97d53f4e1747dbba7d3e176d5818a121f126848564a57605a3ceafe60300d5-merged.mount: Deactivated successfully.
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.209 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:26 compute-0 nova_compute[256940]: 2025-10-02 13:01:26.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:26 compute-0 podman[370916]: 2025-10-02 13:01:26.293279054 +0000 UTC m=+0.906365998 container remove a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:26 compute-0 systemd[1]: libpod-conmon-a630ed3db440ad6f282587ff4373904756141584c0d5d78dbef92fc4311f4452.scope: Deactivated successfully.
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:26.498 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:26.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:26 compute-0 podman[370962]: 2025-10-02 13:01:26.471632753 +0000 UTC m=+0.027948346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:26 compute-0 podman[370962]: 2025-10-02 13:01:26.688346669 +0000 UTC m=+0.244662242 container create 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:01:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:26 compute-0 systemd[1]: Started libpod-conmon-3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630.scope.
Oct 02 13:01:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:27 compute-0 podman[370962]: 2025-10-02 13:01:27.194638371 +0000 UTC m=+0.750953964 container init 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:01:27 compute-0 podman[370962]: 2025-10-02 13:01:27.203679276 +0000 UTC m=+0.759994849 container start 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.568 2 DEBUG nova.objects.instance [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.613 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attempting to attach volume 04f00d16-4d54-4085-a9eb-bc69d5b5e61c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.623 2 DEBUG nova.virt.libvirt.guest [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:01:27 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-04f00d16-4d54-4085-a9eb-bc69d5b5e61c">
Oct 02 13:01:27 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:01:27 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:01:27 compute-0 nova_compute[256940]:   <serial>04f00d16-4d54-4085-a9eb-bc69d5b5e61c</serial>
Oct 02 13:01:27 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:27 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:01:27 compute-0 podman[370962]: 2025-10-02 13:01:27.659470557 +0000 UTC m=+1.215786160 container attach 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:01:27 compute-0 ceph-mon[73668]: pgmap v2651: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Oct 02 13:01:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2992057832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.953 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.955 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.956 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.956 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:27 compute-0 nova_compute[256940]: 2025-10-02 13:01:27.956 2 DEBUG nova.virt.libvirt.driver [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No VIF found with MAC fa:16:3e:d6:21:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:01:28 compute-0 agitated_diffie[370979]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:01:28 compute-0 agitated_diffie[370979]: --> relative data size: 1.0
Oct 02 13:01:28 compute-0 agitated_diffie[370979]: --> All data devices are unavailable
Oct 02 13:01:28 compute-0 systemd[1]: libpod-3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630.scope: Deactivated successfully.
Oct 02 13:01:28 compute-0 podman[370962]: 2025-10-02 13:01:28.080674221 +0000 UTC m=+1.636989794 container died 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 13:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-712aaf2db9010ac0507c706f1e74dacc10164dc096dfc3a16993530312e4d753-merged.mount: Deactivated successfully.
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:28.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:28 compute-0 podman[370962]: 2025-10-02 13:01:28.677499703 +0000 UTC m=+2.233815276 container remove 3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:01:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:28 compute-0 systemd[1]: libpod-conmon-3228039ff22d07712c6e2a301cb8f241177e862c9d6be0af0269c87358563630.scope: Deactivated successfully.
Oct 02 13:01:28 compute-0 sudo[370849]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:28 compute-0 sudo[371029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:28 compute-0 sudo[371029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:28 compute-0 sudo[371029]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:28 compute-0 nova_compute[256940]: 2025-10-02 13:01:28.838 2 DEBUG oslo_concurrency.lockutils [None req-14b6c749-386b-40b1-841b-90deefee5f9e 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:28.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:01:28
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.mgr']
Oct 02 13:01:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:01:28 compute-0 sudo[371054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:28 compute-0 sudo[371054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:28 compute-0 sudo[371054]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:28 compute-0 sudo[371079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:28 compute-0 sudo[371079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:28 compute-0 sudo[371079]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:29 compute-0 sudo[371104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:01:29 compute-0 sudo[371104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.375388928 +0000 UTC m=+0.065648255 container create bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:01:29 compute-0 sudo[371179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:29 compute-0 sudo[371179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:29 compute-0 sudo[371179]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.334747264 +0000 UTC m=+0.025006591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:29 compute-0 sudo[371211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:29 compute-0 sudo[371211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:29 compute-0 sudo[371211]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:29 compute-0 systemd[1]: Started libpod-conmon-bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404.scope.
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.501 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410074.4999106, e6e7e32c-195f-48e7-a85c-451d3c0e6df6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.501 2 INFO nova.compute.manager [-] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] VM Stopped (Lifecycle Event)
Oct 02 13:01:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.529 2 DEBUG nova.compute.manager [None req-d7c24d57-b854-46d7-8ef6-f2ab715802fe - - - - - -] [instance: e6e7e32c-195f-48e7-a85c-451d3c0e6df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.532 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.532 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.533 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:01:29 compute-0 nova_compute[256940]: 2025-10-02 13:01:29.533 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:01:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.652292385 +0000 UTC m=+0.342551742 container init bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.66250011 +0000 UTC m=+0.352759437 container start bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:29 compute-0 funny_bhaskara[371239]: 167 167
Oct 02 13:01:29 compute-0 systemd[1]: libpod-bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404.scope: Deactivated successfully.
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.71297574 +0000 UTC m=+0.403235097 container attach bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:01:29 compute-0 podman[371172]: 2025-10-02 13:01:29.713627017 +0000 UTC m=+0.403886354 container died bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:01:29 compute-0 ceph-mon[73668]: pgmap v2652: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 13:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e50bbb8f59d39bc0186c48c1860ece86b8b33af19e27aa21f9a3de6cb45bfe3-merged.mount: Deactivated successfully.
Oct 02 13:01:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 13:01:30 compute-0 podman[371172]: 2025-10-02 13:01:30.185044904 +0000 UTC m=+0.875304221 container remove bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:01:30 compute-0 systemd[1]: libpod-conmon-bcc4da0e7dbc0582267564a971cd15fc48a98d32cbfa52e7f95205df21797404.scope: Deactivated successfully.
Oct 02 13:01:30 compute-0 podman[371262]: 2025-10-02 13:01:30.343369224 +0000 UTC m=+0.026798077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:30 compute-0 podman[371262]: 2025-10-02 13:01:30.451586743 +0000 UTC m=+0.135015576 container create 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:30 compute-0 systemd[1]: Started libpod-conmon-0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1.scope.
Oct 02 13:01:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb81ea3314015d75af7260efd9709fc05a086e12e46a55adccd57672474ca53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb81ea3314015d75af7260efd9709fc05a086e12e46a55adccd57672474ca53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb81ea3314015d75af7260efd9709fc05a086e12e46a55adccd57672474ca53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb81ea3314015d75af7260efd9709fc05a086e12e46a55adccd57672474ca53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:30 compute-0 podman[371262]: 2025-10-02 13:01:30.587932952 +0000 UTC m=+0.271361875 container init 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:01:30 compute-0 podman[371262]: 2025-10-02 13:01:30.603539477 +0000 UTC m=+0.286968350 container start 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:01:30 compute-0 podman[371262]: 2025-10-02 13:01:30.653166695 +0000 UTC m=+0.336595568 container attach 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:01:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:30.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:31 compute-0 ceph-mon[73668]: pgmap v2653: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.408 2 DEBUG oslo_concurrency.lockutils [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.409 2 DEBUG oslo_concurrency.lockutils [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.451 2 INFO nova.compute.manager [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Detaching volume b91353e7-057c-43fc-be40-fbbc07072322
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]: {
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:     "1": [
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:         {
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "devices": [
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "/dev/loop3"
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             ],
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "lv_name": "ceph_lv0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "lv_size": "7511998464",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "name": "ceph_lv0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "tags": {
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.cluster_name": "ceph",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.crush_device_class": "",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.encrypted": "0",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.osd_id": "1",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.type": "block",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:                 "ceph.vdo": "0"
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             },
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "type": "block",
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:             "vg_name": "ceph_vg0"
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:         }
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]:     ]
Oct 02 13:01:31 compute-0 lucid_lichterman[371278]: }
Oct 02 13:01:31 compute-0 systemd[1]: libpod-0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1.scope: Deactivated successfully.
Oct 02 13:01:31 compute-0 podman[371262]: 2025-10-02 13:01:31.544504002 +0000 UTC m=+1.227932845 container died 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.784 2 INFO nova.virt.block_device [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attempting to driver detach volume b91353e7-057c-43fc-be40-fbbc07072322 from mountpoint /dev/vdb
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.795 2 DEBUG nova.virt.libvirt.driver [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Attempting to detach device vdb from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.796 2 DEBUG nova.virt.libvirt.guest [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b91353e7-057c-43fc-be40-fbbc07072322">
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <serial>b91353e7-057c-43fc-be40-fbbc07072322</serial>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:31 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.899 2 INFO nova.virt.libvirt.driver [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdb from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the persistent domain config.
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.900 2 DEBUG nova.virt.libvirt.driver [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:01:31 compute-0 nova_compute[256940]: 2025-10-02 13:01:31.901 2 DEBUG nova.virt.libvirt.guest [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b91353e7-057c-43fc-be40-fbbc07072322">
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <serial>b91353e7-057c-43fc-be40-fbbc07072322</serial>
Oct 02 13:01:31 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:01:31 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:31 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb81ea3314015d75af7260efd9709fc05a086e12e46a55adccd57672474ca53-merged.mount: Deactivated successfully.
Oct 02 13:01:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 483 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.387 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759410092.3868024, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.390 2 DEBUG nova.virt.libvirt.driver [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.392 2 INFO nova.virt.libvirt.driver [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdb from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the live domain config.
Oct 02 13:01:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2312900658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.815 2 DEBUG nova.objects.instance [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:32.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:32 compute-0 nova_compute[256940]: 2025-10-02 13:01:32.888 2 DEBUG oslo_concurrency.lockutils [None req-bb0419e9-21d9-4e99-9bff-8ae6c88fa7a6 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:33 compute-0 nova_compute[256940]: 2025-10-02 13:01:33.059 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating instance_info_cache with network_info: [{"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:33 compute-0 nova_compute[256940]: 2025-10-02 13:01:33.097 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:33 compute-0 nova_compute[256940]: 2025-10-02 13:01:33.098 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:01:33 compute-0 podman[371262]: 2025-10-02 13:01:33.146008152 +0000 UTC m=+2.829436985 container remove 0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:33 compute-0 systemd[1]: libpod-conmon-0b5953f96ba2155e22534e250164dd578f50abd493fb0da593c9e7308bd2d1b1.scope: Deactivated successfully.
Oct 02 13:01:33 compute-0 sudo[371104]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:33 compute-0 sudo[371307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:33 compute-0 sudo[371307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:33 compute-0 sudo[371307]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:33 compute-0 sudo[371332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:33 compute-0 sudo[371332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:33 compute-0 sudo[371332]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:33 compute-0 sudo[371357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:33 compute-0 sudo[371357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:33 compute-0 sudo[371357]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:33 compute-0 sudo[371382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:01:33 compute-0 sudo[371382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:33 compute-0 podman[371446]: 2025-10-02 13:01:33.791780175 +0000 UTC m=+0.030785710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:34 compute-0 ceph-mon[73668]: pgmap v2654: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 483 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 13:01:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/204003872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/524146024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.078406245 +0000 UTC m=+0.317411740 container create 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 02 13:01:34 compute-0 systemd[1]: Started libpod-conmon-35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac.scope.
Oct 02 13:01:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.238285605 +0000 UTC m=+0.477291080 container init 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.248904161 +0000 UTC m=+0.487909616 container start 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:34 compute-0 nice_pare[371463]: 167 167
Oct 02 13:01:34 compute-0 systemd[1]: libpod-35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac.scope: Deactivated successfully.
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.265739138 +0000 UTC m=+0.504744603 container attach 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.266886598 +0000 UTC m=+0.505892073 container died 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-94aaed36f65d5039c45e56310ebdb49bed75143c31719db8b735dc02dc8629ce-merged.mount: Deactivated successfully.
Oct 02 13:01:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:34.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:34 compute-0 nova_compute[256940]: 2025-10-02 13:01:34.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-0 podman[371446]: 2025-10-02 13:01:34.670534536 +0000 UTC m=+0.909540011 container remove 35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_pare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:01:34 compute-0 systemd[1]: libpod-conmon-35091a9d264057cef99abcc0db7fce26ae940c5079c6d089b373d3a75626bdac.scope: Deactivated successfully.
Oct 02 13:01:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:34.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:34 compute-0 podman[371487]: 2025-10-02 13:01:34.83327876 +0000 UTC m=+0.022674929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:01:34 compute-0 podman[371487]: 2025-10-02 13:01:34.960067442 +0000 UTC m=+0.149463581 container create ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:01:35 compute-0 systemd[1]: Started libpod-conmon-ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b.scope.
Oct 02 13:01:35 compute-0 ceph-mon[73668]: pgmap v2655: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 02 13:01:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ce2843b0d500557f3fef3777fb69e5d0e7fcd4ba187f896e0f11d53f39fdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ce2843b0d500557f3fef3777fb69e5d0e7fcd4ba187f896e0f11d53f39fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ce2843b0d500557f3fef3777fb69e5d0e7fcd4ba187f896e0f11d53f39fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ce2843b0d500557f3fef3777fb69e5d0e7fcd4ba187f896e0f11d53f39fdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.193 2 DEBUG oslo_concurrency.lockutils [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.194 2 DEBUG oslo_concurrency.lockutils [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.221 2 INFO nova.compute.manager [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Detaching volume 04f00d16-4d54-4085-a9eb-bc69d5b5e61c
Oct 02 13:01:35 compute-0 podman[371487]: 2025-10-02 13:01:35.24429907 +0000 UTC m=+0.433695219 container init ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:01:35 compute-0 podman[371487]: 2025-10-02 13:01:35.252959304 +0000 UTC m=+0.442355443 container start ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:35 compute-0 podman[371487]: 2025-10-02 13:01:35.276654859 +0000 UTC m=+0.466050998 container attach ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.443 2 INFO nova.virt.block_device [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Attempting to driver detach volume 04f00d16-4d54-4085-a9eb-bc69d5b5e61c from mountpoint /dev/vdc
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.456 2 DEBUG nova.virt.libvirt.driver [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Attempting to detach device vdc from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.457 2 DEBUG nova.virt.libvirt.guest [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-04f00d16-4d54-4085-a9eb-bc69d5b5e61c">
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <serial>04f00d16-4d54-4085-a9eb-bc69d5b5e61c</serial>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:35 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.519 2 INFO nova.virt.libvirt.driver [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdc from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the persistent domain config.
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.520 2 DEBUG nova.virt.libvirt.driver [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.520 2 DEBUG nova.virt.libvirt.guest [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-04f00d16-4d54-4085-a9eb-bc69d5b5e61c">
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   </source>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <target dev="vdc" bus="virtio"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <serial>04f00d16-4d54-4085-a9eb-bc69d5b5e61c</serial>
Oct 02 13:01:35 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 13:01:35 compute-0 nova_compute[256940]: </disk>
Oct 02 13:01:35 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.632 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759410095.6319802, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.636 2 DEBUG nova.virt.libvirt.driver [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.638 2 INFO nova.virt.libvirt.driver [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdc from instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 from the live domain config.
Oct 02 13:01:35 compute-0 nova_compute[256940]: 2025-10-02 13:01:35.993 2 DEBUG nova.objects.instance [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:36 compute-0 nova_compute[256940]: 2025-10-02 13:01:36.061 2 DEBUG oslo_concurrency.lockutils [None req-aec10d78-7c98-4ef1-b9b0-07db43e5f04c 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 58 op/s
Oct 02 13:01:36 compute-0 competent_dirac[371504]: {
Oct 02 13:01:36 compute-0 competent_dirac[371504]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:01:36 compute-0 competent_dirac[371504]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:01:36 compute-0 competent_dirac[371504]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:01:36 compute-0 competent_dirac[371504]:         "osd_id": 1,
Oct 02 13:01:36 compute-0 competent_dirac[371504]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:01:36 compute-0 competent_dirac[371504]:         "type": "bluestore"
Oct 02 13:01:36 compute-0 competent_dirac[371504]:     }
Oct 02 13:01:36 compute-0 competent_dirac[371504]: }
Oct 02 13:01:36 compute-0 podman[371487]: 2025-10-02 13:01:36.307748073 +0000 UTC m=+1.497144222 container died ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:01:36 compute-0 systemd[1]: libpod-ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b.scope: Deactivated successfully.
Oct 02 13:01:36 compute-0 systemd[1]: libpod-ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b.scope: Consumed 1.033s CPU time.
Oct 02 13:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-091ce2843b0d500557f3fef3777fb69e5d0e7fcd4ba187f896e0f11d53f39fdf-merged.mount: Deactivated successfully.
Oct 02 13:01:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:36.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:36 compute-0 podman[371487]: 2025-10-02 13:01:36.855022737 +0000 UTC m=+2.044418877 container remove ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dirac, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:01:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:36 compute-0 sudo[371382]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:36 compute-0 systemd[1]: libpod-conmon-ae4ee657f853d0f8da839878a6901576b5ba6e084cc57142eac10d3ea6fe683b.scope: Deactivated successfully.
Oct 02 13:01:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:01:36 compute-0 podman[371539]: 2025-10-02 13:01:36.94141354 +0000 UTC m=+0.405329720 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:01:36 compute-0 podman[371540]: 2025-10-02 13:01:36.943020142 +0000 UTC m=+0.407269291 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:01:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:01:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 034a3b0c-d632-49ac-a282-856898d77c63 does not exist
Oct 02 13:01:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 07a0534a-689a-47cf-a2fb-18d09373d50e does not exist
Oct 02 13:01:37 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ec021ad6-cd41-4f01-9ec9-d1d45384d62d does not exist
Oct 02 13:01:37 compute-0 sudo[371582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:37 compute-0 sudo[371582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:37 compute-0 sudo[371582]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:37 compute-0 sudo[371607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:01:37 compute-0 sudo[371607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:37 compute-0 sudo[371607]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.582 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.583 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.583 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.584 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.584 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.585 2 INFO nova.compute.manager [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Terminating instance
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.586 2 DEBUG nova.compute.manager [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:37 compute-0 ceph-mon[73668]: pgmap v2656: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 58 op/s
Oct 02 13:01:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:37 compute-0 kernel: tap8823918b-f6 (unregistering): left promiscuous mode
Oct 02 13:01:37 compute-0 NetworkManager[44981]: <info>  [1759410097.8783] device (tap8823918b-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:37 compute-0 ovn_controller[148123]: 2025-10-02T13:01:37Z|00882|binding|INFO|Releasing lport 8823918b-f620-4ff5-8094-450aa259ea57 from this chassis (sb_readonly=0)
Oct 02 13:01:37 compute-0 ovn_controller[148123]: 2025-10-02T13:01:37Z|00883|binding|INFO|Setting lport 8823918b-f620-4ff5-8094-450aa259ea57 down in Southbound
Oct 02 13:01:37 compute-0 ovn_controller[148123]: 2025-10-02T13:01:37Z|00884|binding|INFO|Removing iface tap8823918b-f6 ovn-installed in OVS
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:37.898 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:21:a4 10.100.0.7'], port_security=['fa:16:3e:d6:21:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e9550bd-e948-4226-9434-30da0c95df24', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8823918b-f620-4ff5-8094-450aa259ea57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:37.900 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8823918b-f620-4ff5-8094-450aa259ea57 in datapath cf287730-8b39-470a-9870-d19a70f15c4d unbound from our chassis
Oct 02 13:01:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:37.901 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf287730-8b39-470a-9870-d19a70f15c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:01:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:37.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[379beb45-e34a-4851-9023-4a92e82b22a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:37 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:37.903 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace which is not needed anymore
Oct 02 13:01:37 compute-0 nova_compute[256940]: 2025-10-02 13:01:37.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:37 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Oct 02 13:01:37 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a8.scope: Consumed 15.543s CPU time.
Oct 02 13:01:37 compute-0 systemd-machined[210927]: Machine qemu-89-instance-000000a8 terminated.
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.032 2 INFO nova.virt.libvirt.driver [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Instance destroyed successfully.
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.034 2 DEBUG nova.objects.instance [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'resources' on Instance uuid c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.057 2 DEBUG nova.virt.libvirt.vif [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-52682838',display_name='tempest-AttachVolumeTestJSON-server-52682838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-52682838',id=168,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxKjxbS2/n41eLxczAKqgqOwwMAWCD3i6muDntco5MFydZozeWGZc01YubrWmKvFS4WKfAPmxH3srKwd8xu8RpTq/RPKAQYAlKPc2QD1dPsvDBLB3WmutsaEtwSC/UDSg==',key_name='tempest-keypair-1484800686',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-vbz8stz9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.058 2 DEBUG nova.network.os_vif_util [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "8823918b-f620-4ff5-8094-450aa259ea57", "address": "fa:16:3e:d6:21:a4", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8823918b-f6", "ovs_interfaceid": "8823918b-f620-4ff5-8094-450aa259ea57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.059 2 DEBUG nova.network.os_vif_util [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.059 2 DEBUG os_vif [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8823918b-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [NOTICE]   (370207) : haproxy version is 2.8.14-c23fe91
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [NOTICE]   (370207) : path to executable is /usr/sbin/haproxy
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [WARNING]  (370207) : Exiting Master process...
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [WARNING]  (370207) : Exiting Master process...
Oct 02 13:01:38 compute-0 nova_compute[256940]: 2025-10-02 13:01:38.069 2 INFO os_vif [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:21:a4,bridge_name='br-int',has_traffic_filtering=True,id=8823918b-f620-4ff5-8094-450aa259ea57,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8823918b-f6')
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [ALERT]    (370207) : Current worker (370209) exited with code 143 (Terminated)
Oct 02 13:01:38 compute-0 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[370203]: [WARNING]  (370207) : All workers exited. Exiting... (0)
Oct 02 13:01:38 compute-0 systemd[1]: libpod-56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b.scope: Deactivated successfully.
Oct 02 13:01:38 compute-0 podman[371659]: 2025-10-02 13:01:38.078915386 +0000 UTC m=+0.064848994 container died 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:01:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Oct 02 13:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b-userdata-shm.mount: Deactivated successfully.
Oct 02 13:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-184aef770af41ce9fc38804e81ef53b59abcf54eaaefd603fdaa27c54e310951-merged.mount: Deactivated successfully.
Oct 02 13:01:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:38.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:38 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:01:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:38.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:38 compute-0 podman[371659]: 2025-10-02 13:01:38.907859034 +0000 UTC m=+0.893792672 container cleanup 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:01:38 compute-0 systemd[1]: libpod-conmon-56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b.scope: Deactivated successfully.
Oct 02 13:01:39 compute-0 ceph-mon[73668]: pgmap v2657: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.239 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.376 2 DEBUG nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-unplugged-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.378 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.378 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.379 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.380 2 DEBUG nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] No waiting events found dispatching network-vif-unplugged-8823918b-f620-4ff5-8094-450aa259ea57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.381 2 DEBUG nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-unplugged-8823918b-f620-4ff5-8094-450aa259ea57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.381 2 DEBUG nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.381 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.382 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.382 2 DEBUG oslo_concurrency.lockutils [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.383 2 DEBUG nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] No waiting events found dispatching network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.383 2 WARNING nova.compute.manager [req-9b84d55c-d647-4f55-a991-9be022b18d27 req-088e4e22-1cde-47ab-a4e0-cad54510b3c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received unexpected event network-vif-plugged-8823918b-f620-4ff5-8094-450aa259ea57 for instance with vm_state active and task_state deleting.
Oct 02 13:01:39 compute-0 podman[371718]: 2025-10-02 13:01:39.845096822 +0000 UTC m=+0.911695096 container remove 56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.852 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d100f5cf-25ac-45a5-94fd-138516c69bcb]: (4, ('Thu Oct  2 01:01:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b)\n56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b\nThu Oct  2 01:01:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b)\n56e28f389a2182da49cb26a1ffb1189d60ce257ec2e8ab034c77e3ecbe16797b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.854 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff642dae-95f3-4216-97f0-7d7679d38112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.855 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:39 compute-0 kernel: tapcf287730-80: left promiscuous mode
Oct 02 13:01:39 compute-0 nova_compute[256940]: 2025-10-02 13:01:39.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.875 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9b525280-df83-47f8-98a4-67cb136b4e52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.909 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1c9502-1749-472b-ae8f-b9c8ccbef245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.911 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f11c66f-838c-4647-bb0d-118a8761598d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.934 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9656bd-6e68-48ac-b789-8b608e9086d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794632, 'reachable_time': 34699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371736, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:39 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf287730\x2d8b39\x2d470a\x2d9870\x2dd19a70f15c4d.mount: Deactivated successfully.
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.939 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:01:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:39.939 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b7393c0a-f4f7-408d-88d4-7f0d7336281c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 13:01:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2497424559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3600982715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:40.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005321020612320863 of space, bias 1.0, pg target 1.596306183696259 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.646643409466313 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:01:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.726 2 INFO nova.virt.libvirt.driver [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Deleting instance files /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_del
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.726 2 INFO nova.virt.libvirt.driver [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Deletion of /var/lib/nova/instances/c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44_del complete
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.806 2 INFO nova.compute.manager [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Took 3.22 seconds to destroy the instance on the hypervisor.
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.807 2 DEBUG oslo.service.loopingcall [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.807 2 DEBUG nova.compute.manager [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:01:40 compute-0 nova_compute[256940]: 2025-10-02 13:01:40.807 2 DEBUG nova.network.neutron [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:01:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:40.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:41 compute-0 ceph-mon[73668]: pgmap v2658: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 13:01:41 compute-0 nova_compute[256940]: 2025-10-02 13:01:41.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 13:01:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:42.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.618 2 DEBUG nova.network.neutron [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.671 2 INFO nova.compute.manager [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Took 1.86 seconds to deallocate network for instance.
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.723 2 DEBUG nova.compute.manager [req-5ba2ae10-44a8-4b34-b712-8d2f81cc7262 req-974c041a-3e96-49cc-9efb-49948af58227 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Received event network-vif-deleted-8823918b-f620-4ff5-8094-450aa259ea57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.798 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.799 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:42.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:42 compute-0 nova_compute[256940]: 2025-10-02 13:01:42.890 2 DEBUG oslo_concurrency.processutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722812126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.350 2 DEBUG oslo_concurrency.processutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.357 2 DEBUG nova.compute.provider_tree [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.380 2 DEBUG nova.scheduler.client.report [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.412 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.446 2 INFO nova.scheduler.client.report [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Deleted allocations for instance c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44
Oct 02 13:01:43 compute-0 nova_compute[256940]: 2025-10-02 13:01:43.578 2 DEBUG oslo_concurrency.lockutils [None req-62252d51-2152-4b20-b7f0-6786d7f8a5bd 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:43 compute-0 ceph-mon[73668]: pgmap v2659: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 13:01:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1722812126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 13:01:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:44.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:44.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:45 compute-0 ceph-mon[73668]: pgmap v2660: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 13:01:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 13:01:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:46.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:46.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.194 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.194 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.215 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.323 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.324 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.335 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.336 2 INFO nova.compute.claims [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:01:47 compute-0 ceph-mon[73668]: pgmap v2661: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.479 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2515102607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:47 compute-0 nova_compute[256940]: 2025-10-02 13:01:47.994 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.001 2 DEBUG nova.compute.provider_tree [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.024 2 DEBUG nova.scheduler.client.report [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.055 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.056 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.125 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.126 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:01:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 684 KiB/s wr, 126 op/s
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.156 2 INFO nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.200 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.333 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.335 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.336 2 INFO nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Creating image(s)
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.364 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.397 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.426 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.430 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.497 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.498 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.499 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.499 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.523 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.527 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:48.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:48 compute-0 nova_compute[256940]: 2025-10-02 13:01:48.677 2 DEBUG nova.policy [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:01:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2515102607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:48.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:49 compute-0 sudo[371880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:49 compute-0 nova_compute[256940]: 2025-10-02 13:01:49.545 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:49 compute-0 sudo[371880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:49 compute-0 sudo[371880]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 sudo[371938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:49 compute-0 sudo[371938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:49 compute-0 sudo[371938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:49 compute-0 podman[371904]: 2025-10-02 13:01:49.644824999 +0000 UTC m=+0.083542050 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:01:49 compute-0 podman[371905]: 2025-10-02 13:01:49.645524977 +0000 UTC m=+0.085057939 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:01:49 compute-0 nova_compute[256940]: 2025-10-02 13:01:49.653 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:01:49 compute-0 ceph-mon[73668]: pgmap v2662: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 684 KiB/s wr, 126 op/s
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.080 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Successfully created port: 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:01:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 129 op/s
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.212 2 DEBUG nova.objects.instance [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.311 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.312 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Ensure instance console log exists: /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.313 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.313 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:50 compute-0 nova_compute[256940]: 2025-10-02 13:01:50.313 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:50.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:50.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:51 compute-0 ceph-mon[73668]: pgmap v2663: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 129 op/s
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.600 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Successfully updated port: 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.625 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.626 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.626 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.714 2 DEBUG nova.compute.manager [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.714 2 DEBUG nova.compute.manager [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing instance network info cache due to event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:51 compute-0 nova_compute[256940]: 2025-10-02 13:01:51.714 2 DEBUG oslo_concurrency.lockutils [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:52 compute-0 nova_compute[256940]: 2025-10-02 13:01:52.112 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:01:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Oct 02 13:01:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:52 compute-0 nova_compute[256940]: 2025-10-02 13:01:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:52 compute-0 nova_compute[256940]: 2025-10-02 13:01:52.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:01:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:52.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:52 compute-0 nova_compute[256940]: 2025-10-02 13:01:52.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.026 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410098.0255404, c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.027 2 INFO nova.compute.manager [-] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] VM Stopped (Lifecycle Event)
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.051 2 DEBUG nova.compute.manager [None req-64a8fabd-3929-4baa-9f9f-9572267bc49c - - - - - -] [instance: c7bfcc3e-6b4f-469a-b8cb-4084dcffdf44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:01:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:01:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.348 2 DEBUG nova.network.neutron [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:53 compute-0 ceph-mon[73668]: pgmap v2664: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Oct 02 13:01:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.386 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.386 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance network_info: |[{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.387 2 DEBUG oslo_concurrency.lockutils [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.387 2 DEBUG nova.network.neutron [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.389 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start _get_guest_xml network_info=[{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.395 2 WARNING nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.404 2 DEBUG nova.virt.libvirt.host [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.404 2 DEBUG nova.virt.libvirt.host [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.414 2 DEBUG nova.virt.libvirt.host [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.414 2 DEBUG nova.virt.libvirt.host [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.415 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.416 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.416 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.416 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.416 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.417 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.417 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.417 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.417 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.417 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.418 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.418 2 DEBUG nova.virt.hardware [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.421 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270528742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.913 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.945 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:53 compute-0 nova_compute[256940]: 2025-10-02 13:01:53.951 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct 02 13:01:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595355531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.502 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.504 2 DEBUG nova.virt.libvirt.vif [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:48Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.504 2 DEBUG nova.network.os_vif_util [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.505 2 DEBUG nova.network.os_vif_util [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.507 2 DEBUG nova.objects.instance [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.539 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <uuid>b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</uuid>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <name>instance-000000ad</name>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1345789876</nova:name>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:01:53</nova:creationTime>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <nova:port uuid="43e0d421-2dcd-4a63-a1ab-dc9711a2b840">
Oct 02 13:01:54 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <system>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="serial">b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="uuid">b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </system>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <os>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </os>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <features>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </features>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk">
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </source>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config">
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </source>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:01:54 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:4a:f7:63"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <target dev="tap43e0d421-2d"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/console.log" append="off"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <video>
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </video>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:01:54 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:01:54 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:01:54 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:01:54 compute-0 nova_compute[256940]: </domain>
Oct 02 13:01:54 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.539 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Preparing to wait for external event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.540 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.540 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.540 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.541 2 DEBUG nova.virt.libvirt.vif [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:48Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.542 2 DEBUG nova.network.os_vif_util [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.542 2 DEBUG nova.network.os_vif_util [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.543 2 DEBUG os_vif [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43e0d421-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43e0d421-2d, col_values=(('external_ids', {'iface-id': '43e0d421-2dcd-4a63-a1ab-dc9711a2b840', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:f7:63', 'vm-uuid': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:54 compute-0 NetworkManager[44981]: <info>  [1759410114.5520] manager: (tap43e0d421-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.561 2 INFO os_vif [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d')
Oct 02 13:01:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:54.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1270528742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3628328142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.680 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.680 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.681 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:4a:f7:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.681 2 INFO nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Using config drive
Oct 02 13:01:54 compute-0 nova_compute[256940]: 2025-10-02 13:01:54.769 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:54.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:55 compute-0 nova_compute[256940]: 2025-10-02 13:01:55.696 2 DEBUG nova.network.neutron [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updated VIF entry in instance network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:01:55 compute-0 nova_compute[256940]: 2025-10-02 13:01:55.697 2 DEBUG nova.network.neutron [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:55 compute-0 nova_compute[256940]: 2025-10-02 13:01:55.717 2 DEBUG oslo_concurrency.lockutils [req-c0364a28-5d4a-4985-8d96-a4a2536f83e0 req-ca03e45e-fd11-4851-ada6-d80eca4211e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:55 compute-0 ceph-mon[73668]: pgmap v2665: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct 02 13:01:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1595355531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:55 compute-0 nova_compute[256940]: 2025-10-02 13:01:55.934 2 INFO nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Creating config drive at /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config
Oct 02 13:01:55 compute-0 nova_compute[256940]: 2025-10-02 13:01:55.944 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_b260em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.089 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_b260em" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.126 2 DEBUG nova.storage.rbd_utils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.131 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.888 2 DEBUG oslo_concurrency.processutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.889 2 INFO nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Deleting local config drive /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/disk.config because it was imported into RBD.
Oct 02 13:01:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:56.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:56 compute-0 kernel: tap43e0d421-2d: entered promiscuous mode
Oct 02 13:01:56 compute-0 ovn_controller[148123]: 2025-10-02T13:01:56Z|00885|binding|INFO|Claiming lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for this chassis.
Oct 02 13:01:56 compute-0 ovn_controller[148123]: 2025-10-02T13:01:56Z|00886|binding|INFO|43e0d421-2dcd-4a63-a1ab-dc9711a2b840: Claiming fa:16:3e:4a:f7:63 10.100.0.8
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:56 compute-0 NetworkManager[44981]: <info>  [1759410116.9456] manager: (tap43e0d421-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.949 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:f7:63 10.100.0.8'], port_security=['fa:16:3e:4a:f7:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c2d0411-240e-42d5-b104-fafcf4d7fcf7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17b4abc9-fef6-4b35-b16d-0b00cb9b9f26, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=43e0d421-2dcd-4a63-a1ab-dc9711a2b840) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.950 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 in datapath e00eecd6-70d4-4b18-95b4-609dbcd626b1 bound to our chassis
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.951 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e00eecd6-70d4-4b18-95b4-609dbcd626b1
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.964 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0b374414-943d-47c6-8bbd-ca46da00079e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.965 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape00eecd6-71 in ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:01:56 compute-0 ovn_controller[148123]: 2025-10-02T13:01:56Z|00887|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 ovn-installed in OVS
Oct 02 13:01:56 compute-0 ovn_controller[148123]: 2025-10-02T13:01:56Z|00888|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 up in Southbound
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.966 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape00eecd6-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.966 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78e58d19-760d-4d41-9e98-5d60b73965e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.968 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d53ec39f-6c1b-4e47-be46-cbfac49a4cd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:56 compute-0 systemd-udevd[372185]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:01:56 compute-0 nova_compute[256940]: 2025-10-02 13:01:56.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:56.979 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f191bf-88e6-4e36-a512-bfbd2e214a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:56 compute-0 systemd-machined[210927]: New machine qemu-91-instance-000000ad.
Oct 02 13:01:56 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-000000ad.
Oct 02 13:01:56 compute-0 NetworkManager[44981]: <info>  [1759410116.9927] device (tap43e0d421-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:01:56 compute-0 NetworkManager[44981]: <info>  [1759410116.9946] device (tap43e0d421-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.006 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83294522-4d48-46eb-bb25-7f0a4b38bc79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.033 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[db740cf4-ea23-4e1f-acd3-afa3d7461996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 NetworkManager[44981]: <info>  [1759410117.0397] manager: (tape00eecd6-70): new Veth device (/org/freedesktop/NetworkManager/Devices/385)
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.039 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[59c43692-2bd5-45a4-9720-d41fd4ba7815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.071 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[99996192-4b0d-4d27-b594-a03a3c299dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.074 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[30a278b8-ce3a-4479-b153-172af8af2386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 NetworkManager[44981]: <info>  [1759410117.0983] device (tape00eecd6-70): carrier: link connected
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.105 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a8485b35-38de-459a-a0c8-328886cacd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.124 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aa545f17-775f-40cf-b9bb-8d6e4317b4ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape00eecd6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:bc:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801258, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372218, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.140 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[330f7b14-7ee4-470a-ac13-cd52d47ce61c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:bc13'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801258, 'tstamp': 801258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372219, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.155 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0f545f-1198-492b-a503-c45dda8ade36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape00eecd6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:bc:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801258, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372220, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.189 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8667581d-a43a-4cef-a762-200ab523055c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.248 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[49e76e15-5626-46cf-8185-3b3d3bd3ef61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.249 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape00eecd6-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.249 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.250 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape00eecd6-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 NetworkManager[44981]: <info>  [1759410117.2527] manager: (tape00eecd6-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Oct 02 13:01:57 compute-0 kernel: tape00eecd6-70: entered promiscuous mode
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.255 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape00eecd6-70, col_values=(('external_ids', {'iface-id': 'b3c16797-3484-4c54-823f-36b3f66b294c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 ovn_controller[148123]: 2025-10-02T13:01:57Z|00889|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.280 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.281 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[076a8cff-8bb2-4750-94ab-82ecd5a3970d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.282 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-e00eecd6-70d4-4b18-95b4-609dbcd626b1
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID e00eecd6-70d4-4b18-95b4-609dbcd626b1
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:01:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:01:57.283 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'env', 'PROCESS_TAG=haproxy-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e00eecd6-70d4-4b18-95b4-609dbcd626b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.451 2 DEBUG nova.compute.manager [req-7c4478c6-0029-43d7-aad3-5bc22e031e5c req-1d1e1859-8f11-45dd-b96b-6ae7e404fd98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.452 2 DEBUG oslo_concurrency.lockutils [req-7c4478c6-0029-43d7-aad3-5bc22e031e5c req-1d1e1859-8f11-45dd-b96b-6ae7e404fd98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.452 2 DEBUG oslo_concurrency.lockutils [req-7c4478c6-0029-43d7-aad3-5bc22e031e5c req-1d1e1859-8f11-45dd-b96b-6ae7e404fd98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.452 2 DEBUG oslo_concurrency.lockutils [req-7c4478c6-0029-43d7-aad3-5bc22e031e5c req-1d1e1859-8f11-45dd-b96b-6ae7e404fd98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.452 2 DEBUG nova.compute.manager [req-7c4478c6-0029-43d7-aad3-5bc22e031e5c req-1d1e1859-8f11-45dd-b96b-6ae7e404fd98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Processing event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:01:57 compute-0 nova_compute[256940]: 2025-10-02 13:01:57.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:57 compute-0 podman[372289]: 2025-10-02 13:01:57.648499024 +0000 UTC m=+0.027876395 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:01:58 compute-0 ceph-mon[73668]: pgmap v2666: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 02 13:01:58 compute-0 podman[372289]: 2025-10-02 13:01:58.069753918 +0000 UTC m=+0.449131269 container create 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.113 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410118.1126196, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.113 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Started (Lifecycle Event)
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.116 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.119 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.122 2 INFO nova.virt.libvirt.driver [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance spawned successfully.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.122 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 211 op/s
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.163 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.164 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.165 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.165 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.165 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.166 2 DEBUG nova.virt.libvirt.driver [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.173 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.176 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.229 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.230 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410118.1135025, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.230 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Paused (Lifecycle Event)
Oct 02 13:01:58 compute-0 systemd[1]: Started libpod-conmon-50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b.scope.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.256 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.260 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410118.1182466, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.261 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Resumed (Lifecycle Event)
Oct 02 13:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d6db02c1f8e916411259466eb6a7e0c7a64aa5c77fc324957ee7d2a0b5d50a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.266 2 INFO nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Took 9.93 seconds to spawn the instance on the hypervisor.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.267 2 DEBUG nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.278 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.282 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.311 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.327 2 INFO nova.compute.manager [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Took 11.03 seconds to build instance.
Oct 02 13:01:58 compute-0 nova_compute[256940]: 2025-10-02 13:01:58.354 2 DEBUG oslo_concurrency.lockutils [None req-c209b0c6-85f0-4fe7-a653-02212a2e5014 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:58 compute-0 podman[372289]: 2025-10-02 13:01:58.378531943 +0000 UTC m=+0.757909314 container init 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:01:58 compute-0 podman[372289]: 2025-10-02 13:01:58.385056912 +0000 UTC m=+0.764434263 container start 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:01:58 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [NOTICE]   (372313) : New worker (372315) forked
Oct 02 13:01:58 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [NOTICE]   (372313) : Loading success.
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:01:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:01:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:58.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:01:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:01:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:58.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:01:59 compute-0 ceph-mon[73668]: pgmap v2667: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 211 op/s
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.608 2 DEBUG nova.compute.manager [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.608 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.609 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.609 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.609 2 DEBUG nova.compute.manager [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:59 compute-0 nova_compute[256940]: 2025-10-02 13:01:59.609 2 WARNING nova.compute.manager [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state None.
Oct 02 13:02:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 260 op/s
Oct 02 13:02:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:00.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:00 compute-0 ovn_controller[148123]: 2025-10-02T13:02:00Z|00890|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct 02 13:02:00 compute-0 nova_compute[256940]: 2025-10-02 13:02:00.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:00.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:01 compute-0 ovn_controller[148123]: 2025-10-02T13:02:01Z|00891|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct 02 13:02:01 compute-0 nova_compute[256940]: 2025-10-02 13:02:01.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-0 ceph-mon[73668]: pgmap v2668: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 260 op/s
Oct 02 13:02:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4099713465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3896671779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 283 op/s
Oct 02 13:02:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:02 compute-0 nova_compute[256940]: 2025-10-02 13:02:02.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:03 compute-0 ceph-mon[73668]: pgmap v2669: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 283 op/s
Oct 02 13:02:03 compute-0 NetworkManager[44981]: <info>  [1759410123.9387] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct 02 13:02:03 compute-0 NetworkManager[44981]: <info>  [1759410123.9402] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct 02 13:02:03 compute-0 nova_compute[256940]: 2025-10-02 13:02:03.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:03.946 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:03.947 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:02:04 compute-0 nova_compute[256940]: 2025-10-02 13:02:04.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-0 ovn_controller[148123]: 2025-10-02T13:02:04Z|00892|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct 02 13:02:04 compute-0 nova_compute[256940]: 2025-10-02 13:02:04.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-0 nova_compute[256940]: 2025-10-02 13:02:04.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 13:02:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:04.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:04 compute-0 nova_compute[256940]: 2025-10-02 13:02:04.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:04.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:05 compute-0 ceph-mon[73668]: pgmap v2670: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 13:02:05 compute-0 nova_compute[256940]: 2025-10-02 13:02:05.526 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:05 compute-0 nova_compute[256940]: 2025-10-02 13:02:05.526 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing instance network info cache due to event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:05 compute-0 nova_compute[256940]: 2025-10-02 13:02:05.527 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:05 compute-0 nova_compute[256940]: 2025-10-02 13:02:05.527 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:05 compute-0 nova_compute[256940]: 2025-10-02 13:02:05.527 2 DEBUG nova.network.neutron [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 448 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Oct 02 13:02:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:02:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:02:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:06.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:06.949 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:07 compute-0 podman[372331]: 2025-10-02 13:02:07.402217886 +0000 UTC m=+0.067653227 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:02:07 compute-0 podman[372332]: 2025-10-02 13:02:07.406935808 +0000 UTC m=+0.070185713 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:02:07 compute-0 nova_compute[256940]: 2025-10-02 13:02:07.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:07 compute-0 ceph-mon[73668]: pgmap v2671: 305 pgs: 305 active+clean; 448 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Oct 02 13:02:08 compute-0 nova_compute[256940]: 2025-10-02 13:02:08.041 2 DEBUG nova.network.neutron [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updated VIF entry in instance network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:08 compute-0 nova_compute[256940]: 2025-10-02 13:02:08.042 2 DEBUG nova.network.neutron [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:08 compute-0 nova_compute[256940]: 2025-10-02 13:02:08.099 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 428 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Oct 02 13:02:08 compute-0 nova_compute[256940]: 2025-10-02 13:02:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:08.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:08.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:08 compute-0 ceph-mon[73668]: pgmap v2672: 305 pgs: 305 active+clean; 428 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Oct 02 13:02:09 compute-0 nova_compute[256940]: 2025-10-02 13:02:09.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:09 compute-0 sudo[372374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:09 compute-0 sudo[372374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:09 compute-0 sudo[372374]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:09 compute-0 sudo[372399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:09 compute-0 sudo[372399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:09 compute-0 sudo[372399]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 261 op/s
Oct 02 13:02:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1561182828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:10.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:10.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:11 compute-0 ceph-mon[73668]: pgmap v2673: 305 pgs: 305 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 261 op/s
Oct 02 13:02:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3695471513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:11 compute-0 nova_compute[256940]: 2025-10-02 13:02:11.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:11 compute-0 ovn_controller[148123]: 2025-10-02T13:02:11Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:f7:63 10.100.0.8
Oct 02 13:02:11 compute-0 ovn_controller[148123]: 2025-10-02T13:02:11Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:f7:63 10.100.0.8
Oct 02 13:02:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Oct 02 13:02:12 compute-0 ovn_controller[148123]: 2025-10-02T13:02:12Z|00893|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct 02 13:02:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:12.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:12 compute-0 nova_compute[256940]: 2025-10-02 13:02:12.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:12 compute-0 nova_compute[256940]: 2025-10-02 13:02:12.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:12.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:13 compute-0 ceph-mon[73668]: pgmap v2674: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Oct 02 13:02:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 180 op/s
Oct 02 13:02:14 compute-0 nova_compute[256940]: 2025-10-02 13:02:14.226 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:14.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:14 compute-0 nova_compute[256940]: 2025-10-02 13:02:14.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/513885224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:14.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:15 compute-0 ceph-mon[73668]: pgmap v2675: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 180 op/s
Oct 02 13:02:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3848085517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Oct 02 13:02:16 compute-0 nova_compute[256940]: 2025-10-02 13:02:16.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:16.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:16.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:17 compute-0 ceph-mon[73668]: pgmap v2676: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Oct 02 13:02:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1321131216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:17 compute-0 nova_compute[256940]: 2025-10-02 13:02:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:17 compute-0 nova_compute[256940]: 2025-10-02 13:02:17.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Oct 02 13:02:18 compute-0 nova_compute[256940]: 2025-10-02 13:02:18.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:18 compute-0 nova_compute[256940]: 2025-10-02 13:02:18.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:02:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3731738114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:18 compute-0 nova_compute[256940]: 2025-10-02 13:02:18.539 2 INFO nova.compute.manager [None req-5c7fba6d-b8d1-45cc-ab20-20b78dc7c4b2 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Get console output
Oct 02 13:02:18 compute-0 nova_compute[256940]: 2025-10-02 13:02:18.544 21118 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:02:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:02:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:02:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:18.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:19 compute-0 ceph-mon[73668]: pgmap v2677: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Oct 02 13:02:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/601393838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:19 compute-0 nova_compute[256940]: 2025-10-02 13:02:19.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.2 MiB/s wr, 129 op/s
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.242 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.243 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.243 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:20 compute-0 podman[372431]: 2025-10-02 13:02:20.379259707 +0000 UTC m=+0.051617931 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:02:20 compute-0 podman[372432]: 2025-10-02 13:02:20.410224331 +0000 UTC m=+0.079864144 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:02:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:20.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537887399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.700 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.803 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.803 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:20.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.945 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.946 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4059MB free_disk=20.897262573242188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.946 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:20 compute-0 nova_compute[256940]: 2025-10-02 13:02:20.946 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.004 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating resource usage from migration 67833b05-bd17-48b1-974a-39a83762f931
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.064 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration 67833b05-bd17-48b1-974a-39a83762f931 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.065 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.065 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.101 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053345908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.547 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.552 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.570 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:21 compute-0 ceph-mon[73668]: pgmap v2678: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.2 MiB/s wr, 129 op/s
Oct 02 13:02:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/537887399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.599 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:02:21 compute-0 nova_compute[256940]: 2025-10-02 13:02:21.600 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.132 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.132 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.132 2 DEBUG nova.network.neutron [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:02:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.600 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.601 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:22.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2053345908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2643813130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:22 compute-0 nova_compute[256940]: 2025-10-02 13:02:22.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:22.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:23 compute-0 nova_compute[256940]: 2025-10-02 13:02:23.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:23 compute-0 ceph-mon[73668]: pgmap v2679: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Oct 02 13:02:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2704903679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.2 MiB/s wr, 71 op/s
Oct 02 13:02:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:24 compute-0 nova_compute[256940]: 2025-10-02 13:02:24.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:24 compute-0 nova_compute[256940]: 2025-10-02 13:02:24.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:24.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:25 compute-0 ceph-mon[73668]: pgmap v2680: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.2 MiB/s wr, 71 op/s
Oct 02 13:02:25 compute-0 nova_compute[256940]: 2025-10-02 13:02:25.777 2 DEBUG nova.network.neutron [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:25 compute-0 nova_compute[256940]: 2025-10-02 13:02:25.792 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:25 compute-0 nova_compute[256940]: 2025-10-02 13:02:25.920 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 13:02:25 compute-0 nova_compute[256940]: 2025-10-02 13:02:25.921 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Creating file /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/85316394b37d47089aeb3ea0dce5f8f9.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 13:02:25 compute-0 nova_compute[256940]: 2025-10-02 13:02:25.921 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/85316394b37d47089aeb3ea0dce5f8f9.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 1.9 MiB/s wr, 86 op/s
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.419 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/85316394b37d47089aeb3ea0dce5f8f9.tmp" returned: 1 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.420 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/85316394b37d47089aeb3ea0dce5f8f9.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.421 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Creating directory /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.421 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:26.498 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:26.499 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.655 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:26 compute-0 nova_compute[256940]: 2025-10-02 13:02:26.660 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:02:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:27 compute-0 ceph-mon[73668]: pgmap v2681: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 1.9 MiB/s wr, 86 op/s
Oct 02 13:02:27 compute-0 nova_compute[256940]: 2025-10-02 13:02:27.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 13:02:28 compute-0 nova_compute[256940]: 2025-10-02 13:02:28.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:02:28
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups']
Oct 02 13:02:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:02:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:29 compute-0 ceph-mon[73668]: pgmap v2682: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:02:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.679 2 INFO nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance shutdown successfully after 3 seconds.
Oct 02 13:02:29 compute-0 kernel: tap43e0d421-2d (unregistering): left promiscuous mode
Oct 02 13:02:29 compute-0 NetworkManager[44981]: <info>  [1759410149.7320] device (tap43e0d421-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 ovn_controller[148123]: 2025-10-02T13:02:29Z|00894|binding|INFO|Releasing lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 from this chassis (sb_readonly=0)
Oct 02 13:02:29 compute-0 ovn_controller[148123]: 2025-10-02T13:02:29Z|00895|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 down in Southbound
Oct 02 13:02:29 compute-0 ovn_controller[148123]: 2025-10-02T13:02:29Z|00896|binding|INFO|Removing iface tap43e0d421-2d ovn-installed in OVS
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:29.778 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:f7:63 10.100.0.8'], port_security=['fa:16:3e:4a:f7:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c2d0411-240e-42d5-b104-fafcf4d7fcf7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17b4abc9-fef6-4b35-b16d-0b00cb9b9f26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=43e0d421-2dcd-4a63-a1ab-dc9711a2b840) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:29.779 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 in datapath e00eecd6-70d4-4b18-95b4-609dbcd626b1 unbound from our chassis
Oct 02 13:02:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:29.780 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e00eecd6-70d4-4b18-95b4-609dbcd626b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:29.781 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bda988a0-517e-44cc-b77d-5c2ff44b0f26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:29.782 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 namespace which is not needed anymore
Oct 02 13:02:29 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Oct 02 13:02:29 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000ad.scope: Consumed 14.486s CPU time.
Oct 02 13:02:29 compute-0 systemd-machined[210927]: Machine qemu-91-instance-000000ad terminated.
Oct 02 13:02:29 compute-0 sudo[372535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:29 compute-0 sudo[372535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:29 compute-0 sudo[372535]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:29 compute-0 sudo[372576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:29 compute-0 sudo[372576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:29 compute-0 sudo[372576]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.919 2 INFO nova.virt.libvirt.driver [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance destroyed successfully.
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.920 2 DEBUG nova.virt.libvirt.vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:21Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.921 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.921 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.922 2 DEBUG os_vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43e0d421-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.931 2 INFO os_vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d')
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.939 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:29 compute-0 nova_compute[256940]: 2025-10-02 13:02:29.939 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:30 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [NOTICE]   (372313) : haproxy version is 2.8.14-c23fe91
Oct 02 13:02:30 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [NOTICE]   (372313) : path to executable is /usr/sbin/haproxy
Oct 02 13:02:30 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [WARNING]  (372313) : Exiting Master process...
Oct 02 13:02:30 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [ALERT]    (372313) : Current worker (372315) exited with code 143 (Terminated)
Oct 02 13:02:30 compute-0 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[372309]: [WARNING]  (372313) : All workers exited. Exiting... (0)
Oct 02 13:02:30 compute-0 systemd[1]: libpod-50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b.scope: Deactivated successfully.
Oct 02 13:02:30 compute-0 podman[372581]: 2025-10-02 13:02:30.03400244 +0000 UTC m=+0.163365721 container died 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:02:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d6db02c1f8e916411259466eb6a7e0c7a64aa5c77fc324957ee7d2a0b5d50a-merged.mount: Deactivated successfully.
Oct 02 13:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b-userdata-shm.mount: Deactivated successfully.
Oct 02 13:02:30 compute-0 podman[372581]: 2025-10-02 13:02:30.429512545 +0000 UTC m=+0.558875826 container cleanup 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:30 compute-0 systemd[1]: libpod-conmon-50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b.scope: Deactivated successfully.
Oct 02 13:02:30 compute-0 podman[372643]: 2025-10-02 13:02:30.508952208 +0000 UTC m=+0.052252937 container remove 50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.516 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[93383e03-d346-4af7-9045-797bf9f3342c]: (4, ('Thu Oct  2 01:02:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 (50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b)\n50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b\nThu Oct  2 01:02:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 (50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b)\n50113c1908a0da7fadba2dd96b2e8a1ba3b8672b224714753f2b5a67ea252e2b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.518 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b9143225-cb32-4985-9a5a-302847df52b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.519 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape00eecd6-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:30 compute-0 kernel: tape00eecd6-70: left promiscuous mode
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.527 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c331e1ab-6420-4fd6-b5fa-64e3d2ea4a69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.558 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[03660d2a-2377-4ebf-84e3-69275776247f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.560 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[207d9ae1-649f-491d-88b4-1ac93f4e702c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.579 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[51696de1-02a9-477a-b0fa-39f604c2bbfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801251, 'reachable_time': 21189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372658, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 systemd[1]: run-netns-ovnmeta\x2de00eecd6\x2d70d4\x2d4b18\x2d95b4\x2d609dbcd626b1.mount: Deactivated successfully.
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.583 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:02:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:30.583 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b731ec-8a8e-4f3b-ad8c-f9f7a0986f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:30.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.676 2 DEBUG neutronclient.v2_0.client [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.781 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.781 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.781 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.878 2 DEBUG nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.878 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.880 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.884 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.884 2 DEBUG nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:30 compute-0 nova_compute[256940]: 2025-10-02 13:02:30.885 2 WARNING nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:02:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:31 compute-0 nova_compute[256940]: 2025-10-02 13:02:31.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:31 compute-0 nova_compute[256940]: 2025-10-02 13:02:31.224 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:31 compute-0 nova_compute[256940]: 2025-10-02 13:02:31.225 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:02:31 compute-0 nova_compute[256940]: 2025-10-02 13:02:31.225 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:02:31 compute-0 nova_compute[256940]: 2025-10-02 13:02:31.244 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:02:31 compute-0 ceph-mon[73668]: pgmap v2683: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1114902513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/237972929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:32.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:32 compute-0 nova_compute[256940]: 2025-10-02 13:02:32.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:32 compute-0 nova_compute[256940]: 2025-10-02 13:02:32.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.031 2 DEBUG nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.031 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.031 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.032 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.032 2 DEBUG nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.032 2 WARNING nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.488 2 DEBUG nova.compute.manager [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.489 2 DEBUG nova.compute.manager [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing instance network info cache due to event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.489 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.490 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:33 compute-0 nova_compute[256940]: 2025-10-02 13:02:33.490 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:33 compute-0 ceph-mon[73668]: pgmap v2684: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:02:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:34.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:02:34 compute-0 nova_compute[256940]: 2025-10-02 13:02:34.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:35 compute-0 nova_compute[256940]: 2025-10-02 13:02:35.105 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updated VIF entry in instance network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:35 compute-0 nova_compute[256940]: 2025-10-02 13:02:35.106 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:35 compute-0 nova_compute[256940]: 2025-10-02 13:02:35.121 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Oct 02 13:02:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Oct 02 13:02:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Oct 02 13:02:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Oct 02 13:02:36 compute-0 ceph-mon[73668]: pgmap v2685: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3707887180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:36.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:37 compute-0 ceph-mon[73668]: osdmap e351: 3 total, 3 up, 3 in
Oct 02 13:02:37 compute-0 ceph-mon[73668]: pgmap v2687: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Oct 02 13:02:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3209806150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:37 compute-0 nova_compute[256940]: 2025-10-02 13:02:37.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:37 compute-0 sudo[372663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:37 compute-0 sudo[372663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-0 sudo[372663]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-0 sudo[372700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:37 compute-0 sudo[372700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-0 sudo[372700]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-0 podman[372687]: 2025-10-02 13:02:37.842082099 +0000 UTC m=+0.065291406 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:37 compute-0 podman[372688]: 2025-10-02 13:02:37.84291329 +0000 UTC m=+0.064497165 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:02:37 compute-0 sudo[372754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:37 compute-0 sudo[372754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-0 sudo[372754]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-0 sudo[372779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:02:37 compute-0 sudo[372779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 469 KiB/s wr, 52 op/s
Oct 02 13:02:38 compute-0 podman[372875]: 2025-10-02 13:02:38.430807651 +0000 UTC m=+0.060483091 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:38 compute-0 podman[372875]: 2025-10-02 13:02:38.528493887 +0000 UTC m=+0.158169297 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1729713660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:02:39 compute-0 podman[373012]: 2025-10-02 13:02:39.085053644 +0000 UTC m=+0.059308771 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.092 2 DEBUG nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.093 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.093 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.093 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.093 2 DEBUG nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.093 2 WARNING nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_finish.
Oct 02 13:02:39 compute-0 podman[373012]: 2025-10-02 13:02:39.097487677 +0000 UTC m=+0.071742774 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:02:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:02:39 compute-0 podman[373077]: 2025-10-02 13:02:39.300967108 +0000 UTC m=+0.051225360 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public)
Oct 02 13:02:39 compute-0 podman[373077]: 2025-10-02 13:02:39.312444206 +0000 UTC m=+0.062702438 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived)
Oct 02 13:02:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 sudo[372779]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:02:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:02:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 sudo[373128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:39 compute-0 sudo[373128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-0 sudo[373128]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-0 sudo[373153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:39 compute-0 sudo[373153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-0 sudo[373153]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-0 sudo[373178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:39 compute-0 sudo[373178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-0 sudo[373178]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-0 ceph-mon[73668]: pgmap v2688: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 469 KiB/s wr, 52 op/s
Oct 02 13:02:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-0 sudo[373203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:02:39 compute-0 sudo[373203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-0 nova_compute[256940]: 2025-10-02 13:02:39.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 760 KiB/s wr, 83 op/s
Oct 02 13:02:40 compute-0 sudo[373203]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ccce19d9-409e-4bc3-a94d-19b72a097d2a does not exist
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a40754b0-d9e1-4480-be79-aac7d56ac0ee does not exist
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 610d034c-a4d6-4cd4-828e-8aa463e62456 does not exist
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:02:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-0 sudo[373257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:40 compute-0 sudo[373257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:40 compute-0 sudo[373257]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:40 compute-0 sudo[373282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:40 compute-0 sudo[373282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:40 compute-0 sudo[373282]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:40 compute-0 sudo[373307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:40 compute-0 sudo[373307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:40 compute-0 sudo[373307]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005674531511725293 of space, bias 1.0, pg target 1.7023594535175879 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:02:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:02:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:40.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:40 compute-0 sudo[373332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:02:40 compute-0 sudo[373332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:02:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-0 podman[373398]: 2025-10-02 13:02:40.946027541 +0000 UTC m=+0.038008178 container create e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:02:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:40 compute-0 systemd[1]: Started libpod-conmon-e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af.scope.
Oct 02 13:02:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:41 compute-0 podman[373398]: 2025-10-02 13:02:41.022045483 +0000 UTC m=+0.114026160 container init e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:41 compute-0 podman[373398]: 2025-10-02 13:02:40.930481367 +0000 UTC m=+0.022462034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:41 compute-0 podman[373398]: 2025-10-02 13:02:41.029456995 +0000 UTC m=+0.121437642 container start e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:02:41 compute-0 podman[373398]: 2025-10-02 13:02:41.032855243 +0000 UTC m=+0.124835910 container attach e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:41 compute-0 musing_chaplygin[373415]: 167 167
Oct 02 13:02:41 compute-0 systemd[1]: libpod-e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af.scope: Deactivated successfully.
Oct 02 13:02:41 compute-0 podman[373420]: 2025-10-02 13:02:41.077276316 +0000 UTC m=+0.029267600 container died e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8026d53603842c5dde8bcc8205d6f20d597bfc55a6c533d518907b2262032323-merged.mount: Deactivated successfully.
Oct 02 13:02:41 compute-0 podman[373420]: 2025-10-02 13:02:41.111515105 +0000 UTC m=+0.063506359 container remove e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:41 compute-0 systemd[1]: libpod-conmon-e13ff8802a5cf39a5def68fafd84351698ff63df8b0d0ba203701ada009ae4af.scope: Deactivated successfully.
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.243 2 DEBUG nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.246 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.246 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.246 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.246 2 DEBUG nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:41 compute-0 nova_compute[256940]: 2025-10-02 13:02:41.247 2 WARNING nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state resized and task_state None.
Oct 02 13:02:41 compute-0 podman[373442]: 2025-10-02 13:02:41.271548979 +0000 UTC m=+0.036195350 container create 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:41 compute-0 systemd[1]: Started libpod-conmon-069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99.scope.
Oct 02 13:02:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:41 compute-0 podman[373442]: 2025-10-02 13:02:41.257529585 +0000 UTC m=+0.022175986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:41 compute-0 podman[373442]: 2025-10-02 13:02:41.354622336 +0000 UTC m=+0.119268737 container init 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:02:41 compute-0 podman[373442]: 2025-10-02 13:02:41.361656558 +0000 UTC m=+0.126302929 container start 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:41 compute-0 podman[373442]: 2025-10-02 13:02:41.365976381 +0000 UTC m=+0.130622752 container attach 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:02:41 compute-0 ceph-mon[73668]: pgmap v2689: 305 pgs: 305 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 760 KiB/s wr, 83 op/s
Oct 02 13:02:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2609484403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:42 compute-0 gracious_raman[373458]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:02:42 compute-0 gracious_raman[373458]: --> relative data size: 1.0
Oct 02 13:02:42 compute-0 gracious_raman[373458]: --> All data devices are unavailable
Oct 02 13:02:42 compute-0 systemd[1]: libpod-069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99.scope: Deactivated successfully.
Oct 02 13:02:42 compute-0 podman[373442]: 2025-10-02 13:02:42.254522525 +0000 UTC m=+1.019168906 container died 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cc86f3c6140f57a0bb302c00e2a40ac67b84a29a9e6ffa77ae936adf61736c-merged.mount: Deactivated successfully.
Oct 02 13:02:42 compute-0 podman[373442]: 2025-10-02 13:02:42.322640413 +0000 UTC m=+1.087286784 container remove 069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:02:42 compute-0 systemd[1]: libpod-conmon-069921bffb1a6a3fde64e72b90d48a4f66479c185888c86928988aa7da520d99.scope: Deactivated successfully.
Oct 02 13:02:42 compute-0 sudo[373332]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:42 compute-0 sudo[373484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:42 compute-0 sudo[373484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:42 compute-0 sudo[373484]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:42 compute-0 nova_compute[256940]: 2025-10-02 13:02:42.491 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:42 compute-0 nova_compute[256940]: 2025-10-02 13:02:42.493 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:42 compute-0 nova_compute[256940]: 2025-10-02 13:02:42.494 2 DEBUG nova.compute.manager [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Going to confirm migration 22 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 13:02:42 compute-0 sudo[373509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:42 compute-0 sudo[373509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:42 compute-0 sudo[373509]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:42 compute-0 sudo[373534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:42 compute-0 sudo[373534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:42 compute-0 sudo[373534]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:42 compute-0 sudo[373559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:02:42 compute-0 sudo[373559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:42.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:42 compute-0 nova_compute[256940]: 2025-10-02 13:02:42.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:42 compute-0 podman[373625]: 2025-10-02 13:02:42.928046108 +0000 UTC m=+0.034412544 container create 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:02:42 compute-0 systemd[1]: Started libpod-conmon-7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a.scope.
Oct 02 13:02:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:43.002222494 +0000 UTC m=+0.108588950 container init 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:42.913888771 +0000 UTC m=+0.020255237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:43.01170314 +0000 UTC m=+0.118069576 container start 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:43.015334084 +0000 UTC m=+0.121700550 container attach 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:43 compute-0 elegant_haibt[373641]: 167 167
Oct 02 13:02:43 compute-0 systemd[1]: libpod-7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a.scope: Deactivated successfully.
Oct 02 13:02:43 compute-0 conmon[373641]: conmon 7f2280ed4b52ff68e49e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a.scope/container/memory.events
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:43.020222861 +0000 UTC m=+0.126589317 container died 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:02:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1130314900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:43 compute-0 ceph-mon[73668]: pgmap v2690: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:43 compute-0 nova_compute[256940]: 2025-10-02 13:02:43.027 2 DEBUG neutronclient.v2_0.client [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:02:43 compute-0 nova_compute[256940]: 2025-10-02 13:02:43.029 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:43 compute-0 nova_compute[256940]: 2025-10-02 13:02:43.029 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:43 compute-0 nova_compute[256940]: 2025-10-02 13:02:43.029 2 DEBUG nova.network.neutron [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:02:43 compute-0 nova_compute[256940]: 2025-10-02 13:02:43.030 2 DEBUG nova.objects.instance [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'info_cache' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a73dfbe70aeea4d2344b6562d29d1e1484538a996cb686ea7143af4216d05d90-merged.mount: Deactivated successfully.
Oct 02 13:02:43 compute-0 podman[373625]: 2025-10-02 13:02:43.066828791 +0000 UTC m=+0.173195227 container remove 7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_haibt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:02:43 compute-0 systemd[1]: libpod-conmon-7f2280ed4b52ff68e49ecd4daa713adf111cf1f61706d46e647e65181235c85a.scope: Deactivated successfully.
Oct 02 13:02:43 compute-0 podman[373664]: 2025-10-02 13:02:43.23092025 +0000 UTC m=+0.046648281 container create da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:43 compute-0 systemd[1]: Started libpod-conmon-da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24.scope.
Oct 02 13:02:43 compute-0 podman[373664]: 2025-10-02 13:02:43.210216133 +0000 UTC m=+0.025944194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9a910d9ffd11dbc7020e67c8b26fc100eda68ee3b182c8af5d16388580f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9a910d9ffd11dbc7020e67c8b26fc100eda68ee3b182c8af5d16388580f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9a910d9ffd11dbc7020e67c8b26fc100eda68ee3b182c8af5d16388580f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9a910d9ffd11dbc7020e67c8b26fc100eda68ee3b182c8af5d16388580f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:43 compute-0 podman[373664]: 2025-10-02 13:02:43.338225886 +0000 UTC m=+0.153953917 container init da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:02:43 compute-0 podman[373664]: 2025-10-02 13:02:43.3464824 +0000 UTC m=+0.162210431 container start da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:43 compute-0 podman[373664]: 2025-10-02 13:02:43.379389154 +0000 UTC m=+0.195117215 container attach da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:02:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:44 compute-0 bold_jepsen[373680]: {
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:     "1": [
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:         {
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "devices": [
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "/dev/loop3"
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             ],
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "lv_name": "ceph_lv0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "lv_size": "7511998464",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "name": "ceph_lv0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "tags": {
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.cluster_name": "ceph",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.crush_device_class": "",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.encrypted": "0",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.osd_id": "1",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.type": "block",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:                 "ceph.vdo": "0"
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             },
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "type": "block",
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:             "vg_name": "ceph_vg0"
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:         }
Oct 02 13:02:44 compute-0 bold_jepsen[373680]:     ]
Oct 02 13:02:44 compute-0 bold_jepsen[373680]: }
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.197945) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164197981, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1525, "num_deletes": 252, "total_data_size": 2446621, "memory_usage": 2482144, "flush_reason": "Manual Compaction"}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 02 13:02:44 compute-0 systemd[1]: libpod-da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24.scope: Deactivated successfully.
Oct 02 13:02:44 compute-0 podman[373664]: 2025-10-02 13:02:44.214698117 +0000 UTC m=+1.030426148 container died da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164221091, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 2407025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58669, "largest_seqno": 60192, "table_properties": {"data_size": 2400105, "index_size": 3926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14657, "raw_average_key_size": 19, "raw_value_size": 2385867, "raw_average_value_size": 3114, "num_data_blocks": 172, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410028, "oldest_key_time": 1759410028, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 23220 microseconds, and 6913 cpu microseconds.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.221161) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 2407025 bytes OK
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.221184) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.222802) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.222819) EVENT_LOG_v1 {"time_micros": 1759410164222814, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.222837) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2440062, prev total WAL file size 2455627, number of live WAL files 2.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353035' seq:0, type:0; will stop at (end)
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(2350KB)], [131(10057KB)]
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164223784, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 12705517, "oldest_snapshot_seqno": -1}
Oct 02 13:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-63ea9a910d9ffd11dbc7020e67c8b26fc100eda68ee3b182c8af5d16388580f2-merged.mount: Deactivated successfully.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8889 keys, 11598910 bytes, temperature: kUnknown
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164351267, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11598910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11541773, "index_size": 33792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 231950, "raw_average_key_size": 26, "raw_value_size": 11386113, "raw_average_value_size": 1280, "num_data_blocks": 1299, "num_entries": 8889, "num_filter_entries": 8889, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.351513) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11598910 bytes
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.391434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.6 rd, 90.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.8) OK, records in: 9412, records dropped: 523 output_compression: NoCompression
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.391466) EVENT_LOG_v1 {"time_micros": 1759410164391454, "job": 80, "event": "compaction_finished", "compaction_time_micros": 127567, "compaction_time_cpu_micros": 32449, "output_level": 6, "num_output_files": 1, "total_output_size": 11598910, "num_input_records": 9412, "num_output_records": 8889, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164392190, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164394539, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.394734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.394744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.394746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.394748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:44.394750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.409 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.411 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.426 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:02:44 compute-0 podman[373664]: 2025-10-02 13:02:44.437768077 +0000 UTC m=+1.253496108 container remove da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:44 compute-0 sudo[373559]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:44 compute-0 systemd[1]: libpod-conmon-da5044860c21a3b9befb8677c3c219d11027795e97faa0ce79041aaae8bfea24.scope: Deactivated successfully.
Oct 02 13:02:44 compute-0 sudo[373701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:44 compute-0 sudo[373701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:44 compute-0 sudo[373701]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.560 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.562 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.569 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.570 2 INFO nova.compute.claims [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:02:44 compute-0 sudo[373726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:44 compute-0 sudo[373726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:44 compute-0 sudo[373726]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:44.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:44 compute-0 sudo[373751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:44 compute-0 sudo[373751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:44 compute-0 sudo[373751]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.704 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:44 compute-0 sudo[373776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:02:44 compute-0 sudo[373776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.917 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410149.9167283, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.918 2 INFO nova.compute.manager [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Stopped (Lifecycle Event)
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.955 2 DEBUG nova.compute.manager [None req-c83ea6d6-f698-4e71-b356-605e3a9ee24a - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.959 2 DEBUG nova.compute.manager [None req-c83ea6d6-f698-4e71-b356-605e3a9ee24a - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.969 2 DEBUG nova.network.neutron [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:44.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:44 compute-0 nova_compute[256940]: 2025-10-02 13:02:44.996 2 INFO nova.compute.manager [None req-c83ea6d6-f698-4e71-b356-605e3a9ee24a - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.001 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.001 2 DEBUG nova.objects.instance [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.059249129 +0000 UTC m=+0.044804634 container create f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:02:45 compute-0 systemd[1]: Started libpod-conmon-f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7.scope.
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.038715116 +0000 UTC m=+0.024270641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732233713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.149597374 +0000 UTC m=+0.135152909 container init f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.155 2 DEBUG nova.storage.rbd_utils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] removing snapshot(nova-resize) on rbd image(b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.157098489 +0000 UTC m=+0.142653994 container start f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.160698242 +0000 UTC m=+0.146253777 container attach f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:02:45 compute-0 modest_burnell[373913]: 167 167
Oct 02 13:02:45 compute-0 systemd[1]: libpod-f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7.scope: Deactivated successfully.
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.164353317 +0000 UTC m=+0.149908832 container died f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.164 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.170 2 DEBUG nova.compute.provider_tree [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.187 2 DEBUG nova.scheduler.client.report [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-daef27bc5ef6b36ed5402a7dcf95123d783d502b0128cf06abb8dc9424ce12ca-merged.mount: Deactivated successfully.
Oct 02 13:02:45 compute-0 podman[373863]: 2025-10-02 13:02:45.20493964 +0000 UTC m=+0.190495145 container remove f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:02:45 compute-0 systemd[1]: libpod-conmon-f667a8e4e0522f1304a6572b4784bbf64954f2bcf408d9569d499f5508d5f0e7.scope: Deactivated successfully.
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.232 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.233 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.332 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.332 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:02:45 compute-0 podman[373943]: 2025-10-02 13:02:45.357303265 +0000 UTC m=+0.040723368 container create ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.367 2 INFO nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:02:45 compute-0 systemd[1]: Started libpod-conmon-ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521.scope.
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.398 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:02:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fde7ed047aab88454aa4f03263344fb89ddc8a4bf95db5d0ec5ce05e5f5170/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fde7ed047aab88454aa4f03263344fb89ddc8a4bf95db5d0ec5ce05e5f5170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fde7ed047aab88454aa4f03263344fb89ddc8a4bf95db5d0ec5ce05e5f5170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fde7ed047aab88454aa4f03263344fb89ddc8a4bf95db5d0ec5ce05e5f5170/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:45 compute-0 podman[373943]: 2025-10-02 13:02:45.340302324 +0000 UTC m=+0.023722437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:02:45 compute-0 podman[373943]: 2025-10-02 13:02:45.444238982 +0000 UTC m=+0.127659085 container init ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:02:45 compute-0 podman[373943]: 2025-10-02 13:02:45.454535299 +0000 UTC m=+0.137955402 container start ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 13:02:45 compute-0 podman[373943]: 2025-10-02 13:02:45.46303802 +0000 UTC m=+0.146458143 container attach ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.502 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.504 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.504 2 INFO nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Creating image(s)
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.534 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:45 compute-0 ceph-mon[73668]: pgmap v2691: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3140300285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/732233713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.584 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.614 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.618 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.688 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.689 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.690 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.690 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.713 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.716 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b6d31622-c503-4790-b277-89de495fb364_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:45 compute-0 nova_compute[256940]: 2025-10-02 13:02:45.745 2 DEBUG nova.policy [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cb1643b3981d49ceaebfabe1577596fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e8f1f6ceb7b40ee9fa002d881b59a44', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:02:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Oct 02 13:02:45 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Oct 02 13:02:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.256 2 DEBUG nova.virt.libvirt.vif [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:39Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.258 2 DEBUG nova.network.os_vif_util [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.258 2 DEBUG nova.network.os_vif_util [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.259 2 DEBUG os_vif [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43e0d421-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.266 2 INFO os_vif [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d')
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.267 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.267 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:46 compute-0 wonderful_gates[373959]: {
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:         "osd_id": 1,
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:         "type": "bluestore"
Oct 02 13:02:46 compute-0 wonderful_gates[373959]:     }
Oct 02 13:02:46 compute-0 wonderful_gates[373959]: }
Oct 02 13:02:46 compute-0 systemd[1]: libpod-ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521.scope: Deactivated successfully.
Oct 02 13:02:46 compute-0 podman[373943]: 2025-10-02 13:02:46.328200628 +0000 UTC m=+1.011620751 container died ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.342 2 DEBUG oslo_concurrency.processutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3fde7ed047aab88454aa4f03263344fb89ddc8a4bf95db5d0ec5ce05e5f5170-merged.mount: Deactivated successfully.
Oct 02 13:02:46 compute-0 podman[373943]: 2025-10-02 13:02:46.382157998 +0000 UTC m=+1.065578101 container remove ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:02:46 compute-0 systemd[1]: libpod-conmon-ae21bcd1d25ccb2a92be9240bf34451702829fe8614bfe4f31a0a3fc9dcef521.scope: Deactivated successfully.
Oct 02 13:02:46 compute-0 sudo[373776]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:02:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:02:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:46.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 679b5830-7475-4101-8978-7e3bd6ef6b15 does not exist
Oct 02 13:02:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev dde95c06-bff2-456c-a676-532cddea887b does not exist
Oct 02 13:02:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1334e428-ee3e-49f2-ae95-9813178c0a37 does not exist
Oct 02 13:02:46 compute-0 sudo[374105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:46 compute-0 sudo[374105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:46 compute-0 sudo[374105]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858843834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.816 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Successfully created port: f52e246b-dedc-40f5-9f35-cb545dae12ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:02:46 compute-0 sudo[374130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:02:46 compute-0 sudo[374130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:46 compute-0 sudo[374130]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.836 2 DEBUG oslo_concurrency.processutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.842 2 DEBUG nova.compute.provider_tree [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.908 2 DEBUG nova.scheduler.client.report [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:46 compute-0 nova_compute[256940]: 2025-10-02 13:02:46.965 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:47 compute-0 ceph-mon[73668]: osdmap e352: 3 total, 3 up, 3 in
Oct 02 13:02:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.062 2 INFO nova.scheduler.client.report [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocation for migration 67833b05-bd17-48b1-974a-39a83762f931
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.128 2 DEBUG oslo_concurrency.lockutils [None req-6fb6cb42-b3a0-4d7e-b0b1-b7fab19fdc47 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.401 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b6d31622-c503-4790-b277-89de495fb364_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.469 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] resizing rbd image b6d31622-c503-4790-b277-89de495fb364_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.777 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Successfully updated port: f52e246b-dedc-40f5-9f35-cb545dae12ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.815 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.815 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquired lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.816 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.879 2 DEBUG nova.objects.instance [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'migration_context' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.897 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.898 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Ensure instance console log exists: /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.898 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.898 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.899 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.928 2 DEBUG nova.compute.manager [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-changed-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.929 2 DEBUG nova.compute.manager [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Refreshing instance network info cache due to event network-changed-f52e246b-dedc-40f5-9f35-cb545dae12ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:47 compute-0 nova_compute[256940]: 2025-10-02 13:02:47.929 2 DEBUG oslo_concurrency.lockutils [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.067 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:02:48 compute-0 ceph-mon[73668]: pgmap v2693: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 02 13:02:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2858843834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Oct 02 13:02:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:48.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.691 2 DEBUG nova.network.neutron [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updating instance_info_cache with network_info: [{"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.713 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Releasing lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.714 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Instance network_info: |[{"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.714 2 DEBUG oslo_concurrency.lockutils [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.715 2 DEBUG nova.network.neutron [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Refreshing network info cache for port f52e246b-dedc-40f5-9f35-cb545dae12ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.718 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Start _get_guest_xml network_info=[{"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.722 2 WARNING nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.733 2 DEBUG nova.virt.libvirt.host [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.733 2 DEBUG nova.virt.libvirt.host [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.737 2 DEBUG nova.virt.libvirt.host [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.737 2 DEBUG nova.virt.libvirt.host [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.738 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.738 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.739 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.739 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.739 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.740 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.740 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.740 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.741 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.741 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.741 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.741 2 DEBUG nova.virt.hardware [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:02:48 compute-0 nova_compute[256940]: 2025-10-02 13:02:48.743 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:02:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717772748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.164 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.203 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.207 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:49 compute-0 ceph-mon[73668]: pgmap v2694: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Oct 02 13:02:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1635756819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:02:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217411682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.702 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.705 2 DEBUG nova.virt.libvirt.vif [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-951393784',display_name='tempest-TestEncryptedCinderVolumes-server-951393784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-951393784',id=177,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSoyLWF2+XCS8bUj82a34vxT3lEwL71linifZ2CDd1KovYKNp572sm9cy/t+fe6uihYxittn+S+OmoVHbrvPoN7yrGbVHQxC9i/CaWgYYn4OKOnnBk12mw1R+9EcHw90A==',key_name='tempest-keypair-1469630940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e8f1f6ceb7b40ee9fa002d881b59a44',ramdisk_id='',reservation_id='r-09ej7bhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1359000024',owner_user_name='tempest-TestEncryptedCinderVolumes-1359000024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cb1643b3981d49ceaebfabe1577596fd',uuid=b6d31622-c503-4790-b277-89de495fb364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.705 2 DEBUG nova.network.os_vif_util [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converting VIF {"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.706 2 DEBUG nova.network.os_vif_util [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.708 2 DEBUG nova.objects.instance [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.722 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <uuid>b6d31622-c503-4790-b277-89de495fb364</uuid>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <name>instance-000000b1</name>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-951393784</nova:name>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:02:48</nova:creationTime>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:user uuid="cb1643b3981d49ceaebfabe1577596fd">tempest-TestEncryptedCinderVolumes-1359000024-project-member</nova:user>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:project uuid="6e8f1f6ceb7b40ee9fa002d881b59a44">tempest-TestEncryptedCinderVolumes-1359000024</nova:project>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <nova:port uuid="f52e246b-dedc-40f5-9f35-cb545dae12ff">
Oct 02 13:02:49 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <system>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="serial">b6d31622-c503-4790-b277-89de495fb364</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="uuid">b6d31622-c503-4790-b277-89de495fb364</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </system>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <os>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </os>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <features>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </features>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b6d31622-c503-4790-b277-89de495fb364_disk">
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/b6d31622-c503-4790-b277-89de495fb364_disk.config">
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </source>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:02:49 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:8e:23:fa"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <target dev="tapf52e246b-de"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/console.log" append="off"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <video>
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </video>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:02:49 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:02:49 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:02:49 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:02:49 compute-0 nova_compute[256940]: </domain>
Oct 02 13:02:49 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.723 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Preparing to wait for external event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.724 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.724 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.724 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.725 2 DEBUG nova.virt.libvirt.vif [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-951393784',display_name='tempest-TestEncryptedCinderVolumes-server-951393784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-951393784',id=177,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSoyLWF2+XCS8bUj82a34vxT3lEwL71linifZ2CDd1KovYKNp572sm9cy/t+fe6uihYxittn+S+OmoVHbrvPoN7yrGbVHQxC9i/CaWgYYn4OKOnnBk12mw1R+9EcHw90A==',key_name='tempest-keypair-1469630940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e8f1f6ceb7b40ee9fa002d881b59a44',ramdisk_id='',reservation_id='r-09ej7bhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1359000024',owner_user_name='tempest-TestEncryptedCinderVolumes-1359000024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cb1643b3981d49ceaebfabe1577596fd',uuid=b6d31622-c503-4790-b277-89de495fb364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.725 2 DEBUG nova.network.os_vif_util [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converting VIF {"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.726 2 DEBUG nova.network.os_vif_util [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.726 2 DEBUG os_vif [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf52e246b-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf52e246b-de, col_values=(('external_ids', {'iface-id': 'f52e246b-dedc-40f5-9f35-cb545dae12ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:23:fa', 'vm-uuid': 'b6d31622-c503-4790-b277-89de495fb364'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-0 NetworkManager[44981]: <info>  [1759410169.7331] manager: (tapf52e246b-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.740 2 INFO os_vif [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de')
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.811 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.812 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.812 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No VIF found with MAC fa:16:3e:8e:23:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.812 2 INFO nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Using config drive
Oct 02 13:02:49 compute-0 nova_compute[256940]: 2025-10-02 13:02:49.839 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:49 compute-0 sudo[374313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:49 compute-0 sudo[374313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:49 compute-0 sudo[374313]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:50 compute-0 sudo[374338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:50 compute-0 sudo[374338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:50 compute-0 sudo[374338]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.140 2 INFO nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Creating config drive at /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.146 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfe_6er6c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 250 op/s
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.295 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfe_6er6c" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.323 2 DEBUG nova.storage.rbd_utils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] rbd image b6d31622-c503-4790-b277-89de495fb364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.326 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config b6d31622-c503-4790-b277-89de495fb364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1717772748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3217411682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.412 2 DEBUG nova.network.neutron [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updated VIF entry in instance network info cache for port f52e246b-dedc-40f5-9f35-cb545dae12ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.413 2 DEBUG nova.network.neutron [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updating instance_info_cache with network_info: [{"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:50 compute-0 nova_compute[256940]: 2025-10-02 13:02:50.447 2 DEBUG oslo_concurrency.lockutils [req-0fd2889d-1cf6-48e7-be57-14f0078ca17a req-716d88eb-a7f0-4933-865d-92eb189b4fde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:51 compute-0 podman[374401]: 2025-10-02 13:02:51.429898556 +0000 UTC m=+0.088865468 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:02:51 compute-0 podman[374402]: 2025-10-02 13:02:51.451318812 +0000 UTC m=+0.110694574 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:02:51 compute-0 nova_compute[256940]: 2025-10-02 13:02:51.646 2 DEBUG oslo_concurrency.processutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config b6d31622-c503-4790-b277-89de495fb364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:51 compute-0 nova_compute[256940]: 2025-10-02 13:02:51.646 2 INFO nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Deleting local config drive /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364/disk.config because it was imported into RBD.
Oct 02 13:02:51 compute-0 kernel: tapf52e246b-de: entered promiscuous mode
Oct 02 13:02:51 compute-0 NetworkManager[44981]: <info>  [1759410171.7213] manager: (tapf52e246b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct 02 13:02:51 compute-0 systemd-udevd[374459]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:02:51 compute-0 ovn_controller[148123]: 2025-10-02T13:02:51Z|00897|binding|INFO|Claiming lport f52e246b-dedc-40f5-9f35-cb545dae12ff for this chassis.
Oct 02 13:02:51 compute-0 ovn_controller[148123]: 2025-10-02T13:02:51Z|00898|binding|INFO|f52e246b-dedc-40f5-9f35-cb545dae12ff: Claiming fa:16:3e:8e:23:fa 10.100.0.5
Oct 02 13:02:51 compute-0 nova_compute[256940]: 2025-10-02 13:02:51.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.771 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:23:fa 10.100.0.5'], port_security=['fa:16:3e:8e:23:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b6d31622-c503-4790-b277-89de495fb364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e8f1f6ceb7b40ee9fa002d881b59a44', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28f71f51-7e3f-468a-a8c9-e3ae5802269f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56e401f3-3346-4f65-a0fd-85677f82d9d3, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f52e246b-dedc-40f5-9f35-cb545dae12ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:51 compute-0 NetworkManager[44981]: <info>  [1759410171.7737] device (tapf52e246b-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:02:51 compute-0 NetworkManager[44981]: <info>  [1759410171.7745] device (tapf52e246b-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.772 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f52e246b-dedc-40f5-9f35-cb545dae12ff in datapath c4c41ca2-0971-4ce7-af35-cd068de87e9c bound to our chassis
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.775 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4c41ca2-0971-4ce7-af35-cd068de87e9c
Oct 02 13:02:51 compute-0 ovn_controller[148123]: 2025-10-02T13:02:51Z|00899|binding|INFO|Setting lport f52e246b-dedc-40f5-9f35-cb545dae12ff ovn-installed in OVS
Oct 02 13:02:51 compute-0 ovn_controller[148123]: 2025-10-02T13:02:51Z|00900|binding|INFO|Setting lport f52e246b-dedc-40f5-9f35-cb545dae12ff up in Southbound
Oct 02 13:02:51 compute-0 nova_compute[256940]: 2025-10-02 13:02:51.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.788 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dca408-e1bb-4a9f-b93f-8d07d9161b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.789 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4c41ca2-01 in ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.791 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4c41ca2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.791 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9638fc-e489-4aeb-8b2a-996d796fe229]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.791 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09a3ef46-26a1-4a95-8c15-6299bf5665cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 systemd-machined[210927]: New machine qemu-92-instance-000000b1.
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.806 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[02d62ac8-48ec-4b0a-832b-01c9bd57e92f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-000000b1.
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.831 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12518ccb-374f-45b8-bfb6-270287c1f9b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.863 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[9b85a18d-8456-4cac-a22a-742f2427f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 NetworkManager[44981]: <info>  [1759410171.8693] manager: (tapc4c41ca2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.868 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6259289c-9db8-4c62-a297-a8febcc8180f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.905 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e6da26f1-d4e4-446c-9893-ee68d47ccbd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.909 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade9edb-0903-4ede-93ba-4461766e9f44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ceph-mon[73668]: pgmap v2695: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 250 op/s
Oct 02 13:02:51 compute-0 NetworkManager[44981]: <info>  [1759410171.9321] device (tapc4c41ca2-00): carrier: link connected
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.939 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[619a4724-1b6b-48d6-9b61-3add5ff2f380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.956 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78e55fdc-1dfe-41a6-ae71-70f399831286]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4c41ca2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:45:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806741, 'reachable_time': 25503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374495, 'error': None, 'target': 'ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.973 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3565c814-f979-4c32-b9ac-7eec1f6c11db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:4575'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806741, 'tstamp': 806741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374496, 'error': None, 'target': 'ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:51.991 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[68b00d5c-5674-48da-a92f-9dc98339e636]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4c41ca2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:45:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806741, 'reachable_time': 25503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374497, 'error': None, 'target': 'ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.026 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e4738e1f-3f4f-4d3a-913d-eb8177009fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.097 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11cc390d-9051-4425-9633-0d1ff83af610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.099 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4c41ca2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.099 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.100 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4c41ca2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:52 compute-0 NetworkManager[44981]: <info>  [1759410172.1025] manager: (tapc4c41ca2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct 02 13:02:52 compute-0 kernel: tapc4c41ca2-00: entered promiscuous mode
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.104 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4c41ca2-00, col_values=(('external_ids', {'iface-id': 'b36c30bb-b987-4a8c-9d02-d8102708b07d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:52 compute-0 ovn_controller[148123]: 2025-10-02T13:02:52Z|00901|binding|INFO|Releasing lport b36c30bb-b987-4a8c-9d02-d8102708b07d from this chassis (sb_readonly=0)
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.121 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4c41ca2-0971-4ce7-af35-cd068de87e9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4c41ca2-0971-4ce7-af35-cd068de87e9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.122 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6484ec-6cfc-4d94-9e61-451d78875f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.123 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-c4c41ca2-0971-4ce7-af35-cd068de87e9c
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/c4c41ca2-0971-4ce7-af35-cd068de87e9c.pid.haproxy
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID c4c41ca2-0971-4ce7-af35-cd068de87e9c
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:02:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:02:52.124 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'env', 'PROCESS_TAG=haproxy-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4c41ca2-0971-4ce7-af35-cd068de87e9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:02:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 228 op/s
Oct 02 13:02:52 compute-0 podman[374571]: 2025-10-02 13:02:52.496416529 +0000 UTC m=+0.049139187 container create bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:02:52 compute-0 systemd[1]: Started libpod-conmon-bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d.scope.
Oct 02 13:02:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:02:52 compute-0 podman[374571]: 2025-10-02 13:02:52.468262408 +0000 UTC m=+0.020985086 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0907a4f8faf64e421bd2a04a396b29aaa702518caa8d97309ea2e92877135af6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:52 compute-0 podman[374571]: 2025-10-02 13:02:52.58275501 +0000 UTC m=+0.135477688 container init bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:02:52 compute-0 podman[374571]: 2025-10-02 13:02:52.590224614 +0000 UTC m=+0.142947272 container start bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:02:52 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [NOTICE]   (374590) : New worker (374592) forked
Oct 02 13:02:52 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [NOTICE]   (374590) : Loading success.
Oct 02 13:02:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:52.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.644 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410172.6442816, b6d31622-c503-4790-b277-89de495fb364 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.644 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] VM Started (Lifecycle Event)
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.664 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.667 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410172.644445, b6d31622-c503-4790-b277-89de495fb364 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.667 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] VM Paused (Lifecycle Event)
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.682 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.685 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:52 compute-0 nova_compute[256940]: 2025-10-02 13:02:52.708 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3837070089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:54 compute-0 ceph-mon[73668]: pgmap v2696: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 228 op/s
Oct 02 13:02:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/771358143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Oct 02 13:02:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Oct 02 13:02:54 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Oct 02 13:02:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.7 MiB/s wr, 196 op/s
Oct 02 13:02:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.764 2 DEBUG nova.compute.manager [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.764 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.765 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.765 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.765 2 DEBUG nova.compute.manager [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Processing event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.766 2 DEBUG nova.compute.manager [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.766 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.766 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.766 2 DEBUG oslo_concurrency.lockutils [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.766 2 DEBUG nova.compute.manager [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] No waiting events found dispatching network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.767 2 WARNING nova.compute.manager [req-2565b3f4-42f4-4df9-8152-4a37de7c5f01 req-116d7df3-8427-46b3-a451-e47f8746a243 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received unexpected event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff for instance with vm_state building and task_state spawning.
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.767 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.771 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410174.7708519, b6d31622-c503-4790-b277-89de495fb364 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.771 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] VM Resumed (Lifecycle Event)
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.773 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.777 2 INFO nova.virt.libvirt.driver [-] [instance: b6d31622-c503-4790-b277-89de495fb364] Instance spawned successfully.
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.778 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.798 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.803 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.806 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.806 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.806 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.807 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.807 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.807 2 DEBUG nova.virt.libvirt.driver [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.844 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.889 2 INFO nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Took 9.39 seconds to spawn the instance on the hypervisor.
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.890 2 DEBUG nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.956 2 INFO nova.compute.manager [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Took 10.48 seconds to build instance.
Oct 02 13:02:54 compute-0 nova_compute[256940]: 2025-10-02 13:02:54.974 2 DEBUG oslo_concurrency.lockutils [None req-2eff1218-06fd-403e-9bc7-297f04ab04c1 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:54.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:55 compute-0 ceph-mon[73668]: osdmap e353: 3 total, 3 up, 3 in
Oct 02 13:02:55 compute-0 ceph-mon[73668]: pgmap v2698: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.7 MiB/s wr, 196 op/s
Oct 02 13:02:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Oct 02 13:02:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:57 compute-0 nova_compute[256940]: 2025-10-02 13:02:57.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:57 compute-0 ceph-mon[73668]: pgmap v2699: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 422 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 208 op/s
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:02:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:02:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:02:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:02:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1783475136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:02:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.045378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179045404, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 446, "num_deletes": 257, "total_data_size": 366572, "memory_usage": 376400, "flush_reason": "Manual Compaction"}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179049848, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 363267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60193, "largest_seqno": 60638, "table_properties": {"data_size": 360627, "index_size": 675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6460, "raw_average_key_size": 18, "raw_value_size": 355171, "raw_average_value_size": 1029, "num_data_blocks": 29, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410164, "oldest_key_time": 1759410164, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 4522 microseconds, and 1393 cpu microseconds.
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.049899) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 363267 bytes OK
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.049915) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.050870) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.050882) EVENT_LOG_v1 {"time_micros": 1759410179050878, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.050893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 363829, prev total WAL file size 363829, number of live WAL files 2.
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.051365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(354KB)], [134(11MB)]
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179051399, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11962177, "oldest_snapshot_seqno": -1}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8704 keys, 11812660 bytes, temperature: kUnknown
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179123354, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11812660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11756058, "index_size": 33769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 229080, "raw_average_key_size": 26, "raw_value_size": 11602896, "raw_average_value_size": 1333, "num_data_blocks": 1295, "num_entries": 8704, "num_filter_entries": 8704, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.123596) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11812660 bytes
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.124966) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.1 rd, 164.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(65.4) write-amplify(32.5) OK, records in: 9234, records dropped: 530 output_compression: NoCompression
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.124982) EVENT_LOG_v1 {"time_micros": 1759410179124974, "job": 82, "event": "compaction_finished", "compaction_time_micros": 72039, "compaction_time_cpu_micros": 30238, "output_level": 6, "num_output_files": 1, "total_output_size": 11812660, "num_input_records": 9234, "num_output_records": 8704, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179125255, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179127047, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.051263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.127197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.127205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.127206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.127209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:02:59.127211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-0 nova_compute[256940]: 2025-10-02 13:02:59.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:59 compute-0 ceph-mon[73668]: pgmap v2700: 305 pgs: 305 active+clean; 422 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 208 op/s
Oct 02 13:03:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 435 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 269 op/s
Oct 02 13:03:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:00 compute-0 ceph-mon[73668]: pgmap v2701: 305 pgs: 305 active+clean; 435 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 269 op/s
Oct 02 13:03:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:01 compute-0 nova_compute[256940]: 2025-10-02 13:03:01.269 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-changed-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:01 compute-0 nova_compute[256940]: 2025-10-02 13:03:01.270 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Refreshing instance network info cache due to event network-changed-f52e246b-dedc-40f5-9f35-cb545dae12ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:01 compute-0 nova_compute[256940]: 2025-10-02 13:03:01.270 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:01 compute-0 nova_compute[256940]: 2025-10-02 13:03:01.270 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:01 compute-0 nova_compute[256940]: 2025-10-02 13:03:01.271 2 DEBUG nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Refreshing network info cache for port f52e246b-dedc-40f5-9f35-cb545dae12ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 364 op/s
Oct 02 13:03:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:02 compute-0 nova_compute[256940]: 2025-10-02 13:03:02.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:03 compute-0 nova_compute[256940]: 2025-10-02 13:03:03.023 2 DEBUG nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updated VIF entry in instance network info cache for port f52e246b-dedc-40f5-9f35-cb545dae12ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:03 compute-0 nova_compute[256940]: 2025-10-02 13:03:03.023 2 DEBUG nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updating instance_info_cache with network_info: [{"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:03 compute-0 nova_compute[256940]: 2025-10-02 13:03:03.078 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b6d31622-c503-4790-b277-89de495fb364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:03 compute-0 ceph-mon[73668]: pgmap v2702: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 364 op/s
Oct 02 13:03:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 362 op/s
Oct 02 13:03:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3110993335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:04.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:04 compute-0 nova_compute[256940]: 2025-10-02 13:03:04.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:05 compute-0 ceph-mon[73668]: pgmap v2703: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 362 op/s
Oct 02 13:03:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/425658778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.0 MiB/s wr, 319 op/s
Oct 02 13:03:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:07.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:07 compute-0 ceph-mon[73668]: pgmap v2704: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.0 MiB/s wr, 319 op/s
Oct 02 13:03:07 compute-0 ovn_controller[148123]: 2025-10-02T13:03:07Z|00902|binding|INFO|Releasing lport b36c30bb-b987-4a8c-9d02-d8102708b07d from this chassis (sb_readonly=0)
Oct 02 13:03:07 compute-0 nova_compute[256940]: 2025-10-02 13:03:07.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:07 compute-0 nova_compute[256940]: 2025-10-02 13:03:07.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Oct 02 13:03:08 compute-0 podman[374609]: 2025-10-02 13:03:08.394075184 +0000 UTC m=+0.056714854 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 02 13:03:08 compute-0 podman[374610]: 2025-10-02 13:03:08.396303401 +0000 UTC m=+0.057484253 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:03:08 compute-0 ovn_controller[148123]: 2025-10-02T13:03:08Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:23:fa 10.100.0.5
Oct 02 13:03:08 compute-0 ovn_controller[148123]: 2025-10-02T13:03:08Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:23:fa 10.100.0.5
Oct 02 13:03:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:08.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:09 compute-0 ceph-mon[73668]: pgmap v2705: 305 pgs: 305 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Oct 02 13:03:09 compute-0 nova_compute[256940]: 2025-10-02 13:03:09.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:10 compute-0 sudo[374651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:10 compute-0 sudo[374651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:10 compute-0 sudo[374651]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 451 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 309 op/s
Oct 02 13:03:10 compute-0 sudo[374676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:10 compute-0 sudo[374676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:10 compute-0 sudo[374676]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:11 compute-0 ceph-mon[73668]: pgmap v2706: 305 pgs: 305 active+clean; 451 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 309 op/s
Oct 02 13:03:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.4 MiB/s wr, 350 op/s
Oct 02 13:03:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:12.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:12 compute-0 nova_compute[256940]: 2025-10-02 13:03:12.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:12 compute-0 nova_compute[256940]: 2025-10-02 13:03:12.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:13 compute-0 ceph-mon[73668]: pgmap v2707: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.4 MiB/s wr, 350 op/s
Oct 02 13:03:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 199 op/s
Oct 02 13:03:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:14.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:14 compute-0 nova_compute[256940]: 2025-10-02 13:03:14.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.595 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.596 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.621 2 DEBUG nova.objects.instance [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'flavor' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.664 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:15 compute-0 ceph-mon[73668]: pgmap v2708: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 199 op/s
Oct 02 13:03:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2532067342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.939 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.941 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:15 compute-0 nova_compute[256940]: 2025-10-02 13:03:15.941 2 INFO nova.compute.manager [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Attaching volume 4318daa3-7890-4295-9085-fac3e8976380 to /dev/vdb
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.129 2 DEBUG os_brick.utils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.130 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.142 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.143 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[78dd5f44-8e90-41ca-a57b-edf94dc90aac]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.144 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.154 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.155 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[d1870bda-9a34-489c-beb7-6185de5ca5b4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.157 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 219 op/s
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.167 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.168 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4babfe8a-cade-4dcb-a7db-4b32c4baecae]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.170 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6bcc9f-3b72-44f2-8190-94c7223f1365]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.171 2 DEBUG oslo_concurrency.processutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.210 2 DEBUG oslo_concurrency.processutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.213 2 DEBUG os_brick.initiator.connectors.lightos [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.213 2 DEBUG os_brick.initiator.connectors.lightos [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.214 2 DEBUG os_brick.initiator.connectors.lightos [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.214 2 DEBUG os_brick.utils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:03:16 compute-0 nova_compute[256940]: 2025-10-02 13:03:16.214 2 DEBUG nova.virt.block_device [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Updating existing volume attachment record: 08cb844c-29a1-4f83-81ac-285a7ccb50f3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:03:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:16.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1071118977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:17.191 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:17 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:17.194 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.331 2 DEBUG os_brick.encryptors [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c869a339-7d8b-47df-83d3-eb6943847d77', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4318daa3-7890-4295-9085-fac3e8976380', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4318daa3-7890-4295-9085-fac3e8976380', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b6d31622-c503-4790-b277-89de495fb364', 'attached_at': '', 'detached_at': '', 'volume_id': '4318daa3-7890-4295-9085-fac3e8976380', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.339 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.370 2 DEBUG barbicanclient.v1.secrets [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c869a339-7d8b-47df-83d3-eb6943847d77 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.371 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.395 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.396 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.437 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.438 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.486 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.486 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.516 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.517 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.567 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.568 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.600 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.601 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.624 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.625 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.643 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.644 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.671 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.672 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.694 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.694 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.717 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.718 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 ceph-mon[73668]: pgmap v2709: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 219 op/s
Oct 02 13:03:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2226714011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.742 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.743 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.809 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.810 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.849 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.850 2 INFO barbicanclient.base [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Calculated Secrets uuid ref: secrets/c869a339-7d8b-47df-83d3-eb6943847d77
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.880 2 DEBUG barbicanclient.client [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.881 2 DEBUG nova.virt.libvirt.host [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <usage type="volume">
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <volume>4318daa3-7890-4295-9085-fac3e8976380</volume>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   </usage>
Oct 02 13:03:17 compute-0 nova_compute[256940]: </secret>
Oct 02 13:03:17 compute-0 nova_compute[256940]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.891 2 DEBUG nova.objects.instance [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'flavor' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.916 2 DEBUG nova.virt.libvirt.driver [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Attempting to attach volume 4318daa3-7890-4295-9085-fac3e8976380 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:03:17 compute-0 nova_compute[256940]: 2025-10-02 13:03:17.920 2 DEBUG nova.virt.libvirt.guest [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4318daa3-7890-4295-9085-fac3e8976380">
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   </source>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <serial>4318daa3-7890-4295-9085-fac3e8976380</serial>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   <encryption format="luks">
Oct 02 13:03:17 compute-0 nova_compute[256940]:     <secret type="passphrase" uuid="62fdff1f-8bfe-4c81-be8f-c9dbc8715e4f"/>
Oct 02 13:03:17 compute-0 nova_compute[256940]:   </encryption>
Oct 02 13:03:17 compute-0 nova_compute[256940]: </disk>
Oct 02 13:03:17 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:03:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Oct 02 13:03:18 compute-0 nova_compute[256940]: 2025-10-02 13:03:18.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:18 compute-0 nova_compute[256940]: 2025-10-02 13:03:18.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:03:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:19.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:19 compute-0 nova_compute[256940]: 2025-10-02 13:03:19.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:19 compute-0 ceph-mon[73668]: pgmap v2710: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Oct 02 13:03:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2650273310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 459 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 215 op/s
Oct 02 13:03:20 compute-0 nova_compute[256940]: 2025-10-02 13:03:20.607 2 DEBUG nova.virt.libvirt.driver [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:20 compute-0 nova_compute[256940]: 2025-10-02 13:03:20.607 2 DEBUG nova.virt.libvirt.driver [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:20 compute-0 nova_compute[256940]: 2025-10-02 13:03:20.608 2 DEBUG nova.virt.libvirt.driver [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:20 compute-0 nova_compute[256940]: 2025-10-02 13:03:20.608 2 DEBUG nova.virt.libvirt.driver [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] No VIF found with MAC fa:16:3e:8e:23:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:03:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4166443559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.242 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.242 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.242 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.243 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.279 2 DEBUG oslo_concurrency.lockutils [None req-0c4e9417-355b-4c8f-80a0-283166997dcf cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159773655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.689 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.770 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.771 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.771 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:21 compute-0 podman[374758]: 2025-10-02 13:03:21.790233155 +0000 UTC m=+0.058928611 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 13:03:21 compute-0 podman[374759]: 2025-10-02 13:03:21.819507665 +0000 UTC m=+0.086588649 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.933 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.934 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4031MB free_disk=20.795333862304688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.934 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:21 compute-0 nova_compute[256940]: 2025-10-02 13:03:21.935 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.053 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance b6d31622-c503-4790-b277-89de495fb364 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.053 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.053 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:03:22 compute-0 ceph-mon[73668]: pgmap v2711: 305 pgs: 305 active+clean; 459 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 215 op/s
Oct 02 13:03:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1206424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/159773655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.100 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.398 2 DEBUG oslo_concurrency.lockutils [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.400 2 DEBUG oslo_concurrency.lockutils [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.414 2 INFO nova.compute.manager [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Detaching volume 4318daa3-7890-4295-9085-fac3e8976380
Oct 02 13:03:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732935419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.572 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.579 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.593 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.612 2 INFO nova.virt.block_device [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Attempting to driver detach volume 4318daa3-7890-4295-9085-fac3e8976380 from mountpoint /dev/vdb
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.615 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.616 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.733 2 DEBUG os_brick.encryptors [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c869a339-7d8b-47df-83d3-eb6943847d77', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4318daa3-7890-4295-9085-fac3e8976380', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4318daa3-7890-4295-9085-fac3e8976380', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b6d31622-c503-4790-b277-89de495fb364', 'attached_at': '', 'detached_at': '', 'volume_id': '4318daa3-7890-4295-9085-fac3e8976380', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.740 2 DEBUG nova.virt.libvirt.driver [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Attempting to detach device vdb from instance b6d31622-c503-4790-b277-89de495fb364 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.741 2 DEBUG nova.virt.libvirt.guest [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4318daa3-7890-4295-9085-fac3e8976380">
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   </source>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <serial>4318daa3-7890-4295-9085-fac3e8976380</serial>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <encryption format="luks">
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <secret type="passphrase" uuid="62fdff1f-8bfe-4c81-be8f-c9dbc8715e4f"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   </encryption>
Oct 02 13:03:22 compute-0 nova_compute[256940]: </disk>
Oct 02 13:03:22 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.748 2 INFO nova.virt.libvirt.driver [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Successfully detached device vdb from instance b6d31622-c503-4790-b277-89de495fb364 from the persistent domain config.
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.748 2 DEBUG nova.virt.libvirt.driver [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b6d31622-c503-4790-b277-89de495fb364 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.749 2 DEBUG nova.virt.libvirt.guest [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-4318daa3-7890-4295-9085-fac3e8976380">
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   </source>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <serial>4318daa3-7890-4295-9085-fac3e8976380</serial>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   <encryption format="luks">
Oct 02 13:03:22 compute-0 nova_compute[256940]:     <secret type="passphrase" uuid="62fdff1f-8bfe-4c81-be8f-c9dbc8715e4f"/>
Oct 02 13:03:22 compute-0 nova_compute[256940]:   </encryption>
Oct 02 13:03:22 compute-0 nova_compute[256940]: </disk>
Oct 02 13:03:22 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.861 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759410202.8612356, b6d31622-c503-4790-b277-89de495fb364 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.863 2 DEBUG nova.virt.libvirt.driver [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b6d31622-c503-4790-b277-89de495fb364 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:03:22 compute-0 nova_compute[256940]: 2025-10-02 13:03:22.865 2 INFO nova.virt.libvirt.driver [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Successfully detached device vdb from instance b6d31622-c503-4790-b277-89de495fb364 from the live domain config.
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.031 2 DEBUG nova.objects.instance [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'flavor' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.090 2 DEBUG oslo_concurrency.lockutils [None req-92440e41-161b-4934-9c5f-c7091aa43130 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:23 compute-0 ceph-mon[73668]: pgmap v2712: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Oct 02 13:03:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2732935419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2882987751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.972 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.972 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.973 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.973 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.973 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.974 2 INFO nova.compute.manager [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Terminating instance
Oct 02 13:03:23 compute-0 nova_compute[256940]: 2025-10-02 13:03:23.975 2 DEBUG nova.compute.manager [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:03:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 424 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 13:03:24 compute-0 kernel: tapf52e246b-de (unregistering): left promiscuous mode
Oct 02 13:03:24 compute-0 NetworkManager[44981]: <info>  [1759410204.2870] device (tapf52e246b-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 ovn_controller[148123]: 2025-10-02T13:03:24Z|00903|binding|INFO|Releasing lport f52e246b-dedc-40f5-9f35-cb545dae12ff from this chassis (sb_readonly=0)
Oct 02 13:03:24 compute-0 ovn_controller[148123]: 2025-10-02T13:03:24Z|00904|binding|INFO|Setting lport f52e246b-dedc-40f5-9f35-cb545dae12ff down in Southbound
Oct 02 13:03:24 compute-0 ovn_controller[148123]: 2025-10-02T13:03:24Z|00905|binding|INFO|Removing iface tapf52e246b-de ovn-installed in OVS
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.315 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:23:fa 10.100.0.5'], port_security=['fa:16:3e:8e:23:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b6d31622-c503-4790-b277-89de495fb364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e8f1f6ceb7b40ee9fa002d881b59a44', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28f71f51-7e3f-468a-a8c9-e3ae5802269f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56e401f3-3346-4f65-a0fd-85677f82d9d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f52e246b-dedc-40f5-9f35-cb545dae12ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.316 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f52e246b-dedc-40f5-9f35-cb545dae12ff in datapath c4c41ca2-0971-4ce7-af35-cd068de87e9c unbound from our chassis
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.318 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4c41ca2-0971-4ce7-af35-cd068de87e9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.320 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ebda3b66-35d1-4870-81c9-5917a8beb318]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.321 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c namespace which is not needed anymore
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Oct 02 13:03:24 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b1.scope: Consumed 16.103s CPU time.
Oct 02 13:03:24 compute-0 systemd-machined[210927]: Machine qemu-92-instance-000000b1 terminated.
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.422 2 INFO nova.virt.libvirt.driver [-] [instance: b6d31622-c503-4790-b277-89de495fb364] Instance destroyed successfully.
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.422 2 DEBUG nova.objects.instance [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lazy-loading 'resources' on Instance uuid b6d31622-c503-4790-b277-89de495fb364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.437 2 DEBUG nova.virt.libvirt.vif [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-951393784',display_name='tempest-TestEncryptedCinderVolumes-server-951393784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-951393784',id=177,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSoyLWF2+XCS8bUj82a34vxT3lEwL71linifZ2CDd1KovYKNp572sm9cy/t+fe6uihYxittn+S+OmoVHbrvPoN7yrGbVHQxC9i/CaWgYYn4OKOnnBk12mw1R+9EcHw90A==',key_name='tempest-keypair-1469630940',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e8f1f6ceb7b40ee9fa002d881b59a44',ramdisk_id='',reservation_id='r-09ej7bhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1359000024',owner_user_name='tempest-TestEncryptedCinderVolumes-1359000024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cb1643b3981d49ceaebfabe1577596fd',uuid=b6d31622-c503-4790-b277-89de495fb364,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.437 2 DEBUG nova.network.os_vif_util [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converting VIF {"id": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "address": "fa:16:3e:8e:23:fa", "network": {"id": "c4c41ca2-0971-4ce7-af35-cd068de87e9c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-680340753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e8f1f6ceb7b40ee9fa002d881b59a44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52e246b-de", "ovs_interfaceid": "f52e246b-dedc-40f5-9f35-cb545dae12ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.438 2 DEBUG nova.network.os_vif_util [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.438 2 DEBUG os_vif [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf52e246b-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.449 2 INFO os_vif [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:23:fa,bridge_name='br-int',has_traffic_filtering=True,id=f52e246b-dedc-40f5-9f35-cb545dae12ff,network=Network(c4c41ca2-0971-4ce7-af35-cd068de87e9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52e246b-de')
Oct 02 13:03:24 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [NOTICE]   (374590) : haproxy version is 2.8.14-c23fe91
Oct 02 13:03:24 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [NOTICE]   (374590) : path to executable is /usr/sbin/haproxy
Oct 02 13:03:24 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [WARNING]  (374590) : Exiting Master process...
Oct 02 13:03:24 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [ALERT]    (374590) : Current worker (374592) exited with code 143 (Terminated)
Oct 02 13:03:24 compute-0 neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c[374586]: [WARNING]  (374590) : All workers exited. Exiting... (0)
Oct 02 13:03:24 compute-0 systemd[1]: libpod-bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d.scope: Deactivated successfully.
Oct 02 13:03:24 compute-0 podman[374853]: 2025-10-02 13:03:24.47793362 +0000 UTC m=+0.055693287 container died bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d-userdata-shm.mount: Deactivated successfully.
Oct 02 13:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0907a4f8faf64e421bd2a04a396b29aaa702518caa8d97309ea2e92877135af6-merged.mount: Deactivated successfully.
Oct 02 13:03:24 compute-0 podman[374853]: 2025-10-02 13:03:24.522873086 +0000 UTC m=+0.100632743 container cleanup bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:03:24 compute-0 systemd[1]: libpod-conmon-bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d.scope: Deactivated successfully.
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.563 2 DEBUG nova.compute.manager [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-unplugged-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.563 2 DEBUG oslo_concurrency.lockutils [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.564 2 DEBUG oslo_concurrency.lockutils [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.564 2 DEBUG oslo_concurrency.lockutils [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.564 2 DEBUG nova.compute.manager [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] No waiting events found dispatching network-vif-unplugged-f52e246b-dedc-40f5-9f35-cb545dae12ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.564 2 DEBUG nova.compute.manager [req-0fcc12e4-2421-4ce7-9d2d-a21dc61bb81f req-06929103-2ab0-43ca-a9a2-acb53ca187a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-unplugged-f52e246b-dedc-40f5-9f35-cb545dae12ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:03:24 compute-0 podman[374906]: 2025-10-02 13:03:24.587941585 +0000 UTC m=+0.044421624 container remove bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.593 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6454d0e8-82e3-46df-941d-a3a163974ae2]: (4, ('Thu Oct  2 01:03:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c (bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d)\nbb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d\nThu Oct  2 01:03:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c (bb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d)\nbb1df8193e84bf1009926c0f1232ac7ceb3747d1bda417149629d58628b0d89d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.594 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cdd75e-c3e1-402c-a846-deedacb3f9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.595 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4c41ca2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 kernel: tapc4c41ca2-00: left promiscuous mode
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.605 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d16baa3c-a2c1-4230-a75b-9745485d7747]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 nova_compute[256940]: 2025-10-02 13:03:24.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.652 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4810f93c-37cb-4e83-99ca-0b074a044f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.654 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[656118b2-a24a-481e-8294-b6fb901a9ed2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.670 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6de21313-91b8-4431-a5b1-7e821dcf6385]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806734, 'reachable_time': 30749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374921, 'error': None, 'target': 'ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4c41ca2\x2d0971\x2d4ce7\x2daf35\x2dcd068de87e9c.mount: Deactivated successfully.
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.675 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4c41ca2-0971-4ce7-af35-cd068de87e9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:03:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:24.675 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[55c64a60-a045-495d-be26-b45a2776d3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:25 compute-0 ceph-mon[73668]: pgmap v2713: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 424 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 13:03:25 compute-0 nova_compute[256940]: 2025-10-02 13:03:25.617 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.0 MiB/s wr, 134 op/s
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:26.198 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.322 2 INFO nova.virt.libvirt.driver [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Deleting instance files /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364_del
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.323 2 INFO nova.virt.libvirt.driver [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Deletion of /var/lib/nova/instances/b6d31622-c503-4790-b277-89de495fb364_del complete
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.377 2 INFO nova.compute.manager [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Took 2.40 seconds to destroy the instance on the hypervisor.
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.377 2 DEBUG oslo.service.loopingcall [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.377 2 DEBUG nova.compute.manager [-] [instance: b6d31622-c503-4790-b277-89de495fb364] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.378 2 DEBUG nova.network.neutron [-] [instance: b6d31622-c503-4790-b277-89de495fb364] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:26.499 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:26.499 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:03:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.650 2 DEBUG nova.compute.manager [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.651 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6d31622-c503-4790-b277-89de495fb364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.651 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.652 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.652 2 DEBUG nova.compute.manager [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] No waiting events found dispatching network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:26 compute-0 nova_compute[256940]: 2025-10-02 13:03:26.652 2 WARNING nova.compute.manager [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received unexpected event network-vif-plugged-f52e246b-dedc-40f5-9f35-cb545dae12ff for instance with vm_state active and task_state deleting.
Oct 02 13:03:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:26.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:27 compute-0 ceph-mon[73668]: pgmap v2714: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.0 MiB/s wr, 134 op/s
Oct 02 13:03:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3195859859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4201269597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:27 compute-0 nova_compute[256940]: 2025-10-02 13:03:27.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.438 2 DEBUG nova.network.neutron [-] [instance: b6d31622-c503-4790-b277-89de495fb364] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.457 2 INFO nova.compute.manager [-] [instance: b6d31622-c503-4790-b277-89de495fb364] Took 2.08 seconds to deallocate network for instance.
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.510 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.511 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:28 compute-0 nova_compute[256940]: 2025-10-02 13:03:28.603 2 DEBUG oslo_concurrency.processutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:28.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:03:28
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'images', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 02 13:03:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:03:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987500234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.051 2 DEBUG oslo_concurrency.processutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.059 2 DEBUG nova.compute.provider_tree [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.076 2 DEBUG nova.scheduler.client.report [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.114 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.135 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6d31622-c503-4790-b277-89de495fb364] Received event network-vif-deleted-f52e246b-dedc-40f5-9f35-cb545dae12ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:29 compute-0 ceph-mon[73668]: pgmap v2715: 305 pgs: 305 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 02 13:03:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2987500234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:03:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:03:29 compute-0 nova_compute[256940]: 2025-10-02 13:03:29.949 2 INFO nova.scheduler.client.report [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Deleted allocations for instance b6d31622-c503-4790-b277-89de495fb364
Oct 02 13:03:30 compute-0 nova_compute[256940]: 2025-10-02 13:03:30.039 2 DEBUG oslo_concurrency.lockutils [None req-6ae37499-199e-4cb1-b99a-7ef096133ae4 cb1643b3981d49ceaebfabe1577596fd 6e8f1f6ceb7b40ee9fa002d881b59a44 - - default default] Lock "b6d31622-c503-4790-b277-89de495fb364" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Oct 02 13:03:30 compute-0 sudo[374948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:30 compute-0 sudo[374948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:30 compute-0 sudo[374948]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:30 compute-0 sudo[374973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:30 compute-0 sudo[374973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:30 compute-0 sudo[374973]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1405131134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:31 compute-0 ceph-mon[73668]: pgmap v2716: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Oct 02 13:03:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 200 op/s
Oct 02 13:03:32 compute-0 nova_compute[256940]: 2025-10-02 13:03:32.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:32 compute-0 nova_compute[256940]: 2025-10-02 13:03:32.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:03:32 compute-0 nova_compute[256940]: 2025-10-02 13:03:32.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:03:32 compute-0 nova_compute[256940]: 2025-10-02 13:03:32.231 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:03:32 compute-0 nova_compute[256940]: 2025-10-02 13:03:32.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:33 compute-0 ceph-mon[73668]: pgmap v2717: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 200 op/s
Oct 02 13:03:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 02 13:03:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:34 compute-0 nova_compute[256940]: 2025-10-02 13:03:34.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:35 compute-0 ceph-mon[73668]: pgmap v2718: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 02 13:03:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 02 13:03:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-0 nova_compute[256940]: 2025-10-02 13:03:37.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-0 nova_compute[256940]: 2025-10-02 13:03:37.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:37 compute-0 nova_compute[256940]: 2025-10-02 13:03:37.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:37 compute-0 ceph-mon[73668]: pgmap v2719: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 02 13:03:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4018230026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 177 op/s
Oct 02 13:03:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:38.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:03:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 60K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1653 writes, 7574 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 10.67 MB, 0.02 MB/s
                                           Interval WAL: 1653 writes, 1653 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     41.2      1.97              0.24        41    0.048       0      0       0.0       0.0
                                             L6      1/0   11.27 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9     77.2     65.7      6.08              1.16        40    0.152    270K    21K       0.0       0.0
                                            Sum      1/0   11.27 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     58.3     59.7      8.05              1.40        81    0.099    270K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    132.8    134.2      0.69              0.25        14    0.049     63K   3670       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     77.2     65.7      6.08              1.16        40    0.152    270K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.3      1.96              0.24        40    0.049       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.079, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.47 GB write, 0.10 MB/s write, 0.46 GB read, 0.10 MB/s read, 8.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 49.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000319 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2922,47.93 MB,15.766%) FilterBlock(82,758.30 KB,0.243594%) IndexBlock(82,1.27 MB,0.416936%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:03:39 compute-0 podman[375005]: 2025-10-02 13:03:39.395877165 +0000 UTC m=+0.063799748 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:03:39 compute-0 podman[375004]: 2025-10-02 13:03:39.421348916 +0000 UTC m=+0.092560504 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 02 13:03:39 compute-0 nova_compute[256940]: 2025-10-02 13:03:39.421 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410204.4199054, b6d31622-c503-4790-b277-89de495fb364 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:39 compute-0 nova_compute[256940]: 2025-10-02 13:03:39.421 2 INFO nova.compute.manager [-] [instance: b6d31622-c503-4790-b277-89de495fb364] VM Stopped (Lifecycle Event)
Oct 02 13:03:39 compute-0 nova_compute[256940]: 2025-10-02 13:03:39.448 2 DEBUG nova.compute.manager [None req-5833f910-5612-4e22-9234-a6b40d52f79d - - - - - -] [instance: b6d31622-c503-4790-b277-89de495fb364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:39 compute-0 nova_compute[256940]: 2025-10-02 13:03:39.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:39 compute-0 ceph-mon[73668]: pgmap v2720: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 177 op/s
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 152 op/s
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031732374255536945 of space, bias 1.0, pg target 0.9519712276661083 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:03:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:03:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:40.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-0 ceph-mon[73668]: pgmap v2721: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 152 op/s
Oct 02 13:03:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 10 KiB/s wr, 113 op/s
Oct 02 13:03:42 compute-0 nova_compute[256940]: 2025-10-02 13:03:42.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:42.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:43 compute-0 ceph-mon[73668]: pgmap v2722: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 10 KiB/s wr, 113 op/s
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.072061) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223072137, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 636, "num_deletes": 251, "total_data_size": 754375, "memory_usage": 766168, "flush_reason": "Manual Compaction"}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 02 13:03:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223081172, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 745488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60640, "largest_seqno": 61274, "table_properties": {"data_size": 742207, "index_size": 1188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7812, "raw_average_key_size": 19, "raw_value_size": 735589, "raw_average_value_size": 1816, "num_data_blocks": 53, "num_entries": 405, "num_filter_entries": 405, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410179, "oldest_key_time": 1759410179, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 9165 microseconds, and 2946 cpu microseconds.
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.081221) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 745488 bytes OK
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.081250) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.082414) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.082431) EVENT_LOG_v1 {"time_micros": 1759410223082425, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.082459) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 751028, prev total WAL file size 751028, number of live WAL files 2.
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.082954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(728KB)], [137(11MB)]
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223082992, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12558148, "oldest_snapshot_seqno": -1}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8599 keys, 10707684 bytes, temperature: kUnknown
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223144313, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10707684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10652740, "index_size": 32367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 227638, "raw_average_key_size": 26, "raw_value_size": 10502280, "raw_average_value_size": 1221, "num_data_blocks": 1230, "num_entries": 8599, "num_filter_entries": 8599, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.144576) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10707684 bytes
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.145841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.6 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(31.2) write-amplify(14.4) OK, records in: 9109, records dropped: 510 output_compression: NoCompression
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.145864) EVENT_LOG_v1 {"time_micros": 1759410223145853, "job": 84, "event": "compaction_finished", "compaction_time_micros": 61392, "compaction_time_cpu_micros": 26937, "output_level": 6, "num_output_files": 1, "total_output_size": 10707684, "num_input_records": 9109, "num_output_records": 8599, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223146156, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223148787, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.082866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.148864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.148870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.148872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.148873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:03:43.148875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 KiB/s wr, 70 op/s
Oct 02 13:03:44 compute-0 nova_compute[256940]: 2025-10-02 13:03:44.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:45 compute-0 ceph-mon[73668]: pgmap v2723: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 KiB/s wr, 70 op/s
Oct 02 13:03:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3787610328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Oct 02 13:03:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:46.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:47 compute-0 sudo[375047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:47 compute-0 sudo[375047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-0 sudo[375047]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-0 sudo[375072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:03:47 compute-0 sudo[375072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-0 sudo[375072]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-0 ceph-mon[73668]: pgmap v2724: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Oct 02 13:03:47 compute-0 sudo[375097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:47 compute-0 sudo[375097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-0 sudo[375097]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-0 sudo[375122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:03:47 compute-0 sudo[375122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-0 nova_compute[256940]: 2025-10-02 13:03:47.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:47 compute-0 sudo[375122]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:03:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:03:47 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:03:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:03:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 95da79e8-7533-4846-8596-9d4729c67de0 does not exist
Oct 02 13:03:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4b4dd172-683d-44eb-b6cc-c6d5cb40ab9a does not exist
Oct 02 13:03:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1269041f-b7cd-41c8-a1cd-359490b4af2e does not exist
Oct 02 13:03:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:03:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:03:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:03:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:48 compute-0 sudo[375179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:48 compute-0 sudo[375179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:48 compute-0 sudo[375179]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:48 compute-0 sudo[375204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:03:48 compute-0 sudo[375204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:48 compute-0 sudo[375204]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:03:48 compute-0 sudo[375229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:48 compute-0 sudo[375229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:48 compute-0 sudo[375229]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:48 compute-0 sudo[375254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:03:48 compute-0 sudo[375254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:03:48 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.620546304 +0000 UTC m=+0.043407567 container create 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:03:48 compute-0 systemd[1]: Started libpod-conmon-06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686.scope.
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.598007269 +0000 UTC m=+0.020868552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.724423731 +0000 UTC m=+0.147285014 container init 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.734257826 +0000 UTC m=+0.157119099 container start 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.737755177 +0000 UTC m=+0.160616470 container attach 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:03:48 compute-0 serene_blackwell[375335]: 167 167
Oct 02 13:03:48 compute-0 systemd[1]: libpod-06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686.scope: Deactivated successfully.
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.745071767 +0000 UTC m=+0.167933060 container died 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f063a214435e22fdfb78d0efb78eb8e9a12ef6ba70700ee05c809009a9210313-merged.mount: Deactivated successfully.
Oct 02 13:03:48 compute-0 podman[375317]: 2025-10-02 13:03:48.785743363 +0000 UTC m=+0.208604626 container remove 06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:03:48 compute-0 systemd[1]: libpod-conmon-06df25aa096d5a65f1be93613c7c84192848f7d022005deb4921d80d0b8e8686.scope: Deactivated successfully.
Oct 02 13:03:48 compute-0 podman[375360]: 2025-10-02 13:03:48.958038525 +0000 UTC m=+0.049007503 container create 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:03:49 compute-0 systemd[1]: Started libpod-conmon-5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da.scope.
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:48.937895822 +0000 UTC m=+0.028864820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:49.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:49.094453975 +0000 UTC m=+0.185422973 container init 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:49.10389517 +0000 UTC m=+0.194864148 container start 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:49.113637493 +0000 UTC m=+0.204606501 container attach 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:49 compute-0 ceph-mon[73668]: pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:03:49 compute-0 nova_compute[256940]: 2025-10-02 13:03:49.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:49 compute-0 elegant_bouman[375376]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:03:49 compute-0 elegant_bouman[375376]: --> relative data size: 1.0
Oct 02 13:03:49 compute-0 elegant_bouman[375376]: --> All data devices are unavailable
Oct 02 13:03:49 compute-0 systemd[1]: libpod-5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da.scope: Deactivated successfully.
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:49.926812141 +0000 UTC m=+1.017781109 container died 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6801b078f69e0431a9e24b22a50924eadd845ba794ec1eb9f91c940d6fc92a63-merged.mount: Deactivated successfully.
Oct 02 13:03:49 compute-0 podman[375360]: 2025-10-02 13:03:49.978607416 +0000 UTC m=+1.069576394 container remove 5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:03:49 compute-0 systemd[1]: libpod-conmon-5107caec9e964b34319cf444cd79da2a0a11681545b1fcd4248d9422042d13da.scope: Deactivated successfully.
Oct 02 13:03:50 compute-0 sudo[375254]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 sudo[375403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:50 compute-0 sudo[375403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375403]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 sudo[375428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:03:50 compute-0 sudo[375428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375428]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 02 13:03:50 compute-0 sudo[375453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:50 compute-0 sudo[375453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375453]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 sudo[375478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:03:50 compute-0 sudo[375478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:50 compute-0 sudo[375503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375503]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 sudo[375543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:50 compute-0 sudo[375543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:50 compute-0 sudo[375543]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:50 compute-0 podman[375594]: 2025-10-02 13:03:50.596550136 +0000 UTC m=+0.054018123 container create 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:03:50 compute-0 systemd[1]: Started libpod-conmon-23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790.scope.
Oct 02 13:03:50 compute-0 podman[375594]: 2025-10-02 13:03:50.568518048 +0000 UTC m=+0.025986115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:50 compute-0 podman[375594]: 2025-10-02 13:03:50.685370372 +0000 UTC m=+0.142838389 container init 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:03:50 compute-0 podman[375594]: 2025-10-02 13:03:50.695161206 +0000 UTC m=+0.152629193 container start 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:03:50 compute-0 podman[375594]: 2025-10-02 13:03:50.698912843 +0000 UTC m=+0.156380840 container attach 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:03:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:50 compute-0 strange_satoshi[375610]: 167 167
Oct 02 13:03:50 compute-0 systemd[1]: libpod-23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790.scope: Deactivated successfully.
Oct 02 13:03:50 compute-0 podman[375616]: 2025-10-02 13:03:50.747306789 +0000 UTC m=+0.029294051 container died 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-65375a0f14532f9f16e0a6de6a8ce5f94d4d7b98f93b181fef23260e4ab90f07-merged.mount: Deactivated successfully.
Oct 02 13:03:50 compute-0 podman[375616]: 2025-10-02 13:03:50.785990714 +0000 UTC m=+0.067977976 container remove 23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:03:50 compute-0 systemd[1]: libpod-conmon-23b727a6d474b108bcfb4a77a5560b15562258521fed9fb327bbc4717f834790.scope: Deactivated successfully.
Oct 02 13:03:50 compute-0 podman[375638]: 2025-10-02 13:03:50.953589644 +0000 UTC m=+0.041787876 container create 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:03:50 compute-0 systemd[1]: Started libpod-conmon-061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a.scope.
Oct 02 13:03:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91a7f772a02e7b9336467952d423f7af8c691568ab71c3be4f153a428613698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91a7f772a02e7b9336467952d423f7af8c691568ab71c3be4f153a428613698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91a7f772a02e7b9336467952d423f7af8c691568ab71c3be4f153a428613698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91a7f772a02e7b9336467952d423f7af8c691568ab71c3be4f153a428613698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:51 compute-0 podman[375638]: 2025-10-02 13:03:51.024467474 +0000 UTC m=+0.112665736 container init 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:03:51 compute-0 podman[375638]: 2025-10-02 13:03:50.934767525 +0000 UTC m=+0.022965787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:51 compute-0 podman[375638]: 2025-10-02 13:03:51.033833647 +0000 UTC m=+0.122031879 container start 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:51 compute-0 podman[375638]: 2025-10-02 13:03:51.03665079 +0000 UTC m=+0.124849022 container attach 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 02 13:03:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:51 compute-0 ceph-mon[73668]: pgmap v2726: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]: {
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:     "1": [
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:         {
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "devices": [
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "/dev/loop3"
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             ],
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "lv_name": "ceph_lv0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "lv_size": "7511998464",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "name": "ceph_lv0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "tags": {
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.cluster_name": "ceph",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.crush_device_class": "",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.encrypted": "0",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.osd_id": "1",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.type": "block",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:                 "ceph.vdo": "0"
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             },
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "type": "block",
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:             "vg_name": "ceph_vg0"
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:         }
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]:     ]
Oct 02 13:03:51 compute-0 recursing_northcutt[375655]: }
Oct 02 13:03:51 compute-0 systemd[1]: libpod-061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a.scope: Deactivated successfully.
Oct 02 13:03:51 compute-0 podman[375638]: 2025-10-02 13:03:51.861469891 +0000 UTC m=+0.949668123 container died 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f91a7f772a02e7b9336467952d423f7af8c691568ab71c3be4f153a428613698-merged.mount: Deactivated successfully.
Oct 02 13:03:52 compute-0 podman[375638]: 2025-10-02 13:03:52.066732709 +0000 UTC m=+1.154930941 container remove 061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:03:52 compute-0 podman[375666]: 2025-10-02 13:03:52.067348995 +0000 UTC m=+0.176167894 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 13:03:52 compute-0 systemd[1]: libpod-conmon-061f1478f1a97ac3b79782d2dc87756a9a26dd088193a65578b4dc0f9e233d4a.scope: Deactivated successfully.
Oct 02 13:03:52 compute-0 sudo[375478]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:52 compute-0 podman[375665]: 2025-10-02 13:03:52.131979142 +0000 UTC m=+0.241109789 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:03:52 compute-0 sudo[375719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:52 compute-0 sudo[375719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:52 compute-0 sudo[375719]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:03:52 compute-0 sudo[375744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:03:52 compute-0 sudo[375744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:52 compute-0 sudo[375744]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:52 compute-0 sudo[375769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:52 compute-0 sudo[375769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:52 compute-0 sudo[375769]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:52 compute-0 sudo[375794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:03:52 compute-0 sudo[375794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:52 compute-0 nova_compute[256940]: 2025-10-02 13:03:52.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.733596218 +0000 UTC m=+0.045395239 container create 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:03:52 compute-0 systemd[1]: Started libpod-conmon-51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7.scope.
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.717845639 +0000 UTC m=+0.029644670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.837742811 +0000 UTC m=+0.149541852 container init 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.846280403 +0000 UTC m=+0.158079434 container start 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.849712552 +0000 UTC m=+0.161511613 container attach 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:03:52 compute-0 suspicious_pike[375877]: 167 167
Oct 02 13:03:52 compute-0 systemd[1]: libpod-51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7.scope: Deactivated successfully.
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.853896171 +0000 UTC m=+0.165695202 container died 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-045bed0ef94452d7a44041d44cf6efc68f8053c3e766c39e6aacb6a316383be1-merged.mount: Deactivated successfully.
Oct 02 13:03:52 compute-0 podman[375859]: 2025-10-02 13:03:52.895656645 +0000 UTC m=+0.207455676 container remove 51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:52 compute-0 systemd[1]: libpod-conmon-51c2b03a94b2a81f047307ed3c7256f122cf2b35ce1e12fbf63fc6b39f33b8f7.scope: Deactivated successfully.
Oct 02 13:03:53 compute-0 podman[375901]: 2025-10-02 13:03:53.076772476 +0000 UTC m=+0.043981663 container create 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:03:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:53.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:53 compute-0 systemd[1]: Started libpod-conmon-2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a.scope.
Oct 02 13:03:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60236702c5c58efbe0520b4b94ba58916cfd9ebe4bfcb731f7081168d3eb32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60236702c5c58efbe0520b4b94ba58916cfd9ebe4bfcb731f7081168d3eb32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60236702c5c58efbe0520b4b94ba58916cfd9ebe4bfcb731f7081168d3eb32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60236702c5c58efbe0520b4b94ba58916cfd9ebe4bfcb731f7081168d3eb32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:03:53 compute-0 podman[375901]: 2025-10-02 13:03:53.058597144 +0000 UTC m=+0.025806371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:03:53 compute-0 podman[375901]: 2025-10-02 13:03:53.345801089 +0000 UTC m=+0.313010306 container init 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:03:53 compute-0 podman[375901]: 2025-10-02 13:03:53.354757372 +0000 UTC m=+0.321966569 container start 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:03:53 compute-0 podman[375901]: 2025-10-02 13:03:53.451460452 +0000 UTC m=+0.418669669 container attach 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:03:53 compute-0 ceph-mon[73668]: pgmap v2727: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:03:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1157188128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 13:03:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:54 compute-0 loving_nightingale[375917]: {
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:         "osd_id": 1,
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:         "type": "bluestore"
Oct 02 13:03:54 compute-0 loving_nightingale[375917]:     }
Oct 02 13:03:54 compute-0 loving_nightingale[375917]: }
Oct 02 13:03:54 compute-0 systemd[1]: libpod-2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a.scope: Deactivated successfully.
Oct 02 13:03:54 compute-0 conmon[375917]: conmon 2b6d48d5226fd28e2efe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a.scope/container/memory.events
Oct 02 13:03:54 compute-0 podman[375901]: 2025-10-02 13:03:54.337356208 +0000 UTC m=+1.304565405 container died 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b60236702c5c58efbe0520b4b94ba58916cfd9ebe4bfcb731f7081168d3eb32f-merged.mount: Deactivated successfully.
Oct 02 13:03:54 compute-0 nova_compute[256940]: 2025-10-02 13:03:54.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:54 compute-0 podman[375901]: 2025-10-02 13:03:54.468371749 +0000 UTC m=+1.435580946 container remove 2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:03:54 compute-0 systemd[1]: libpod-conmon-2b6d48d5226fd28e2efe9aa03c1e7867a7f43ec412f43b05202c874b2380516a.scope: Deactivated successfully.
Oct 02 13:03:54 compute-0 sudo[375794]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:03:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:03:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:54.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 873a3cca-50f8-4325-b20e-1026b839698d does not exist
Oct 02 13:03:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3c895836-5e9e-4186-86f1-7a0a4022ed79 does not exist
Oct 02 13:03:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 30f9b430-0b50-4270-8cc7-101eb3af1976 does not exist
Oct 02 13:03:54 compute-0 sudo[375953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:54 compute-0 sudo[375953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:54 compute-0 sudo[375953]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:54 compute-0 sudo[375978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:03:54 compute-0 sudo[375978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:54 compute-0 sudo[375978]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:55 compute-0 ceph-mon[73668]: pgmap v2728: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 13:03:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 225 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 94 op/s
Oct 02 13:03:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:03:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:03:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:57.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:57 compute-0 nova_compute[256940]: 2025-10-02 13:03:57.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:57 compute-0 ceph-mon[73668]: pgmap v2729: 305 pgs: 305 active+clean; 225 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 94 op/s
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 238 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:03:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:03:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:03:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:59.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:59 compute-0 ceph-mon[73668]: pgmap v2730: 305 pgs: 305 active+clean; 238 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Oct 02 13:03:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2078953066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:59 compute-0 nova_compute[256940]: 2025-10-02 13:03:59.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3142643056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:00.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:01.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.726 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.726 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.756 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.850 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.851 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.860 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:04:01 compute-0 nova_compute[256940]: 2025-10-02 13:04:01.862 2 INFO nova.compute.claims [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:04:01 compute-0 ceph-mon[73668]: pgmap v2731: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3400097481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.356 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058577772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.794 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.801 2 DEBUG nova.compute.provider_tree [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.819 2 DEBUG nova.scheduler.client.report [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.864 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.865 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:04:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2489860827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1058577772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.916 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.916 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.938 2 INFO nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:04:02 compute-0 nova_compute[256940]: 2025-10-02 13:04:02.953 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.064 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.066 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.066 2 INFO nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Creating image(s)
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.095 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:03.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.125 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.153 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.157 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.197 2 DEBUG nova.policy [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.235 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.236 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.237 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.237 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.265 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.270 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 623fda87-e4b4-4b98-96cb-6f1846429214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.633 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 623fda87-e4b4-4b98-96cb-6f1846429214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.720 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.853 2 DEBUG nova.objects.instance [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 623fda87-e4b4-4b98-96cb-6f1846429214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.864 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Successfully created port: 2b9e2335-30a2-48b8-91f1-2a3ba80473ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.869 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.869 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Ensure instance console log exists: /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.870 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.870 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:03 compute-0 nova_compute[256940]: 2025-10-02 13:04:03.870 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:03 compute-0 ceph-mon[73668]: pgmap v2732: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 13:04:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:04 compute-0 nova_compute[256940]: 2025-10-02 13:04:04.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:04.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:05 compute-0 ceph-mon[73668]: pgmap v2733: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:05.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2200722556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:04:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:04:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 144 op/s
Oct 02 13:04:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:06.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:07 compute-0 ceph-mon[73668]: pgmap v2734: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 144 op/s
Oct 02 13:04:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:07.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:07 compute-0 nova_compute[256940]: 2025-10-02 13:04:07.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:07 compute-0 nova_compute[256940]: 2025-10-02 13:04:07.929 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Successfully updated port: 2b9e2335-30a2-48b8-91f1-2a3ba80473ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:04:07 compute-0 nova_compute[256940]: 2025-10-02 13:04:07.960 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:07 compute-0 nova_compute[256940]: 2025-10-02 13:04:07.960 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:07 compute-0 nova_compute[256940]: 2025-10-02 13:04:07.960 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:04:08 compute-0 nova_compute[256940]: 2025-10-02 13:04:08.037 2 DEBUG nova.compute.manager [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:08 compute-0 nova_compute[256940]: 2025-10-02 13:04:08.037 2 DEBUG nova.compute.manager [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing instance network info cache due to event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:08 compute-0 nova_compute[256940]: 2025-10-02 13:04:08.038 2 DEBUG oslo_concurrency.lockutils [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:08 compute-0 nova_compute[256940]: 2025-10-02 13:04:08.137 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:04:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Oct 02 13:04:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:08.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.065 2 DEBUG nova.network.neutron [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.089 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.089 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Instance network_info: |[{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.090 2 DEBUG oslo_concurrency.lockutils [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.090 2 DEBUG nova.network.neutron [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.092 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Start _get_guest_xml network_info=[{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.098 2 WARNING nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.103 2 DEBUG nova.virt.libvirt.host [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.104 2 DEBUG nova.virt.libvirt.host [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.108 2 DEBUG nova.virt.libvirt.host [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.109 2 DEBUG nova.virt.libvirt.host [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.110 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.110 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.111 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.111 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.111 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.111 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.111 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.112 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.112 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.112 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.112 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.113 2 DEBUG nova.virt.hardware [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.115 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:09.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:09 compute-0 ceph-mon[73668]: pgmap v2735: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Oct 02 13:04:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/642492568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.614 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.642 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:09 compute-0 nova_compute[256940]: 2025-10-02 13:04:09.647 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501996238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.117 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.119 2 DEBUG nova.virt.libvirt.vif [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=182,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUSY1aWRmq0CGi/H2pvD/bCECTpQePcVVb1ww/02gg774n0eyLtedEGOwVYxdhbiBUstusAhPajTI55nJ9x71ILzdP36Ifk9V9OxqNCn3GTT+6F6E95IO0GnzzYXwG7g==',key_name='tempest-TestSecurityGroupsBasicOps-479832544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-q1gybzp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:02Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=623fda87-e4b4-4b98-96cb-6f1846429214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.120 2 DEBUG nova.network.os_vif_util [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.121 2 DEBUG nova.network.os_vif_util [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.122 2 DEBUG nova.objects.instance [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 623fda87-e4b4-4b98-96cb-6f1846429214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.138 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <uuid>623fda87-e4b4-4b98-96cb-6f1846429214</uuid>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <name>instance-000000b6</name>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722</nova:name>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:04:09</nova:creationTime>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <nova:port uuid="2b9e2335-30a2-48b8-91f1-2a3ba80473ee">
Oct 02 13:04:10 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <system>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="serial">623fda87-e4b4-4b98-96cb-6f1846429214</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="uuid">623fda87-e4b4-4b98-96cb-6f1846429214</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </system>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <os>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </os>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <features>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </features>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/623fda87-e4b4-4b98-96cb-6f1846429214_disk">
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </source>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/623fda87-e4b4-4b98-96cb-6f1846429214_disk.config">
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </source>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:04:10 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9b:70:d1"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <target dev="tap2b9e2335-30"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/console.log" append="off"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <video>
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </video>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:04:10 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:04:10 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:04:10 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:04:10 compute-0 nova_compute[256940]: </domain>
Oct 02 13:04:10 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.139 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Preparing to wait for external event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.140 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.140 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.140 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.141 2 DEBUG nova.virt.libvirt.vif [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=182,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUSY1aWRmq0CGi/H2pvD/bCECTpQePcVVb1ww/02gg774n0eyLtedEGOwVYxdhbiBUstusAhPajTI55nJ9x71ILzdP36Ifk9V9OxqNCn3GTT+6F6E95IO0GnzzYXwG7g==',key_name='tempest-TestSecurityGroupsBasicOps-479832544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-q1gybzp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:02Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=623fda87-e4b4-4b98-96cb-6f1846429214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.141 2 DEBUG nova.network.os_vif_util [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.142 2 DEBUG nova.network.os_vif_util [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.142 2 DEBUG os_vif [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b9e2335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b9e2335-30, col_values=(('external_ids', {'iface-id': '2b9e2335-30a2-48b8-91f1-2a3ba80473ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:70:d1', 'vm-uuid': '623fda87-e4b4-4b98-96cb-6f1846429214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:10 compute-0 NetworkManager[44981]: <info>  [1759410250.1514] manager: (tap2b9e2335-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.163 2 INFO os_vif [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30')
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.167 2 DEBUG nova.network.neutron [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updated VIF entry in instance network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.168 2 DEBUG nova.network.neutron [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.187 2 DEBUG oslo_concurrency.lockutils [req-06d812a0-8fd9-4031-9fb3-da1be93ead19 req-1138067f-8641-42fc-a312-1b22eb498464 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 335 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.220 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.221 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.221 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:9b:70:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.221 2 INFO nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Using config drive
Oct 02 13:04:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/642492568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1182037887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1501996238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.261 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-0 podman[376263]: 2025-10-02 13:04:10.277446263 +0000 UTC m=+0.068720575 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:10 compute-0 podman[376264]: 2025-10-02 13:04:10.281364154 +0000 UTC m=+0.072293577 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 13:04:10 compute-0 sudo[376322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:10 compute-0 sudo[376322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:10 compute-0 sudo[376322]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.593 2 INFO nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Creating config drive at /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.598 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4pr2qld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:10 compute-0 sudo[376347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:10 compute-0 sudo[376347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:10 compute-0 sudo[376347]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.744 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4pr2qld" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.773 2 DEBUG nova.storage.rbd_utils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 623fda87-e4b4-4b98-96cb-6f1846429214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.777 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config 623fda87-e4b4-4b98-96cb-6f1846429214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.953 2 DEBUG oslo_concurrency.processutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config 623fda87-e4b4-4b98-96cb-6f1846429214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:10 compute-0 nova_compute[256940]: 2025-10-02 13:04:10.954 2 INFO nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Deleting local config drive /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214/disk.config because it was imported into RBD.
Oct 02 13:04:11 compute-0 kernel: tap2b9e2335-30: entered promiscuous mode
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.0306] manager: (tap2b9e2335-30): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Oct 02 13:04:11 compute-0 ovn_controller[148123]: 2025-10-02T13:04:11Z|00906|binding|INFO|Claiming lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee for this chassis.
Oct 02 13:04:11 compute-0 ovn_controller[148123]: 2025-10-02T13:04:11Z|00907|binding|INFO|2b9e2335-30a2-48b8-91f1-2a3ba80473ee: Claiming fa:16:3e:9b:70:d1 10.100.0.13
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.048 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:70:d1 10.100.0.13'], port_security=['fa:16:3e:9b:70:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '623fda87-e4b4-4b98-96cb-6f1846429214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75ade66a-9f20-4007-92c8-0a4b810103cd dc6a6518-75b2-4560-a7df-1de3b215b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4e2472-fc91-44ac-acf9-e1338455a1c5, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2b9e2335-30a2-48b8-91f1-2a3ba80473ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.050 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee in datapath 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 bound to our chassis
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.051 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.065 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e79aaf22-64ed-41ab-947e-d56a1ffbc1af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.066 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cd8aa72-d1 in ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.069 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cd8aa72-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.069 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3da302-fab2-4f30-a2a8-d848cebe4533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.071 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c8877365-3eed-47b8-83f6-0792ee5dab4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 systemd-udevd[376426]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:11 compute-0 systemd-machined[210927]: New machine qemu-93-instance-000000b6.
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.085 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[27f68d19-8dbc-4ce7-a2cc-fbc590765ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.0916] device (tap2b9e2335-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.0932] device (tap2b9e2335-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:04:11 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-000000b6.
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 ovn_controller[148123]: 2025-10-02T13:04:11Z|00908|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee ovn-installed in OVS
Oct 02 13:04:11 compute-0 ovn_controller[148123]: 2025-10-02T13:04:11Z|00909|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee up in Southbound
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.113 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[515098e5-6dc2-4684-8c03-920d20f7bde0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:11.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.141 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[23877750-1207-43ad-b3a6-8dca718aa068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.1486] manager: (tap2cd8aa72-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.147 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aae10646-4ab2-49fe-aeea-f0d6d6072562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 systemd-udevd[376429]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.182 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba9a981-95c9-4d7e-a5f0-307b38621b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.185 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e0837ad0-8aac-4af3-9fbb-5e13f620f078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.2147] device (tap2cd8aa72-d0): carrier: link connected
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.223 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ec14a79b-5ea5-4d5c-8524-f0f327e4f145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.241 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a548241d-9a47-4664-9a1e-84656b784147]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cd8aa72-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:8e:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814669, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376458, 'error': None, 'target': 'ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.258 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5d848c41-ae6a-402b-9aad-c07e915c0d3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:8e49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814669, 'tstamp': 814669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376459, 'error': None, 'target': 'ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ceph-mon[73668]: pgmap v2736: 305 pgs: 305 active+clean; 335 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.284 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f2c9a-ca24-43f5-b39b-929e46a75fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cd8aa72-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:8e:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814669, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376460, 'error': None, 'target': 'ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.328 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3985ada7-1d33-46bc-8aae-ae2f3d825edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.396 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[32d0766b-c7ac-4d57-8198-6e6780253858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.397 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cd8aa72-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.397 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.398 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cd8aa72-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:11 compute-0 NetworkManager[44981]: <info>  [1759410251.4005] manager: (tap2cd8aa72-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 kernel: tap2cd8aa72-d0: entered promiscuous mode
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.405 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cd8aa72-d0, col_values=(('external_ids', {'iface-id': '93013e5b-e543-4f32-a0a0-fbe6244eaa94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 ovn_controller[148123]: 2025-10-02T13:04:11Z|00910|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.408 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.408 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[58d89055-8f29-4473-9933-226cff4b4ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.409 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9.pid.haproxy
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:04:11 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:11.410 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'env', 'PROCESS_TAG=haproxy-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:11 compute-0 podman[376534]: 2025-10-02 13:04:11.790764103 +0000 UTC m=+0.055460951 container create c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:04:11 compute-0 systemd[1]: Started libpod-conmon-c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4.scope.
Oct 02 13:04:11 compute-0 podman[376534]: 2025-10-02 13:04:11.760775414 +0000 UTC m=+0.025472282 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:04:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f091b7a303445f52fd0c3192f61520fb721bec47a5a2b5d2a97c41dd763900/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:11 compute-0 podman[376534]: 2025-10-02 13:04:11.874702902 +0000 UTC m=+0.139399770 container init c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 13:04:11 compute-0 podman[376534]: 2025-10-02 13:04:11.879590749 +0000 UTC m=+0.144287597 container start c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:04:11 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [NOTICE]   (376553) : New worker (376555) forked
Oct 02 13:04:11 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [NOTICE]   (376553) : Loading success.
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.966 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410251.9650865, 623fda87-e4b4-4b98-96cb-6f1846429214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:11 compute-0 nova_compute[256940]: 2025-10-02 13:04:11.967 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] VM Started (Lifecycle Event)
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.002 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.007 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410251.9652731, 623fda87-e4b4-4b98-96cb-6f1846429214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.007 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] VM Paused (Lifecycle Event)
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.028 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.032 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.053 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 216 op/s
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.302 2 DEBUG nova.compute.manager [req-af64f70e-8053-49b1-a52c-74c2e80da829 req-a5dd80bf-99fa-4944-b783-40c8a8a96c62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.303 2 DEBUG oslo_concurrency.lockutils [req-af64f70e-8053-49b1-a52c-74c2e80da829 req-a5dd80bf-99fa-4944-b783-40c8a8a96c62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.303 2 DEBUG oslo_concurrency.lockutils [req-af64f70e-8053-49b1-a52c-74c2e80da829 req-a5dd80bf-99fa-4944-b783-40c8a8a96c62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.303 2 DEBUG oslo_concurrency.lockutils [req-af64f70e-8053-49b1-a52c-74c2e80da829 req-a5dd80bf-99fa-4944-b783-40c8a8a96c62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.303 2 DEBUG nova.compute.manager [req-af64f70e-8053-49b1-a52c-74c2e80da829 req-a5dd80bf-99fa-4944-b783-40c8a8a96c62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Processing event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.304 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.310 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410252.3104064, 623fda87-e4b4-4b98-96cb-6f1846429214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.310 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] VM Resumed (Lifecycle Event)
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.313 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.317 2 INFO nova.virt.libvirt.driver [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Instance spawned successfully.
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.317 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.329 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.331 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.350 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.351 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.351 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.352 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.352 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.352 2 DEBUG nova.virt.libvirt.driver [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.371 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.434 2 INFO nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Took 9.37 seconds to spawn the instance on the hypervisor.
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.435 2 DEBUG nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.563 2 INFO nova.compute.manager [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Took 10.74 seconds to build instance.
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.587 2 DEBUG oslo_concurrency.lockutils [None req-77cc9feb-7636-4669-b639-8c35d0b12b13 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:12 compute-0 nova_compute[256940]: 2025-10-02 13:04:12.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:12.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:13.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:13 compute-0 ceph-mon[73668]: pgmap v2737: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 216 op/s
Oct 02 13:04:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/647808684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1493426402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 215 op/s
Oct 02 13:04:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.396 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.397 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.398 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.398 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.398 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] No waiting events found dispatching network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:14 compute-0 nova_compute[256940]: 2025-10-02 13:04:14.399 2 WARNING nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received unexpected event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee for instance with vm_state active and task_state None.
Oct 02 13:04:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:14.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:15.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:15 compute-0 nova_compute[256940]: 2025-10-02 13:04:15.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:15 compute-0 ceph-mon[73668]: pgmap v2738: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 215 op/s
Oct 02 13:04:16 compute-0 NetworkManager[44981]: <info>  [1759410256.0607] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 02 13:04:16 compute-0 NetworkManager[44981]: <info>  [1759410256.0614] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Oct 02 13:04:16 compute-0 nova_compute[256940]: 2025-10-02 13:04:16.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 402 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.7 MiB/s wr, 302 op/s
Oct 02 13:04:16 compute-0 nova_compute[256940]: 2025-10-02 13:04:16.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:16 compute-0 nova_compute[256940]: 2025-10-02 13:04:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-0 ovn_controller[148123]: 2025-10-02T13:04:16Z|00911|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:04:16 compute-0 nova_compute[256940]: 2025-10-02 13:04:16.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1864796394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2649936971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:16.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.130 2 DEBUG nova.compute.manager [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.130 2 DEBUG nova.compute.manager [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing instance network info cache due to event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.130 2 DEBUG oslo_concurrency.lockutils [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.130 2 DEBUG oslo_concurrency.lockutils [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.131 2 DEBUG nova.network.neutron [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:17.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:17 compute-0 ceph-mon[73668]: pgmap v2739: 305 pgs: 305 active+clean; 402 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.7 MiB/s wr, 302 op/s
Oct 02 13:04:17 compute-0 nova_compute[256940]: 2025-10-02 13:04:17.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 412 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 02 13:04:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1373024404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:18.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:19 compute-0 nova_compute[256940]: 2025-10-02 13:04:19.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:19 compute-0 nova_compute[256940]: 2025-10-02 13:04:19.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:04:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:19 compute-0 ceph-mon[73668]: pgmap v2740: 305 pgs: 305 active+clean; 412 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 02 13:04:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3310282002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:19 compute-0 nova_compute[256940]: 2025-10-02 13:04:19.532 2 DEBUG nova.network.neutron [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updated VIF entry in instance network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:19 compute-0 nova_compute[256940]: 2025-10-02 13:04:19.532 2 DEBUG nova.network.neutron [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:19 compute-0 nova_compute[256940]: 2025-10-02 13:04:19.555 2 DEBUG oslo_concurrency.lockutils [req-ab727458-0582-41a8-95b9-93dc3fda6b91 req-2376375d-1580-464d-b5a0-c4e2933e4b85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:20 compute-0 nova_compute[256940]: 2025-10-02 13:04:20.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 414 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 278 op/s
Oct 02 13:04:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:21.113 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:21.114 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.241 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:21 compute-0 ceph-mon[73668]: pgmap v2741: 305 pgs: 305 active+clean; 414 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 278 op/s
Oct 02 13:04:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1231900243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195261917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.672 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.740 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.741 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.913 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.914 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4078MB free_disk=20.834693908691406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.914 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:21 compute-0 nova_compute[256940]: 2025-10-02 13:04:21.915 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.003 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 623fda87-e4b4-4b98-96cb-6f1846429214 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.004 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.004 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.058 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.0 MiB/s wr, 352 op/s
Oct 02 13:04:22 compute-0 podman[376614]: 2025-10-02 13:04:22.392555429 +0000 UTC m=+0.058580611 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:04:22 compute-0 podman[376615]: 2025-10-02 13:04:22.418898483 +0000 UTC m=+0.084559746 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:04:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343463159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.509 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.515 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4195261917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2877419818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1343463159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.554 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.574 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.574 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:22 compute-0 nova_compute[256940]: 2025-10-02 13:04:22.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:23.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:23 compute-0 nova_compute[256940]: 2025-10-02 13:04:23.574 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:23 compute-0 nova_compute[256940]: 2025-10-02 13:04:23.575 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:23 compute-0 ceph-mon[73668]: pgmap v2742: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.0 MiB/s wr, 352 op/s
Oct 02 13:04:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/411876806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:24.116 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.4 MiB/s wr, 334 op/s
Oct 02 13:04:24 compute-0 nova_compute[256940]: 2025-10-02 13:04:24.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:24.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:25.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:25 compute-0 nova_compute[256940]: 2025-10-02 13:04:25.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:25 compute-0 ceph-mon[73668]: pgmap v2743: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.4 MiB/s wr, 334 op/s
Oct 02 13:04:25 compute-0 ovn_controller[148123]: 2025-10-02T13:04:25Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:70:d1 10.100.0.13
Oct 02 13:04:25 compute-0 ovn_controller[148123]: 2025-10-02T13:04:25Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:70:d1 10.100.0.13
Oct 02 13:04:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 392 op/s
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:26.500 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:26.501 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:26.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1733589827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:27 compute-0 nova_compute[256940]: 2025-10-02 13:04:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:27 compute-0 ceph-mon[73668]: pgmap v2744: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 392 op/s
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 328 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 352 op/s
Oct 02 13:04:28 compute-0 nova_compute[256940]: 2025-10-02 13:04:28.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:28.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:04:28
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Oct 02 13:04:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:04:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:29 compute-0 nova_compute[256940]: 2025-10-02 13:04:29.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:04:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:04:29 compute-0 ceph-mon[73668]: pgmap v2745: 305 pgs: 305 active+clean; 328 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 352 op/s
Oct 02 13:04:30 compute-0 ovn_controller[148123]: 2025-10-02T13:04:30Z|00912|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:04:30 compute-0 nova_compute[256940]: 2025-10-02 13:04:30.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:30 compute-0 nova_compute[256940]: 2025-10-02 13:04:30.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 342 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 300 op/s
Oct 02 13:04:30 compute-0 sudo[376665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:30 compute-0 sudo[376665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:30 compute-0 sudo[376665]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:30 compute-0 sudo[376691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:30 compute-0 sudo[376691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:30 compute-0 sudo[376691]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:31.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:31 compute-0 nova_compute[256940]: 2025-10-02 13:04:31.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:31 compute-0 ceph-mon[73668]: pgmap v2746: 305 pgs: 305 active+clean; 342 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 300 op/s
Oct 02 13:04:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 269 op/s
Oct 02 13:04:32 compute-0 nova_compute[256940]: 2025-10-02 13:04:32.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.754 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.754 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.754 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:04:33 compute-0 nova_compute[256940]: 2025-10-02 13:04:33.754 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 623fda87-e4b4-4b98-96cb-6f1846429214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:33 compute-0 ceph-mon[73668]: pgmap v2747: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 269 op/s
Oct 02 13:04:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 13:04:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:34.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:35 compute-0 nova_compute[256940]: 2025-10-02 13:04:35.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:35 compute-0 nova_compute[256940]: 2025-10-02 13:04:35.709 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:35 compute-0 nova_compute[256940]: 2025-10-02 13:04:35.744 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:35 compute-0 nova_compute[256940]: 2025-10-02 13:04:35.744 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:04:35 compute-0 ceph-mon[73668]: pgmap v2748: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 13:04:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 182 op/s
Oct 02 13:04:36 compute-0 ovn_controller[148123]: 2025-10-02T13:04:36Z|00913|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:04:36 compute-0 nova_compute[256940]: 2025-10-02 13:04:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:04:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:36.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:04:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:37 compute-0 nova_compute[256940]: 2025-10-02 13:04:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:37 compute-0 nova_compute[256940]: 2025-10-02 13:04:37.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:37 compute-0 ceph-mon[73668]: pgmap v2749: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 182 op/s
Oct 02 13:04:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 3.0 MiB/s wr, 124 op/s
Oct 02 13:04:38 compute-0 nova_compute[256940]: 2025-10-02 13:04:38.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:39 compute-0 nova_compute[256940]: 2025-10-02 13:04:39.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:39 compute-0 ceph-mon[73668]: pgmap v2750: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 3.0 MiB/s wr, 124 op/s
Oct 02 13:04:40 compute-0 nova_compute[256940]: 2025-10-02 13:04:40.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct 02 13:04:40 compute-0 podman[376720]: 2025-10-02 13:04:40.405161428 +0000 UTC m=+0.073753856 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:04:40 compute-0 podman[376721]: 2025-10-02 13:04:40.420022204 +0000 UTC m=+0.074618798 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006507690361529871 of space, bias 1.0, pg target 1.9523071084589614 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6465347207681927 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:04:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:04:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:40.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:41 compute-0 ceph-mon[73668]: pgmap v2751: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct 02 13:04:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Oct 02 13:04:42 compute-0 nova_compute[256940]: 2025-10-02 13:04:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:43.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:43 compute-0 ceph-mon[73668]: pgmap v2752: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Oct 02 13:04:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/341473276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:44 compute-0 nova_compute[256940]: 2025-10-02 13:04:44.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:44 compute-0 nova_compute[256940]: 2025-10-02 13:04:44.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 15 op/s
Oct 02 13:04:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:44.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:45 compute-0 nova_compute[256940]: 2025-10-02 13:04:45.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:45 compute-0 ceph-mon[73668]: pgmap v2753: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 15 op/s
Oct 02 13:04:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 32 KiB/s wr, 29 op/s
Oct 02 13:04:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:04:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:04:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:47 compute-0 ovn_controller[148123]: 2025-10-02T13:04:47Z|00914|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:04:47 compute-0 nova_compute[256940]: 2025-10-02 13:04:47.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:47 compute-0 nova_compute[256940]: 2025-10-02 13:04:47.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:47 compute-0 ceph-mon[73668]: pgmap v2754: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 32 KiB/s wr, 29 op/s
Oct 02 13:04:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3717693724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 28 op/s
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.308 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.308 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.325 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.415 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.415 2 INFO nova.compute.claims [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.528 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958912710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.984 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:48 compute-0 nova_compute[256940]: 2025-10-02 13:04:48.989 2 DEBUG nova.compute.provider_tree [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.010 2 DEBUG nova.scheduler.client.report [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.033 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.033 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:04:49 compute-0 ceph-mon[73668]: pgmap v2755: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 28 op/s
Oct 02 13:04:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2958912710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.104 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.105 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.130 2 INFO nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.150 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:04:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:49.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.263 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.265 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.265 2 INFO nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Creating image(s)
Oct 02 13:04:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.304 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.332 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.386 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.390 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.418 2 DEBUG nova.policy [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93f224db5657410889fd6d1e8031770a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aed40f24f86c4d4cb4ce0a3d01cde284', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.453 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.454 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.455 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.455 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.481 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.484 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.807 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:49 compute-0 nova_compute[256940]: 2025-10-02 13:04:49.878 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] resizing rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.050 2 DEBUG nova.objects.instance [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lazy-loading 'migration_context' on Instance uuid 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.064 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.065 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Ensure instance console log exists: /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.066 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.066 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.066 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Oct 02 13:04:50 compute-0 nova_compute[256940]: 2025-10-02 13:04:50.275 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Successfully created port: b4547f7a-3de5-4629-97e8-f2b5a06ed444 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:04:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:50.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:50 compute-0 sudo[376953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:50 compute-0 sudo[376953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:50 compute-0 sudo[376953]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:50 compute-0 sudo[376978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:50 compute-0 sudo[376978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:50 compute-0 sudo[376978]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.138 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Successfully updated port: b4547f7a-3de5-4629-97e8-f2b5a06ed444 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.156 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.157 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquired lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.157 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:04:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:04:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:51.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.280 2 DEBUG nova.compute.manager [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.280 2 DEBUG nova.compute.manager [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing instance network info cache due to event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.280 2 DEBUG oslo_concurrency.lockutils [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:51 compute-0 ceph-mon[73668]: pgmap v2756: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Oct 02 13:04:51 compute-0 nova_compute[256940]: 2025-10-02 13:04:51.379 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:04:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.330 2 DEBUG nova.network.neutron [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updating instance_info_cache with network_info: [{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.353 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Releasing lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.353 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Instance network_info: |[{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.353 2 DEBUG oslo_concurrency.lockutils [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.354 2 DEBUG nova.network.neutron [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.356 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Start _get_guest_xml network_info=[{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.361 2 WARNING nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.367 2 DEBUG nova.virt.libvirt.host [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.368 2 DEBUG nova.virt.libvirt.host [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.375 2 DEBUG nova.virt.libvirt.host [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.375 2 DEBUG nova.virt.libvirt.host [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.377 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.377 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.377 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.378 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.378 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.378 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.378 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.379 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.379 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.379 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.380 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.380 2 DEBUG nova.virt.hardware [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.382 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2652707074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2608477789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:52.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726833015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.841 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.871 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:52 compute-0 nova_compute[256940]: 2025-10-02 13:04:52.874 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495404971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.369 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.371 2 DEBUG nova.virt.libvirt.vif [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1658999594-ac',id=186,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDreqrcPMqyn12TpEK39/iAxuA1VJNw/wAKLFZUIzZu61ixsA73RxNCL7cv1mvwVqQxVz7eqEkvxqDpyU6WYdar8m2ioWMocUQtF78TLmnW81cFDvzS1PVIw+zfesBubfQ==',key_name='tempest-TestSecurityGroupsBasicOps-1604935118',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aed40f24f86c4d4cb4ce0a3d01cde284',ramdisk_id='',reservation_id='r-qdxnr8ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1658999594',owner_user_name='tempest-TestSecurityGroupsBasicOps-1658999594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:49Z,user_data=None,user_id='93f224db5657410889fd6d1e8031770a',uuid=092f2a53-5ba6-4db2-adc9-3c7c8880d9a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.371 2 DEBUG nova.network.os_vif_util [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converting VIF {"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.372 2 DEBUG nova.network.os_vif_util [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.373 2 DEBUG nova.objects.instance [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lazy-loading 'pci_devices' on Instance uuid 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:53 compute-0 podman[377064]: 2025-10-02 13:04:53.381768958 +0000 UTC m=+0.050819590 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.390 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <uuid>092f2a53-5ba6-4db2-adc9-3c7c8880d9a8</uuid>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <name>instance-000000ba</name>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445</nova:name>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:04:52</nova:creationTime>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:user uuid="93f224db5657410889fd6d1e8031770a">tempest-TestSecurityGroupsBasicOps-1658999594-project-member</nova:user>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:project uuid="aed40f24f86c4d4cb4ce0a3d01cde284">tempest-TestSecurityGroupsBasicOps-1658999594</nova:project>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <nova:port uuid="b4547f7a-3de5-4629-97e8-f2b5a06ed444">
Oct 02 13:04:53 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <system>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="serial">092f2a53-5ba6-4db2-adc9-3c7c8880d9a8</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="uuid">092f2a53-5ba6-4db2-adc9-3c7c8880d9a8</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </system>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <os>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </os>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <features>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </features>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk">
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </source>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config">
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </source>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:04:53 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9e:d6:de"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <target dev="tapb4547f7a-3d"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/console.log" append="off"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <video>
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </video>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:04:53 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:04:53 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:04:53 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:04:53 compute-0 nova_compute[256940]: </domain>
Oct 02 13:04:53 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.391 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Preparing to wait for external event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.391 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.391 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.391 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.392 2 DEBUG nova.virt.libvirt.vif [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1658999594-ac',id=186,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDreqrcPMqyn12TpEK39/iAxuA1VJNw/wAKLFZUIzZu61ixsA73RxNCL7cv1mvwVqQxVz7eqEkvxqDpyU6WYdar8m2ioWMocUQtF78TLmnW81cFDvzS1PVIw+zfesBubfQ==',key_name='tempest-TestSecurityGroupsBasicOps-1604935118',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aed40f24f86c4d4cb4ce0a3d01cde284',ramdisk_id='',reservation_id='r-qdxnr8ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1658999594',owner_user_name='tempest-TestSecurityGroupsBasicOps-1658999594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:49Z,user_data=None,user_id='93f224db5657410889fd6d1e8031770a',uuid=092f2a53-5ba6-4db2-adc9-3c7c8880d9a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.392 2 DEBUG nova.network.os_vif_util [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converting VIF {"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.393 2 DEBUG nova.network.os_vif_util [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.393 2 DEBUG os_vif [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4547f7a-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4547f7a-3d, col_values=(('external_ids', {'iface-id': 'b4547f7a-3de5-4629-97e8-f2b5a06ed444', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:d6:de', 'vm-uuid': '092f2a53-5ba6-4db2-adc9-3c7c8880d9a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:53 compute-0 NetworkManager[44981]: <info>  [1759410293.4024] manager: (tapb4547f7a-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.409 2 INFO os_vif [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d')
Oct 02 13:04:53 compute-0 podman[377065]: 2025-10-02 13:04:53.428032079 +0000 UTC m=+0.088600711 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.473 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.473 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.474 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] No VIF found with MAC fa:16:3e:9e:d6:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.475 2 INFO nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Using config drive
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.505 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.593 2 DEBUG nova.network.neutron [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updated VIF entry in instance network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.594 2 DEBUG nova.network.neutron [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updating instance_info_cache with network_info: [{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:53 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.610 2 DEBUG oslo_concurrency.lockutils [req-2da12599-0ec0-4651-9329-047afbbf6ba5 req-982559bd-d9fd-40ee-916a-b687369b6f80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:53 compute-0 ceph-mon[73668]: pgmap v2757: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 02 13:04:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/726833015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/495404971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:54 compute-0 nova_compute[256940]: 2025-10-02 13:04:53.999 2 INFO nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Creating config drive at /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config
Oct 02 13:04:54 compute-0 nova_compute[256940]: 2025-10-02 13:04:54.005 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6vaejua execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:54 compute-0 nova_compute[256940]: 2025-10-02 13:04:54.141 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6vaejua" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:54 compute-0 nova_compute[256940]: 2025-10-02 13:04:54.177 2 DEBUG nova.storage.rbd_utils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] rbd image 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:54 compute-0 nova_compute[256940]: 2025-10-02 13:04:54.181 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 13:04:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:55.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:55 compute-0 sudo[377171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:55 compute-0 sudo[377171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-0 sudo[377171]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-0 sudo[377196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:55 compute-0 sudo[377196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-0 sudo[377196]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-0 sudo[377221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:55 compute-0 sudo[377221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-0 sudo[377221]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-0 sudo[377246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:04:55 compute-0 sudo[377246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-0 sudo[377246]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.101 2 DEBUG oslo_concurrency.processutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.920s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.102 2 INFO nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Deleting local config drive /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8/disk.config because it was imported into RBD.
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:04:56 compute-0 kernel: tapb4547f7a-3d: entered promiscuous mode
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.1606] manager: (tapb4547f7a-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 ovn_controller[148123]: 2025-10-02T13:04:56Z|00915|binding|INFO|Claiming lport b4547f7a-3de5-4629-97e8-f2b5a06ed444 for this chassis.
Oct 02 13:04:56 compute-0 ovn_controller[148123]: 2025-10-02T13:04:56Z|00916|binding|INFO|b4547f7a-3de5-4629-97e8-f2b5a06ed444: Claiming fa:16:3e:9e:d6:de 10.100.0.5
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.170 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:d6:de 10.100.0.5'], port_security=['fa:16:3e:9e:d6:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '092f2a53-5ba6-4db2-adc9-3c7c8880d9a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-714950b1-cde6-436b-afc3-312c876985ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aed40f24f86c4d4cb4ce0a3d01cde284', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1554d381-f0d8-44a8-bce1-c85e1567b05c 1e21d649-34e5-4625-b088-8b58f0f88dbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b6ac383-2e57-4c4d-82f6-7a601de6d31d, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b4547f7a-3de5-4629-97e8-f2b5a06ed444) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.172 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b4547f7a-3de5-4629-97e8-f2b5a06ed444 in datapath 714950b1-cde6-436b-afc3-312c876985ed bound to our chassis
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.176 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 714950b1-cde6-436b-afc3-312c876985ed
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.190 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1037a36d-6a7b-431f-9b42-dd83b590c17c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.192 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap714950b1-c1 in ovnmeta-714950b1-cde6-436b-afc3-312c876985ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:04:56 compute-0 systemd-udevd[377315]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.194 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap714950b1-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.194 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0235d3-c7d8-4fcf-9483-13251860c1e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.196 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eccef8-88b1-4222-a1ba-9bb04d512c0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_controller[148123]: 2025-10-02T13:04:56Z|00917|binding|INFO|Setting lport b4547f7a-3de5-4629-97e8-f2b5a06ed444 ovn-installed in OVS
Oct 02 13:04:56 compute-0 ovn_controller[148123]: 2025-10-02T13:04:56Z|00918|binding|INFO|Setting lport b4547f7a-3de5-4629-97e8-f2b5a06ed444 up in Southbound
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.2108] device (tapb4547f7a-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.2118] device (tapb4547f7a-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.213 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[f18aee07-5d44-4887-8847-0557f6390fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 systemd-machined[210927]: New machine qemu-94-instance-000000ba.
Oct 02 13:04:56 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-000000ba.
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd6165c-88e8-48b0-acd0-732cc7ce4c01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ceph-mon[73668]: pgmap v2758: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.267 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a86a44e6-ef02-4626-a84b-408aeb100c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.271 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[69a7918e-5d48-409a-886f-2160486e6941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.2724] manager: (tap714950b1-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Oct 02 13:04:56 compute-0 systemd-udevd[377319]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.303 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc2b6c4-5ffd-4589-ba77-53597f9524be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.309 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[90ea1677-b091-4bfc-9111-bb0d29dda034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.3309] device (tap714950b1-c0): carrier: link connected
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.336 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[81de18cb-7a4f-4284-99a9-83df877ef9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.358 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc6f2a5-9d67-488c-b14b-85fffb49a309]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap714950b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:70:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819181, 'reachable_time': 25951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377348, 'error': None, 'target': 'ovnmeta-714950b1-cde6-436b-afc3-312c876985ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.373 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[291ce579-491d-4213-9528-6906f7c62ce6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:7085'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 819181, 'tstamp': 819181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377349, 'error': None, 'target': 'ovnmeta-714950b1-cde6-436b-afc3-312c876985ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.391 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[966787ce-eba6-4b39-9fdf-ccfbfef47498]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap714950b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:70:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819181, 'reachable_time': 25951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377350, 'error': None, 'target': 'ovnmeta-714950b1-cde6-436b-afc3-312c876985ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.427 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3573a4-3a85-4ddc-8714-d7a24a52463e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.440 2 DEBUG nova.compute.manager [req-8e33d393-34a6-4241-9f70-e742e9e7c604 req-adf497f9-7a6f-42d4-b3b9-fcf21cd141a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.440 2 DEBUG oslo_concurrency.lockutils [req-8e33d393-34a6-4241-9f70-e742e9e7c604 req-adf497f9-7a6f-42d4-b3b9-fcf21cd141a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.440 2 DEBUG oslo_concurrency.lockutils [req-8e33d393-34a6-4241-9f70-e742e9e7c604 req-adf497f9-7a6f-42d4-b3b9-fcf21cd141a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.441 2 DEBUG oslo_concurrency.lockutils [req-8e33d393-34a6-4241-9f70-e742e9e7c604 req-adf497f9-7a6f-42d4-b3b9-fcf21cd141a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.441 2 DEBUG nova.compute.manager [req-8e33d393-34a6-4241-9f70-e742e9e7c604 req-adf497f9-7a6f-42d4-b3b9-fcf21cd141a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Processing event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.498 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8aec04-ccb5-462b-9a08-e8092ddced48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.499 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap714950b1-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.500 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.500 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap714950b1-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:56 compute-0 NetworkManager[44981]: <info>  [1759410296.5032] manager: (tap714950b1-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct 02 13:04:56 compute-0 kernel: tap714950b1-c0: entered promiscuous mode
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.505 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap714950b1-c0, col_values=(('external_ids', {'iface-id': '03e0e814-64b3-421d-ae70-59312f6df488'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:56 compute-0 ovn_controller[148123]: 2025-10-02T13:04:56Z|00919|binding|INFO|Releasing lport 03e0e814-64b3-421d-ae70-59312f6df488 from this chassis (sb_readonly=0)
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 nova_compute[256940]: 2025-10-02 13:04:56.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.525 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/714950b1-cde6-436b-afc3-312c876985ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/714950b1-cde6-436b-afc3-312c876985ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.526 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[499c026b-c08b-44b3-8758-86c7e569a96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.527 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-714950b1-cde6-436b-afc3-312c876985ed
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/714950b1-cde6-436b-afc3-312c876985ed.pid.haproxy
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 714950b1-cde6-436b-afc3-312c876985ed
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:04:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:04:56.529 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-714950b1-cde6-436b-afc3-312c876985ed', 'env', 'PROCESS_TAG=haproxy-714950b1-cde6-436b-afc3-312c876985ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/714950b1-cde6-436b-afc3-312c876985ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:04:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d9c4046b-7734-483d-bc9e-328956089a0a does not exist
Oct 02 13:04:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c380d53d-e1cf-469c-b2f0-de2b08ca688b does not exist
Oct 02 13:04:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 405633bd-80e7-4bf1-be0c-d7db0e80d852 does not exist
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:04:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:04:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:56 compute-0 sudo[377375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:56 compute-0 sudo[377375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:56 compute-0 sudo[377375]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:56 compute-0 sudo[377421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:56 compute-0 sudo[377421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:56 compute-0 sudo[377421]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:56 compute-0 sudo[377452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:56 compute-0 sudo[377452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:56 compute-0 sudo[377452]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:56 compute-0 sudo[377483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:04:56 compute-0 sudo[377483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:56 compute-0 podman[377525]: 2025-10-02 13:04:56.902919408 +0000 UTC m=+0.022689630 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:04:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:57.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.202 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410297.2023048, 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.203 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] VM Started (Lifecycle Event)
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.208 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.213 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.216 2 INFO nova.virt.libvirt.driver [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Instance spawned successfully.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.216 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.229 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.236 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.239 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.240 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.241 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.241 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.241 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.242 2 DEBUG nova.virt.libvirt.driver [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.290 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.290 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410297.2024305, 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.291 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] VM Paused (Lifecycle Event)
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.317 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.321 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410297.2114694, 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.322 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] VM Resumed (Lifecycle Event)
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.337 2 INFO nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Took 8.07 seconds to spawn the instance on the hypervisor.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.337 2 DEBUG nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.378 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.381 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:57 compute-0 podman[377525]: 2025-10-02 13:04:57.407900795 +0000 UTC m=+0.527670997 container create a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.412 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.431 2 INFO nova.compute.manager [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Took 9.04 seconds to build instance.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.451 2 DEBUG oslo_concurrency.lockutils [None req-f482323f-904a-42ed-b4c7-e784acae3478 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:04:57 compute-0 ceph-mon[73668]: pgmap v2759: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:04:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:57 compute-0 systemd[1]: Started libpod-conmon-a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597.scope.
Oct 02 13:04:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdf61fe78cc0a4a81426a808fa25a83ee6dc697b038979d8c95cba0d0700d07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:57 compute-0 podman[377525]: 2025-10-02 13:04:57.637838023 +0000 UTC m=+0.757608245 container init a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:04:57 compute-0 podman[377525]: 2025-10-02 13:04:57.64888333 +0000 UTC m=+0.768653522 container start a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:04:57 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [NOTICE]   (377569) : New worker (377571) forked
Oct 02 13:04:57 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [NOTICE]   (377569) : Loading success.
Oct 02 13:04:57 compute-0 nova_compute[256940]: 2025-10-02 13:04:57.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:57 compute-0 podman[377594]: 2025-10-02 13:04:57.895329017 +0000 UTC m=+0.096028793 container create a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:04:57 compute-0 podman[377594]: 2025-10-02 13:04:57.825414432 +0000 UTC m=+0.026114228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:57 compute-0 systemd[1]: Started libpod-conmon-a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d.scope.
Oct 02 13:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:58 compute-0 podman[377594]: 2025-10-02 13:04:58.06687717 +0000 UTC m=+0.267576966 container init a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:04:58 compute-0 podman[377594]: 2025-10-02 13:04:58.076920761 +0000 UTC m=+0.277620537 container start a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:04:58 compute-0 unruffled_blackwell[377610]: 167 167
Oct 02 13:04:58 compute-0 systemd[1]: libpod-a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d.scope: Deactivated successfully.
Oct 02 13:04:58 compute-0 podman[377594]: 2025-10-02 13:04:58.088200384 +0000 UTC m=+0.288900170 container attach a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:58 compute-0 podman[377594]: 2025-10-02 13:04:58.08882142 +0000 UTC m=+0.289521206 container died a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-240f3e68e1a65c1c46f51c68957702592065ff1334a6f2ea4bf7939171202a75-merged.mount: Deactivated successfully.
Oct 02 13:04:58 compute-0 podman[377594]: 2025-10-02 13:04:58.195214432 +0000 UTC m=+0.395914208 container remove a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Oct 02 13:04:58 compute-0 systemd[1]: libpod-conmon-a659136aac34bdf90211de6dd01fcc39ad064e65238b1a596dfd20594edeae7d.scope: Deactivated successfully.
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:58 compute-0 podman[377635]: 2025-10-02 13:04:58.421618659 +0000 UTC m=+0.067615847 container create 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:04:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:04:58 compute-0 podman[377635]: 2025-10-02 13:04:58.38661589 +0000 UTC m=+0.032613048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:04:58 compute-0 systemd[1]: Started libpod-conmon-27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07.scope.
Oct 02 13:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.550 2 DEBUG nova.compute.manager [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.553 2 DEBUG oslo_concurrency.lockutils [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.553 2 DEBUG oslo_concurrency.lockutils [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.553 2 DEBUG oslo_concurrency.lockutils [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.554 2 DEBUG nova.compute.manager [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] No waiting events found dispatching network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:58 compute-0 nova_compute[256940]: 2025-10-02 13:04:58.554 2 WARNING nova.compute.manager [req-d389ccac-56d7-477b-a934-df99ba7472e3 req-bd669e38-02b3-4ab4-bd96-e4d6ea1d9e78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received unexpected event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 for instance with vm_state active and task_state None.
Oct 02 13:04:58 compute-0 podman[377635]: 2025-10-02 13:04:58.577831343 +0000 UTC m=+0.223828511 container init 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:04:58 compute-0 podman[377635]: 2025-10-02 13:04:58.58808926 +0000 UTC m=+0.234086408 container start 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:04:58 compute-0 podman[377635]: 2025-10-02 13:04:58.613496199 +0000 UTC m=+0.259493347 container attach 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:04:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:04:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:04:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:04:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:59 compute-0 cranky_grothendieck[377651]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:04:59 compute-0 cranky_grothendieck[377651]: --> relative data size: 1.0
Oct 02 13:04:59 compute-0 cranky_grothendieck[377651]: --> All data devices are unavailable
Oct 02 13:04:59 compute-0 systemd[1]: libpod-27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07.scope: Deactivated successfully.
Oct 02 13:04:59 compute-0 podman[377635]: 2025-10-02 13:04:59.552725279 +0000 UTC m=+1.198722417 container died 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53fa7dd5430e5eb5bf063c76fbd760013c80e6b04b98a20a97e0df505838c21-merged.mount: Deactivated successfully.
Oct 02 13:04:59 compute-0 podman[377635]: 2025-10-02 13:04:59.648695611 +0000 UTC m=+1.294692759 container remove 27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:04:59 compute-0 ceph-mon[73668]: pgmap v2760: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Oct 02 13:04:59 compute-0 sudo[377483]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:59 compute-0 systemd[1]: libpod-conmon-27e56281dd94d934127a95fe2b5e29aa09829044b16f652db0473fee90b32b07.scope: Deactivated successfully.
Oct 02 13:04:59 compute-0 sudo[377681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:59 compute-0 sudo[377681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:59 compute-0 sudo[377681]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:59 compute-0 sudo[377706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:59 compute-0 sudo[377706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:59 compute-0 sudo[377706]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:59 compute-0 sudo[377731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:59 compute-0 sudo[377731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:59 compute-0 sudo[377731]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:59 compute-0 sudo[377756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:04:59 compute-0 sudo[377756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.26427876 +0000 UTC m=+0.053195092 container create f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:05:00 compute-0 systemd[1]: Started libpod-conmon-f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26.scope.
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.233369788 +0000 UTC m=+0.022286130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.365454016 +0000 UTC m=+0.154370338 container init f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.372758896 +0000 UTC m=+0.161675218 container start f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:05:00 compute-0 lucid_jepsen[377836]: 167 167
Oct 02 13:05:00 compute-0 systemd[1]: libpod-f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26.scope: Deactivated successfully.
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.383120425 +0000 UTC m=+0.172036777 container attach f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.383826773 +0000 UTC m=+0.172743095 container died f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f08a7b73c5b49a398805dcf9a784e6086ea39d75369efad1cac60035c97b0817-merged.mount: Deactivated successfully.
Oct 02 13:05:00 compute-0 podman[377820]: 2025-10-02 13:05:00.478758247 +0000 UTC m=+0.267674569 container remove f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:05:00 compute-0 systemd[1]: libpod-conmon-f3db8331fd60229395eea69da6d7fc38527b8e2dad4484250d7a3f90e6519a26.scope: Deactivated successfully.
Oct 02 13:05:00 compute-0 podman[377861]: 2025-10-02 13:05:00.659630641 +0000 UTC m=+0.054334370 container create d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:05:00 compute-0 systemd[1]: Started libpod-conmon-d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602.scope.
Oct 02 13:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:00 compute-0 podman[377861]: 2025-10-02 13:05:00.629813597 +0000 UTC m=+0.024517326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a254634f361cc9f643a84e2823a47e915f11649b572d62c43d4643a76b274e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a254634f361cc9f643a84e2823a47e915f11649b572d62c43d4643a76b274e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a254634f361cc9f643a84e2823a47e915f11649b572d62c43d4643a76b274e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a254634f361cc9f643a84e2823a47e915f11649b572d62c43d4643a76b274e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:00 compute-0 podman[377861]: 2025-10-02 13:05:00.74894141 +0000 UTC m=+0.143645159 container init d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:05:00 compute-0 podman[377861]: 2025-10-02 13:05:00.756520636 +0000 UTC m=+0.151224365 container start d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 13:05:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:00 compute-0 podman[377861]: 2025-10-02 13:05:00.779324338 +0000 UTC m=+0.174028067 container attach d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.859 2 DEBUG nova.compute.manager [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.859 2 DEBUG nova.compute.manager [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing instance network info cache due to event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.860 2 DEBUG oslo_concurrency.lockutils [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.860 2 DEBUG oslo_concurrency.lockutils [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:00 compute-0 nova_compute[256940]: 2025-10-02 13:05:00.860 2 DEBUG nova.network.neutron [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:01 compute-0 condescending_moore[377878]: {
Oct 02 13:05:01 compute-0 condescending_moore[377878]:     "1": [
Oct 02 13:05:01 compute-0 condescending_moore[377878]:         {
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "devices": [
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "/dev/loop3"
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             ],
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "lv_name": "ceph_lv0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "lv_size": "7511998464",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "name": "ceph_lv0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "tags": {
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.cluster_name": "ceph",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.crush_device_class": "",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.encrypted": "0",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.osd_id": "1",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.type": "block",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:                 "ceph.vdo": "0"
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             },
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "type": "block",
Oct 02 13:05:01 compute-0 condescending_moore[377878]:             "vg_name": "ceph_vg0"
Oct 02 13:05:01 compute-0 condescending_moore[377878]:         }
Oct 02 13:05:01 compute-0 condescending_moore[377878]:     ]
Oct 02 13:05:01 compute-0 condescending_moore[377878]: }
Oct 02 13:05:01 compute-0 systemd[1]: libpod-d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602.scope: Deactivated successfully.
Oct 02 13:05:01 compute-0 podman[377888]: 2025-10-02 13:05:01.621563381 +0000 UTC m=+0.023299326 container died d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a254634f361cc9f643a84e2823a47e915f11649b572d62c43d4643a76b274e7a-merged.mount: Deactivated successfully.
Oct 02 13:05:01 compute-0 podman[377888]: 2025-10-02 13:05:01.707195034 +0000 UTC m=+0.108930959 container remove d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:01 compute-0 systemd[1]: libpod-conmon-d6587e61c0df230efaa776390d2a95eb9f5341ce5d977d23219b21ea2928a602.scope: Deactivated successfully.
Oct 02 13:05:01 compute-0 sudo[377756]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:01 compute-0 sudo[377903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:01 compute-0 sudo[377903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:01 compute-0 sudo[377903]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:01 compute-0 sudo[377928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:05:01 compute-0 sudo[377928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:01 compute-0 sudo[377928]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:01 compute-0 ceph-mon[73668]: pgmap v2761: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Oct 02 13:05:01 compute-0 sudo[377953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:01 compute-0 sudo[377953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:01 compute-0 sudo[377953]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:01 compute-0 sudo[377978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:05:01 compute-0 sudo[377978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.350511303 +0000 UTC m=+0.043921921 container create 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:05:02 compute-0 systemd[1]: Started libpod-conmon-61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1.scope.
Oct 02 13:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.330002141 +0000 UTC m=+0.023412779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.442585133 +0000 UTC m=+0.135995771 container init 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.449216715 +0000 UTC m=+0.142627333 container start 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.456572976 +0000 UTC m=+0.149983614 container attach 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:05:02 compute-0 determined_wright[378061]: 167 167
Oct 02 13:05:02 compute-0 systemd[1]: libpod-61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1.scope: Deactivated successfully.
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.45787934 +0000 UTC m=+0.151289958 container died 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 13:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a00156097b190bc4aaea6a8a7c5ce291e85d6c11c55ca9ed44980da3e74f0d-merged.mount: Deactivated successfully.
Oct 02 13:05:02 compute-0 nova_compute[256940]: 2025-10-02 13:05:02.504 2 DEBUG nova.network.neutron [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updated VIF entry in instance network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:02 compute-0 nova_compute[256940]: 2025-10-02 13:05:02.507 2 DEBUG nova.network.neutron [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updating instance_info_cache with network_info: [{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:02 compute-0 podman[378044]: 2025-10-02 13:05:02.513769341 +0000 UTC m=+0.207179959 container remove 61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:05:02 compute-0 systemd[1]: libpod-conmon-61216791eaf23dd2925ab767cc5de21446da79e5b7e887c6b2d8397ebc107fe1.scope: Deactivated successfully.
Oct 02 13:05:02 compute-0 nova_compute[256940]: 2025-10-02 13:05:02.557 2 DEBUG oslo_concurrency.lockutils [req-19a5b8ec-140e-4cfb-83bc-330e38a5b820 req-4af4f4ef-b4c8-4499-8cb8-fc107b0db260 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:02 compute-0 podman[378083]: 2025-10-02 13:05:02.702452049 +0000 UTC m=+0.050843001 container create 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:05:02 compute-0 nova_compute[256940]: 2025-10-02 13:05:02.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:02 compute-0 systemd[1]: Started libpod-conmon-6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708.scope.
Oct 02 13:05:02 compute-0 podman[378083]: 2025-10-02 13:05:02.676603678 +0000 UTC m=+0.024994630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:05:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:02.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6caaa0bc107662203df2d028d5ddf2c630a49dd14441a625d6d7fae1c2183078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6caaa0bc107662203df2d028d5ddf2c630a49dd14441a625d6d7fae1c2183078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6caaa0bc107662203df2d028d5ddf2c630a49dd14441a625d6d7fae1c2183078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6caaa0bc107662203df2d028d5ddf2c630a49dd14441a625d6d7fae1c2183078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:02 compute-0 podman[378083]: 2025-10-02 13:05:02.874357121 +0000 UTC m=+0.222748103 container init 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:05:02 compute-0 podman[378083]: 2025-10-02 13:05:02.883134129 +0000 UTC m=+0.231525091 container start 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:05:02 compute-0 podman[378083]: 2025-10-02 13:05:02.962862638 +0000 UTC m=+0.311253620 container attach 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:05:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:03 compute-0 ceph-mon[73668]: pgmap v2762: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Oct 02 13:05:03 compute-0 nova_compute[256940]: 2025-10-02 13:05:03.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:03 compute-0 serene_albattani[378101]: {
Oct 02 13:05:03 compute-0 serene_albattani[378101]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:05:03 compute-0 serene_albattani[378101]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:05:03 compute-0 serene_albattani[378101]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:05:03 compute-0 serene_albattani[378101]:         "osd_id": 1,
Oct 02 13:05:03 compute-0 serene_albattani[378101]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:05:03 compute-0 serene_albattani[378101]:         "type": "bluestore"
Oct 02 13:05:03 compute-0 serene_albattani[378101]:     }
Oct 02 13:05:03 compute-0 serene_albattani[378101]: }
Oct 02 13:05:03 compute-0 systemd[1]: libpod-6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708.scope: Deactivated successfully.
Oct 02 13:05:03 compute-0 podman[378083]: 2025-10-02 13:05:03.751776597 +0000 UTC m=+1.100167569 container died 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6caaa0bc107662203df2d028d5ddf2c630a49dd14441a625d6d7fae1c2183078-merged.mount: Deactivated successfully.
Oct 02 13:05:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:04 compute-0 podman[378083]: 2025-10-02 13:05:04.258057958 +0000 UTC m=+1.606448950 container remove 6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:05:04 compute-0 systemd[1]: libpod-conmon-6759b391470bc9e7a6791515b4a29b7eca76372ba79041e78a60feb759809708.scope: Deactivated successfully.
Oct 02 13:05:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:04 compute-0 sudo[377978]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:05:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:05:04 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 15bd14c5-cf46-4b76-9fa5-427d9279080e does not exist
Oct 02 13:05:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d8424fd2-b21c-4b9e-8337-6c7b22ecda3b does not exist
Oct 02 13:05:04 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 69ab3f5d-4454-494f-a0dd-49f3067bc60a does not exist
Oct 02 13:05:04 compute-0 sudo[378133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:04 compute-0 sudo[378133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:04 compute-0 sudo[378133]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:04 compute-0 sudo[378158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:05:04 compute-0 sudo[378158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:04 compute-0 sudo[378158]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:05:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:05:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:05:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:05:05 compute-0 ceph-mon[73668]: pgmap v2763: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:05:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:05:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:07 compute-0 ceph-mon[73668]: pgmap v2764: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:07 compute-0 nova_compute[256940]: 2025-10-02 13:05:07.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 144 op/s
Oct 02 13:05:08 compute-0 nova_compute[256940]: 2025-10-02 13:05:08.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:05:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:05:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:09 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 13:05:09 compute-0 nova_compute[256940]: 2025-10-02 13:05:09.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:09 compute-0 ceph-mon[73668]: pgmap v2765: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 144 op/s
Oct 02 13:05:10 compute-0 ovn_controller[148123]: 2025-10-02T13:05:10Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:d6:de 10.100.0.5
Oct 02 13:05:10 compute-0 ovn_controller[148123]: 2025-10-02T13:05:10Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:d6:de 10.100.0.5
Oct 02 13:05:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 397 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 13:05:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:11 compute-0 sudo[378187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:11 compute-0 sudo[378187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:11 compute-0 sudo[378187]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:11 compute-0 sudo[378225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:11 compute-0 sudo[378225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:11 compute-0 sudo[378225]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:11 compute-0 podman[378211]: 2025-10-02 13:05:11.085469551 +0000 UTC m=+0.078308224 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:11 compute-0 podman[378212]: 2025-10-02 13:05:11.085760298 +0000 UTC m=+0.078624341 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:11 compute-0 ceph-mon[73668]: pgmap v2766: 305 pgs: 305 active+clean; 397 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 13:05:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 02 13:05:12 compute-0 nova_compute[256940]: 2025-10-02 13:05:12.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:13 compute-0 ceph-mon[73668]: pgmap v2767: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 02 13:05:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:13 compute-0 nova_compute[256940]: 2025-10-02 13:05:13.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 4.2 MiB/s wr, 97 op/s
Oct 02 13:05:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:15 compute-0 nova_compute[256940]: 2025-10-02 13:05:15.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:15 compute-0 ceph-mon[73668]: pgmap v2768: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 4.2 MiB/s wr, 97 op/s
Oct 02 13:05:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:16.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:17.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:17 compute-0 nova_compute[256940]: 2025-10-02 13:05:17.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:17 compute-0 nova_compute[256940]: 2025-10-02 13:05:17.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:17 compute-0 ceph-mon[73668]: pgmap v2769: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2041434741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:18 compute-0 nova_compute[256940]: 2025-10-02 13:05:18.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/62137085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:20 compute-0 ceph-mon[73668]: pgmap v2770: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/376673178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 02 13:05:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/634107591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:21 compute-0 ceph-mon[73668]: pgmap v2771: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 02 13:05:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/470398889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:05:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.344 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.344 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.377 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.499 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.499 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.509 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.509 2 INFO nova.compute.claims [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.691 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:21.899 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:21 compute-0 nova_compute[256940]: 2025-10-02 13:05:21.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:21 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:21.901 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.110 2 DEBUG nova.compute.manager [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.111 2 DEBUG nova.compute.manager [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing instance network info cache due to event network-changed-b4547f7a-3de5-4629-97e8-f2b5a06ed444. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.112 2 DEBUG oslo_concurrency.lockutils [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.112 2 DEBUG oslo_concurrency.lockutils [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.113 2 DEBUG nova.network.neutron [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Refreshing network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511956726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/511956726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.147 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.156 2 DEBUG nova.compute.provider_tree [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.164 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.164 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.164 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.165 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.165 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.166 2 INFO nova.compute.manager [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Terminating instance
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.167 2 DEBUG nova.compute.manager [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.191 2 DEBUG nova.scheduler.client.report [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:22 compute-0 kernel: tapb4547f7a-3d (unregistering): left promiscuous mode
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.223 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.224 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:05:22 compute-0 NetworkManager[44981]: <info>  [1759410322.2248] device (tapb4547f7a-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:05:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 3.0 MiB/s wr, 107 op/s
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:22 compute-0 ovn_controller[148123]: 2025-10-02T13:05:22Z|00920|binding|INFO|Releasing lport b4547f7a-3de5-4629-97e8-f2b5a06ed444 from this chassis (sb_readonly=0)
Oct 02 13:05:22 compute-0 ovn_controller[148123]: 2025-10-02T13:05:22Z|00921|binding|INFO|Setting lport b4547f7a-3de5-4629-97e8-f2b5a06ed444 down in Southbound
Oct 02 13:05:22 compute-0 ovn_controller[148123]: 2025-10-02T13:05:22Z|00922|binding|INFO|Removing iface tapb4547f7a-3d ovn-installed in OVS
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.239 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.259 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:d6:de 10.100.0.5'], port_security=['fa:16:3e:9e:d6:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '092f2a53-5ba6-4db2-adc9-3c7c8880d9a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-714950b1-cde6-436b-afc3-312c876985ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aed40f24f86c4d4cb4ce0a3d01cde284', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1554d381-f0d8-44a8-bce1-c85e1567b05c 1e21d649-34e5-4625-b088-8b58f0f88dbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b6ac383-2e57-4c4d-82f6-7a601de6d31d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=b4547f7a-3de5-4629-97e8-f2b5a06ed444) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.260 158104 INFO neutron.agent.ovn.metadata.agent [-] Port b4547f7a-3de5-4629-97e8-f2b5a06ed444 in datapath 714950b1-cde6-436b-afc3-312c876985ed unbound from our chassis
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.262 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 714950b1-cde6-436b-afc3-312c876985ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.263 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5be1d5c9-eb7f-4fac-8f3b-52f5ff5d7d45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.263 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-714950b1-cde6-436b-afc3-312c876985ed namespace which is not needed anymore
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Oct 02 13:05:22 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ba.scope: Consumed 13.993s CPU time.
Oct 02 13:05:22 compute-0 systemd-machined[210927]: Machine qemu-94-instance-000000ba terminated.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.292 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.293 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.320 2 INFO nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.342 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.474 2 INFO nova.virt.libvirt.driver [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Instance destroyed successfully.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.475 2 DEBUG nova.objects.instance [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lazy-loading 'resources' on Instance uuid 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.489 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.490 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.491 2 INFO nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Creating image(s)
Oct 02 13:05:22 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [NOTICE]   (377569) : haproxy version is 2.8.14-c23fe91
Oct 02 13:05:22 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [NOTICE]   (377569) : path to executable is /usr/sbin/haproxy
Oct 02 13:05:22 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [WARNING]  (377569) : Exiting Master process...
Oct 02 13:05:22 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [ALERT]    (377569) : Current worker (377571) exited with code 143 (Terminated)
Oct 02 13:05:22 compute-0 neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed[377565]: [WARNING]  (377569) : All workers exited. Exiting... (0)
Oct 02 13:05:22 compute-0 systemd[1]: libpod-a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597.scope: Deactivated successfully.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.518 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:22 compute-0 podman[378328]: 2025-10-02 13:05:22.524677053 +0000 UTC m=+0.159767678 container died a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.549 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.577 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.581 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdf61fe78cc0a4a81426a808fa25a83ee6dc697b038979d8c95cba0d0700d07-merged.mount: Deactivated successfully.
Oct 02 13:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597-userdata-shm.mount: Deactivated successfully.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.612 2 DEBUG nova.virt.libvirt.vif [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1658999594-access_point-1587879445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1658999594-ac',id=186,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDreqrcPMqyn12TpEK39/iAxuA1VJNw/wAKLFZUIzZu61ixsA73RxNCL7cv1mvwVqQxVz7eqEkvxqDpyU6WYdar8m2ioWMocUQtF78TLmnW81cFDvzS1PVIw+zfesBubfQ==',key_name='tempest-TestSecurityGroupsBasicOps-1604935118',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aed40f24f86c4d4cb4ce0a3d01cde284',ramdisk_id='',reservation_id='r-qdxnr8ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1658999594',owner_user_name='tempest-TestSecurityGroupsBasicOps-1658999594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:57Z,user_data=None,user_id='93f224db5657410889fd6d1e8031770a',uuid=092f2a53-5ba6-4db2-adc9-3c7c8880d9a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.613 2 DEBUG nova.network.os_vif_util [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converting VIF {"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.614 2 DEBUG nova.network.os_vif_util [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.615 2 DEBUG os_vif [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4547f7a-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.623 2 INFO os_vif [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:d6:de,bridge_name='br-int',has_traffic_filtering=True,id=b4547f7a-3de5-4629-97e8-f2b5a06ed444,network=Network(714950b1-cde6-436b-afc3-312c876985ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547f7a-3d')
Oct 02 13:05:22 compute-0 podman[378328]: 2025-10-02 13:05:22.650271703 +0000 UTC m=+0.285362328 container cleanup a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.650 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.651 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.652 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.652 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:22 compute-0 systemd[1]: libpod-conmon-a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597.scope: Deactivated successfully.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.686 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.690 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 podman[378466]: 2025-10-02 13:05:22.736532912 +0000 UTC m=+0.067207565 container remove a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.737 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.743 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f368ce12-5364-4840-8437-d3bf5e9d5465]: (4, ('Thu Oct  2 01:05:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed (a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597)\na0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597\nThu Oct  2 01:05:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-714950b1-cde6-436b-afc3-312c876985ed (a0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597)\na0f01038b598e0cfd31aeb0bd74c059b69472f326a8a8cd52e0ac522a50db597\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.744 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[48ba99c3-8925-42a5-b03b-e4cffedb4145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.745 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap714950b1-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:22 compute-0 kernel: tap714950b1-c0: left promiscuous mode
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.765 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5996ba-0707-443b-b256-3d0e7dcf9ff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.792 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8cba14b7-65e8-4c59-b03f-2b3ba9144edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.794 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3362fedb-c7ca-4326-9134-f688d788fc88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.810 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d17b09aa-a5bd-4c0a-b75d-188d802c8555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819174, 'reachable_time': 28857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378514, 'error': None, 'target': 'ovnmeta-714950b1-cde6-436b-afc3-312c876985ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.814 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-714950b1-cde6-436b-afc3-312c876985ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:05:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:22.814 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[c361c5de-b3e1-45b9-bd4c-ba009f316fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d714950b1\x2dcde6\x2d436b\x2dafc3\x2d312c876985ed.mount: Deactivated successfully.
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.900 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.901 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.903 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:22 compute-0 nova_compute[256940]: 2025-10-02 13:05:22.904 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.056 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.057 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=20.806194305419922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.057 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.057 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.087 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.182 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] resizing rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:05:23 compute-0 ceph-mon[73668]: pgmap v2772: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 3.0 MiB/s wr, 107 op/s
Oct 02 13:05:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/588557104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:23.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.228 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 623fda87-e4b4-4b98-96cb-6f1846429214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.228 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.228 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.229 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.229 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.316 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.384 2 DEBUG nova.objects.instance [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'migration_context' on Instance uuid 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.397 2 DEBUG nova.compute.manager [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-unplugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.397 2 DEBUG oslo_concurrency.lockutils [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.398 2 DEBUG oslo_concurrency.lockutils [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.398 2 DEBUG oslo_concurrency.lockutils [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.398 2 DEBUG nova.compute.manager [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] No waiting events found dispatching network-vif-unplugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.398 2 DEBUG nova.compute.manager [req-908b8a96-33ae-4afa-a583-11981f50c90a req-e15102db-6683-45da-97b4-d063d3086aac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-unplugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.399 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.400 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Ensure instance console log exists: /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.400 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.400 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.400 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.536 2 INFO nova.virt.libvirt.driver [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Deleting instance files /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_del
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.536 2 INFO nova.virt.libvirt.driver [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Deletion of /var/lib/nova/instances/092f2a53-5ba6-4db2-adc9-3c7c8880d9a8_del complete
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.541 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Successfully created port: 170f8d44-ac19-4975-9721-410cf7e7fcb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.609 2 INFO nova.compute.manager [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Took 1.44 seconds to destroy the instance on the hypervisor.
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.610 2 DEBUG oslo.service.loopingcall [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.610 2 DEBUG nova.compute.manager [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.611 2 DEBUG nova.network.neutron [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:05:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647629771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.794 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.799 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.816 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.874 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:05:23 compute-0 nova_compute[256940]: 2025-10-02 13:05:23.875 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 1000 KiB/s wr, 46 op/s
Oct 02 13:05:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/647629771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.363 2 DEBUG nova.network.neutron [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updated VIF entry in instance network info cache for port b4547f7a-3de5-4629-97e8-f2b5a06ed444. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.364 2 DEBUG nova.network.neutron [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updating instance_info_cache with network_info: [{"id": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "address": "fa:16:3e:9e:d6:de", "network": {"id": "714950b1-cde6-436b-afc3-312c876985ed", "bridge": "br-int", "label": "tempest-network-smoke--914394433", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aed40f24f86c4d4cb4ce0a3d01cde284", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547f7a-3d", "ovs_interfaceid": "b4547f7a-3de5-4629-97e8-f2b5a06ed444", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.402 2 DEBUG oslo_concurrency.lockutils [req-9ed412c8-eda8-464f-ab26-39bfc42e7ef9 req-5c1605b1-657f-429f-b4dd-ae0d1b92b901 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.410 2 DEBUG nova.network.neutron [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:24 compute-0 podman[378614]: 2025-10-02 13:05:24.416853949 +0000 UTC m=+0.088155829 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:24 compute-0 podman[378613]: 2025-10-02 13:05:24.430139844 +0000 UTC m=+0.095164831 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.438 2 INFO nova.compute.manager [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Took 0.83 seconds to deallocate network for instance.
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.497 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.497 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.572 2 DEBUG oslo_concurrency.processutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.604 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Successfully updated port: 170f8d44-ac19-4975-9721-410cf7e7fcb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.622 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.623 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquired lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.623 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.685 2 DEBUG nova.compute.manager [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-changed-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.686 2 DEBUG nova.compute.manager [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Refreshing instance network info cache due to event network-changed-170f8d44-ac19-4975-9721-410cf7e7fcb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.686 2 DEBUG oslo_concurrency.lockutils [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.781 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:05:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.876 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:24 compute-0 nova_compute[256940]: 2025-10-02 13:05:24.876 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114044563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.030 2 DEBUG oslo_concurrency.processutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.035 2 DEBUG nova.compute.provider_tree [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.059 2 DEBUG nova.scheduler.client.report [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.081 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.112 2 INFO nova.scheduler.client.report [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Deleted allocations for instance 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8
Oct 02 13:05:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:25.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.308 2 DEBUG oslo_concurrency.lockutils [None req-4f9a2e6b-99af-4ec9-be73-69dfc3a256f3 93f224db5657410889fd6d1e8031770a aed40f24f86c4d4cb4ce0a3d01cde284 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:25 compute-0 ceph-mon[73668]: pgmap v2773: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 1000 KiB/s wr, 46 op/s
Oct 02 13:05:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2669593725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3473546659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2114044563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.494 2 DEBUG nova.compute.manager [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.495 2 DEBUG oslo_concurrency.lockutils [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.495 2 DEBUG oslo_concurrency.lockutils [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.495 2 DEBUG oslo_concurrency.lockutils [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "092f2a53-5ba6-4db2-adc9-3c7c8880d9a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.496 2 DEBUG nova.compute.manager [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] No waiting events found dispatching network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.496 2 WARNING nova.compute.manager [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received unexpected event network-vif-plugged-b4547f7a-3de5-4629-97e8-f2b5a06ed444 for instance with vm_state deleted and task_state None.
Oct 02 13:05:25 compute-0 nova_compute[256940]: 2025-10-02 13:05:25.496 2 DEBUG nova.compute.manager [req-a3d6a4ab-05a9-46c8-a07b-41e03d7f2e5e req-71a4b6ef-3e98-4fcc-b34e-860335f22a63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Received event network-vif-deleted-b4547f7a-3de5-4629-97e8-f2b5a06ed444 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.211 2 DEBUG nova.network.neutron [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Updating instance_info_cache with network_info: [{"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 464 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 3.4 MiB/s wr, 93 op/s
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.254 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Releasing lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.255 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Instance network_info: |[{"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.255 2 DEBUG oslo_concurrency.lockutils [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.255 2 DEBUG nova.network.neutron [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Refreshing network info cache for port 170f8d44-ac19-4975-9721-410cf7e7fcb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.258 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Start _get_guest_xml network_info=[{"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.262 2 WARNING nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.313 2 DEBUG nova.virt.libvirt.host [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.313 2 DEBUG nova.virt.libvirt.host [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.316 2 DEBUG nova.virt.libvirt.host [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.317 2 DEBUG nova.virt.libvirt.host [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.318 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.318 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.318 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.319 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.319 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.319 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.319 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.319 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.320 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.320 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.320 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.320 2 DEBUG nova.virt.hardware [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.323 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/734581899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:26.501 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:26.502 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:26.503 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:05:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079627697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.950 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.980 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:26 compute-0 nova_compute[256940]: 2025-10-02 13:05:26.984 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:05:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:27.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:05:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:05:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334999725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.421 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.423 2 DEBUG nova.virt.libvirt.vif [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-630913469',display_name='tempest-TestServerMultinode-server-630913469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-630913469',id=188,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-hxkyq1ub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-
project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:22Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=383a4ec3-88f1-4bc7-9a1e-1f04a87840d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.423 2 DEBUG nova.network.os_vif_util [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.424 2 DEBUG nova.network.os_vif_util [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.425 2 DEBUG nova.objects.instance [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'pci_devices' on Instance uuid 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.464 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <uuid>383a4ec3-88f1-4bc7-9a1e-1f04a87840d3</uuid>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <name>instance-000000bc</name>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:name>tempest-TestServerMultinode-server-630913469</nova:name>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:05:26</nova:creationTime>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:user uuid="7fb7e45069d34870bc5f4fa70bd8c6de">tempest-TestServerMultinode-2060715482-project-admin</nova:user>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:project uuid="19365f54974d4109ae80bc13ac9ba55a">tempest-TestServerMultinode-2060715482</nova:project>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <nova:port uuid="170f8d44-ac19-4975-9721-410cf7e7fcb4">
Oct 02 13:05:27 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <system>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="serial">383a4ec3-88f1-4bc7-9a1e-1f04a87840d3</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="uuid">383a4ec3-88f1-4bc7-9a1e-1f04a87840d3</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </system>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <os>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </os>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <features>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </features>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk">
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </source>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config">
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </source>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:05:27 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:f9:ff:25"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <target dev="tap170f8d44-ac"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/console.log" append="off"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <video>
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </video>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:05:27 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:05:27 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:05:27 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:05:27 compute-0 nova_compute[256940]: </domain>
Oct 02 13:05:27 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.466 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Preparing to wait for external event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.466 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.466 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.466 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.467 2 DEBUG nova.virt.libvirt.vif [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-630913469',display_name='tempest-TestServerMultinode-server-630913469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-630913469',id=188,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-hxkyq1ub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:22Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=383a4ec3-88f1-4bc7-9a1e-1f04a87840d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.467 2 DEBUG nova.network.os_vif_util [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.468 2 DEBUG nova.network.os_vif_util [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.468 2 DEBUG os_vif [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap170f8d44-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap170f8d44-ac, col_values=(('external_ids', {'iface-id': '170f8d44-ac19-4975-9721-410cf7e7fcb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:ff:25', 'vm-uuid': '383a4ec3-88f1-4bc7-9a1e-1f04a87840d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-0 NetworkManager[44981]: <info>  [1759410327.5223] manager: (tap170f8d44-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.528 2 INFO os_vif [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac')
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.699 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.699 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.699 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No VIF found with MAC fa:16:3e:f9:ff:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.700 2 INFO nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Using config drive
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.732 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:27 compute-0 nova_compute[256940]: 2025-10-02 13:05:27.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-0 ceph-mon[73668]: pgmap v2774: 305 pgs: 305 active+clean; 464 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 3.4 MiB/s wr, 93 op/s
Oct 02 13:05:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1079627697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3334999725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 470 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 4.4 MiB/s wr, 92 op/s
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.467 2 INFO nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Creating config drive at /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.471 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppkv10yuj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.605 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppkv10yuj" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.633 2 DEBUG nova.storage.rbd_utils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.636 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.720 2 DEBUG nova.network.neutron [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Updated VIF entry in instance network info cache for port 170f8d44-ac19-4975-9721-410cf7e7fcb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.722 2 DEBUG nova.network.neutron [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Updating instance_info_cache with network_info: [{"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.781 2 DEBUG oslo_concurrency.lockutils [req-f738fccd-151f-4aae-834a-babb61876dd3 req-8b11d5f2-1893-4ee9-b40c-9c9718564616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:28.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:05:28
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'images']
Oct 02 13:05:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:05:28 compute-0 nova_compute[256940]: 2025-10-02 13:05:28.999 2 DEBUG oslo_concurrency.processutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.000 2 INFO nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Deleting local config drive /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3/disk.config because it was imported into RBD.
Oct 02 13:05:29 compute-0 kernel: tap170f8d44-ac: entered promiscuous mode
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 ovn_controller[148123]: 2025-10-02T13:05:29Z|00923|binding|INFO|Claiming lport 170f8d44-ac19-4975-9721-410cf7e7fcb4 for this chassis.
Oct 02 13:05:29 compute-0 ovn_controller[148123]: 2025-10-02T13:05:29Z|00924|binding|INFO|170f8d44-ac19-4975-9721-410cf7e7fcb4: Claiming fa:16:3e:f9:ff:25 10.100.0.8
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.0527] manager: (tap170f8d44-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.063 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:ff:25 10.100.0.8'], port_security=['fa:16:3e:f9:ff:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '383a4ec3-88f1-4bc7-9a1e-1f04a87840d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=170f8d44-ac19-4975-9721-410cf7e7fcb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.064 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 170f8d44-ac19-4975-9721-410cf7e7fcb4 in datapath 962339a8-ad45-401e-ae58-50cd40858566 bound to our chassis
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.066 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:29 compute-0 ovn_controller[148123]: 2025-10-02T13:05:29Z|00925|binding|INFO|Setting lport 170f8d44-ac19-4975-9721-410cf7e7fcb4 ovn-installed in OVS
Oct 02 13:05:29 compute-0 ovn_controller[148123]: 2025-10-02T13:05:29Z|00926|binding|INFO|Setting lport 170f8d44-ac19-4975-9721-410cf7e7fcb4 up in Southbound
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.078 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[07ea45a4-42cc-405b-a849-03d9b092f184]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.079 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962339a8-a1 in ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.081 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962339a8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.081 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6eb344-adb2-4648-a981-04a457b86011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 systemd-udevd[378819]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.082 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1bcc40-f922-4a4a-bb0e-98b11548a41f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.094 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[133d5589-27b6-4e73-b00a-ccd4765e016f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.0971] device (tap170f8d44-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.0983] device (tap170f8d44-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:05:29 compute-0 systemd-machined[210927]: New machine qemu-95-instance-000000bc.
Oct 02 13:05:29 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-000000bc.
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.121 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc9ec17-c6a7-4a16-9c1c-80af36e062ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.152 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[aca6b3a5-f261-46d0-9364-35d1492a26b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.1584] manager: (tap962339a8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.159 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9429f769-7167-483a-aec5-3ffd0cfe4743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.192 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[be61364c-92b1-4080-92bc-c940e8e7bf41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.195 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5d59a52a-5dde-45d4-9a66-87991fcf6a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.2158] device (tap962339a8-a0): carrier: link connected
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.221 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd184b4-0308-49a5-94d0-89acf6427281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:29.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.238 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[70a97dff-bcab-428b-8e5e-2a9d54727a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822470, 'reachable_time': 40945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378853, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.253 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[276fbefc-85e6-46a6-9771-996cd3b1820c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:f8da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822470, 'tstamp': 822470}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378854, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.272 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4edb5037-1a1e-486d-a299-95ed6ca9b981]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822470, 'reachable_time': 40945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378855, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.302 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8473dcc2-ad2a-4693-80b8-0cb486512c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.373 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a57e96cf-ae61-4fa2-9dbe-6285abbb187d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.376 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.377 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.377 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962339a8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 kernel: tap962339a8-a0: entered promiscuous mode
Oct 02 13:05:29 compute-0 NetworkManager[44981]: <info>  [1759410329.3811] manager: (tap962339a8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 ovn_controller[148123]: 2025-10-02T13:05:29Z|00927|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.387 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962339a8-a0, col_values=(('external_ids', {'iface-id': '95f6c57c-e568-4ed7-aa6a-02671a012e41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:29 compute-0 nova_compute[256940]: 2025-10-02 13:05:29.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.412 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.414 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5cdd58-7045-42db-8216-c995151b0a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.414 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:05:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:29.415 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'env', 'PROCESS_TAG=haproxy-962339a8-ad45-401e-ae58-50cd40858566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962339a8-ad45-401e-ae58-50cd40858566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:05:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:05:29 compute-0 podman[378929]: 2025-10-02 13:05:29.752315903 +0000 UTC m=+0.024943438 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:05:29 compute-0 ceph-mon[73668]: pgmap v2775: 305 pgs: 305 active+clean; 470 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 4.4 MiB/s wr, 92 op/s
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.019 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410330.0184503, 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:30 compute-0 podman[378929]: 2025-10-02 13:05:30.020019863 +0000 UTC m=+0.292647388 container create 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.019 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] VM Started (Lifecycle Event)
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.064 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:30 compute-0 systemd[1]: Started libpod-conmon-9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5.scope.
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.070 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410330.019707, 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.071 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] VM Paused (Lifecycle Event)
Oct 02 13:05:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51422e3b1a68b36a678d5a4319ad7f01b0bd19d9aa4f716291fee20ca7544029/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.123 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.127 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:05:30 compute-0 nova_compute[256940]: 2025-10-02 13:05:30.165 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:05:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 569 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct 02 13:05:30 compute-0 podman[378929]: 2025-10-02 13:05:30.261535092 +0000 UTC m=+0.534162617 container init 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:30 compute-0 podman[378929]: 2025-10-02 13:05:30.269220271 +0000 UTC m=+0.541847786 container start 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:05:30 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [NOTICE]   (378948) : New worker (378950) forked
Oct 02 13:05:30 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [NOTICE]   (378948) : Loading success.
Oct 02 13:05:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:30.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Oct 02 13:05:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:30.903 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:31 compute-0 sudo[378960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:31 compute-0 sudo[378960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:31 compute-0 sudo[378960]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Oct 02 13:05:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000077s ======
Oct 02 13:05:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:31.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Oct 02 13:05:31 compute-0 sudo[378985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:31 compute-0 sudo[378985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:31 compute-0 sudo[378985]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Oct 02 13:05:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1962952617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Oct 02 13:05:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Oct 02 13:05:32 compute-0 ceph-mon[73668]: pgmap v2776: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 569 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct 02 13:05:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1386256853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:32 compute-0 ceph-mon[73668]: osdmap e354: 3 total, 3 up, 3 in
Oct 02 13:05:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1531540515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Oct 02 13:05:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Oct 02 13:05:32 compute-0 ovn_controller[148123]: 2025-10-02T13:05:32Z|00928|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct 02 13:05:32 compute-0 ovn_controller[148123]: 2025-10-02T13:05:32Z|00929|binding|INFO|Releasing lport 93013e5b-e543-4f32-a0a0-fbe6244eaa94 from this chassis (sb_readonly=0)
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.906 2 DEBUG nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.907 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.907 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.907 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.907 2 DEBUG nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Processing event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.908 2 DEBUG nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.908 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.908 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.908 2 DEBUG oslo_concurrency.lockutils [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.908 2 DEBUG nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] No waiting events found dispatching network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.909 2 WARNING nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received unexpected event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 for instance with vm_state building and task_state spawning.
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.909 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.918 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410332.9185588, 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.919 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] VM Resumed (Lifecycle Event)
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.924 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.928 2 INFO nova.virt.libvirt.driver [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Instance spawned successfully.
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.929 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.978 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.985 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.986 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.987 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.988 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.989 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.990 2 DEBUG nova.virt.libvirt.driver [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:32 compute-0 nova_compute[256940]: 2025-10-02 13:05:32.997 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.045 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.080 2 INFO nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Took 10.59 seconds to spawn the instance on the hypervisor.
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.081 2 DEBUG nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.187 2 INFO nova.compute.manager [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Took 11.73 seconds to build instance.
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.207 2 DEBUG oslo_concurrency.lockutils [None req-18130b33-a2ca-4f98-a74a-4c2178af29e8 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:05:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:33.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Oct 02 13:05:33 compute-0 ceph-mon[73668]: pgmap v2778: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Oct 02 13:05:33 compute-0 ceph-mon[73668]: osdmap e355: 3 total, 3 up, 3 in
Oct 02 13:05:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Oct 02 13:05:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.830 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.830 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.831 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:05:33 compute-0 nova_compute[256940]: 2025-10-02 13:05:33.831 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 623fda87-e4b4-4b98-96cb-6f1846429214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 291 op/s
Oct 02 13:05:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:34 compute-0 ceph-mon[73668]: osdmap e356: 3 total, 3 up, 3 in
Oct 02 13:05:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:35.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.379 2 DEBUG nova.compute.manager [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.379 2 DEBUG nova.compute.manager [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing instance network info cache due to event network-changed-2b9e2335-30a2-48b8-91f1-2a3ba80473ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.380 2 DEBUG oslo_concurrency.lockutils [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:35 compute-0 ceph-mon[73668]: pgmap v2781: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 291 op/s
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.564 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.565 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.565 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.565 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.566 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.567 2 INFO nova.compute.manager [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Terminating instance
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.568 2 DEBUG nova.compute.manager [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:05:35 compute-0 kernel: tap2b9e2335-30 (unregistering): left promiscuous mode
Oct 02 13:05:35 compute-0 NetworkManager[44981]: <info>  [1759410335.6673] device (tap2b9e2335-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00930|binding|INFO|Releasing lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee from this chassis (sb_readonly=0)
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00931|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee down in Southbound
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00932|binding|INFO|Removing iface tap2b9e2335-30 ovn-installed in OVS
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.695 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:70:d1 10.100.0.13'], port_security=['fa:16:3e:9b:70:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '623fda87-e4b4-4b98-96cb-6f1846429214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ade66a-9f20-4007-92c8-0a4b810103cd dc6a6518-75b2-4560-a7df-1de3b215b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4e2472-fc91-44ac-acf9-e1338455a1c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2b9e2335-30a2-48b8-91f1-2a3ba80473ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.697 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee in datapath 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 unbound from our chassis
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.700 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.701 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa16b98-3ddb-453c-9c69-519c5a3e1049]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.702 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 namespace which is not needed anymore
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Oct 02 13:05:35 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b6.scope: Consumed 16.078s CPU time.
Oct 02 13:05:35 compute-0 systemd-machined[210927]: Machine qemu-93-instance-000000b6 terminated.
Oct 02 13:05:35 compute-0 NetworkManager[44981]: <info>  [1759410335.7859] manager: (tap2b9e2335-30): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 02 13:05:35 compute-0 kernel: tap2b9e2335-30: entered promiscuous mode
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00933|binding|INFO|Claiming lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee for this chassis.
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00934|binding|INFO|2b9e2335-30a2-48b8-91f1-2a3ba80473ee: Claiming fa:16:3e:9b:70:d1 10.100.0.13
Oct 02 13:05:35 compute-0 kernel: tap2b9e2335-30 (unregistering): left promiscuous mode
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.808 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:70:d1 10.100.0.13'], port_security=['fa:16:3e:9b:70:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '623fda87-e4b4-4b98-96cb-6f1846429214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ade66a-9f20-4007-92c8-0a4b810103cd dc6a6518-75b2-4560-a7df-1de3b215b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4e2472-fc91-44ac-acf9-e1338455a1c5, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2b9e2335-30a2-48b8-91f1-2a3ba80473ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00935|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee ovn-installed in OVS
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00936|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee up in Southbound
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00937|binding|INFO|Releasing lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee from this chassis (sb_readonly=1)
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00938|if_status|INFO|Dropped 2 log messages in last 546 seconds (most recently, 546 seconds ago) due to excessive rate
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00939|if_status|INFO|Not setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee down as sb is readonly
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.813 2 INFO nova.virt.libvirt.driver [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Instance destroyed successfully.
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.814 2 DEBUG nova.objects.instance [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 623fda87-e4b4-4b98-96cb-6f1846429214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00940|binding|INFO|Removing iface tap2b9e2335-30 ovn-installed in OVS
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00941|binding|INFO|Releasing lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee from this chassis (sb_readonly=0)
Oct 02 13:05:35 compute-0 ovn_controller[148123]: 2025-10-02T13:05:35Z|00942|binding|INFO|Setting lport 2b9e2335-30a2-48b8-91f1-2a3ba80473ee down in Southbound
Oct 02 13:05:35 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:35.828 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:70:d1 10.100.0.13'], port_security=['fa:16:3e:9b:70:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '623fda87-e4b4-4b98-96cb-6f1846429214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ade66a-9f20-4007-92c8-0a4b810103cd dc6a6518-75b2-4560-a7df-1de3b215b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4e2472-fc91-44ac-acf9-e1338455a1c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=2b9e2335-30a2-48b8-91f1-2a3ba80473ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [NOTICE]   (376553) : haproxy version is 2.8.14-c23fe91
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [NOTICE]   (376553) : path to executable is /usr/sbin/haproxy
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [WARNING]  (376553) : Exiting Master process...
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [WARNING]  (376553) : Exiting Master process...
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [ALERT]    (376553) : Current worker (376555) exited with code 143 (Terminated)
Oct 02 13:05:35 compute-0 neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9[376549]: [WARNING]  (376553) : All workers exited. Exiting... (0)
Oct 02 13:05:35 compute-0 systemd[1]: libpod-c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4.scope: Deactivated successfully.
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.860 2 DEBUG nova.virt.libvirt.vif [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-132603722',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=182,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUSY1aWRmq0CGi/H2pvD/bCECTpQePcVVb1ww/02gg774n0eyLtedEGOwVYxdhbiBUstusAhPajTI55nJ9x71ILzdP36Ifk9V9OxqNCn3GTT+6F6E95IO0GnzzYXwG7g==',key_name='tempest-TestSecurityGroupsBasicOps-479832544',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-q1gybzp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:12Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=623fda87-e4b4-4b98-96cb-6f1846429214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.860 2 DEBUG nova.network.os_vif_util [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.861 2 DEBUG nova.network.os_vif_util [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.861 2 DEBUG os_vif [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b9e2335-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:35 compute-0 podman[379041]: 2025-10-02 13:05:35.865453996 +0000 UTC m=+0.053081429 container died c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:05:35 compute-0 nova_compute[256940]: 2025-10-02 13:05:35.868 2 INFO os_vif [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:70:d1,bridge_name='br-int',has_traffic_filtering=True,id=2b9e2335-30a2-48b8-91f1-2a3ba80473ee,network=Network(2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b9e2335-30')
Oct 02 13:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4-userdata-shm.mount: Deactivated successfully.
Oct 02 13:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-47f091b7a303445f52fd0c3192f61520fb721bec47a5a2b5d2a97c41dd763900-merged.mount: Deactivated successfully.
Oct 02 13:05:35 compute-0 podman[379041]: 2025-10-02 13:05:35.938351128 +0000 UTC m=+0.125978561 container cleanup c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:05:35 compute-0 systemd[1]: libpod-conmon-c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4.scope: Deactivated successfully.
Oct 02 13:05:36 compute-0 podman[379089]: 2025-10-02 13:05:36.200607646 +0000 UTC m=+0.240716559 container remove c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.206 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cf38ab09-1494-4030-a0c7-adb948e4d30b]: (4, ('Thu Oct  2 01:05:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 (c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4)\nc97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4\nThu Oct  2 01:05:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 (c97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4)\nc97bd49a0ad5d3699dbe97230c96b94d2b4ceb4cdf58b4b6d49caba99886cac4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.208 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd95d3d-983e-4c2a-9f1b-260542506ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.209 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cd8aa72-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:36 compute-0 kernel: tap2cd8aa72-d0: left promiscuous mode
Oct 02 13:05:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 476 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 7.2 MiB/s wr, 504 op/s
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.238 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[150b7777-1c61-4725-a0cc-745d98ce66f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.266 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[84f32845-f961-477e-aacb-537e82872fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.268 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc2980d-cb72-4014-b28a-23d545ef8b14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.285 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c26efe11-860d-406d-ab55-60dba8534cb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814662, 'reachable_time': 41426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379105, 'error': None, 'target': 'ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cd8aa72\x2dd2bb\x2d4a84\x2d97e9\x2db501a22ac0a9.mount: Deactivated successfully.
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.291 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.292 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ce012eb2-9dc8-4e13-8865-e8d2ff230cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.295 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee in datapath 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 unbound from our chassis
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.296 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.297 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[378cd964-19b1-401e-b91f-4e0fa0552b95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.298 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee in datapath 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9 unbound from our chassis
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.299 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:36.300 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4a2ed2-1c78-4c19-9d5c-ae1e77ee0392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.875 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.956 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.957 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.957 2 DEBUG oslo_concurrency.lockutils [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.957 2 DEBUG nova.network.neutron [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Refreshing network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.966 2 INFO nova.virt.libvirt.driver [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Deleting instance files /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214_del
Oct 02 13:05:36 compute-0 nova_compute[256940]: 2025-10-02 13:05:36.967 2 INFO nova.virt.libvirt.driver [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Deletion of /var/lib/nova/instances/623fda87-e4b4-4b98-96cb-6f1846429214_del complete
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.033 2 DEBUG nova.compute.manager [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-unplugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.034 2 DEBUG oslo_concurrency.lockutils [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.034 2 DEBUG oslo_concurrency.lockutils [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.034 2 DEBUG oslo_concurrency.lockutils [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.034 2 DEBUG nova.compute.manager [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] No waiting events found dispatching network-vif-unplugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.035 2 DEBUG nova.compute.manager [req-ab4f7495-7b3a-4e49-970b-8f844b578068 req-33c17f0c-bcf0-462f-82b0-8c34d16fd9fb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-unplugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.138 2 INFO nova.compute.manager [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Took 1.57 seconds to destroy the instance on the hypervisor.
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.139 2 DEBUG oslo.service.loopingcall [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.139 2 DEBUG nova.compute.manager [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.139 2 DEBUG nova.network.neutron [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:05:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:37.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.472 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410322.4719875, 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.473 2 INFO nova.compute.manager [-] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] VM Stopped (Lifecycle Event)
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.501 2 DEBUG nova.compute.manager [None req-2b6dee00-f653-4a79-b333-8c45f0a2047c - - - - - -] [instance: 092f2a53-5ba6-4db2-adc9-3c7c8880d9a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:37 compute-0 ceph-mon[73668]: pgmap v2782: 305 pgs: 305 active+clean; 476 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 7.2 MiB/s wr, 504 op/s
Oct 02 13:05:37 compute-0 nova_compute[256940]: 2025-10-02 13:05:37.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 466 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 6.7 MiB/s wr, 512 op/s
Oct 02 13:05:38 compute-0 nova_compute[256940]: 2025-10-02 13:05:38.656 2 DEBUG nova.network.neutron [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:38 compute-0 nova_compute[256940]: 2025-10-02 13:05:38.702 2 INFO nova.compute.manager [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Took 1.56 seconds to deallocate network for instance.
Oct 02 13:05:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:38 compute-0 nova_compute[256940]: 2025-10-02 13:05:38.874 2 DEBUG nova.compute.manager [req-9a1a7753-e460-423c-a736-9c6284f1fc53 req-b8f26dc6-29d3-4ea1-ae21-52c99955f90e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-deleted-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:38 compute-0 nova_compute[256940]: 2025-10-02 13:05:38.955 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:38 compute-0 nova_compute[256940]: 2025-10-02 13:05:38.956 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.100 2 DEBUG oslo_concurrency.processutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:39.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Oct 02 13:05:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Oct 02 13:05:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Oct 02 13:05:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:05:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 47K writes, 190K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.93 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6725 writes, 25K keys, 6725 commit groups, 1.0 writes per commit group, ingest: 26.21 MB, 0.04 MB/s
                                           Interval WAL: 6725 writes, 2604 syncs, 2.58 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.475 2 DEBUG nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.476 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.477 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.477 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.478 2 DEBUG nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] No waiting events found dispatching network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.478 2 WARNING nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received unexpected event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee for instance with vm_state deleted and task_state None.
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.479 2 DEBUG nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.480 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.480 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.481 2 DEBUG oslo_concurrency.lockutils [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.481 2 DEBUG nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] No waiting events found dispatching network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.482 2 WARNING nova.compute.manager [req-4596df4a-08bb-4f75-8f29-f8e03dfc0520 req-7e41f85c-a773-4ee7-a685-ce26777823d2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Received unexpected event network-vif-plugged-2b9e2335-30a2-48b8-91f1-2a3ba80473ee for instance with vm_state deleted and task_state None.
Oct 02 13:05:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968769541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:39 compute-0 ceph-mon[73668]: pgmap v2783: 305 pgs: 305 active+clean; 466 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 6.7 MiB/s wr, 512 op/s
Oct 02 13:05:39 compute-0 ceph-mon[73668]: osdmap e357: 3 total, 3 up, 3 in
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.560 2 DEBUG oslo_concurrency.processutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.569 2 DEBUG nova.compute.provider_tree [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.632 2 DEBUG nova.scheduler.client.report [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:39 compute-0 ovn_controller[148123]: 2025-10-02T13:05:39Z|00943|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.746 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.849 2 DEBUG nova.network.neutron [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updated VIF entry in instance network info cache for port 2b9e2335-30a2-48b8-91f1-2a3ba80473ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.850 2 DEBUG nova.network.neutron [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Updating instance_info_cache with network_info: [{"id": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "address": "fa:16:3e:9b:70:d1", "network": {"id": "2cd8aa72-d2bb-4a84-97e9-b501a22ac0a9", "bridge": "br-int", "label": "tempest-network-smoke--783412831", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b9e2335-30", "ovs_interfaceid": "2b9e2335-30a2-48b8-91f1-2a3ba80473ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:39 compute-0 ovn_controller[148123]: 2025-10-02T13:05:39Z|00944|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct 02 13:05:39 compute-0 nova_compute[256940]: 2025-10-02 13:05:39.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:40 compute-0 nova_compute[256940]: 2025-10-02 13:05:40.013 2 INFO nova.scheduler.client.report [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 623fda87-e4b4-4b98-96cb-6f1846429214
Oct 02 13:05:40 compute-0 nova_compute[256940]: 2025-10-02 13:05:40.121 2 DEBUG oslo_concurrency.lockutils [req-65bc41fa-5268-4833-9658-59b0c696bb73 req-1f4b800d-7318-4187-9e84-fb69d700c53d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-623fda87-e4b4-4b98-96cb-6f1846429214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 5.1 MiB/s wr, 376 op/s
Oct 02 13:05:40 compute-0 nova_compute[256940]: 2025-10-02 13:05:40.227 2 DEBUG oslo_concurrency.lockutils [None req-72c03deb-a13f-4cc8-8c2d-e1dc59b178e4 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "623fda87-e4b4-4b98-96cb-6f1846429214" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2968769541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005332289340219617 of space, bias 1.0, pg target 1.599686802065885 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6465347207681927 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.215954810220082 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:05:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 13:05:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:40.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:40 compute-0 nova_compute[256940]: 2025-10-02 13:05:40.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:41 compute-0 podman[379133]: 2025-10-02 13:05:41.419618708 +0000 UTC m=+0.076648860 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 13:05:41 compute-0 podman[379132]: 2025-10-02 13:05:41.419754342 +0000 UTC m=+0.079298010 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:41 compute-0 ceph-mon[73668]: pgmap v2785: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 5.1 MiB/s wr, 376 op/s
Oct 02 13:05:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.6 MiB/s wr, 381 op/s
Oct 02 13:05:42 compute-0 nova_compute[256940]: 2025-10-02 13:05:42.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:43.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:43 compute-0 ceph-mon[73668]: pgmap v2786: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.6 MiB/s wr, 381 op/s
Oct 02 13:05:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4083777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 338 op/s
Oct 02 13:05:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:05:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:05:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4013527471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:45.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.519 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.520 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.520 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.520 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.520 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.521 2 INFO nova.compute.manager [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Terminating instance
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.522 2 DEBUG nova.compute.manager [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:05:45 compute-0 kernel: tap170f8d44-ac (unregistering): left promiscuous mode
Oct 02 13:05:45 compute-0 NetworkManager[44981]: <info>  [1759410345.5887] device (tap170f8d44-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:05:45 compute-0 ovn_controller[148123]: 2025-10-02T13:05:45Z|00945|binding|INFO|Releasing lport 170f8d44-ac19-4975-9721-410cf7e7fcb4 from this chassis (sb_readonly=0)
Oct 02 13:05:45 compute-0 ovn_controller[148123]: 2025-10-02T13:05:45Z|00946|binding|INFO|Setting lport 170f8d44-ac19-4975-9721-410cf7e7fcb4 down in Southbound
Oct 02 13:05:45 compute-0 ovn_controller[148123]: 2025-10-02T13:05:45Z|00947|binding|INFO|Removing iface tap170f8d44-ac ovn-installed in OVS
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:45.646 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:ff:25 10.100.0.8'], port_security=['fa:16:3e:f9:ff:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '383a4ec3-88f1-4bc7-9a1e-1f04a87840d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=170f8d44-ac19-4975-9721-410cf7e7fcb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:45.647 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 170f8d44-ac19-4975-9721-410cf7e7fcb4 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis
Oct 02 13:05:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:45.648 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:45.649 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c34d409b-1f21-4477-8ed7-aa000ce21e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:45.649 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace which is not needed anymore
Oct 02 13:05:45 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Oct 02 13:05:45 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000bc.scope: Consumed 13.069s CPU time.
Oct 02 13:05:45 compute-0 systemd-machined[210927]: Machine qemu-95-instance-000000bc terminated.
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.762 2 INFO nova.virt.libvirt.driver [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Instance destroyed successfully.
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.763 2 DEBUG nova.objects.instance [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'resources' on Instance uuid 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.814 2 DEBUG nova.virt.libvirt.vif [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-630913469',display_name='tempest-TestServerMultinode-server-630913469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-630913469',id=188,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:05:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-hxkyq1ub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:33Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=383a4ec3-88f1-4bc7-9a1e-1f04a87840d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.815 2 DEBUG nova.network.os_vif_util [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "address": "fa:16:3e:f9:ff:25", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap170f8d44-ac", "ovs_interfaceid": "170f8d44-ac19-4975-9721-410cf7e7fcb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.816 2 DEBUG nova.network.os_vif_util [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.816 2 DEBUG os_vif [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap170f8d44-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-0 nova_compute[256940]: 2025-10-02 13:05:45.825 2 INFO os_vif [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:ff:25,bridge_name='br-int',has_traffic_filtering=True,id=170f8d44-ac19-4975-9721-410cf7e7fcb4,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap170f8d44-ac')
Oct 02 13:05:45 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [NOTICE]   (378948) : haproxy version is 2.8.14-c23fe91
Oct 02 13:05:45 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [NOTICE]   (378948) : path to executable is /usr/sbin/haproxy
Oct 02 13:05:45 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [WARNING]  (378948) : Exiting Master process...
Oct 02 13:05:45 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [ALERT]    (378948) : Current worker (378950) exited with code 143 (Terminated)
Oct 02 13:05:45 compute-0 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[378944]: [WARNING]  (378948) : All workers exited. Exiting... (0)
Oct 02 13:05:45 compute-0 systemd[1]: libpod-9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5.scope: Deactivated successfully.
Oct 02 13:05:45 compute-0 podman[379199]: 2025-10-02 13:05:45.880502582 +0000 UTC m=+0.131904265 container died 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5-userdata-shm.mount: Deactivated successfully.
Oct 02 13:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-51422e3b1a68b36a678d5a4319ad7f01b0bd19d9aa4f716291fee20ca7544029-merged.mount: Deactivated successfully.
Oct 02 13:05:46 compute-0 podman[379199]: 2025-10-02 13:05:46.05262713 +0000 UTC m=+0.304028843 container cleanup 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:46 compute-0 systemd[1]: libpod-conmon-9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5.scope: Deactivated successfully.
Oct 02 13:05:46 compute-0 ceph-mon[73668]: pgmap v2787: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 338 op/s
Oct 02 13:05:46 compute-0 podman[379259]: 2025-10-02 13:05:46.222258863 +0000 UTC m=+0.142869050 container remove 9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 13:05:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 352 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.229 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd515bd-c818-4a6e-8f77-09bae0f6ff86]: (4, ('Thu Oct  2 01:05:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5)\n9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5\nThu Oct  2 01:05:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5)\n9583ec92045abb1142719a74fa4ed62de7f57f344545f48b9889b78e3526e3c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.232 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37666cdb-691e-44fa-aafa-3d197ba31aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.233 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:46 compute-0 nova_compute[256940]: 2025-10-02 13:05:46.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:46 compute-0 nova_compute[256940]: 2025-10-02 13:05:46.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:46 compute-0 kernel: tap962339a8-a0: left promiscuous mode
Oct 02 13:05:46 compute-0 nova_compute[256940]: 2025-10-02 13:05:46.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.262 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0752f4-1e17-4193-b95f-ced2bcfdbbd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.294 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[90a7d20c-ce3d-4286-960b-ed0f59636ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.296 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc5f44c-d01f-475e-8ae8-c4b428c0c05f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.315 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f911a21a-c34a-4b4f-bb07-0d50ee3ca952]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822463, 'reachable_time': 37113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379273, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d962339a8\x2dad45\x2d401e\x2dae58\x2d50cd40858566.mount: Deactivated successfully.
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.320 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:05:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:05:46.320 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[55ff644c-343c-4ce3-9cd0-1ae2ebdb8735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:46 compute-0 nova_compute[256940]: 2025-10-02 13:05:46.623 2 INFO nova.virt.libvirt.driver [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Deleting instance files /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_del
Oct 02 13:05:46 compute-0 nova_compute[256940]: 2025-10-02 13:05:46.624 2 INFO nova.virt.libvirt.driver [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Deletion of /var/lib/nova/instances/383a4ec3-88f1-4bc7-9a1e-1f04a87840d3_del complete
Oct 02 13:05:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.093 2 INFO nova.compute.manager [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Took 1.57 seconds to destroy the instance on the hypervisor.
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.094 2 DEBUG oslo.service.loopingcall [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.095 2 DEBUG nova.compute.manager [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.095 2 DEBUG nova.network.neutron [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:05:47 compute-0 ceph-mon[73668]: pgmap v2788: 305 pgs: 305 active+clean; 352 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.239 2 DEBUG nova.compute.manager [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-unplugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.240 2 DEBUG oslo_concurrency.lockutils [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.240 2 DEBUG oslo_concurrency.lockutils [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.240 2 DEBUG oslo_concurrency.lockutils [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.240 2 DEBUG nova.compute.manager [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] No waiting events found dispatching network-vif-unplugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.240 2 DEBUG nova.compute.manager [req-1041d643-fd2a-46ee-889e-3d5b97a69797 req-35f871d9-97a2-4f93-ab7f-6b8335ad4928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-unplugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:05:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:47 compute-0 nova_compute[256940]: 2025-10-02 13:05:47.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 13:05:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 5.0 MiB/s wr, 221 op/s
Oct 02 13:05:48 compute-0 nova_compute[256940]: 2025-10-02 13:05:48.818 2 DEBUG nova.network.neutron [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:48.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.065 2 INFO nova.compute.manager [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Took 1.97 seconds to deallocate network for instance.
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.070 2 DEBUG nova.compute.manager [req-30dfc301-b667-4040-8c0e-4dfdb7a0a3fa req-6e031da4-02ef-451e-a56a-e239c8a3ebea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-deleted-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.205 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.206 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:49 compute-0 ceph-mon[73668]: pgmap v2789: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 5.0 MiB/s wr, 221 op/s
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.302 2 DEBUG oslo_concurrency.processutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.446 2 DEBUG nova.compute.manager [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.446 2 DEBUG oslo_concurrency.lockutils [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.446 2 DEBUG oslo_concurrency.lockutils [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.447 2 DEBUG oslo_concurrency.lockutils [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.447 2 DEBUG nova.compute.manager [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] No waiting events found dispatching network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.447 2 WARNING nova.compute.manager [req-c4e2abd1-2634-4052-a32b-1c148797adc4 req-608d3a7b-2d67-483b-80cd-01a3105ef0f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Received unexpected event network-vif-plugged-170f8d44-ac19-4975-9721-410cf7e7fcb4 for instance with vm_state deleted and task_state None.
Oct 02 13:05:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72459278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.790 2 DEBUG oslo_concurrency.processutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.798 2 DEBUG nova.compute.provider_tree [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:49 compute-0 nova_compute[256940]: 2025-10-02 13:05:49.889 2 DEBUG nova.scheduler.client.report [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.021 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 313 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 4.6 MiB/s wr, 221 op/s
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.284 2 INFO nova.scheduler.client.report [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Deleted allocations for instance 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3
Oct 02 13:05:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/72459278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.490 2 DEBUG oslo_concurrency.lockutils [None req-4e3eb5dd-7ef1-41f7-a969-458a6ce20d85 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "383a4ec3-88f1-4bc7-9a1e-1f04a87840d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.811 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410335.8102257, 623fda87-e4b4-4b98-96cb-6f1846429214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.812 2 INFO nova.compute.manager [-] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] VM Stopped (Lifecycle Event)
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:50.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:50 compute-0 nova_compute[256940]: 2025-10-02 13:05:50.872 2 DEBUG nova.compute.manager [None req-3b64fa0d-edcf-4386-bea6-76c2d9bfbf19 - - - - - -] [instance: 623fda87-e4b4-4b98-96cb-6f1846429214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:51 compute-0 ceph-mon[73668]: pgmap v2790: 305 pgs: 305 active+clean; 313 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 4.6 MiB/s wr, 221 op/s
Oct 02 13:05:51 compute-0 sudo[379299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:51 compute-0 sudo[379299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:51 compute-0 sudo[379299]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:51 compute-0 sudo[379324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:51 compute-0 sudo[379324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:51 compute-0 sudo[379324]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.9 MiB/s wr, 184 op/s
Oct 02 13:05:52 compute-0 nova_compute[256940]: 2025-10-02 13:05:52.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:52.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:53 compute-0 ceph-mon[73668]: pgmap v2791: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.9 MiB/s wr, 184 op/s
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.594 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.594 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.594 2 INFO nova.compute.manager [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Unshelving
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.716 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.717 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.732 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_requests' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.777 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'numa_topology' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.795 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:05:53 compute-0 nova_compute[256940]: 2025-10-02 13:05:53.796 2 INFO nova.compute.claims [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:05:54 compute-0 nova_compute[256940]: 2025-10-02 13:05:54.015 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 415 KiB/s rd, 2.3 MiB/s wr, 150 op/s
Oct 02 13:05:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975145866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:54 compute-0 nova_compute[256940]: 2025-10-02 13:05:54.505 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:54 compute-0 nova_compute[256940]: 2025-10-02 13:05:54.515 2 DEBUG nova.compute.provider_tree [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:54 compute-0 nova_compute[256940]: 2025-10-02 13:05:54.538 2 DEBUG nova.scheduler.client.report [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/975145866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:54 compute-0 nova_compute[256940]: 2025-10-02 13:05:54.582 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:05:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:05:55 compute-0 nova_compute[256940]: 2025-10-02 13:05:55.079 2 INFO nova.network.neutron [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:05:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:55 compute-0 podman[379373]: 2025-10-02 13:05:55.439170111 +0000 UTC m=+0.094382301 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:55 compute-0 podman[379374]: 2025-10-02 13:05:55.479356934 +0000 UTC m=+0.129694797 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:05:55 compute-0 ceph-mon[73668]: pgmap v2792: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 415 KiB/s rd, 2.3 MiB/s wr, 150 op/s
Oct 02 13:05:55 compute-0 nova_compute[256940]: 2025-10-02 13:05:55.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.3 MiB/s wr, 171 op/s
Oct 02 13:05:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:57 compute-0 ceph-mon[73668]: pgmap v2793: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.3 MiB/s wr, 171 op/s
Oct 02 13:05:57 compute-0 nova_compute[256940]: 2025-10-02 13:05:57.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.031 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.032 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.032 2 DEBUG nova.network.neutron [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 878 KiB/s wr, 85 op/s
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.259 2 DEBUG nova.compute.manager [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.260 2 DEBUG nova.compute.manager [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing instance network info cache due to event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:58 compute-0 nova_compute[256940]: 2025-10-02 13:05:58.260 2 DEBUG oslo_concurrency.lockutils [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:05:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:05:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:05:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:05:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:05:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:59 compute-0 ceph-mon[73668]: pgmap v2794: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 878 KiB/s wr, 85 op/s
Oct 02 13:05:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2926308603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 45 op/s
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.496 2 DEBUG nova.network.neutron [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.606 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.609 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.610 2 INFO nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating image(s)
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.647 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.652 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'trusted_certs' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.653 2 DEBUG oslo_concurrency.lockutils [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.653 2 DEBUG nova.network.neutron [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.751 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.775 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.779 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "6496f5f3ee1220fedc0365dd0cc64c138a158370" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.780 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "6496f5f3ee1220fedc0365dd0cc64c138a158370" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.784 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410345.7592437, 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.784 2 INFO nova.compute.manager [-] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] VM Stopped (Lifecycle Event)
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:00 compute-0 nova_compute[256940]: 2025-10-02 13:06:00.936 2 DEBUG nova.compute.manager [None req-e0aef70b-d99a-41d6-9847-7c386325ce2d - - - - - -] [instance: 383a4ec3-88f1-4bc7-9a1e-1f04a87840d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.169 2 DEBUG nova.virt.libvirt.imagebackend [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3560df73-c585-4179-87ca-fb0ca65743ee/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3560df73-c585-4179-87ca-fb0ca65743ee/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.232 2 DEBUG nova.virt.libvirt.imagebackend [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3560df73-c585-4179-87ca-fb0ca65743ee/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.232 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] cloning images/3560df73-c585-4179-87ca-fb0ca65743ee@snap to None/ea034622-0a48-4de6-8d68-0f2240b54214_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:06:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.438 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "6496f5f3ee1220fedc0365dd0cc64c138a158370" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.589 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'migration_context' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:01 compute-0 ceph-mon[73668]: pgmap v2795: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 45 op/s
Oct 02 13:06:01 compute-0 nova_compute[256940]: 2025-10-02 13:06:01.683 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] flattening vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.077 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Image rbd:vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.078 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.079 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Ensure instance console log exists: /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.080 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.080 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.080 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.084 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start _get_guest_xml network_info=[{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:05:25Z,direct_url=<?>,disk_format='raw',id=3560df73-c585-4179-87ca-fb0ca65743ee,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-881712342-shelved',owner='52dd3c4419794d0fbecd536c5088c60f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:05:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.089 2 WARNING nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.095 2 DEBUG nova.virt.libvirt.host [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.096 2 DEBUG nova.virt.libvirt.host [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.099 2 DEBUG nova.virt.libvirt.host [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.099 2 DEBUG nova.virt.libvirt.host [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.100 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.100 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:05:25Z,direct_url=<?>,disk_format='raw',id=3560df73-c585-4179-87ca-fb0ca65743ee,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-881712342-shelved',owner='52dd3c4419794d0fbecd536c5088c60f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:05:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.101 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.101 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.101 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.101 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.101 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.virt.hardware [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.102 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.159 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 13:06:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556229993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.601 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.634 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.639 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:02 compute-0 nova_compute[256940]: 2025-10-02 13:06:02.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:06:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:02.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:06:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3556229993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:03 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854998284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.081 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.083 2 DEBUG nova.virt.libvirt.vif [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='3560df73-c585-4179-87ca-fb0ca65743ee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member',shelved_at='2025-10-02T13:05:35.924546',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='3560df73-c585-4179-87ca-fb0ca65743ee'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.083 2 DEBUG nova.network.os_vif_util [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.084 2 DEBUG nova.network.os_vif_util [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.086 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.104 2 DEBUG nova.network.neutron [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated VIF entry in instance network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.104 2 DEBUG nova.network.neutron [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.112 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <uuid>ea034622-0a48-4de6-8d68-0f2240b54214</uuid>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <name>instance-000000b5</name>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:name>tempest-ServersNegativeTestJSON-server-881712342</nova:name>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:06:02</nova:creationTime>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:user uuid="5206d24fd75a48758994a57e7fd259f2">tempest-ServersNegativeTestJSON-1205930452-project-member</nova:user>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:project uuid="52dd3c4419794d0fbecd536c5088c60f">tempest-ServersNegativeTestJSON-1205930452</nova:project>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="3560df73-c585-4179-87ca-fb0ca65743ee"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <nova:port uuid="55d951c1-1ce9-4d4a-979c-9be9aef7e283">
Oct 02 13:06:03 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <system>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="serial">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="uuid">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </system>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <os>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </os>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <features>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </features>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk">
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </source>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk.config">
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </source>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:06:03 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:e8:64:21"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <target dev="tap55d951c1-1c"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log" append="off"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <video>
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </video>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:06:03 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:06:03 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:06:03 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:06:03 compute-0 nova_compute[256940]: </domain>
Oct 02 13:06:03 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.114 2 DEBUG nova.compute.manager [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Preparing to wait for external event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.114 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.114 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.114 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.115 2 DEBUG nova.virt.libvirt.vif [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='3560df73-c585-4179-87ca-fb0ca65743ee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member',shelved_at='2025-10-02T13:05:35.924546',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='3560df73-c585-4179-87ca-fb0ca65743ee'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.115 2 DEBUG nova.network.os_vif_util [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.116 2 DEBUG nova.network.os_vif_util [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.116 2 DEBUG os_vif [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.120 2 DEBUG oslo_concurrency.lockutils [req-d62ca682-e1a5-470d-a4f3-518128c19231 req-feccb5b3-06a7-44ef-8f7e-a36a53a79088 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d951c1-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55d951c1-1c, col_values=(('external_ids', {'iface-id': '55d951c1-1ce9-4d4a-979c-9be9aef7e283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:64:21', 'vm-uuid': 'ea034622-0a48-4de6-8d68-0f2240b54214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-0 NetworkManager[44981]: <info>  [1759410363.1251] manager: (tap55d951c1-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.131 2 INFO os_vif [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.202 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.202 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.202 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No VIF found with MAC fa:16:3e:e8:64:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.203 2 INFO nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Using config drive
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.236 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.256 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'ec2_ids' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:03.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.329 2 DEBUG nova.objects.instance [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'keypairs' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.952 2 INFO nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating config drive at /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config
Oct 02 13:06:03 compute-0 nova_compute[256940]: 2025-10-02 13:06:03.961 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbmq_m1w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:03 compute-0 ceph-mon[73668]: pgmap v2796: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 13:06:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/854998284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.128 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbmq_m1w" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.174 2 DEBUG nova.storage.rbd_utils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.179 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 13:06:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.558 2 DEBUG oslo_concurrency.processutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.560 2 INFO nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting local config drive /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config because it was imported into RBD.
Oct 02 13:06:04 compute-0 kernel: tap55d951c1-1c: entered promiscuous mode
Oct 02 13:06:04 compute-0 NetworkManager[44981]: <info>  [1759410364.6237] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-0 ovn_controller[148123]: 2025-10-02T13:06:04Z|00948|binding|INFO|Claiming lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 for this chassis.
Oct 02 13:06:04 compute-0 ovn_controller[148123]: 2025-10-02T13:06:04Z|00949|binding|INFO|55d951c1-1ce9-4d4a-979c-9be9aef7e283: Claiming fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-0 systemd-udevd[379770]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.675 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.677 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.680 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:04 compute-0 systemd-machined[210927]: New machine qemu-96-instance-000000b5.
Oct 02 13:06:04 compute-0 NetworkManager[44981]: <info>  [1759410364.6913] device (tap55d951c1-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:06:04 compute-0 NetworkManager[44981]: <info>  [1759410364.6923] device (tap55d951c1-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:06:04 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-000000b5.
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.698 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fd03af0a-1fe6-4164-9622-d72b6cdde577]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.699 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb07d0c6a-51 in ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.701 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb07d0c6a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.701 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5a09df-5ff9-4aaf-ab02-98f0c237d379]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.701 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[70a63581-00e4-4ce1-961e-add868f3ab6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.716 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[269b744f-b265-4908-aa4d-33a8a2d8c345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.729 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ea775701-677f-49e6-a959-93ed4677d932]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_controller[148123]: 2025-10-02T13:06:04Z|00950|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 ovn-installed in OVS
Oct 02 13:06:04 compute-0 ovn_controller[148123]: 2025-10-02T13:06:04Z|00951|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 up in Southbound
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.771 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[54f8cc6f-129b-48fb-9552-3e19080478a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 NetworkManager[44981]: <info>  [1759410364.7830] manager: (tapb07d0c6a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.784 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce3f625-6c09-486c-bed6-a7dab08d13d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 nova_compute[256940]: 2025-10-02 13:06:04.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.831 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[5c32e30f-810b-4d68-b898-b3724641c445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.837 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[13951fcf-64eb-4fb6-9d72-65916aafc23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:04 compute-0 NetworkManager[44981]: <info>  [1759410364.8614] device (tapb07d0c6a-50): carrier: link connected
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.871 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[89ddb7f6-3457-4824-a241-12522ba03e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 sudo[379803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.901 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee98b15-8828-4625-b0f5-a23a9ae226d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 826034, 'reachable_time': 42082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379828, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 sudo[379803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:04 compute-0 sudo[379803]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.923 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8454d0f1-9adc-4689-88aa-b9674eeae0ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:4808'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 826034, 'tstamp': 826034}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379831, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.941 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[825fd642-a418-4d70-a212-97565085d2b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 826034, 'reachable_time': 42082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379836, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:04 compute-0 sudo[379832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:04 compute-0 sudo[379832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:04 compute-0 sudo[379832]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:04.974 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[772fdfbb-1dcf-45e0-a478-154c0f06a4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:05 compute-0 ceph-mon[73668]: pgmap v2797: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 13:06:05 compute-0 sudo[379860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.036 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd7773b-a5b2-4725-9e97-35fd4ec26774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.038 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:05 compute-0 sudo[379860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.038 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.038 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:05 compute-0 kernel: tapb07d0c6a-50: entered promiscuous mode
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 sudo[379860]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 NetworkManager[44981]: <info>  [1759410365.0463] manager: (tapb07d0c6a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.047 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 ovn_controller[148123]: 2025-10-02T13:06:05Z|00952|binding|INFO|Releasing lport 874a9fce-3ef5-498a-a977-43087c73ea46 from this chassis (sb_readonly=0)
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.051 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.053 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2ce96b-cb0b-4de5-bf2b-d6e7064d390f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.053 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:06:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:05.055 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'env', 'PROCESS_TAG=haproxy-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b07d0c6a-5988-4afb-b4ba-d4048578b224.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:05 compute-0 sudo[379889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:06:05 compute-0 sudo[379889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:05.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:05 compute-0 podman[379997]: 2025-10-02 13:06:05.46311569 +0000 UTC m=+0.050731988 container create caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:05 compute-0 systemd[1]: Started libpod-conmon-caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e.scope.
Oct 02 13:06:05 compute-0 podman[379997]: 2025-10-02 13:06:05.439853526 +0000 UTC m=+0.027469854 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:06:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4c44bc282e37e17b080c425d56cbf844f319044fb669b7d9d386c46a9fe6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:05 compute-0 sudo[379889]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:05 compute-0 podman[379997]: 2025-10-02 13:06:05.559184544 +0000 UTC m=+0.146800862 container init caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:06:05 compute-0 podman[379997]: 2025-10-02 13:06:05.564609375 +0000 UTC m=+0.152225683 container start caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:06:05 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [NOTICE]   (380033) : New worker (380035) forked
Oct 02 13:06:05 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [NOTICE]   (380033) : Loading success.
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.849 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410365.8482926, ea034622-0a48-4de6-8d68-0f2240b54214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.849 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Started (Lifecycle Event)
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.889 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.894 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410365.8486025, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.894 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.923 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.926 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:05 compute-0 nova_compute[256940]: 2025-10-02 13:06:05.957 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:06:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3781703479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 110 op/s
Oct 02 13:06:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:06:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:06:06 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev aa96599a-bb41-482b-874c-ff7dfbab7b6f does not exist
Oct 02 13:06:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2317937a-71c5-4e8b-baf3-e659a3b8e40d does not exist
Oct 02 13:06:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 591aaca5-d870-4e8d-8ea7-b5003262be85 does not exist
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:06:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-0 sudo[380046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:07 compute-0 sudo[380046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:07 compute-0 sudo[380046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:07 compute-0 sudo[380071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:07 compute-0 sudo[380071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:07 compute-0 sudo[380071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:07 compute-0 nova_compute[256940]: 2025-10-02 13:06:07.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:07 compute-0 sudo[380096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:07 compute-0 sudo[380096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:07 compute-0 sudo[380096]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:07 compute-0 ceph-mon[73668]: pgmap v2798: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 110 op/s
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:06:07 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-0 sudo[380121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:06:07 compute-0 sudo[380121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:08 compute-0 nova_compute[256940]: 2025-10-02 13:06:08.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.184207743 +0000 UTC m=+0.038377157 container create 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:06:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct 02 13:06:08 compute-0 systemd[1]: Started libpod-conmon-67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c.scope.
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.170077976 +0000 UTC m=+0.024247420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.307403611 +0000 UTC m=+0.161573055 container init 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.313960831 +0000 UTC m=+0.168130245 container start 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.317648937 +0000 UTC m=+0.171818351 container attach 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:06:08 compute-0 wonderful_kalam[380201]: 167 167
Oct 02 13:06:08 compute-0 systemd[1]: libpod-67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c.scope: Deactivated successfully.
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.319731121 +0000 UTC m=+0.173900535 container died 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9238063ebd13cf1c566908945296ac6a27fa3601e442da213354878313d9f560-merged.mount: Deactivated successfully.
Oct 02 13:06:08 compute-0 podman[380185]: 2025-10-02 13:06:08.363551519 +0000 UTC m=+0.217720933 container remove 67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:06:08 compute-0 systemd[1]: libpod-conmon-67a470a39b833c2fee67af779188ee9fc15a14170dbce5538902206ef3ea178c.scope: Deactivated successfully.
Oct 02 13:06:08 compute-0 podman[380225]: 2025-10-02 13:06:08.532613397 +0000 UTC m=+0.042648108 container create d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:06:08 compute-0 systemd[1]: Started libpod-conmon-d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524.scope.
Oct 02 13:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:08 compute-0 podman[380225]: 2025-10-02 13:06:08.514302892 +0000 UTC m=+0.024337623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:08 compute-0 podman[380225]: 2025-10-02 13:06:08.636063381 +0000 UTC m=+0.146098192 container init d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:06:08 compute-0 podman[380225]: 2025-10-02 13:06:08.647824957 +0000 UTC m=+0.157859698 container start d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:08 compute-0 podman[380225]: 2025-10-02 13:06:08.652031486 +0000 UTC m=+0.162066207 container attach d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:06:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:08.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:09.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:09 compute-0 compassionate_poincare[380242]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:06:09 compute-0 compassionate_poincare[380242]: --> relative data size: 1.0
Oct 02 13:06:09 compute-0 compassionate_poincare[380242]: --> All data devices are unavailable
Oct 02 13:06:09 compute-0 systemd[1]: libpod-d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524.scope: Deactivated successfully.
Oct 02 13:06:09 compute-0 podman[380225]: 2025-10-02 13:06:09.428873021 +0000 UTC m=+0.938907732 container died d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-26593dcd9877dfe814768cc1dbd92164b8ab48b1ae2b6e11877249c6278096a1-merged.mount: Deactivated successfully.
Oct 02 13:06:09 compute-0 podman[380225]: 2025-10-02 13:06:09.482245297 +0000 UTC m=+0.992279998 container remove d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:06:09 compute-0 systemd[1]: libpod-conmon-d6a061fa81068b6506b454b018ccfd648bc58a4c4654535fb3dbc83fdb73f524.scope: Deactivated successfully.
Oct 02 13:06:09 compute-0 sudo[380121]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:09 compute-0 sudo[380271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:09 compute-0 sudo[380271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:09 compute-0 sudo[380271]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:09 compute-0 sudo[380296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:09 compute-0 sudo[380296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:09 compute-0 sudo[380296]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:09 compute-0 sudo[380321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:09 compute-0 sudo[380321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:09 compute-0 sudo[380321]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:09 compute-0 sudo[380346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:06:09 compute-0 sudo[380346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:09 compute-0 ceph-mon[73668]: pgmap v2799: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.115759801 +0000 UTC m=+0.044525957 container create 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:06:10 compute-0 systemd[1]: Started libpod-conmon-9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd.scope.
Oct 02 13:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.094090659 +0000 UTC m=+0.022856845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.203886439 +0000 UTC m=+0.132652665 container init 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.212463782 +0000 UTC m=+0.141229908 container start 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.216971809 +0000 UTC m=+0.145737975 container attach 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:06:10 compute-0 jovial_swanson[380429]: 167 167
Oct 02 13:06:10 compute-0 systemd[1]: libpod-9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd.scope: Deactivated successfully.
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.220456109 +0000 UTC m=+0.149222235 container died 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 322 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 113 op/s
Oct 02 13:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3417dc8dd9d4b3860a0e691c0290c014d4bb50dd32974554bd0618ef653f67-merged.mount: Deactivated successfully.
Oct 02 13:06:10 compute-0 podman[380412]: 2025-10-02 13:06:10.257381857 +0000 UTC m=+0.186148003 container remove 9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:06:10 compute-0 systemd[1]: libpod-conmon-9059d43847e3fef86065eedb8aa1aa716aa45956fd5824f64dffe407583dd5dd.scope: Deactivated successfully.
Oct 02 13:06:10 compute-0 podman[380453]: 2025-10-02 13:06:10.456029214 +0000 UTC m=+0.051707143 container create 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:10 compute-0 systemd[1]: Started libpod-conmon-46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574.scope.
Oct 02 13:06:10 compute-0 podman[380453]: 2025-10-02 13:06:10.43122884 +0000 UTC m=+0.026906789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df683e07a2dd56cb51115e4643baa689bfe33d85d4d8766d399c1765386d21d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df683e07a2dd56cb51115e4643baa689bfe33d85d4d8766d399c1765386d21d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df683e07a2dd56cb51115e4643baa689bfe33d85d4d8766d399c1765386d21d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df683e07a2dd56cb51115e4643baa689bfe33d85d4d8766d399c1765386d21d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:10 compute-0 podman[380453]: 2025-10-02 13:06:10.553448653 +0000 UTC m=+0.149126612 container init 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:06:10 compute-0 podman[380453]: 2025-10-02 13:06:10.56491611 +0000 UTC m=+0.160594059 container start 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:06:10 compute-0 podman[380453]: 2025-10-02 13:06:10.568836902 +0000 UTC m=+0.164514801 container attach 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:06:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:10.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:11.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]: {
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:     "1": [
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:         {
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "devices": [
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "/dev/loop3"
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             ],
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "lv_name": "ceph_lv0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "lv_size": "7511998464",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "name": "ceph_lv0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "tags": {
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.cluster_name": "ceph",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.crush_device_class": "",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.encrypted": "0",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.osd_id": "1",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.type": "block",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:                 "ceph.vdo": "0"
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             },
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "type": "block",
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:             "vg_name": "ceph_vg0"
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:         }
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]:     ]
Oct 02 13:06:11 compute-0 peaceful_shirley[380470]: }
Oct 02 13:06:11 compute-0 systemd[1]: libpod-46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574.scope: Deactivated successfully.
Oct 02 13:06:11 compute-0 podman[380453]: 2025-10-02 13:06:11.459188974 +0000 UTC m=+1.054866883 container died 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:11 compute-0 sudo[380480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:11 compute-0 sudo[380480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:11 compute-0 sudo[380480]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df683e07a2dd56cb51115e4643baa689bfe33d85d4d8766d399c1765386d21d-merged.mount: Deactivated successfully.
Oct 02 13:06:11 compute-0 sudo[380534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:11 compute-0 sudo[380534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:11 compute-0 sudo[380534]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 podman[380453]: 2025-10-02 13:06:11.641824194 +0000 UTC m=+1.237502103 container remove 46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:06:11 compute-0 podman[380502]: 2025-10-02 13:06:11.667265335 +0000 UTC m=+0.169988234 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 02 13:06:11 compute-0 podman[380482]: 2025-10-02 13:06:11.667564303 +0000 UTC m=+0.171513604 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:06:11 compute-0 sudo[380346]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 systemd[1]: libpod-conmon-46b666cf9b8048ba68eee8cd35e18b244c9cab684b83047210d0f038d0325574.scope: Deactivated successfully.
Oct 02 13:06:11 compute-0 sudo[380581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:11 compute-0 sudo[380581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:11 compute-0 sudo[380581]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 sudo[380606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:11 compute-0 sudo[380606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:11 compute-0 sudo[380606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 sudo[380631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:11 compute-0 sudo[380631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:11 compute-0 sudo[380631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:11 compute-0 ceph-mon[73668]: pgmap v2800: 305 pgs: 305 active+clean; 322 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 113 op/s
Oct 02 13:06:11 compute-0 sudo[380656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:06:11 compute-0 sudo[380656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.387482909 +0000 UTC m=+0.057478643 container create b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.432 2 DEBUG nova.compute.manager [req-c8e3ac3f-f004-4bae-9256-c3a23c0e575b req-9296b0f2-14b8-4df0-9001-e0f2bc41e5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.433 2 DEBUG oslo_concurrency.lockutils [req-c8e3ac3f-f004-4bae-9256-c3a23c0e575b req-9296b0f2-14b8-4df0-9001-e0f2bc41e5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.433 2 DEBUG oslo_concurrency.lockutils [req-c8e3ac3f-f004-4bae-9256-c3a23c0e575b req-9296b0f2-14b8-4df0-9001-e0f2bc41e5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.434 2 DEBUG oslo_concurrency.lockutils [req-c8e3ac3f-f004-4bae-9256-c3a23c0e575b req-9296b0f2-14b8-4df0-9001-e0f2bc41e5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.434 2 DEBUG nova.compute.manager [req-c8e3ac3f-f004-4bae-9256-c3a23c0e575b req-9296b0f2-14b8-4df0-9001-e0f2bc41e5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Processing event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.434 2 DEBUG nova.compute.manager [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.439 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410372.4392784, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.439 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.441 2 DEBUG nova.virt.libvirt.driver [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.445 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance spawned successfully.
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.351651029 +0000 UTC m=+0.021646813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:12 compute-0 systemd[1]: Started libpod-conmon-b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc.scope.
Oct 02 13:06:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.552720298 +0000 UTC m=+0.222716022 container init b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.565911021 +0000 UTC m=+0.235906735 container start b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:06:12 compute-0 dreamy_diffie[380739]: 167 167
Oct 02 13:06:12 compute-0 systemd[1]: libpod-b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc.scope: Deactivated successfully.
Oct 02 13:06:12 compute-0 conmon[380739]: conmon b13d500b6a3ec54eb687 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc.scope/container/memory.events
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.687942008 +0000 UTC m=+0.357937742 container attach b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.689789676 +0000 UTC m=+0.359785410 container died b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.722 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.727 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:12 compute-0 nova_compute[256940]: 2025-10-02 13:06:12.786 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-91b470a0faa39e2179feae07fd70e30ba6ae410204afd00ca34ee0349f254a51-merged.mount: Deactivated successfully.
Oct 02 13:06:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:12 compute-0 podman[380723]: 2025-10-02 13:06:12.894583102 +0000 UTC m=+0.564578856 container remove b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:06:12 compute-0 systemd[1]: libpod-conmon-b13d500b6a3ec54eb687e2589e3567ac6e59e0c79165d712ffedbc25563c14dc.scope: Deactivated successfully.
Oct 02 13:06:13 compute-0 podman[380765]: 2025-10-02 13:06:13.124966712 +0000 UTC m=+0.064714280 container create 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:06:13 compute-0 nova_compute[256940]: 2025-10-02 13:06:13.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:13 compute-0 systemd[1]: Started libpod-conmon-00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3.scope.
Oct 02 13:06:13 compute-0 podman[380765]: 2025-10-02 13:06:13.088188438 +0000 UTC m=+0.027936036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:06:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeaed873cbfbe465b3b57f06a1ec6d08faf9c0e6a9db53fce1a71961d70719e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeaed873cbfbe465b3b57f06a1ec6d08faf9c0e6a9db53fce1a71961d70719e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeaed873cbfbe465b3b57f06a1ec6d08faf9c0e6a9db53fce1a71961d70719e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeaed873cbfbe465b3b57f06a1ec6d08faf9c0e6a9db53fce1a71961d70719e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:13 compute-0 podman[380765]: 2025-10-02 13:06:13.227895134 +0000 UTC m=+0.167642742 container init 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:06:13 compute-0 podman[380765]: 2025-10-02 13:06:13.238849768 +0000 UTC m=+0.178597346 container start 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:13 compute-0 podman[380765]: 2025-10-02 13:06:13.242584905 +0000 UTC m=+0.182332483 container attach 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:06:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:13.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Oct 02 13:06:13 compute-0 ceph-mon[73668]: pgmap v2801: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Oct 02 13:06:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Oct 02 13:06:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Oct 02 13:06:14 compute-0 hungry_faraday[380781]: {
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:         "osd_id": 1,
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:         "type": "bluestore"
Oct 02 13:06:14 compute-0 hungry_faraday[380781]:     }
Oct 02 13:06:14 compute-0 hungry_faraday[380781]: }
Oct 02 13:06:14 compute-0 systemd[1]: libpod-00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3.scope: Deactivated successfully.
Oct 02 13:06:14 compute-0 podman[380765]: 2025-10-02 13:06:14.117712082 +0000 UTC m=+1.057459690 container died 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeaed873cbfbe465b3b57f06a1ec6d08faf9c0e6a9db53fce1a71961d70719e7-merged.mount: Deactivated successfully.
Oct 02 13:06:14 compute-0 podman[380765]: 2025-10-02 13:06:14.193957121 +0000 UTC m=+1.133704699 container remove 00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:06:14 compute-0 systemd[1]: libpod-conmon-00bae73b039819a5f992b1da6fbdf9569e6ded88c7761e48d6127e57a70d36a3.scope: Deactivated successfully.
Oct 02 13:06:14 compute-0 sudo[380656]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:06:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 78 op/s
Oct 02 13:06:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:06:14 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5b13cc66-f3c3-4890-892d-9329fd9bf60d does not exist
Oct 02 13:06:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 87f55eb8-a726-4183-86ad-36a7078a2163 does not exist
Oct 02 13:06:14 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3d8f0ae4-2557-4576-8a96-fd22eb320102 does not exist
Oct 02 13:06:14 compute-0 sudo[380814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:14 compute-0 sudo[380814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:14 compute-0 sudo[380814]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:14 compute-0 sudo[380839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:06:14 compute-0 sudo[380839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:14 compute-0 sudo[380839]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:15 compute-0 ceph-mon[73668]: osdmap e358: 3 total, 3 up, 3 in
Oct 02 13:06:15 compute-0 ceph-mon[73668]: pgmap v2803: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 78 op/s
Oct 02 13:06:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:15 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.564 2 DEBUG nova.compute.manager [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.565 2 DEBUG oslo_concurrency.lockutils [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.566 2 DEBUG oslo_concurrency.lockutils [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.566 2 DEBUG oslo_concurrency.lockutils [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.566 2 DEBUG nova.compute.manager [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.567 2 WARNING nova.compute.manager [req-4c92a876-e235-4450-826b-df7dad8bbb6b req-afc688d0-9be2-4732-97a5-23078d18eb86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 02 13:06:15 compute-0 nova_compute[256940]: 2025-10-02 13:06:15.907 2 DEBUG nova.compute.manager [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:16 compute-0 nova_compute[256940]: 2025-10-02 13:06:16.157 2 DEBUG oslo_concurrency.lockutils [None req-d3ee1648-6696-4370-b387-0175d995b66a 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 22.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 285 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 13:06:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:17 compute-0 ceph-mon[73668]: pgmap v2804: 305 pgs: 305 active+clean; 285 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 13:06:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4164908810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2607940262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3523504306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-0 nova_compute[256940]: 2025-10-02 13:06:17.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:18 compute-0 nova_compute[256940]: 2025-10-02 13:06:18.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 02 13:06:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:19 compute-0 nova_compute[256940]: 2025-10-02 13:06:19.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:19.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Oct 02 13:06:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Oct 02 13:06:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Oct 02 13:06:19 compute-0 ceph-mon[73668]: pgmap v2805: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 02 13:06:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1150856767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:19 compute-0 ceph-mon[73668]: osdmap e359: 3 total, 3 up, 3 in
Oct 02 13:06:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Oct 02 13:06:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:21.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:21 compute-0 ceph-mon[73668]: pgmap v2807: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Oct 02 13:06:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1954721145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/85858181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:22 compute-0 nova_compute[256940]: 2025-10-02 13:06:22.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:22 compute-0 nova_compute[256940]: 2025-10-02 13:06:22.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:06:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 176 op/s
Oct 02 13:06:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/822308584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:22 compute-0 nova_compute[256940]: 2025-10-02 13:06:22.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:23.075 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:23.077 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.287 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.287 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.287 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.287 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.288 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:23.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814506278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.797 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:23 compute-0 ceph-mon[73668]: pgmap v2808: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 176 op/s
Oct 02 13:06:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3814506278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.921 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:06:23 compute-0 nova_compute[256940]: 2025-10-02 13:06:23.922 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.060 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.061 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4022MB free_disk=20.901077270507812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.061 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.061 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.291 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance ea034622-0a48-4de6-8d68-0f2240b54214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.292 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.292 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.315 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:06:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.384 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.385 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.411 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.436 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:06:24 compute-0 nova_compute[256940]: 2025-10-02 13:06:24.524 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:24 compute-0 ovn_controller[148123]: 2025-10-02T13:06:24Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:06:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/426792412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:25 compute-0 nova_compute[256940]: 2025-10-02 13:06:25.032 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:25 compute-0 nova_compute[256940]: 2025-10-02 13:06:25.038 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:06:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:25.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:25 compute-0 nova_compute[256940]: 2025-10-02 13:06:25.381 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:06:25 compute-0 nova_compute[256940]: 2025-10-02 13:06:25.416 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:06:25 compute-0 nova_compute[256940]: 2025-10-02 13:06:25.417 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:25 compute-0 ceph-mon[73668]: pgmap v2809: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 02 13:06:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/426792412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 13:06:26 compute-0 nova_compute[256940]: 2025-10-02 13:06:26.418 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:26 compute-0 nova_compute[256940]: 2025-10-02 13:06:26.418 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:26 compute-0 podman[380917]: 2025-10-02 13:06:26.430165293 +0000 UTC m=+0.088728964 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:06:26 compute-0 podman[380916]: 2025-10-02 13:06:26.435320687 +0000 UTC m=+0.085816779 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:26.502 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:26.504 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:26.504 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:27.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:27 compute-0 nova_compute[256940]: 2025-10-02 13:06:27.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:27 compute-0 ceph-mon[73668]: pgmap v2810: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 13:06:28 compute-0 nova_compute[256940]: 2025-10-02 13:06:28.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:06:28
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta']
Oct 02 13:06:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:06:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:28.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:29 compute-0 ceph-mon[73668]: pgmap v2811: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 13:06:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:29.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:06:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.015 2 DEBUG nova.objects.instance [None req-343fa708-6333-44c7-86bc-7b6e98e4d4cf 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.075 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410390.0749655, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.076 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.121 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.127 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:30 compute-0 nova_compute[256940]: 2025-10-02 13:06:30.172 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 13:06:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Oct 02 13:06:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3502773686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.079 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:31 compute-0 kernel: tap55d951c1-1c (unregistering): left promiscuous mode
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:31 compute-0 NetworkManager[44981]: <info>  [1759410391.2112] device (tap55d951c1-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-0 ovn_controller[148123]: 2025-10-02T13:06:31Z|00953|binding|INFO|Releasing lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 from this chassis (sb_readonly=0)
Oct 02 13:06:31 compute-0 ovn_controller[148123]: 2025-10-02T13:06:31Z|00954|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 down in Southbound
Oct 02 13:06:31 compute-0 ovn_controller[148123]: 2025-10-02T13:06:31Z|00955|binding|INFO|Removing iface tap55d951c1-1c ovn-installed in OVS
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.231 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.233 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.235 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b07d0c6a-5988-4afb-b4ba-d4048578b224, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.237 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[506108df-ff14-4f1f-ad65-bf658d98a3dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.237 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace which is not needed anymore
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct 02 13:06:31 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b5.scope: Consumed 13.637s CPU time.
Oct 02 13:06:31 compute-0 systemd-machined[210927]: Machine qemu-96-instance-000000b5 terminated.
Oct 02 13:06:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.374 2 DEBUG nova.compute.manager [None req-343fa708-6333-44c7-86bc-7b6e98e4d4cf 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [NOTICE]   (380033) : haproxy version is 2.8.14-c23fe91
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [NOTICE]   (380033) : path to executable is /usr/sbin/haproxy
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [WARNING]  (380033) : Exiting Master process...
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [WARNING]  (380033) : Exiting Master process...
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [ALERT]    (380033) : Current worker (380035) exited with code 143 (Terminated)
Oct 02 13:06:31 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[380024]: [WARNING]  (380033) : All workers exited. Exiting... (0)
Oct 02 13:06:31 compute-0 systemd[1]: libpod-caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e.scope: Deactivated successfully.
Oct 02 13:06:31 compute-0 conmon[380024]: conmon caaa5bea760c6f7607c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e.scope/container/memory.events
Oct 02 13:06:31 compute-0 podman[380991]: 2025-10-02 13:06:31.408082466 +0000 UTC m=+0.052325449 container died caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:06:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e-userdata-shm.mount: Deactivated successfully.
Oct 02 13:06:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7e4c44bc282e37e17b080c425d56cbf844f319044fb669b7d9d386c46a9fe6-merged.mount: Deactivated successfully.
Oct 02 13:06:31 compute-0 podman[380991]: 2025-10-02 13:06:31.450426885 +0000 UTC m=+0.094669868 container cleanup caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 13:06:31 compute-0 systemd[1]: libpod-conmon-caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e.scope: Deactivated successfully.
Oct 02 13:06:31 compute-0 podman[381031]: 2025-10-02 13:06:31.529891528 +0000 UTC m=+0.051892768 container remove caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.537 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb16d20-96ed-44ab-a454-51ee054e37fa]: (4, ('Thu Oct  2 01:06:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e)\ncaaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e\nThu Oct  2 01:06:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (caaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e)\ncaaa5bea760c6f7607c1ef99c34e897cc393f710d86c7d7966d775077349140e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.539 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71c5604c-47bb-41a3-934e-fd962278272a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.539 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-0 kernel: tapb07d0c6a-50: left promiscuous mode
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.562 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f8dbc466-145f-481e-9885-c9e3ccf4384c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.589 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[56b8b0c0-95a5-4cac-a928-4294da28cb19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.590 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[509558fc-d79f-40e3-806a-202747613b9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.608 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fe4aea-77a3-4cf4-9500-850968c741fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 826024, 'reachable_time': 41219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381050, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.610 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:06:31 compute-0 systemd[1]: run-netns-ovnmeta\x2db07d0c6a\x2d5988\x2d4afb\x2db4ba\x2dd4048578b224.mount: Deactivated successfully.
Oct 02 13:06:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:31.610 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[6deaeab3-57bd-439c-9e3e-c09b7324f0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:31 compute-0 sudo[381051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:31 compute-0 sudo[381051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:31 compute-0 sudo[381051]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:31 compute-0 sudo[381076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:31 compute-0 sudo[381076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:31 compute-0 sudo[381076]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.878 2 DEBUG nova.compute.manager [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.878 2 DEBUG oslo_concurrency.lockutils [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.878 2 DEBUG oslo_concurrency.lockutils [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.879 2 DEBUG oslo_concurrency.lockutils [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.879 2 DEBUG nova.compute.manager [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:31 compute-0 nova_compute[256940]: 2025-10-02 13:06:31.879 2 WARNING nova.compute.manager [req-cd598e21-c2a3-4cfa-b885-12cfc7a2ad23 req-bbea5240-1c50-4030-a4e5-19a693219653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state suspended and task_state None.
Oct 02 13:06:32 compute-0 ceph-mon[73668]: pgmap v2812: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Oct 02 13:06:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2494029457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 606 KiB/s wr, 133 op/s
Oct 02 13:06:32 compute-0 nova_compute[256940]: 2025-10-02 13:06:32.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:33 compute-0 ceph-mon[73668]: pgmap v2813: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 606 KiB/s wr, 133 op/s
Oct 02 13:06:33 compute-0 nova_compute[256940]: 2025-10-02 13:06:33.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-0 nova_compute[256940]: 2025-10-02 13:06:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:33 compute-0 nova_compute[256940]: 2025-10-02 13:06:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:06:33 compute-0 nova_compute[256940]: 2025-10-02 13:06:33.271 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:06:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:33.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 110 op/s
Oct 02 13:06:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.775 2 DEBUG nova.compute.manager [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.775 2 DEBUG oslo_concurrency.lockutils [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.775 2 DEBUG oslo_concurrency.lockutils [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.776 2 DEBUG oslo_concurrency.lockutils [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.776 2 DEBUG nova.compute.manager [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:34 compute-0 nova_compute[256940]: 2025-10-02 13:06:34.776 2 WARNING nova.compute.manager [req-1e27e842-2ff9-4c6b-8aa4-b4b8440ce3eb req-f21bba8e-c4f4-4f72-bdc7-05a6d6609028 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state suspended and task_state None.
Oct 02 13:06:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.267 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.341 2 INFO nova.compute.manager [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Resuming
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.342 2 DEBUG nova.objects.instance [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'flavor' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:35 compute-0 ceph-mon[73668]: pgmap v2814: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 110 op/s
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.417 2 DEBUG oslo_concurrency.lockutils [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.418 2 DEBUG oslo_concurrency.lockutils [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:35 compute-0 nova_compute[256940]: 2025-10-02 13:06:35.418 2 DEBUG nova.network.neutron [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:06:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 33 KiB/s wr, 142 op/s
Oct 02 13:06:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:37.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:37 compute-0 ceph-mon[73668]: pgmap v2815: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 33 KiB/s wr, 142 op/s
Oct 02 13:06:37 compute-0 nova_compute[256940]: 2025-10-02 13:06:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 nova_compute[256940]: 2025-10-02 13:06:38.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 87 op/s
Oct 02 13:06:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.153 2 DEBUG nova.network.neutron [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.197 2 DEBUG oslo_concurrency.lockutils [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.202 2 DEBUG nova.virt.libvirt.vif [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:31Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.202 2 DEBUG nova.network.os_vif_util [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.203 2 DEBUG nova.network.os_vif_util [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.203 2 DEBUG os_vif [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.205 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d951c1-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55d951c1-1c, col_values=(('external_ids', {'iface-id': '55d951c1-1ce9-4d4a-979c-9be9aef7e283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:64:21', 'vm-uuid': 'ea034622-0a48-4de6-8d68-0f2240b54214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.209 2 INFO os_vif [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.305 2 DEBUG nova.objects.instance [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'numa_topology' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:39 compute-0 kernel: tap55d951c1-1c: entered promiscuous mode
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.3666] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:39 compute-0 ovn_controller[148123]: 2025-10-02T13:06:39Z|00956|binding|INFO|Claiming lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 for this chassis.
Oct 02 13:06:39 compute-0 ovn_controller[148123]: 2025-10-02T13:06:39Z|00957|binding|INFO|55d951c1-1ce9-4d4a-979c-9be9aef7e283: Claiming fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:06:39 compute-0 ovn_controller[148123]: 2025-10-02T13:06:39Z|00958|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 ovn-installed in OVS
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 systemd-udevd[381119]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:39 compute-0 systemd-machined[210927]: New machine qemu-97-instance-000000b5.
Oct 02 13:06:39 compute-0 ovn_controller[148123]: 2025-10-02T13:06:39Z|00959|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 up in Southbound
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.402 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.403 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.405 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.4092] device (tap55d951c1-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.4107] device (tap55d951c1-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:06:39 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-000000b5.
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.415 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf232da-90d8-4af9-bd48-ecb49aae2c3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.417 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb07d0c6a-51 in ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.419 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb07d0c6a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.419 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[91b5bac6-b0f3-4a6f-954c-3b06ff7943ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.420 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dc53d5b4-379d-4306-96bf-4ef4d2db244b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.430 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ab09ea61-1e6a-49fc-a9e4-081a6e223a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.454 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ec57e604-5278-49e5-b50d-b886220fce9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.480 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[879a3b2e-07a1-4ee2-996f-135c147a8262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.4861] manager: (tapb07d0c6a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Oct 02 13:06:39 compute-0 systemd-udevd[381122]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.485 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d14d23-6319-41fe-a762-55b3d9735921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.516 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c67d6414-7b1d-467c-b169-f75e5d80bb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.519 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[1faeb3ca-e50b-4f24-9ea9-6c9bb777b489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.5404] device (tapb07d0c6a-50): carrier: link connected
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.545 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbdb219-d56b-4e9a-b5f6-ead067a5e2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.560 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bec6d3c-3655-4b79-9cd5-71667ec8376c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829502, 'reachable_time': 24072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381152, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.574 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8addbd-3bb0-412c-95de-8c682723f14c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:4808'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829502, 'tstamp': 829502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381153, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.589 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b7821b49-21d9-4a27-be8b-c30d54f1ec58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829502, 'reachable_time': 24072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381154, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.619 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[05082ba4-e56a-423b-8238-a11810fa7d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.677 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcc0c27-0858-48c3-ad61-693293489337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.678 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.679 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.679 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 NetworkManager[44981]: <info>  [1759410399.6820] manager: (tapb07d0c6a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct 02 13:06:39 compute-0 kernel: tapb07d0c6a-50: entered promiscuous mode
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.685 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 ovn_controller[148123]: 2025-10-02T13:06:39Z|00960|binding|INFO|Releasing lport 874a9fce-3ef5-498a-a977-43087c73ea46 from this chassis (sb_readonly=0)
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 nova_compute[256940]: 2025-10-02 13:06:39.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.705 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.706 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7081f309-2651-4be0-bfc1-3ccef54d32a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.707 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:06:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:06:39.707 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'env', 'PROCESS_TAG=haproxy-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b07d0c6a-5988-4afb-b4ba-d4048578b224.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:06:39 compute-0 ceph-mon[73668]: pgmap v2816: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 87 op/s
Oct 02 13:06:40 compute-0 podman[381222]: 2025-10-02 13:06:40.028772598 +0000 UTC m=+0.021526090 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.162 2 DEBUG nova.compute.manager [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.162 2 DEBUG oslo_concurrency.lockutils [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.162 2 DEBUG oslo_concurrency.lockutils [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.162 2 DEBUG oslo_concurrency.lockutils [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.163 2 DEBUG nova.compute.manager [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:40 compute-0 nova_compute[256940]: 2025-10-02 13:06:40.163 2 WARNING nova.compute.manager [req-ffda737f-0bfb-4a5a-8b8d-7dffafd84b6b req-c13047e2-8537-4719-bf6a-af900d873802 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state suspended and task_state resuming.
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 76 op/s
Oct 02 13:06:40 compute-0 podman[381222]: 2025-10-02 13:06:40.543856178 +0000 UTC m=+0.536609650 container create daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004162522682858738 of space, bias 1.0, pg target 1.2487568048576214 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6465347207681927 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:06:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:06:40 compute-0 systemd[1]: Started libpod-conmon-daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d.scope.
Oct 02 13:06:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6eed2ceff99d83da3170673d309ae741172e0f4557a6390f4fb6f46e3d6b567/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:40 compute-0 podman[381222]: 2025-10-02 13:06:40.783782657 +0000 UTC m=+0.776536129 container init daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:06:40 compute-0 podman[381222]: 2025-10-02 13:06:40.792233646 +0000 UTC m=+0.784987118 container start daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:06:40 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [NOTICE]   (381248) : New worker (381250) forked
Oct 02 13:06:40 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [NOTICE]   (381248) : Loading success.
Oct 02 13:06:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:40.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.154 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for ea034622-0a48-4de6-8d68-0f2240b54214 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.155 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410401.1541815, ea034622-0a48-4de6-8d68-0f2240b54214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.156 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Started (Lifecycle Event)
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.189 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.197 2 DEBUG nova.compute.manager [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.198 2 DEBUG nova.objects.instance [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.202 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.298 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.298 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410401.1593177, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.299 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.303 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance running successfully.
Oct 02 13:06:41 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.306 2 DEBUG nova.virt.libvirt.guest [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.307 2 DEBUG nova.compute.manager [None req-c3db5ffa-7c14-4b34-8cb6-9bac8d3bd87b 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.344 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.349 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:41 compute-0 nova_compute[256940]: 2025-10-02 13:06:41.389 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 13:06:42 compute-0 ceph-mon[73668]: pgmap v2817: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 76 op/s
Oct 02 13:06:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 81 op/s
Oct 02 13:06:42 compute-0 podman[381259]: 2025-10-02 13:06:42.405377109 +0000 UTC m=+0.072319159 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 13:06:42 compute-0 podman[381260]: 2025-10-02 13:06:42.421061906 +0000 UTC m=+0.087108002 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.631 2 DEBUG nova.compute.manager [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.632 2 DEBUG oslo_concurrency.lockutils [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.632 2 DEBUG oslo_concurrency.lockutils [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.633 2 DEBUG oslo_concurrency.lockutils [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.633 2 DEBUG nova.compute.manager [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.633 2 WARNING nova.compute.manager [req-063f9af4-0265-4370-88fd-f68b98d7d148 req-37c67c7a-b24e-42c2-85cd-1603009a7523 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state None.
Oct 02 13:06:42 compute-0 nova_compute[256940]: 2025-10-02 13:06:42.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:43 compute-0 nova_compute[256940]: 2025-10-02 13:06:43.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:43 compute-0 nova_compute[256940]: 2025-10-02 13:06:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:43 compute-0 nova_compute[256940]: 2025-10-02 13:06:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:06:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:43 compute-0 nova_compute[256940]: 2025-10-02 13:06:43.383 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:06:43 compute-0 ceph-mon[73668]: pgmap v2818: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 81 op/s
Oct 02 13:06:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Oct 02 13:06:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:45.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:46 compute-0 ceph-mon[73668]: pgmap v2819: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Oct 02 13:06:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 87 op/s
Oct 02 13:06:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:06:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:06:47 compute-0 ceph-mon[73668]: pgmap v2820: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 87 op/s
Oct 02 13:06:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:47.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:47 compute-0 nova_compute[256940]: 2025-10-02 13:06:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:48 compute-0 nova_compute[256940]: 2025-10-02 13:06:48.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 317 KiB/s wr, 59 op/s
Oct 02 13:06:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:48.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:49.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:49 compute-0 ceph-mon[73668]: pgmap v2821: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 317 KiB/s wr, 59 op/s
Oct 02 13:06:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 67 op/s
Oct 02 13:06:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:51.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:51 compute-0 sudo[381304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:51 compute-0 sudo[381304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:51 compute-0 sudo[381304]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:51 compute-0 sudo[381329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:51 compute-0 sudo[381329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:51 compute-0 sudo[381329]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:51 compute-0 ceph-mon[73668]: pgmap v2822: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 67 op/s
Oct 02 13:06:52 compute-0 nova_compute[256940]: 2025-10-02 13:06:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:52 compute-0 nova_compute[256940]: 2025-10-02 13:06:52.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:06:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 160 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Oct 02 13:06:52 compute-0 nova_compute[256940]: 2025-10-02 13:06:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:53 compute-0 nova_compute[256940]: 2025-10-02 13:06:53.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:53.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:53 compute-0 ceph-mon[73668]: pgmap v2823: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 160 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Oct 02 13:06:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Oct 02 13:06:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:55 compute-0 ceph-mon[73668]: pgmap v2824: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Oct 02 13:06:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:56 compute-0 sshd-session[381357]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 13:06:56 compute-0 sshd-session[381357]: Connection reset by 45.140.17.97 port 64026
Oct 02 13:06:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:06:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:57 compute-0 podman[381359]: 2025-10-02 13:06:57.399170257 +0000 UTC m=+0.060777228 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:06:57 compute-0 podman[381360]: 2025-10-02 13:06:57.46898546 +0000 UTC m=+0.131981477 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 13:06:57 compute-0 ceph-mon[73668]: pgmap v2825: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:06:57 compute-0 nova_compute[256940]: 2025-10-02 13:06:57.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:58 compute-0 nova_compute[256940]: 2025-10-02 13:06:58.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:06:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:06:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:58.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:06:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:59 compute-0 ceph-mon[73668]: pgmap v2826: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:07:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 296 KiB/s rd, 3.4 MiB/s wr, 80 op/s
Oct 02 13:07:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:00.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:01 compute-0 ceph-mon[73668]: pgmap v2827: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 296 KiB/s rd, 3.4 MiB/s wr, 80 op/s
Oct 02 13:07:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Oct 02 13:07:02 compute-0 nova_compute[256940]: 2025-10-02 13:07:02.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:03 compute-0 nova_compute[256940]: 2025-10-02 13:07:03.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:03 compute-0 ceph-mon[73668]: pgmap v2828: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Oct 02 13:07:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 02 13:07:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:04.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:05.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.588 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.589 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.612 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.717 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.718 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.733 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.733 2 INFO nova.compute.claims [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:07:05 compute-0 ceph-mon[73668]: pgmap v2829: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 02 13:07:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-0 nova_compute[256940]: 2025-10-02 13:07:05.953 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 298 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 431 KiB/s rd, 2.4 MiB/s wr, 107 op/s
Oct 02 13:07:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631395183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.423 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.430 2 DEBUG nova.compute.provider_tree [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.458 2 DEBUG nova.scheduler.client.report [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.492 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.493 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.552 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.552 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.572 2 INFO nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.590 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.673 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.674 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.675 2 INFO nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Creating image(s)
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.706 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.732 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.757 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.760 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3631395183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.838 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.840 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.841 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.842 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.880 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:06 compute-0 nova_compute[256940]: 2025-10-02 13:07:06.885 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:06.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.078 2 DEBUG nova.policy [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.181 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.269 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:07:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:07.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.491 2 DEBUG nova.objects.instance [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.519 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.520 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Ensure instance console log exists: /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.521 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.521 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.522 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:07 compute-0 nova_compute[256940]: 2025-10-02 13:07:07.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:07 compute-0 ceph-mon[73668]: pgmap v2830: 305 pgs: 305 active+clean; 298 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 431 KiB/s rd, 2.4 MiB/s wr, 107 op/s
Oct 02 13:07:08 compute-0 nova_compute[256940]: 2025-10-02 13:07:08.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 13:07:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:08.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:09 compute-0 nova_compute[256940]: 2025-10-02 13:07:09.106 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Successfully created port: 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:07:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:09.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:09 compute-0 ceph-mon[73668]: pgmap v2831: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 13:07:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2568518845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 2.4 MiB/s wr, 90 op/s
Oct 02 13:07:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:10.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.299 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Successfully updated port: 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.353 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.354 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.354 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:07:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:11.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.625 2 DEBUG nova.compute.manager [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-changed-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.626 2 DEBUG nova.compute.manager [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Refreshing instance network info cache due to event network-changed-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.627 2 DEBUG oslo_concurrency.lockutils [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.950 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.951 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.951 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.951 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.952 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.953 2 INFO nova.compute.manager [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Terminating instance
Oct 02 13:07:11 compute-0 ceph-mon[73668]: pgmap v2832: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 2.4 MiB/s wr, 90 op/s
Oct 02 13:07:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:11 compute-0 nova_compute[256940]: 2025-10-02 13:07:11.954 2 DEBUG nova.compute.manager [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:07:11 compute-0 sudo[381599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:11 compute-0 sudo[381599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:11 compute-0 sudo[381599]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.044 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:07:12 compute-0 sudo[381624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:12 compute-0 sudo[381624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:12 compute-0 sudo[381624]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:12 compute-0 kernel: tap55d951c1-1c (unregistering): left promiscuous mode
Oct 02 13:07:12 compute-0 NetworkManager[44981]: <info>  [1759410432.0930] device (tap55d951c1-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:07:12 compute-0 ovn_controller[148123]: 2025-10-02T13:07:12Z|00961|binding|INFO|Releasing lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 from this chassis (sb_readonly=0)
Oct 02 13:07:12 compute-0 ovn_controller[148123]: 2025-10-02T13:07:12Z|00962|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 down in Southbound
Oct 02 13:07:12 compute-0 ovn_controller[148123]: 2025-10-02T13:07:12Z|00963|binding|INFO|Removing iface tap55d951c1-1c ovn-installed in OVS
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.108 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.111 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.113 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b07d0c6a-5988-4afb-b4ba-d4048578b224, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.115 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f02a256-fca7-4d42-8ca9-a268f7e7f943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.116 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace which is not needed anymore
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct 02 13:07:12 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b5.scope: Consumed 2.291s CPU time.
Oct 02 13:07:12 compute-0 systemd-machined[210927]: Machine qemu-97-instance-000000b5 terminated.
Oct 02 13:07:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 2.3 MiB/s wr, 122 op/s
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [NOTICE]   (381248) : haproxy version is 2.8.14-c23fe91
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [NOTICE]   (381248) : path to executable is /usr/sbin/haproxy
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [WARNING]  (381248) : Exiting Master process...
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [WARNING]  (381248) : Exiting Master process...
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [ALERT]    (381248) : Current worker (381250) exited with code 143 (Terminated)
Oct 02 13:07:12 compute-0 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[381243]: [WARNING]  (381248) : All workers exited. Exiting... (0)
Oct 02 13:07:12 compute-0 systemd[1]: libpod-daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d.scope: Deactivated successfully.
Oct 02 13:07:12 compute-0 podman[381673]: 2025-10-02 13:07:12.324672303 +0000 UTC m=+0.072907324 container died daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d-userdata-shm.mount: Deactivated successfully.
Oct 02 13:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6eed2ceff99d83da3170673d309ae741172e0f4557a6390f4fb6f46e3d6b567-merged.mount: Deactivated successfully.
Oct 02 13:07:12 compute-0 podman[381673]: 2025-10-02 13:07:12.364466506 +0000 UTC m=+0.112701517 container cleanup daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:07:12 compute-0 systemd[1]: libpod-conmon-daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d.scope: Deactivated successfully.
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.426 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.428 2 DEBUG nova.objects.instance [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'resources' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.446 2 DEBUG nova.virt.libvirt.vif [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:41Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.447 2 DEBUG nova.network.os_vif_util [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.448 2 DEBUG nova.network.os_vif_util [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.449 2 DEBUG os_vif [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d951c1-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:12 compute-0 podman[381699]: 2025-10-02 13:07:12.456361671 +0000 UTC m=+0.071813725 container remove daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.462 2 INFO os_vif [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.466 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[27dfbc7b-c774-4b30-8571-32dc51a47a4c]: (4, ('Thu Oct  2 01:07:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d)\ndaddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d\nThu Oct  2 01:07:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (daddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d)\ndaddd4493982c28013c3d75b47a6c58ecfa53c88dc4d3df77becd4c44245f32d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.468 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ec267de4-8210-4b38-80d2-b2d3f72c293d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.468 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 kernel: tapb07d0c6a-50: left promiscuous mode
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.503 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd40a4e-0ee0-4aa0-9efd-a8cb0a1e8efe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.527 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4f2e96-aead-4dbd-929f-f52d8445f0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.528 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dce9c9f3-7bb7-45e6-89ac-ec63826ef0a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.548 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6e3e03-08dd-4d17-b2d8-30b02519d7f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829495, 'reachable_time': 19278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381768, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 systemd[1]: run-netns-ovnmeta\x2db07d0c6a\x2d5988\x2d4afb\x2db4ba\x2dd4048578b224.mount: Deactivated successfully.
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.553 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:07:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:12.553 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac5b03-6af0-4a9e-89c2-b923f1a3b05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:12 compute-0 podman[381720]: 2025-10-02 13:07:12.563986135 +0000 UTC m=+0.086654840 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 13:07:12 compute-0 podman[381721]: 2025-10-02 13:07:12.566664515 +0000 UTC m=+0.081433035 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:07:12 compute-0 nova_compute[256940]: 2025-10-02 13:07:12.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:12.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:13 compute-0 ceph-mon[73668]: pgmap v2833: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 2.3 MiB/s wr, 122 op/s
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.277923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433277960, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2174, "num_deletes": 253, "total_data_size": 3817387, "memory_usage": 3880112, "flush_reason": "Manual Compaction"}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433310776, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3749607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61275, "largest_seqno": 63448, "table_properties": {"data_size": 3739781, "index_size": 6191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20826, "raw_average_key_size": 20, "raw_value_size": 3719986, "raw_average_value_size": 3697, "num_data_blocks": 269, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410224, "oldest_key_time": 1759410224, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 32947 microseconds, and 10792 cpu microseconds.
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.310833) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3749607 bytes OK
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.310889) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.312730) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.312751) EVENT_LOG_v1 {"time_micros": 1759410433312744, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.312776) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3808516, prev total WAL file size 3808516, number of live WAL files 2.
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.314460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3661KB)], [140(10MB)]
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433314520, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14457291, "oldest_snapshot_seqno": -1}
Oct 02 13:07:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:13.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9080 keys, 12539246 bytes, temperature: kUnknown
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433399053, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12539246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12479516, "index_size": 35937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238598, "raw_average_key_size": 26, "raw_value_size": 12319178, "raw_average_value_size": 1356, "num_data_blocks": 1377, "num_entries": 9080, "num_filter_entries": 9080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.399713) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12539246 bytes
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.401778) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 147.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 9605, records dropped: 525 output_compression: NoCompression
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.401807) EVENT_LOG_v1 {"time_micros": 1759410433401794, "job": 86, "event": "compaction_finished", "compaction_time_micros": 84837, "compaction_time_cpu_micros": 36087, "output_level": 6, "num_output_files": 1, "total_output_size": 12539246, "num_input_records": 9605, "num_output_records": 9080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433403436, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433407286, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.314360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.407363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.407371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.407374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.407377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:07:13.407379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.879 2 DEBUG nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.880 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.881 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.881 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.882 2 DEBUG nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.882 2 DEBUG nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.882 2 DEBUG nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.883 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.883 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.884 2 DEBUG oslo_concurrency.lockutils [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.884 2 DEBUG nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:13 compute-0 nova_compute[256940]: 2025-10-02 13:07:13.885 2 WARNING nova.compute.manager [req-ea7f676c-5648-4d06-8cde-596fe8c6a6ee req-a5d015c8-0174-4b77-afd3-33bcae1de551 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state deleting.
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.159 2 DEBUG nova.network.neutron [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updating instance_info_cache with network_info: [{"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.191 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.192 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Instance network_info: |[{"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.193 2 DEBUG oslo_concurrency.lockutils [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.193 2 DEBUG nova.network.neutron [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Refreshing network info cache for port 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.198 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Start _get_guest_xml network_info=[{"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.205 2 WARNING nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.211 2 DEBUG nova.virt.libvirt.host [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.212 2 DEBUG nova.virt.libvirt.host [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.216 2 DEBUG nova.virt.libvirt.host [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.217 2 DEBUG nova.virt.libvirt.host [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.218 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.218 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.219 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.219 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.220 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.220 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.220 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.220 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.221 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.221 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.221 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.222 2 DEBUG nova.virt.hardware [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.225 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.317 2 INFO nova.virt.libvirt.driver [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting instance files /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.318 2 INFO nova.virt.libvirt.driver [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deletion of /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del complete
Oct 02 13:07:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.521 2 INFO nova.compute.manager [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 2.57 seconds to destroy the instance on the hypervisor.
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.522 2 DEBUG oslo.service.loopingcall [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.522 2 DEBUG nova.compute.manager [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.522 2 DEBUG nova.network.neutron [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:07:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:07:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448103692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:14 compute-0 sudo[381803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:14 compute-0 sudo[381803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-0 sudo[381803]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.730 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.768 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:14 compute-0 nova_compute[256940]: 2025-10-02 13:07:14.776 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:14 compute-0 sudo[381830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:14 compute-0 sudo[381830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-0 sudo[381830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-0 sudo[381875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:14 compute-0 sudo[381875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-0 sudo[381875]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-0 sudo[381900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:07:14 compute-0 sudo[381900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836180856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.220 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.222 2 DEBUG nova.virt.libvirt.vif [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=192,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-ons30ep0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:07:06Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.222 2 DEBUG nova.network.os_vif_util [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.223 2 DEBUG nova.network.os_vif_util [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.224 2 DEBUG nova.objects.instance [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:15 compute-0 sudo[381900]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:15.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.402 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.606 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <uuid>6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee</uuid>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <name>instance-000000c0</name>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355</nova:name>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:07:14</nova:creationTime>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <nova:port uuid="79f38c9c-d6a5-4837-a9ff-9152b7f5aca6">
Oct 02 13:07:15 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <system>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="serial">6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="uuid">6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </system>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <os>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </os>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <features>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </features>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk">
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </source>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config">
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </source>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:07:15 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9c:8a:ea"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <target dev="tap79f38c9c-d6"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/console.log" append="off"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <video>
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </video>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:07:15 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:07:15 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:07:15 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:07:15 compute-0 nova_compute[256940]: </domain>
Oct 02 13:07:15 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.607 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Preparing to wait for external event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.607 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.607 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.608 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.608 2 DEBUG nova.virt.libvirt.vif [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=192,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-ons30ep0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:07:06Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.609 2 DEBUG nova.network.os_vif_util [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.609 2 DEBUG nova.network.os_vif_util [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.610 2 DEBUG os_vif [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:07:15 compute-0 ceph-mon[73668]: pgmap v2834: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 13:07:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3448103692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1836180856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79f38c9c-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79f38c9c-d6, col_values=(('external_ids', {'iface-id': '79f38c9c-d6a5-4837-a9ff-9152b7f5aca6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:8a:ea', 'vm-uuid': '6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:07:15 compute-0 NetworkManager[44981]: <info>  [1759410435.6207] manager: (tap79f38c9c-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.630 2 INFO os_vif [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6')
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9e4c9c15-b3fd-4496-9d1a-a78951a46f45 does not exist
Oct 02 13:07:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0136478a-d492-4cd1-9164-5e35b96b58aa does not exist
Oct 02 13:07:15 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1d332f2c-25cc-41c1-b7d3-11cb179cdbbc does not exist
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:07:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:07:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:15 compute-0 sudo[381981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.711 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.712 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.712 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:9c:8a:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:07:15 compute-0 sudo[381981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.713 2 INFO nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Using config drive
Oct 02 13:07:15 compute-0 sudo[381981]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:15 compute-0 nova_compute[256940]: 2025-10-02 13:07:15.747 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:15 compute-0 sudo[382020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:15 compute-0 sudo[382020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:15 compute-0 sudo[382020]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:15 compute-0 sudo[382049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:15 compute-0 sudo[382049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:15 compute-0 sudo[382049]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:15 compute-0 sudo[382074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:07:15 compute-0 sudo[382074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.278609908 +0000 UTC m=+0.058632593 container create 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:16 compute-0 systemd[1]: Started libpod-conmon-2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128.scope.
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.256375771 +0000 UTC m=+0.036398506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.379929178 +0000 UTC m=+0.159951893 container init 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.39153232 +0000 UTC m=+0.171555005 container start 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.394686971 +0000 UTC m=+0.174709696 container attach 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:07:16 compute-0 pedantic_williams[382155]: 167 167
Oct 02 13:07:16 compute-0 systemd[1]: libpod-2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128.scope: Deactivated successfully.
Oct 02 13:07:16 compute-0 conmon[382155]: conmon 2f363e4418530ef1568e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128.scope/container/memory.events
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.401447067 +0000 UTC m=+0.181469772 container died 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-653a15fd0a9e94edd5525bebf3bf29ae50256e51ea768018ab68f6dcaadc6a72-merged.mount: Deactivated successfully.
Oct 02 13:07:16 compute-0 podman[382139]: 2025-10-02 13:07:16.439281189 +0000 UTC m=+0.219303884 container remove 2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:16 compute-0 systemd[1]: libpod-conmon-2f363e4418530ef1568e7a705869fff66fdd202910db9fca4a20ca0c50dcc128.scope: Deactivated successfully.
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:07:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:16 compute-0 podman[382178]: 2025-10-02 13:07:16.639146066 +0000 UTC m=+0.071477565 container create 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:16 compute-0 systemd[1]: Started libpod-conmon-0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed.scope.
Oct 02 13:07:16 compute-0 podman[382178]: 2025-10-02 13:07:16.609999371 +0000 UTC m=+0.042330970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:16 compute-0 podman[382178]: 2025-10-02 13:07:16.757271492 +0000 UTC m=+0.189603021 container init 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:16 compute-0 podman[382178]: 2025-10-02 13:07:16.765678311 +0000 UTC m=+0.198009830 container start 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:07:16 compute-0 podman[382178]: 2025-10-02 13:07:16.780719621 +0000 UTC m=+0.213051180 container attach 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:07:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:17.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:17 compute-0 ceph-mon[73668]: pgmap v2835: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 13:07:17 compute-0 nervous_cohen[382194]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:07:17 compute-0 nervous_cohen[382194]: --> relative data size: 1.0
Oct 02 13:07:17 compute-0 nervous_cohen[382194]: --> All data devices are unavailable
Oct 02 13:07:17 compute-0 systemd[1]: libpod-0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed.scope: Deactivated successfully.
Oct 02 13:07:17 compute-0 podman[382178]: 2025-10-02 13:07:17.717554519 +0000 UTC m=+1.149886058 container died 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9cfc3627b53c83cbc2eae0d14214c7dd3866b6e99444e4c21b8496a96dad0d9-merged.mount: Deactivated successfully.
Oct 02 13:07:17 compute-0 nova_compute[256940]: 2025-10-02 13:07:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:17 compute-0 podman[382178]: 2025-10-02 13:07:17.777374712 +0000 UTC m=+1.209706221 container remove 0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cohen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:07:17 compute-0 systemd[1]: libpod-conmon-0d68c83bf2099642fa23ddb7f4826ffeb25f430a08f2e68c7dd51b3342888bed.scope: Deactivated successfully.
Oct 02 13:07:17 compute-0 sudo[382074]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:17 compute-0 sudo[382224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:17 compute-0 sudo[382224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:17 compute-0 sudo[382224]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:17 compute-0 sudo[382249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:17 compute-0 sudo[382249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:17 compute-0 sudo[382249]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:17 compute-0 sudo[382274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:18 compute-0 sudo[382274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:18 compute-0 sudo[382274]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.062 2 INFO nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Creating config drive at /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config
Oct 02 13:07:18 compute-0 sudo[382299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:07:18 compute-0 sudo[382299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.071 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst26kxjd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.221 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst26kxjd" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.249 2 DEBUG nova.storage.rbd_utils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.253 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.432070416 +0000 UTC m=+0.045721927 container create 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.453 2 DEBUG oslo_concurrency.processutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.454 2 INFO nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Deleting local config drive /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee/disk.config because it was imported into RBD.
Oct 02 13:07:18 compute-0 systemd[1]: Started libpod-conmon-94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe.scope.
Oct 02 13:07:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.413227267 +0000 UTC m=+0.026878778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:18 compute-0 kernel: tap79f38c9c-d6: entered promiscuous mode
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.521810876 +0000 UTC m=+0.135462417 container init 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.5233] manager: (tap79f38c9c-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Oct 02 13:07:18 compute-0 ovn_controller[148123]: 2025-10-02T13:07:18Z|00964|binding|INFO|Claiming lport 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 for this chassis.
Oct 02 13:07:18 compute-0 ovn_controller[148123]: 2025-10-02T13:07:18Z|00965|binding|INFO|79f38c9c-d6a5-4837-a9ff-9152b7f5aca6: Claiming fa:16:3e:9c:8a:ea 10.100.0.5
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.530152012 +0000 UTC m=+0.143803493 container start 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.534944947 +0000 UTC m=+0.148596438 container attach 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 gallant_golick[382420]: 167 167
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.5374] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 02 13:07:18 compute-0 systemd[1]: libpod-94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe.scope: Deactivated successfully.
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.538044177 +0000 UTC m=+0.151695698 container died 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.5384] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 02 13:07:18 compute-0 systemd-machined[210927]: New machine qemu-98-instance-000000c0.
Oct 02 13:07:18 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-000000c0.
Oct 02 13:07:18 compute-0 systemd-udevd[382453]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bd36e3899ac8031fe3eefaa7f3e83e8dccd00a81277f53969562e4cbd8a6a2-merged.mount: Deactivated successfully.
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.6067] device (tap79f38c9c-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.6077] device (tap79f38c9c-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:07:18 compute-0 podman[382402]: 2025-10-02 13:07:18.605848037 +0000 UTC m=+0.219499508 container remove 94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_golick, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:07:18 compute-0 systemd[1]: libpod-conmon-94e9391a27bd97e134f2a79dca4a6b70ab975af36101a9340cb09b5b7f9b78fe.scope: Deactivated successfully.
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 podman[382469]: 2025-10-02 13:07:18.773469028 +0000 UTC m=+0.039609919 container create 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:18 compute-0 systemd[1]: Started libpod-conmon-3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4.scope.
Oct 02 13:07:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ebbb337ee526d913805a1f07f5c3316905c83f436d4c89947703168ada85436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ebbb337ee526d913805a1f07f5c3316905c83f436d4c89947703168ada85436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ebbb337ee526d913805a1f07f5c3316905c83f436d4c89947703168ada85436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ebbb337ee526d913805a1f07f5c3316905c83f436d4c89947703168ada85436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:18 compute-0 podman[382469]: 2025-10-02 13:07:18.757401681 +0000 UTC m=+0.023542592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:18 compute-0 podman[382469]: 2025-10-02 13:07:18.870966819 +0000 UTC m=+0.137107730 container init 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:07:18 compute-0 podman[382469]: 2025-10-02 13:07:18.878653759 +0000 UTC m=+0.144794650 container start 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:07:18 compute-0 podman[382469]: 2025-10-02 13:07:18.881653137 +0000 UTC m=+0.147794028 container attach 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.889 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8a:ea 10.100.0.5'], port_security=['fa:16:3e:9c:8a:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.891 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 bound to our chassis
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.893 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:07:18 compute-0 ovn_controller[148123]: 2025-10-02T13:07:18Z|00966|binding|INFO|Setting lport 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 ovn-installed in OVS
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 ovn_controller[148123]: 2025-10-02T13:07:18Z|00967|binding|INFO|Setting lport 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 up in Southbound
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[df5dd0ab-c2d7-4a14-afc1-0a30472e291c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.904 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c3c-d1 in ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.906 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c3c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.906 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[64b01938-82d5-48a0-b25a-19afd924e135]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 nova_compute[256940]: 2025-10-02 13:07:18.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.907 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10e2a699-a247-4c84-9ad5-c4f8953ea4e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:18.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.918 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[89d4ce46-7c48-42f5-aacc-48d42512b79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.943 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[bf312168-887f-4d18-ac1b-52bb52a693aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.972 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[81d19e87-d36e-4bcc-88de-76b7ab0841a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:18.979 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccecb6b-d8ab-48a1-8646-b3a259de8408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:18 compute-0 NetworkManager[44981]: <info>  [1759410438.9809] manager: (tap0de30c3c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.013 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[425f4672-277f-4bc4-a240-af5d42acb74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.016 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4cace92e-0749-4271-bed1-525b1295540f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 NetworkManager[44981]: <info>  [1759410439.0375] device (tap0de30c3c-d0): carrier: link connected
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.044 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[385d2468-ec2a-4762-9f7e-5aa591349b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.060 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3fa0e1-afbe-4943-8fbb-71230e6ec70f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833452, 'reachable_time': 38435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382556, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.075 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f89fac-8777-4587-9c3c-1e909af36a9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:84cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833452, 'tstamp': 833452}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382558, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.089 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8766b157-51c6-4b6d-b1e5-c8d3f3e7070e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833452, 'reachable_time': 38435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382559, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.094 2 DEBUG nova.network.neutron [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updated VIF entry in instance network info cache for port 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.095 2 DEBUG nova.network.neutron [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updating instance_info_cache with network_info: [{"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.097 2 DEBUG nova.network.neutron [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.125 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[609807da-3e2a-4657-a193-d9afe1ffc14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.129 2 DEBUG oslo_concurrency.lockutils [req-f8e36ae0-76ac-46dc-acaa-42275f4d3604 req-47c216f5-7ae6-44b7-9281-b1846d1bc50a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.131 2 INFO nova.compute.manager [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 4.61 seconds to deallocate network for instance.
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.186 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c2884d4f-2099-45bc-9f23-15c972c068b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.188 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.189 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.189 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c3c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.214 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.214 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:19 compute-0 NetworkManager[44981]: <info>  [1759410439.2367] manager: (tap0de30c3c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Oct 02 13:07:19 compute-0 kernel: tap0de30c3c-d0: entered promiscuous mode
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.238 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c3c-d0, col_values=(('external_ids', {'iface-id': 'c46b8ee2-3741-41ad-a412-6a121aeea4c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:19 compute-0 ovn_controller[148123]: 2025-10-02T13:07:19Z|00968|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.257 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.257 2 DEBUG nova.compute.manager [req-59b3f56c-00a9-4edf-9def-d50a797989b0 req-a5443534-c1d5-4fa1-a9b5-c234a2384075 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-deleted-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.258 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e5fbba-7b6a-468e-8364-93f046ab5a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.259 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:07:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:19.260 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'env', 'PROCESS_TAG=haproxy-0de30c3c-d440-4dcd-8562-bb1990277f07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c3c-d440-4dcd-8562-bb1990277f07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.297 2 DEBUG oslo_concurrency.processutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.501 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410439.5003846, 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.501 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] VM Started (Lifecycle Event)
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.546 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.552 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410439.5004659, 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.552 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] VM Paused (Lifecycle Event)
Oct 02 13:07:19 compute-0 podman[382610]: 2025-10-02 13:07:19.646052679 +0000 UTC m=+0.050207375 container create 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:19 compute-0 systemd[1]: Started libpod-conmon-5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8.scope.
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.683 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.688 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:07:19 compute-0 ceph-mon[73668]: pgmap v2836: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:07:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:19 compute-0 podman[382610]: 2025-10-02 13:07:19.62104094 +0000 UTC m=+0.025195656 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b08e22b080a7008939a8719f39618b5836a337fababd3f61d73f58feae8c5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:19 compute-0 podman[382610]: 2025-10-02 13:07:19.731204239 +0000 UTC m=+0.135358965 container init 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913159876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:19 compute-0 podman[382610]: 2025-10-02 13:07:19.739153635 +0000 UTC m=+0.143308331 container start 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.756 2 DEBUG oslo_concurrency.processutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:19 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [NOTICE]   (382635) : New worker (382638) forked
Oct 02 13:07:19 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [NOTICE]   (382635) : Loading success.
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.761 2 DEBUG nova.compute.provider_tree [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:19 compute-0 silly_lalande[382486]: {
Oct 02 13:07:19 compute-0 silly_lalande[382486]:     "1": [
Oct 02 13:07:19 compute-0 silly_lalande[382486]:         {
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "devices": [
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "/dev/loop3"
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             ],
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "lv_name": "ceph_lv0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "lv_size": "7511998464",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "name": "ceph_lv0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "tags": {
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.cluster_name": "ceph",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.crush_device_class": "",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.encrypted": "0",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.osd_id": "1",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.type": "block",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:                 "ceph.vdo": "0"
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             },
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "type": "block",
Oct 02 13:07:19 compute-0 silly_lalande[382486]:             "vg_name": "ceph_vg0"
Oct 02 13:07:19 compute-0 silly_lalande[382486]:         }
Oct 02 13:07:19 compute-0 silly_lalande[382486]:     ]
Oct 02 13:07:19 compute-0 silly_lalande[382486]: }
Oct 02 13:07:19 compute-0 systemd[1]: libpod-3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4.scope: Deactivated successfully.
Oct 02 13:07:19 compute-0 podman[382469]: 2025-10-02 13:07:19.794511272 +0000 UTC m=+1.060652163 container died 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.808 2 DEBUG nova.scheduler.client.report [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.820 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ebbb337ee526d913805a1f07f5c3316905c83f436d4c89947703168ada85436-merged.mount: Deactivated successfully.
Oct 02 13:07:19 compute-0 nova_compute[256940]: 2025-10-02 13:07:19.846 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:19 compute-0 podman[382469]: 2025-10-02 13:07:19.849008807 +0000 UTC m=+1.115149698 container remove 3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lalande, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:07:19 compute-0 systemd[1]: libpod-conmon-3367030e36e8a55c573c858505f61263c14ec08387f45aaf8db2cac7e472fde4.scope: Deactivated successfully.
Oct 02 13:07:19 compute-0 sudo[382299]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:19 compute-0 sudo[382661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:19 compute-0 sudo[382661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:19 compute-0 sudo[382661]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:19 compute-0 sudo[382686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:19 compute-0 sudo[382686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:19 compute-0 sudo[382686]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:20 compute-0 sudo[382711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:20 compute-0 sudo[382711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:20 compute-0 sudo[382711]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:20 compute-0 nova_compute[256940]: 2025-10-02 13:07:20.071 2 INFO nova.scheduler.client.report [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Deleted allocations for instance ea034622-0a48-4de6-8d68-0f2240b54214
Oct 02 13:07:20 compute-0 sudo[382736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:07:20 compute-0 sudo[382736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:20 compute-0 nova_compute[256940]: 2025-10-02 13:07:20.168 2 DEBUG oslo_concurrency.lockutils [None req-d287f55b-5744-47dc-b85b-fb0319b91886 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:20 compute-0 nova_compute[256940]: 2025-10-02 13:07:20.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.449189856 +0000 UTC m=+0.039692042 container create ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:07:20 compute-0 systemd[1]: Started libpod-conmon-ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d.scope.
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.433525809 +0000 UTC m=+0.024028015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.545601578 +0000 UTC m=+0.136103814 container init ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.557425175 +0000 UTC m=+0.147927371 container start ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:07:20 compute-0 eager_payne[382819]: 167 167
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.561604924 +0000 UTC m=+0.152107150 container attach ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:07:20 compute-0 systemd[1]: libpod-ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d.scope: Deactivated successfully.
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.562855656 +0000 UTC m=+0.153357832 container died ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:07:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-579c87293233adae6203e87571a1763eb5198f9e10f32baad372a38712766e7c-merged.mount: Deactivated successfully.
Oct 02 13:07:20 compute-0 podman[382802]: 2025-10-02 13:07:20.591035297 +0000 UTC m=+0.181537483 container remove ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_payne, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:07:20 compute-0 systemd[1]: libpod-conmon-ba876e0a7f7200182956834fb0802b496e0822d8b443aca50639c1e31a39851d.scope: Deactivated successfully.
Oct 02 13:07:20 compute-0 nova_compute[256940]: 2025-10-02 13:07:20.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/913159876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/512112853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-0 podman[382842]: 2025-10-02 13:07:20.768942036 +0000 UTC m=+0.040801421 container create fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:07:20 compute-0 systemd[1]: Started libpod-conmon-fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4.scope.
Oct 02 13:07:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136872dc9c19bf97aa37341575c926189e2219898fb96f217fcbcc9f44071266/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136872dc9c19bf97aa37341575c926189e2219898fb96f217fcbcc9f44071266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136872dc9c19bf97aa37341575c926189e2219898fb96f217fcbcc9f44071266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136872dc9c19bf97aa37341575c926189e2219898fb96f217fcbcc9f44071266/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:07:20 compute-0 podman[382842]: 2025-10-02 13:07:20.753120885 +0000 UTC m=+0.024980300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:07:20 compute-0 podman[382842]: 2025-10-02 13:07:20.861534249 +0000 UTC m=+0.133393664 container init fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:07:20 compute-0 podman[382842]: 2025-10-02 13:07:20.867660578 +0000 UTC m=+0.139519963 container start fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:07:20 compute-0 podman[382842]: 2025-10-02 13:07:20.886434475 +0000 UTC m=+0.158293880 container attach fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:07:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:20.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.388 2 DEBUG nova.compute.manager [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.388 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.389 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.389 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.390 2 DEBUG nova.compute.manager [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Processing event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.390 2 DEBUG nova.compute.manager [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:21.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.394 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.394 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.395 2 DEBUG oslo_concurrency.lockutils [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.396 2 DEBUG nova.compute.manager [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] No waiting events found dispatching network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.397 2 WARNING nova.compute.manager [req-4712a198-4131-476e-87ff-25767ff750a0 req-94b7fd32-bfa8-4ccc-b139-7f13dc0a4abb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received unexpected event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 for instance with vm_state building and task_state spawning.
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.398 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.405 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410441.4053028, 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.406 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] VM Resumed (Lifecycle Event)
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.409 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.414 2 INFO nova.virt.libvirt.driver [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Instance spawned successfully.
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.415 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.440 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.443 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.452 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.453 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.453 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.454 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.454 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.455 2 DEBUG nova.virt.libvirt.driver [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.518 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.566 2 INFO nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Took 14.89 seconds to spawn the instance on the hypervisor.
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.566 2 DEBUG nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:21 compute-0 dreamy_noether[382859]: {
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:         "osd_id": 1,
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:         "type": "bluestore"
Oct 02 13:07:21 compute-0 dreamy_noether[382859]:     }
Oct 02 13:07:21 compute-0 dreamy_noether[382859]: }
Oct 02 13:07:21 compute-0 ceph-mon[73668]: pgmap v2837: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 02 13:07:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1813418940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:21 compute-0 systemd[1]: libpod-fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4.scope: Deactivated successfully.
Oct 02 13:07:21 compute-0 podman[382842]: 2025-10-02 13:07:21.760021892 +0000 UTC m=+1.031881287 container died fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-136872dc9c19bf97aa37341575c926189e2219898fb96f217fcbcc9f44071266-merged.mount: Deactivated successfully.
Oct 02 13:07:21 compute-0 podman[382842]: 2025-10-02 13:07:21.804944858 +0000 UTC m=+1.076804243 container remove fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:07:21 compute-0 systemd[1]: libpod-conmon-fee5c63799bef8b8b3feddaa9c8cbd6a7c88952e3480eb1cb018fb599ac7adc4.scope: Deactivated successfully.
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.823 2 INFO nova.compute.manager [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Took 16.15 seconds to build instance.
Oct 02 13:07:21 compute-0 sudo[382736]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:07:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:07:21 compute-0 nova_compute[256940]: 2025-10-02 13:07:21.867 2 DEBUG oslo_concurrency.lockutils [None req-830ec861-fe66-4516-b1cd-ec6e19d40010 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 314a9289-f0c4-49c2-bc37-d04addf29a28 does not exist
Oct 02 13:07:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a3b22ebc-0141-41f2-8f86-acbbeb35aef3 does not exist
Oct 02 13:07:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2eebcf9c-61d8-488a-89e1-ab0595b4f2d1 does not exist
Oct 02 13:07:21 compute-0 sudo[382894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:21 compute-0 sudo[382894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:21 compute-0 sudo[382894]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:21 compute-0 sudo[382919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:07:21 compute-0 sudo[382919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:21 compute-0 sudo[382919]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 270 op/s
Oct 02 13:07:22 compute-0 nova_compute[256940]: 2025-10-02 13:07:22.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:22 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/674363330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:22.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:23.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:23 compute-0 ceph-mon[73668]: pgmap v2838: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 270 op/s
Oct 02 13:07:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1430412347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.253 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.253 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 15 KiB/s wr, 221 op/s
Oct 02 13:07:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191533582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.666 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.782 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.783 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/191533582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.944 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.945 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=20.92182159423828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.945 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:24 compute-0 nova_compute[256940]: 2025-10-02 13:07:24.945 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.024 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.092 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:25.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2785683365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.525 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.532 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.553 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.609 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.610 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:25 compute-0 nova_compute[256940]: 2025-10-02 13:07:25.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:25 compute-0 ceph-mon[73668]: pgmap v2839: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 15 KiB/s wr, 221 op/s
Oct 02 13:07:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2785683365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:26 compute-0 ovn_controller[148123]: 2025-10-02T13:07:26Z|00969|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 16 KiB/s wr, 255 op/s
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:26.503 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:26.504 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:26.506 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.611 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.611 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.611 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.634 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Triggering sync for uuid 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.635 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.635 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:26 compute-0 nova_compute[256940]: 2025-10-02 13:07:26.662 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:26.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:27.044 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:27.045 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.234 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:27.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.425 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410432.4234679, ea034622-0a48-4de6-8d68-0f2240b54214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.426 2 INFO nova.compute.manager [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Stopped (Lifecycle Event)
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.454 2 DEBUG nova.compute.manager [None req-191f6df3-6677-4508-a4a4-5837e7584c92 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:27 compute-0 nova_compute[256940]: 2025-10-02 13:07:27.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:27 compute-0 ceph-mon[73668]: pgmap v2840: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 16 KiB/s wr, 255 op/s
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 259 op/s
Oct 02 13:07:28 compute-0 podman[382992]: 2025-10-02 13:07:28.392832273 +0000 UTC m=+0.057788341 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:07:28 compute-0 podman[382993]: 2025-10-02 13:07:28.450143981 +0000 UTC m=+0.113474367 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:07:28
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 02 13:07:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:07:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:07:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:07:29 compute-0 ceph-mon[73668]: pgmap v2841: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 259 op/s
Oct 02 13:07:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 235 op/s
Oct 02 13:07:30 compute-0 nova_compute[256940]: 2025-10-02 13:07:30.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:30 compute-0 nova_compute[256940]: 2025-10-02 13:07:30.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:31 compute-0 ceph-mon[73668]: pgmap v2842: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 235 op/s
Oct 02 13:07:32 compute-0 sudo[383039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:32 compute-0 sudo[383039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:32 compute-0 sudo[383039]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:32 compute-0 sudo[383064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:32 compute-0 sudo[383064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:32 compute-0 sudo[383064]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:32 compute-0 nova_compute[256940]: 2025-10-02 13:07:32.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 182 op/s
Oct 02 13:07:32 compute-0 nova_compute[256940]: 2025-10-02 13:07:32.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:32.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:07:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:33.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.517 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.518 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.518 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:07:33 compute-0 nova_compute[256940]: 2025-10-02 13:07:33.519 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:33 compute-0 ceph-mon[73668]: pgmap v2843: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 182 op/s
Oct 02 13:07:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:34.047 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.7 KiB/s wr, 60 op/s
Oct 02 13:07:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:34 compute-0 nova_compute[256940]: 2025-10-02 13:07:34.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:34 compute-0 ovn_controller[148123]: 2025-10-02T13:07:34Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:8a:ea 10.100.0.5
Oct 02 13:07:34 compute-0 ovn_controller[148123]: 2025-10-02T13:07:34Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:8a:ea 10.100.0.5
Oct 02 13:07:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:35 compute-0 nova_compute[256940]: 2025-10-02 13:07:35.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:35 compute-0 nova_compute[256940]: 2025-10-02 13:07:35.863 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updating instance_info_cache with network_info: [{"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:35 compute-0 nova_compute[256940]: 2025-10-02 13:07:35.909 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:35 compute-0 nova_compute[256940]: 2025-10-02 13:07:35.910 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:07:35 compute-0 nova_compute[256940]: 2025-10-02 13:07:35.911 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:36 compute-0 ceph-mon[73668]: pgmap v2844: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.7 KiB/s wr, 60 op/s
Oct 02 13:07:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Oct 02 13:07:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:37 compute-0 nova_compute[256940]: 2025-10-02 13:07:37.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:38 compute-0 ceph-mon[73668]: pgmap v2845: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Oct 02 13:07:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:07:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:38.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:39 compute-0 ceph-mon[73668]: pgmap v2846: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:07:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:07:40 compute-0 nova_compute[256940]: 2025-10-02 13:07:40.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004335370428531547 of space, bias 1.0, pg target 1.3006111285594641 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6463173433719523 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:07:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:07:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:40.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:41.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.449 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.450 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.451 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.451 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.451 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.454 2 INFO nova.compute.manager [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Terminating instance
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.456 2 DEBUG nova.compute.manager [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:07:41 compute-0 ceph-mon[73668]: pgmap v2847: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:07:41 compute-0 kernel: tap79f38c9c-d6 (unregistering): left promiscuous mode
Oct 02 13:07:41 compute-0 NetworkManager[44981]: <info>  [1759410461.5373] device (tap79f38c9c-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 ovn_controller[148123]: 2025-10-02T13:07:41Z|00970|binding|INFO|Releasing lport 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 from this chassis (sb_readonly=0)
Oct 02 13:07:41 compute-0 ovn_controller[148123]: 2025-10-02T13:07:41Z|00971|binding|INFO|Setting lport 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 down in Southbound
Oct 02 13:07:41 compute-0 ovn_controller[148123]: 2025-10-02T13:07:41Z|00972|binding|INFO|Removing iface tap79f38c9c-d6 ovn-installed in OVS
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.574 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8a:ea 10.100.0.5'], port_security=['fa:16:3e:9c:8a:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.576 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 unbound from our chassis
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.579 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c3c-d440-4dcd-8562-bb1990277f07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.580 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[05e71334-fd9c-4847-bee3-9be12229433f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.581 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace which is not needed anymore
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Oct 02 13:07:41 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c0.scope: Consumed 13.387s CPU time.
Oct 02 13:07:41 compute-0 systemd-machined[210927]: Machine qemu-98-instance-000000c0 terminated.
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.705 2 INFO nova.virt.libvirt.driver [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Instance destroyed successfully.
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.705 2 DEBUG nova.objects.instance [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [NOTICE]   (382635) : haproxy version is 2.8.14-c23fe91
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [NOTICE]   (382635) : path to executable is /usr/sbin/haproxy
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [WARNING]  (382635) : Exiting Master process...
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [WARNING]  (382635) : Exiting Master process...
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [ALERT]    (382635) : Current worker (382638) exited with code 143 (Terminated)
Oct 02 13:07:41 compute-0 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[382628]: [WARNING]  (382635) : All workers exited. Exiting... (0)
Oct 02 13:07:41 compute-0 systemd[1]: libpod-5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8.scope: Deactivated successfully.
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.746 2 DEBUG nova.virt.libvirt.vif [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-0-1460118355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=192,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:07:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-ons30ep0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:07:21Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.747 2 DEBUG nova.network.os_vif_util [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "address": "fa:16:3e:9c:8a:ea", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f38c9c-d6", "ovs_interfaceid": "79f38c9c-d6a5-4837-a9ff-9152b7f5aca6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.748 2 DEBUG nova.network.os_vif_util [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.748 2 DEBUG os_vif [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79f38c9c-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:41 compute-0 podman[383119]: 2025-10-02 13:07:41.752959227 +0000 UTC m=+0.058342155 container died 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.759 2 INFO os_vif [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8a:ea,bridge_name='br-int',has_traffic_filtering=True,id=79f38c9c-d6a5-4837-a9ff-9152b7f5aca6,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f38c9c-d6')
Oct 02 13:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8-userdata-shm.mount: Deactivated successfully.
Oct 02 13:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b08e22b080a7008939a8719f39618b5836a337fababd3f61d73f58feae8c5c-merged.mount: Deactivated successfully.
Oct 02 13:07:41 compute-0 podman[383119]: 2025-10-02 13:07:41.802784481 +0000 UTC m=+0.108167409 container cleanup 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:07:41 compute-0 systemd[1]: libpod-conmon-5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8.scope: Deactivated successfully.
Oct 02 13:07:41 compute-0 podman[383177]: 2025-10-02 13:07:41.884925793 +0000 UTC m=+0.055723507 container remove 5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.891 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d907ec-a4e3-43f6-ba27-5cb7d9708b1c]: (4, ('Thu Oct  2 01:07:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8)\n5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8\nThu Oct  2 01:07:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8)\n5c6cf34bcb3fa53b666d727f8615706d80904796b66ae4a8da01ffc0eadea5c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.893 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a2999b-a5ab-41e1-a54b-5782f019750f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.894 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 kernel: tap0de30c3c-d0: left promiscuous mode
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.902 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1059f893-a92c-4390-acdb-349f49f3ba4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.933 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[494ca6fe-a5f4-43d2-b234-7f017d541c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.936 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d1badeab-bb6f-42ea-acc7-d378339156d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.937 2 DEBUG nova.compute.manager [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-unplugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.938 2 DEBUG oslo_concurrency.lockutils [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.938 2 DEBUG oslo_concurrency.lockutils [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.939 2 DEBUG oslo_concurrency.lockutils [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.939 2 DEBUG nova.compute.manager [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] No waiting events found dispatching network-vif-unplugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:41 compute-0 nova_compute[256940]: 2025-10-02 13:07:41.939 2 DEBUG nova.compute.manager [req-84d4d808-0810-4691-bf95-b103c1bfb54f req-9a2f0ee7-091a-463e-92d7-df5e16f07c53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-unplugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.957 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4771b368-61fb-43c9-8a92-70daa9282259]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833445, 'reachable_time': 28005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383192, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.960 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:07:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:07:41.960 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ec24da5f-17d1-4261-bfb0-ac3bae55b182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d0de30c3c\x2dd440\x2d4dcd\x2d8562\x2dbb1990277f07.mount: Deactivated successfully.
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.177 2 INFO nova.virt.libvirt.driver [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Deleting instance files /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_del
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.177 2 INFO nova.virt.libvirt.driver [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Deletion of /var/lib/nova/instances/6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee_del complete
Oct 02 13:07:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.415 2 INFO nova.compute.manager [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.415 2 DEBUG oslo.service.loopingcall [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.416 2 DEBUG nova.compute.manager [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.416 2 DEBUG nova.network.neutron [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:42 compute-0 nova_compute[256940]: 2025-10-02 13:07:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:43 compute-0 podman[383196]: 2025-10-02 13:07:43.394924849 +0000 UTC m=+0.059997318 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:43 compute-0 podman[383195]: 2025-10-02 13:07:43.399662532 +0000 UTC m=+0.066376244 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 13:07:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:43.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:43 compute-0 ceph-mon[73668]: pgmap v2848: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.118 2 DEBUG nova.compute.manager [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.119 2 DEBUG oslo_concurrency.lockutils [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.119 2 DEBUG oslo_concurrency.lockutils [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.119 2 DEBUG oslo_concurrency.lockutils [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.119 2 DEBUG nova.compute.manager [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] No waiting events found dispatching network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:44 compute-0 nova_compute[256940]: 2025-10-02 13:07:44.119 2 WARNING nova.compute.manager [req-d8ede3b3-5226-431c-b9e4-2b5b9964c599 req-f93aef9f-f186-478c-9db3-214fbcc7f0bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received unexpected event network-vif-plugged-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 for instance with vm_state active and task_state deleting.
Oct 02 13:07:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.414 2 DEBUG nova.network.neutron [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:45.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.435 2 INFO nova.compute.manager [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Took 3.02 seconds to deallocate network for instance.
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.505 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.505 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:45 compute-0 ceph-mon[73668]: pgmap v2849: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.675 2 DEBUG nova.compute.manager [req-9d60d343-6f31-489b-8069-31af58a71419 req-3d574886-becf-4c2a-a9b8-8769ef26a6b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Received event network-vif-deleted-79f38c9c-d6a5-4837-a9ff-9152b7f5aca6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:45 compute-0 nova_compute[256940]: 2025-10-02 13:07:45.699 2 DEBUG oslo_concurrency.processutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087518821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.132 2 DEBUG oslo_concurrency.processutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.138 2 DEBUG nova.compute.provider_tree [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.161 2 DEBUG nova.scheduler.client.report [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.219 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.264 2 INFO nova.scheduler.client.report [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee
Oct 02 13:07:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.350 2 DEBUG oslo_concurrency.lockutils [None req-557cc0c3-816a-4aeb-9a1e-3c08e19cc3fb 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2087518821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:46 compute-0 nova_compute[256940]: 2025-10-02 13:07:46.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:47.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:47 compute-0 ceph-mon[73668]: pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 13:07:47 compute-0 nova_compute[256940]: 2025-10-02 13:07:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-0 nova_compute[256940]: 2025-10-02 13:07:48.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 617 KiB/s wr, 57 op/s
Oct 02 13:07:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:50 compute-0 ceph-mon[73668]: pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 617 KiB/s wr, 57 op/s
Oct 02 13:07:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 157 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 24 KiB/s wr, 35 op/s
Oct 02 13:07:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:51 compute-0 ceph-mon[73668]: pgmap v2852: 305 pgs: 305 active+clean; 157 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 24 KiB/s wr, 35 op/s
Oct 02 13:07:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:51.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:51 compute-0 nova_compute[256940]: 2025-10-02 13:07:51.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:52 compute-0 sudo[383261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:52 compute-0 sudo[383261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:52 compute-0 sudo[383261]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 55 op/s
Oct 02 13:07:52 compute-0 sudo[383286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:52 compute-0 sudo[383286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:52 compute-0 sudo[383286]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:52 compute-0 nova_compute[256940]: 2025-10-02 13:07:52.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:53 compute-0 nova_compute[256940]: 2025-10-02 13:07:53.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:53 compute-0 ceph-mon[73668]: pgmap v2853: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 55 op/s
Oct 02 13:07:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4249213821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:55.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:55 compute-0 ceph-mon[73668]: pgmap v2854: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:56 compute-0 nova_compute[256940]: 2025-10-02 13:07:56.703 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410461.7010686, 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:07:56 compute-0 nova_compute[256940]: 2025-10-02 13:07:56.703 2 INFO nova.compute.manager [-] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] VM Stopped (Lifecycle Event)
Oct 02 13:07:56 compute-0 nova_compute[256940]: 2025-10-02 13:07:56.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:56 compute-0 nova_compute[256940]: 2025-10-02 13:07:56.764 2 DEBUG nova.compute.manager [None req-4fa23329-e24f-4b02-9339-37df755ad177 - - - - - -] [instance: 6f5e1a5f-8c1c-4ef2-b478-a1c2d7f0b1ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:07:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:56.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:07:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:07:57 compute-0 nova_compute[256940]: 2025-10-02 13:07:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:58 compute-0 ceph-mon[73668]: pgmap v2855: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:07:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:07:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:59 compute-0 ceph-mon[73668]: pgmap v2856: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:07:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:59 compute-0 podman[383315]: 2025-10-02 13:07:59.40380886 +0000 UTC m=+0.061595630 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:07:59 compute-0 podman[383316]: 2025-10-02 13:07:59.415945975 +0000 UTC m=+0.078544240 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:07:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:07:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:59.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:08:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:01 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:08:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:01 compute-0 ceph-mon[73668]: pgmap v2857: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:08:01 compute-0 nova_compute[256940]: 2025-10-02 13:08:01.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 02 13:08:02 compute-0 nova_compute[256940]: 2025-10-02 13:08:02.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:02.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:03.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-0 ceph-mon[73668]: pgmap v2858: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 02 13:08:03 compute-0 nova_compute[256940]: 2025-10-02 13:08:03.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:03 compute-0 nova_compute[256940]: 2025-10-02 13:08:03.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 10 op/s
Oct 02 13:08:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/90573178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:04.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:08:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:08:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:08:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:08:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:05 compute-0 ceph-mon[73668]: pgmap v2859: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 10 op/s
Oct 02 13:08:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:08:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:08:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 897 KiB/s wr, 24 op/s
Oct 02 13:08:06 compute-0 nova_compute[256940]: 2025-10-02 13:08:06.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:07 compute-0 ceph-mon[73668]: pgmap v2860: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 897 KiB/s wr, 24 op/s
Oct 02 13:08:07 compute-0 nova_compute[256940]: 2025-10-02 13:08:07.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 181 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 47 op/s
Oct 02 13:08:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:09 compute-0 ceph-mon[73668]: pgmap v2861: 305 pgs: 305 active+clean; 181 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 47 op/s
Oct 02 13:08:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 204 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 64 op/s
Oct 02 13:08:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/70365932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:11 compute-0 nova_compute[256940]: 2025-10-02 13:08:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:11 compute-0 ceph-mon[73668]: pgmap v2862: 305 pgs: 305 active+clean; 204 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 64 op/s
Oct 02 13:08:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1373192062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:08:12 compute-0 sudo[383368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:12 compute-0 sudo[383368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:12 compute-0 sudo[383368]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:12 compute-0 sudo[383393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:12 compute-0 sudo[383393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:12 compute-0 sudo[383393]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:12 compute-0 nova_compute[256940]: 2025-10-02 13:08:12.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/695871044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:13.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:13 compute-0 ceph-mon[73668]: pgmap v2863: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:08:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Oct 02 13:08:14 compute-0 podman[383420]: 2025-10-02 13:08:14.376641956 +0000 UTC m=+0.051023886 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:14 compute-0 podman[383421]: 2025-10-02 13:08:14.376914813 +0000 UTC m=+0.050128863 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:08:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2421613884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:15.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:15 compute-0 ceph-mon[73668]: pgmap v2864: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Oct 02 13:08:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.5 MiB/s wr, 105 op/s
Oct 02 13:08:16 compute-0 nova_compute[256940]: 2025-10-02 13:08:16.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:16.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:17.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:17 compute-0 nova_compute[256940]: 2025-10-02 13:08:17.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:18 compute-0 ceph-mon[73668]: pgmap v2865: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.5 MiB/s wr, 105 op/s
Oct 02 13:08:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 122 op/s
Oct 02 13:08:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2629945941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.086 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.086 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.108 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:08:20 compute-0 ceph-mon[73668]: pgmap v2866: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 122 op/s
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.224 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.225 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.231 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.231 2 INFO nova.compute.claims [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:08:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 98 op/s
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.439 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753051723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.907 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.913 2 DEBUG nova.compute.provider_tree [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.935 2 DEBUG nova.scheduler.client.report [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.973 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:20 compute-0 nova_compute[256940]: 2025-10-02 13:08:20.973 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:08:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.042 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.043 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.081 2 INFO nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.106 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.203 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.204 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.205 2 INFO nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Creating image(s)
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.262 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:21 compute-0 ceph-mon[73668]: pgmap v2867: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 98 op/s
Oct 02 13:08:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/753051723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.323 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.355 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.361 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.414 2 DEBUG nova.policy [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.454 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.455 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.456 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.456 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.485 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.490 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 45fff123-eb86-4b5c-9b28-8f189f2572be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:21 compute-0 nova_compute[256940]: 2025-10-02 13:08:21.941 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 45fff123-eb86-4b5c-9b28-8f189f2572be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.021 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.136 2 DEBUG nova.objects.instance [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 45fff123-eb86-4b5c-9b28-8f189f2572be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.160 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.160 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Ensure instance console log exists: /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.160 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.161 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.161 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:22 compute-0 sudo[383653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:22 compute-0 sudo[383653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-0 sudo[383653]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 95 op/s
Oct 02 13:08:22 compute-0 sudo[383678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:22 compute-0 sudo[383678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-0 sudo[383678]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/218103477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:22 compute-0 sudo[383703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:22 compute-0 sudo[383703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-0 sudo[383703]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-0 sudo[383728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:08:22 compute-0 sudo[383728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-0 nova_compute[256940]: 2025-10-02 13:08:22.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:22.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:23 compute-0 sudo[383728]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d80976e8-8617-412d-b105-1b5bda00d24e does not exist
Oct 02 13:08:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 04b82b76-513b-40b2-838b-d9a187845cfa does not exist
Oct 02 13:08:23 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2f34c273-db47-4e27-9252-4b3d944d1194 does not exist
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:08:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-0 sudo[383785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:23 compute-0 sudo[383785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:23 compute-0 sudo[383785]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:23 compute-0 sudo[383810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:23 compute-0 sudo[383810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:23 compute-0 sudo[383810]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:23 compute-0 sudo[383835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:23 compute-0 sudo[383835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:23 compute-0 sudo[383835]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:23 compute-0 sudo[383860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:08:23 compute-0 sudo[383860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:23 compute-0 ceph-mon[73668]: pgmap v2868: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 95 op/s
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1332190033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3757053216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3766980705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-0 podman[383926]: 2025-10-02 13:08:23.802695032 +0000 UTC m=+0.022409942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.008505135 +0000 UTC m=+0.228220015 container create 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:08:24 compute-0 systemd[1]: Started libpod-conmon-279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a.scope.
Oct 02 13:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.283435491 +0000 UTC m=+0.503150391 container init 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.296070779 +0000 UTC m=+0.515785659 container start 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 495 KiB/s wr, 86 op/s
Oct 02 13:08:24 compute-0 elegant_swartz[383942]: 167 167
Oct 02 13:08:24 compute-0 systemd[1]: libpod-279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a.scope: Deactivated successfully.
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.343204623 +0000 UTC m=+0.562919513 container attach 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.344272061 +0000 UTC m=+0.563986951 container died 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:08:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68e9e82719f5d6ecc70638424e8df6d01808759cf55a344a6e27873eb8087bb-merged.mount: Deactivated successfully.
Oct 02 13:08:24 compute-0 podman[383926]: 2025-10-02 13:08:24.811367484 +0000 UTC m=+1.031082364 container remove 279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:08:24 compute-0 systemd[1]: libpod-conmon-279ba2ebecfcd833d0f991fbdbbb7a14b33c90d838d63be33ea17ca8e31e289a.scope: Deactivated successfully.
Oct 02 13:08:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:25 compute-0 podman[383968]: 2025-10-02 13:08:25.047150725 +0000 UTC m=+0.114453342 container create d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:08:25 compute-0 podman[383968]: 2025-10-02 13:08:24.956866691 +0000 UTC m=+0.024169318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:25 compute-0 systemd[1]: Started libpod-conmon-d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c.scope.
Oct 02 13:08:25 compute-0 nova_compute[256940]: 2025-10-02 13:08:25.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:25 compute-0 nova_compute[256940]: 2025-10-02 13:08:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:25 compute-0 nova_compute[256940]: 2025-10-02 13:08:25.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:08:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:25 compute-0 podman[383968]: 2025-10-02 13:08:25.393499995 +0000 UTC m=+0.460802632 container init d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:25 compute-0 podman[383968]: 2025-10-02 13:08:25.401170424 +0000 UTC m=+0.468473041 container start d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:25.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:25 compute-0 podman[383968]: 2025-10-02 13:08:25.51542105 +0000 UTC m=+0.582723707 container attach d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:26 compute-0 ceph-mon[73668]: pgmap v2869: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 495 KiB/s wr, 86 op/s
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.229 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Successfully created port: ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.235 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.236 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:26 compute-0 musing_bell[383984]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:08:26 compute-0 musing_bell[383984]: --> relative data size: 1.0
Oct 02 13:08:26 compute-0 musing_bell[383984]: --> All data devices are unavailable
Oct 02 13:08:26 compute-0 systemd[1]: libpod-d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c.scope: Deactivated successfully.
Oct 02 13:08:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 263 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 149 op/s
Oct 02 13:08:26 compute-0 podman[384000]: 2025-10-02 13:08:26.348394572 +0000 UTC m=+0.032835923 container died d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:26.504 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:26.505 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:26.505 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689967176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.655 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.818 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.819 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4157MB free_disk=20.961849212646484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.819 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.820 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7e3095cde28848a3b72ab8d991d65942e0d3069f0c46697da3376a08351c14-merged.mount: Deactivated successfully.
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.919 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 45fff123-eb86-4b5c-9b28-8f189f2572be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.920 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.920 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:08:26 compute-0 nova_compute[256940]: 2025-10-02 13:08:26.962 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:26.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:27 compute-0 podman[384000]: 2025-10-02 13:08:27.273425894 +0000 UTC m=+0.957867245 container remove d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:08:27 compute-0 systemd[1]: libpod-conmon-d54069175c6634a730d13de542f6c5ccb80de887ea0232d1188be06a36a8917c.scope: Deactivated successfully.
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.297 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Successfully updated port: ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:08:27 compute-0 sudo[383860]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.321 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.322 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.322 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:08:27 compute-0 sudo[384057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:27 compute-0 sudo[384057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:27 compute-0 sudo[384057]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:27 compute-0 sudo[384082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.418 2 DEBUG nova.compute.manager [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.419 2 DEBUG nova.compute.manager [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing instance network info cache due to event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.419 2 DEBUG oslo_concurrency.lockutils [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:27 compute-0 sudo[384082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:27 compute-0 sudo[384082]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:27 compute-0 sudo[384108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:27 compute-0 sudo[384108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:27 compute-0 sudo[384108]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:27 compute-0 sudo[384133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:08:27 compute-0 sudo[384133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473500295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.547 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.553 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.581 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:27 compute-0 ceph-mon[73668]: pgmap v2870: 305 pgs: 305 active+clean; 263 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 149 op/s
Oct 02 13:08:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/689967176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.607 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.607 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:27 compute-0 nova_compute[256940]: 2025-10-02 13:08:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:27 compute-0 podman[384199]: 2025-10-02 13:08:27.813886183 +0000 UTC m=+0.020240166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:28 compute-0 podman[384199]: 2025-10-02 13:08:28.114202659 +0000 UTC m=+0.320556582 container create af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:08:28 compute-0 systemd[1]: Started libpod-conmon-af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8.scope.
Oct 02 13:08:28 compute-0 nova_compute[256940]: 2025-10-02 13:08:28.214 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:08:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:28 compute-0 podman[384199]: 2025-10-02 13:08:28.265544156 +0000 UTC m=+0.471898119 container init af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:28 compute-0 podman[384199]: 2025-10-02 13:08:28.274032647 +0000 UTC m=+0.480386580 container start af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:28 compute-0 recursing_blackburn[384216]: 167 167
Oct 02 13:08:28 compute-0 systemd[1]: libpod-af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8.scope: Deactivated successfully.
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 145 op/s
Oct 02 13:08:28 compute-0 podman[384199]: 2025-10-02 13:08:28.339479835 +0000 UTC m=+0.545833868 container attach af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:28 compute-0 podman[384199]: 2025-10-02 13:08:28.341885248 +0000 UTC m=+0.548239221 container died af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e335dea2effd6ec94d4d8d51a8dc9b3ef65bda43723d072f189eb5bfb0307e9-merged.mount: Deactivated successfully.
Oct 02 13:08:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1473500295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:08:28
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'default.rgw.meta', 'volumes']
Oct 02 13:08:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:08:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:29 compute-0 podman[384199]: 2025-10-02 13:08:29.049514627 +0000 UTC m=+1.255868590 container remove af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:08:29 compute-0 systemd[1]: libpod-conmon-af273d4daa80bd851a81a027e9dcbe13547569b00344963df70f761fc2f479a8.scope: Deactivated successfully.
Oct 02 13:08:29 compute-0 podman[384242]: 2025-10-02 13:08:29.223625256 +0000 UTC m=+0.056787944 container create ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:08:29 compute-0 podman[384242]: 2025-10-02 13:08:29.191900523 +0000 UTC m=+0.025063231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:29 compute-0 systemd[1]: Started libpod-conmon-ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae.scope.
Oct 02 13:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c9b6907e201f20d1aa1561d7888c9e478dab462eec8638f935ba4185c7826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c9b6907e201f20d1aa1561d7888c9e478dab462eec8638f935ba4185c7826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c9b6907e201f20d1aa1561d7888c9e478dab462eec8638f935ba4185c7826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c9b6907e201f20d1aa1561d7888c9e478dab462eec8638f935ba4185c7826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:08:29 compute-0 nova_compute[256940]: 2025-10-02 13:08:29.607 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:29 compute-0 nova_compute[256940]: 2025-10-02 13:08:29.609 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:29 compute-0 podman[384242]: 2025-10-02 13:08:29.641548175 +0000 UTC m=+0.474710893 container init ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:29 compute-0 podman[384242]: 2025-10-02 13:08:29.648452264 +0000 UTC m=+0.481614952 container start ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:08:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:08:29 compute-0 podman[384242]: 2025-10-02 13:08:29.767363821 +0000 UTC m=+0.600526549 container attach ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:08:29 compute-0 ceph-mon[73668]: pgmap v2871: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 145 op/s
Oct 02 13:08:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:29.910 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:29.911 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:08:29 compute-0 nova_compute[256940]: 2025-10-02 13:08:29.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.136 2 DEBUG nova.network.neutron [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updating instance_info_cache with network_info: [{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.161 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.162 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Instance network_info: |[{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.164 2 DEBUG oslo_concurrency.lockutils [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.164 2 DEBUG nova.network.neutron [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.170 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Start _get_guest_xml network_info=[{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.178 2 WARNING nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.184 2 DEBUG nova.virt.libvirt.host [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.186 2 DEBUG nova.virt.libvirt.host [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.197 2 DEBUG nova.virt.libvirt.host [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.198 2 DEBUG nova.virt.libvirt.host [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.200 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.201 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.202 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.202 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.203 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.204 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.204 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.205 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.205 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.206 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.207 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.207 2 DEBUG nova.virt.hardware [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.212 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 130 op/s
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]: {
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:     "1": [
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:         {
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "devices": [
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "/dev/loop3"
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             ],
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "lv_name": "ceph_lv0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "lv_size": "7511998464",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "name": "ceph_lv0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "tags": {
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.cluster_name": "ceph",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.crush_device_class": "",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.encrypted": "0",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.osd_id": "1",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.type": "block",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:                 "ceph.vdo": "0"
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             },
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "type": "block",
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:             "vg_name": "ceph_vg0"
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:         }
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]:     ]
Oct 02 13:08:30 compute-0 flamboyant_goldstine[384258]: }
Oct 02 13:08:30 compute-0 systemd[1]: libpod-ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae.scope: Deactivated successfully.
Oct 02 13:08:30 compute-0 podman[384242]: 2025-10-02 13:08:30.412666801 +0000 UTC m=+1.245829489 container died ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:08:30 compute-0 podman[384266]: 2025-10-02 13:08:30.520071999 +0000 UTC m=+0.193364410 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:30 compute-0 podman[384267]: 2025-10-02 13:08:30.529127824 +0000 UTC m=+0.202519888 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d40c9b6907e201f20d1aa1561d7888c9e478dab462eec8638f935ba4185c7826-merged.mount: Deactivated successfully.
Oct 02 13:08:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378719045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.687 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.713 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:30 compute-0 nova_compute[256940]: 2025-10-02 13:08:30.717 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:30.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2378719045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:31 compute-0 podman[384242]: 2025-10-02 13:08:31.506374262 +0000 UTC m=+2.339536950 container remove ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:08:31 compute-0 systemd[1]: libpod-conmon-ced35ffa0bad0e84b67146d6d9ba5cfbf06c5ddd1022ae2495da8614e14a36ae.scope: Deactivated successfully.
Oct 02 13:08:31 compute-0 sudo[384133]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:31 compute-0 sudo[384388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:31 compute-0 sudo[384388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:31 compute-0 sudo[384388]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:31 compute-0 sudo[384413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:31 compute-0 sudo[384413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:31 compute-0 sudo[384413]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-0 sudo[384438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:31 compute-0 sudo[384438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021696205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:31 compute-0 sudo[384438]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.803 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.804 2 DEBUG nova.virt.libvirt.vif [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=195,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEHw7PEQ7N7bXjDjSfRKoTb4nWUP084IQpuvwF5FbfzUH26lrV0N8N1vzXLjxsn8v4sNh+yZH8fUAI8S5o8XhexxRTKfPyB6YAq5YoCku9Nv7lnzYWq7xii8dO6U/gw+g==',key_name='tempest-TestSecurityGroupsBasicOps-1482693592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-04ryifyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:21Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=45fff123-eb86-4b5c-9b28-8f189f2572be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.804 2 DEBUG nova.network.os_vif_util [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.805 2 DEBUG nova.network.os_vif_util [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.807 2 DEBUG nova.objects.instance [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45fff123-eb86-4b5c-9b28-8f189f2572be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.826 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <uuid>45fff123-eb86-4b5c-9b28-8f189f2572be</uuid>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <name>instance-000000c3</name>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786</nova:name>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:08:30</nova:creationTime>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <nova:port uuid="ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9">
Oct 02 13:08:31 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <system>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="serial">45fff123-eb86-4b5c-9b28-8f189f2572be</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="uuid">45fff123-eb86-4b5c-9b28-8f189f2572be</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </system>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <os>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </os>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <features>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </features>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/45fff123-eb86-4b5c-9b28-8f189f2572be_disk">
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </source>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config">
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </source>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:08:31 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:08:8b:bd"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <target dev="tapac4b6a62-57"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/console.log" append="off"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <video>
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </video>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:08:31 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:08:31 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:08:31 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:08:31 compute-0 nova_compute[256940]: </domain>
Oct 02 13:08:31 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.828 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Preparing to wait for external event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.829 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.829 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.829 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.830 2 DEBUG nova.virt.libvirt.vif [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=195,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEHw7PEQ7N7bXjDjSfRKoTb4nWUP084IQpuvwF5FbfzUH26lrV0N8N1vzXLjxsn8v4sNh+yZH8fUAI8S5o8XhexxRTKfPyB6YAq5YoCku9Nv7lnzYWq7xii8dO6U/gw+g==',key_name='tempest-TestSecurityGroupsBasicOps-1482693592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-04ryifyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:21Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=45fff123-eb86-4b5c-9b28-8f189f2572be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.831 2 DEBUG nova.network.os_vif_util [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.832 2 DEBUG nova.network.os_vif_util [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.832 2 DEBUG os_vif [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4b6a62-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac4b6a62-57, col_values=(('external_ids', {'iface-id': 'ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:8b:bd', 'vm-uuid': '45fff123-eb86-4b5c-9b28-8f189f2572be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-0 NetworkManager[44981]: <info>  [1759410511.8459] manager: (tapac4b6a62-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-0 nova_compute[256940]: 2025-10-02 13:08:31.858 2 INFO os_vif [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57')
Oct 02 13:08:31 compute-0 sudo[384464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:08:31 compute-0 sudo[384464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.102 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.102 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.102 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:08:8b:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.103 2 INFO nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Using config drive
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.135 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 13:08:32 compute-0 podman[384548]: 2025-10-02 13:08:32.237240652 +0000 UTC m=+0.028975973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.436 2 DEBUG nova.network.neutron [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updated VIF entry in instance network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.437 2 DEBUG nova.network.neutron [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updating instance_info_cache with network_info: [{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.452 2 DEBUG oslo_concurrency.lockutils [req-da20a283-0a50-4a26-aba8-b99940a8e84a req-c1241384-1ecf-4d82-9bbe-310c1f874101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:32 compute-0 sudo[384562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:32 compute-0 sudo[384562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:32 compute-0 sudo[384562]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:32 compute-0 sudo[384587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:32 compute-0 sudo[384587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:32 compute-0 ceph-mon[73668]: pgmap v2872: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 130 op/s
Oct 02 13:08:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2021696205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:32 compute-0 sudo[384587]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:32 compute-0 podman[384548]: 2025-10-02 13:08:32.66863426 +0000 UTC m=+0.460369561 container create bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:32 compute-0 nova_compute[256940]: 2025-10-02 13:08:32.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:32 compute-0 systemd[1]: Started libpod-conmon-bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff.scope.
Oct 02 13:08:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.220 2 INFO nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Creating config drive at /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.224 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprvxco6tb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:33 compute-0 podman[384548]: 2025-10-02 13:08:33.317588925 +0000 UTC m=+1.109324266 container init bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:08:33 compute-0 podman[384548]: 2025-10-02 13:08:33.324631008 +0000 UTC m=+1.116366329 container start bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:08:33 compute-0 practical_colden[384615]: 167 167
Oct 02 13:08:33 compute-0 systemd[1]: libpod-bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff.scope: Deactivated successfully.
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.362 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprvxco6tb" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.398 2 DEBUG nova.storage.rbd_utils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:33 compute-0 nova_compute[256940]: 2025-10-02 13:08:33.402 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config 45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:33 compute-0 podman[384548]: 2025-10-02 13:08:33.505185215 +0000 UTC m=+1.296920516 container attach bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:08:33 compute-0 podman[384548]: 2025-10-02 13:08:33.505685918 +0000 UTC m=+1.297421219 container died bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:08:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Oct 02 13:08:34 compute-0 ceph-mon[73668]: pgmap v2873: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 13:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbdd382e64d7d417bedaa2d494f557f7fd4af4230a21fbdb99917864db39b428-merged.mount: Deactivated successfully.
Oct 02 13:08:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.228 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.243 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:08:35 compute-0 nova_compute[256940]: 2025-10-02 13:08:35.243 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:08:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:35.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:35 compute-0 podman[384548]: 2025-10-02 13:08:35.511048302 +0000 UTC m=+3.302783623 container remove bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:08:35 compute-0 systemd[1]: libpod-conmon-bf3c6fbe019e72e0d1e6e5ea516270af3c480ff8bfe227b3da399b3e816587ff.scope: Deactivated successfully.
Oct 02 13:08:35 compute-0 podman[384681]: 2025-10-02 13:08:35.667278117 +0000 UTC m=+0.026922450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:08:35 compute-0 podman[384681]: 2025-10-02 13:08:35.78295264 +0000 UTC m=+0.142596983 container create 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:08:35 compute-0 systemd[1]: Started libpod-conmon-03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745.scope.
Oct 02 13:08:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9778239528dd0c20cc922cb0492a65ecfafcd700b4d4d1e08142153fce09fb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9778239528dd0c20cc922cb0492a65ecfafcd700b4d4d1e08142153fce09fb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9778239528dd0c20cc922cb0492a65ecfafcd700b4d4d1e08142153fce09fb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9778239528dd0c20cc922cb0492a65ecfafcd700b4d4d1e08142153fce09fb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:36 compute-0 ceph-mon[73668]: pgmap v2874: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Oct 02 13:08:36 compute-0 podman[384681]: 2025-10-02 13:08:36.034050158 +0000 UTC m=+0.393694531 container init 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:08:36 compute-0 podman[384681]: 2025-10-02 13:08:36.047434995 +0000 UTC m=+0.407079298 container start 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:08:36 compute-0 podman[384681]: 2025-10-02 13:08:36.135248095 +0000 UTC m=+0.494892408 container attach 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.141 2 DEBUG oslo_concurrency.processutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config 45fff123-eb86-4b5c-9b28-8f189f2572be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.142 2 INFO nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Deleting local config drive /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be/disk.config because it was imported into RBD.
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.2024] manager: (tapac4b6a62-57): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Oct 02 13:08:36 compute-0 kernel: tapac4b6a62-57: entered promiscuous mode
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 ovn_controller[148123]: 2025-10-02T13:08:36Z|00973|binding|INFO|Claiming lport ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 for this chassis.
Oct 02 13:08:36 compute-0 ovn_controller[148123]: 2025-10-02T13:08:36Z|00974|binding|INFO|ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9: Claiming fa:16:3e:08:8b:bd 10.100.0.6
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.226 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:8b:bd 10.100.0.6'], port_security=['fa:16:3e:08:8b:bd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '45fff123-eb86-4b5c-9b28-8f189f2572be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12323951-1705-444a-814b-bbe6a34b1313', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2aa0dc6b-7093-4b48-9b47-d13d14c9196e ae405bbc-df1c-44c9-8210-b98f462f2f1f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4f816a6-575f-4a19-87a7-00525430aee3, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.227 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 in datapath 12323951-1705-444a-814b-bbe6a34b1313 bound to our chassis
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.228 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12323951-1705-444a-814b-bbe6a34b1313
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.244 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff1fa83-e124-4a97-9a7f-dbe6bad760c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.245 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12323951-11 in ovnmeta-12323951-1705-444a-814b-bbe6a34b1313 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.248 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12323951-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.248 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1d51f078-b5ef-411f-baed-706350a23cee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.249 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[419053d2-376d-43e7-88de-be816d700b0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 systemd-machined[210927]: New machine qemu-99-instance-000000c3.
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.262 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf82c8b-d260-4716-a401-3d6bbcab0d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-000000c3.
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.282 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[185594ca-fc1e-4982-96bc-4f476a580729]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 ovn_controller[148123]: 2025-10-02T13:08:36Z|00975|binding|INFO|Setting lport ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 ovn-installed in OVS
Oct 02 13:08:36 compute-0 ovn_controller[148123]: 2025-10-02T13:08:36Z|00976|binding|INFO|Setting lport ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 up in Southbound
Oct 02 13:08:36 compute-0 systemd-udevd[384720]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.3203] device (tapac4b6a62-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.3214] device (tapac4b6a62-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.327 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[61f0f86f-f541-4dc2-b902-b53871230a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.3364] manager: (tap12323951-10): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.335 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8615f97a-a5be-442e-a845-1f941096bc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.378 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc9e26f-9a05-4e18-affa-ff05111e86fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.382 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c77d0d79-4552-489e-bf75-fb8751c7c268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.4102] device (tap12323951-10): carrier: link connected
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.416 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f96c7d-6834-43a0-b85c-60c8af3a2bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.440 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2653a0-e141-4c28-9336-0579863bd529]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12323951-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f5:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841189, 'reachable_time': 33154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384749, 'error': None, 'target': 'ovnmeta-12323951-1705-444a-814b-bbe6a34b1313', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.465 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9a6897-3837-4ccd-b14f-a5f86693bba7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:f5b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841189, 'tstamp': 841189}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384750, 'error': None, 'target': 'ovnmeta-12323951-1705-444a-814b-bbe6a34b1313', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.493 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1161dace-7457-4112-949c-7397635f3b1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12323951-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f5:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841189, 'reachable_time': 33154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384751, 'error': None, 'target': 'ovnmeta-12323951-1705-444a-814b-bbe6a34b1313', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.531 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7428f1-0b71-4f4d-b310-1067ae767c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.597 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[52cabc4c-dcba-42c2-bc82-88de449dd5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.599 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12323951-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.600 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.600 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12323951-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 NetworkManager[44981]: <info>  [1759410516.6034] manager: (tap12323951-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct 02 13:08:36 compute-0 kernel: tap12323951-10: entered promiscuous mode
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.607 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12323951-10, col_values=(('external_ids', {'iface-id': '53a8550a-d3ff-4250-974d-9684dd0f483e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:36 compute-0 ovn_controller[148123]: 2025-10-02T13:08:36Z|00977|binding|INFO|Releasing lport 53a8550a-d3ff-4250-974d-9684dd0f483e from this chassis (sb_readonly=0)
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.625 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12323951-1705-444a-814b-bbe6a34b1313.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12323951-1705-444a-814b-bbe6a34b1313.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.626 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[289f8224-568c-439c-b69f-5af74e71374c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.627 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-12323951-1705-444a-814b-bbe6a34b1313
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/12323951-1705-444a-814b-bbe6a34b1313.pid.haproxy
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 12323951-1705-444a-814b-bbe6a34b1313
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:08:36 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:36.627 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12323951-1705-444a-814b-bbe6a34b1313', 'env', 'PROCESS_TAG=haproxy-12323951-1705-444a-814b-bbe6a34b1313', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12323951-1705-444a-814b-bbe6a34b1313.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:36 compute-0 tender_lumiere[384698]: {
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:         "osd_id": 1,
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:         "type": "bluestore"
Oct 02 13:08:36 compute-0 tender_lumiere[384698]:     }
Oct 02 13:08:36 compute-0 tender_lumiere[384698]: }
Oct 02 13:08:36 compute-0 systemd[1]: libpod-03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745.scope: Deactivated successfully.
Oct 02 13:08:36 compute-0 podman[384681]: 2025-10-02 13:08:36.957087928 +0000 UTC m=+1.316732271 container died 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.991 2 DEBUG nova.compute.manager [req-74065132-81b2-4fda-b3a3-588fc30073b8 req-47b88e5e-fbc3-4b5a-8ffd-4e792961b32b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.992 2 DEBUG oslo_concurrency.lockutils [req-74065132-81b2-4fda-b3a3-588fc30073b8 req-47b88e5e-fbc3-4b5a-8ffd-4e792961b32b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.992 2 DEBUG oslo_concurrency.lockutils [req-74065132-81b2-4fda-b3a3-588fc30073b8 req-47b88e5e-fbc3-4b5a-8ffd-4e792961b32b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.993 2 DEBUG oslo_concurrency.lockutils [req-74065132-81b2-4fda-b3a3-588fc30073b8 req-47b88e5e-fbc3-4b5a-8ffd-4e792961b32b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:36 compute-0 nova_compute[256940]: 2025-10-02 13:08:36.993 2 DEBUG nova.compute.manager [req-74065132-81b2-4fda-b3a3-588fc30073b8 req-47b88e5e-fbc3-4b5a-8ffd-4e792961b32b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Processing event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:08:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:37.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:37 compute-0 ceph-mon[73668]: pgmap v2875: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct 02 13:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9778239528dd0c20cc922cb0492a65ecfafcd700b4d4d1e08142153fce09fb0-merged.mount: Deactivated successfully.
Oct 02 13:08:37 compute-0 nova_compute[256940]: 2025-10-02 13:08:37.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:38 compute-0 podman[384681]: 2025-10-02 13:08:38.258325196 +0000 UTC m=+2.617969509 container remove 03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lumiere, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.297 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410518.2965968, 45fff123-eb86-4b5c-9b28-8f189f2572be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.298 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] VM Started (Lifecycle Event)
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.299 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:08:38 compute-0 sudo[384464]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.304 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.308 2 INFO nova.virt.libvirt.driver [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Instance spawned successfully.
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.308 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:08:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 314 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 02 13:08:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.326 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.333 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.336 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.336 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.336 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.337 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.337 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.338 2 DEBUG nova.virt.libvirt.driver [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.362 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.363 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410518.2975652, 45fff123-eb86-4b5c-9b28-8f189f2572be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.363 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] VM Paused (Lifecycle Event)
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.385 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.389 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410518.3027601, 45fff123-eb86-4b5c-9b28-8f189f2572be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.389 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] VM Resumed (Lifecycle Event)
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.393 2 INFO nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Took 17.19 seconds to spawn the instance on the hypervisor.
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.393 2 DEBUG nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.403 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.407 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:38 compute-0 podman[384799]: 2025-10-02 13:08:38.321587838 +0000 UTC m=+1.363405803 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.429 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.454 2 INFO nova.compute.manager [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Took 18.26 seconds to build instance.
Oct 02 13:08:38 compute-0 nova_compute[256940]: 2025-10-02 13:08:38.469 2 DEBUG oslo_concurrency.lockutils [None req-a787e255-6854-4a9b-86e3-9402421c8a11 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:08:38 compute-0 systemd[1]: libpod-conmon-03617dba891e481b712623d51fbba907448dcae88416ccc2d02b941b13c4c745.scope: Deactivated successfully.
Oct 02 13:08:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:08:38.914 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/769836320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:38.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:39 compute-0 podman[384799]: 2025-10-02 13:08:39.248256721 +0000 UTC m=+2.290074666 container create 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.264 2 DEBUG nova.compute.manager [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.264 2 DEBUG oslo_concurrency.lockutils [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.264 2 DEBUG oslo_concurrency.lockutils [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.264 2 DEBUG oslo_concurrency.lockutils [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.265 2 DEBUG nova.compute.manager [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] No waiting events found dispatching network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:39 compute-0 nova_compute[256940]: 2025-10-02 13:08:39.265 2 WARNING nova.compute.manager [req-1beb6338-2f10-4a52-aa7a-2c4c2234cbc2 req-661e0fe0-a573-4d63-a283-29f137773b74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received unexpected event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 for instance with vm_state active and task_state None.
Oct 02 13:08:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:39 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 19fae07a-71fb-47d9-8e24-259d262e93b3 does not exist
Oct 02 13:08:39 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bd32dfbf-c56f-4ac3-ab55-0a911854bd00 does not exist
Oct 02 13:08:39 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8febe04f-187b-475b-9140-fcaaab424f25 does not exist
Oct 02 13:08:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:39 compute-0 systemd[1]: Started libpod-conmon-2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003.scope.
Oct 02 13:08:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:08:39 compute-0 sudo[384865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:39 compute-0 sudo[384865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3dc96dd06567b3e4bc679f8fb6a0f2947bedd63e219b9c35a5846cb1d1de0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:39 compute-0 sudo[384865]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:39 compute-0 sudo[384895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:08:39 compute-0 sudo[384895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:39 compute-0 sudo[384895]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:39 compute-0 podman[384799]: 2025-10-02 13:08:39.795376313 +0000 UTC m=+2.837194288 container init 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:08:39 compute-0 podman[384799]: 2025-10-02 13:08:39.804007517 +0000 UTC m=+2.845825462 container start 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:08:39 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [NOTICE]   (384921) : New worker (384923) forked
Oct 02 13:08:39 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [NOTICE]   (384921) : Loading success.
Oct 02 13:08:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:40 compute-0 ceph-mon[73668]: pgmap v2876: 305 pgs: 305 active+clean; 314 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 02 13:08:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:40 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 688 KiB/s rd, 4.4 MiB/s wr, 115 op/s
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003762482842453006 of space, bias 1.0, pg target 1.1287448527359019 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0042833888772566536 of space, bias 1.0, pg target 1.2807332742997395 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:08:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 13:08:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:41.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:41 compute-0 ceph-mon[73668]: pgmap v2877: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 688 KiB/s rd, 4.4 MiB/s wr, 115 op/s
Oct 02 13:08:41 compute-0 nova_compute[256940]: 2025-10-02 13:08:41.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Oct 02 13:08:42 compute-0 nova_compute[256940]: 2025-10-02 13:08:42.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Oct 02 13:08:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Oct 02 13:08:43 compute-0 nova_compute[256940]: 2025-10-02 13:08:43.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 NetworkManager[44981]: <info>  [1759410523.1080] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 02 13:08:43 compute-0 NetworkManager[44981]: <info>  [1759410523.1089] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Oct 02 13:08:43 compute-0 nova_compute[256940]: 2025-10-02 13:08:43.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 ovn_controller[148123]: 2025-10-02T13:08:43Z|00978|binding|INFO|Releasing lport 53a8550a-d3ff-4250-974d-9684dd0f483e from this chassis (sb_readonly=0)
Oct 02 13:08:43 compute-0 nova_compute[256940]: 2025-10-02 13:08:43.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 188 op/s
Oct 02 13:08:44 compute-0 ceph-mon[73668]: pgmap v2878: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:44 compute-0 ceph-mon[73668]: osdmap e360: 3 total, 3 up, 3 in
Oct 02 13:08:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3950454878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2802699572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:45 compute-0 podman[384936]: 2025-10-02 13:08:45.406010611 +0000 UTC m=+0.072333188 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:45 compute-0 nova_compute[256940]: 2025-10-02 13:08:45.410 2 DEBUG nova.compute.manager [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:45 compute-0 nova_compute[256940]: 2025-10-02 13:08:45.410 2 DEBUG nova.compute.manager [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing instance network info cache due to event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:45 compute-0 nova_compute[256940]: 2025-10-02 13:08:45.410 2 DEBUG oslo_concurrency.lockutils [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:45 compute-0 nova_compute[256940]: 2025-10-02 13:08:45.410 2 DEBUG oslo_concurrency.lockutils [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:45 compute-0 nova_compute[256940]: 2025-10-02 13:08:45.411 2 DEBUG nova.network.neutron [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:45 compute-0 podman[384937]: 2025-10-02 13:08:45.414644116 +0000 UTC m=+0.080127941 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 13:08:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:45.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:45 compute-0 ceph-mon[73668]: pgmap v2880: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 188 op/s
Oct 02 13:08:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 409 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 203 op/s
Oct 02 13:08:46 compute-0 nova_compute[256940]: 2025-10-02 13:08:46.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:08:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:08:47 compute-0 ceph-mon[73668]: pgmap v2881: 305 pgs: 305 active+clean; 409 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 203 op/s
Oct 02 13:08:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:47.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:47 compute-0 nova_compute[256940]: 2025-10-02 13:08:47.609 2 DEBUG nova.network.neutron [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updated VIF entry in instance network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:47 compute-0 nova_compute[256940]: 2025-10-02 13:08:47.610 2 DEBUG nova.network.neutron [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updating instance_info_cache with network_info: [{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:47 compute-0 nova_compute[256940]: 2025-10-02 13:08:47.626 2 DEBUG oslo_concurrency.lockutils [req-fd5c9be7-d716-4660-9cec-f0a82ad603d1 req-4cb940dc-3858-4246-8564-d82da0207b60 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:47 compute-0 nova_compute[256940]: 2025-10-02 13:08:47.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Oct 02 13:08:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Oct 02 13:08:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Oct 02 13:08:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Oct 02 13:08:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Oct 02 13:08:49 compute-0 ceph-mon[73668]: pgmap v2882: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Oct 02 13:08:49 compute-0 ceph-mon[73668]: osdmap e361: 3 total, 3 up, 3 in
Oct 02 13:08:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 454 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 13:08:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Oct 02 13:08:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Oct 02 13:08:50 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 02 13:08:50 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:50.888417) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:08:50 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 02 13:08:50 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530888465, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1151, "num_deletes": 250, "total_data_size": 1762790, "memory_usage": 1796888, "flush_reason": "Manual Compaction"}
Oct 02 13:08:50 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 02 13:08:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:51.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531230641, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1088282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63449, "largest_seqno": 64599, "table_properties": {"data_size": 1083901, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11847, "raw_average_key_size": 20, "raw_value_size": 1074322, "raw_average_value_size": 1901, "num_data_blocks": 82, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410434, "oldest_key_time": 1759410434, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 342327 microseconds, and 5298 cpu microseconds.
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.230733) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1088282 bytes OK
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.230766) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.301155) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.301195) EVENT_LOG_v1 {"time_micros": 1759410531301186, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.301215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1757615, prev total WAL file size 1788543, number of live WAL files 2.
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.302133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1062KB)], [143(11MB)]
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531302208, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13627528, "oldest_snapshot_seqno": -1}
Oct 02 13:08:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:51 compute-0 nova_compute[256940]: 2025-10-02 13:08:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9170 keys, 10417929 bytes, temperature: kUnknown
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531913150, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10417929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361030, "index_size": 32850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 240749, "raw_average_key_size": 26, "raw_value_size": 10202629, "raw_average_value_size": 1112, "num_data_blocks": 1250, "num_entries": 9170, "num_filter_entries": 9170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.913387) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10417929 bytes
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.967628) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.3 rd, 17.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(22.1) write-amplify(9.6) OK, records in: 9645, records dropped: 475 output_compression: NoCompression
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.967674) EVENT_LOG_v1 {"time_micros": 1759410531967655, "job": 88, "event": "compaction_finished", "compaction_time_micros": 610997, "compaction_time_cpu_micros": 37397, "output_level": 6, "num_output_files": 1, "total_output_size": 10417929, "num_input_records": 9645, "num_output_records": 9170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531968405, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531973273, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.301974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.973327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.973334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.973337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.973340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:08:51.973343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:52 compute-0 ceph-mon[73668]: pgmap v2884: 305 pgs: 305 active+clean; 454 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 13:08:52 compute-0 ceph-mon[73668]: osdmap e362: 3 total, 3 up, 3 in
Oct 02 13:08:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 240 op/s
Oct 02 13:08:52 compute-0 sudo[384980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:52 compute-0 sudo[384980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:52 compute-0 sudo[384980]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:52 compute-0 sudo[385005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:52 compute-0 sudo[385005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:52 compute-0 sudo[385005]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:52 compute-0 nova_compute[256940]: 2025-10-02 13:08:52.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:53 compute-0 ceph-mon[73668]: pgmap v2886: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 240 op/s
Oct 02 13:08:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:53.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:54 compute-0 ovn_controller[148123]: 2025-10-02T13:08:54Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:8b:bd 10.100.0.6
Oct 02 13:08:54 compute-0 ovn_controller[148123]: 2025-10-02T13:08:54Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:8b:bd 10.100.0.6
Oct 02 13:08:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Oct 02 13:08:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Oct 02 13:08:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Oct 02 13:08:56 compute-0 ceph-mon[73668]: pgmap v2887: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:56 compute-0 ceph-mon[73668]: osdmap e363: 3 total, 3 up, 3 in
Oct 02 13:08:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Oct 02 13:08:56 compute-0 nova_compute[256940]: 2025-10-02 13:08:56.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:08:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:08:57 compute-0 nova_compute[256940]: 2025-10-02 13:08:57.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:57 compute-0 ceph-mon[73668]: pgmap v2889: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 476 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 173 op/s
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:08:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:08:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2061486174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:08:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:59 compute-0 ceph-mon[73668]: pgmap v2890: 305 pgs: 305 active+clean; 476 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 173 op/s
Oct 02 13:09:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 470 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:09:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3712778003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:01 compute-0 podman[385035]: 2025-10-02 13:09:01.386175225 +0000 UTC m=+0.059457384 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 13:09:01 compute-0 podman[385036]: 2025-10-02 13:09:01.425891936 +0000 UTC m=+0.095737976 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 02 13:09:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:01 compute-0 nova_compute[256940]: 2025-10-02 13:09:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-0 ceph-mon[73668]: pgmap v2891: 305 pgs: 305 active+clean; 470 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:09:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/800651783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:02 compute-0 nova_compute[256940]: 2025-10-02 13:09:02.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:03.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.605 2 DEBUG nova.compute.manager [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.606 2 DEBUG nova.compute.manager [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing instance network info cache due to event network-changed-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.606 2 DEBUG oslo_concurrency.lockutils [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.606 2 DEBUG oslo_concurrency.lockutils [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.606 2 DEBUG nova.network.neutron [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Refreshing network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.664 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.665 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.665 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.665 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.666 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.666 2 INFO nova.compute.manager [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Terminating instance
Oct 02 13:09:03 compute-0 nova_compute[256940]: 2025-10-02 13:09:03.668 2 DEBUG nova.compute.manager [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:09:03 compute-0 ceph-mon[73668]: pgmap v2892: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/448831348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:04 compute-0 kernel: tapac4b6a62-57 (unregistering): left promiscuous mode
Oct 02 13:09:04 compute-0 NetworkManager[44981]: <info>  [1759410544.6181] device (tapac4b6a62-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:09:04 compute-0 ovn_controller[148123]: 2025-10-02T13:09:04Z|00979|binding|INFO|Releasing lport ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 from this chassis (sb_readonly=0)
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 ovn_controller[148123]: 2025-10-02T13:09:04Z|00980|binding|INFO|Setting lport ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 down in Southbound
Oct 02 13:09:04 compute-0 ovn_controller[148123]: 2025-10-02T13:09:04Z|00981|binding|INFO|Removing iface tapac4b6a62-57 ovn-installed in OVS
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:04.637 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:8b:bd 10.100.0.6'], port_security=['fa:16:3e:08:8b:bd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '45fff123-eb86-4b5c-9b28-8f189f2572be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12323951-1705-444a-814b-bbe6a34b1313', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2aa0dc6b-7093-4b48-9b47-d13d14c9196e ae405bbc-df1c-44c9-8210-b98f462f2f1f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4f816a6-575f-4a19-87a7-00525430aee3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:04.638 158104 INFO neutron.agent.ovn.metadata.agent [-] Port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 in datapath 12323951-1705-444a-814b-bbe6a34b1313 unbound from our chassis
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:04.640 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12323951-1705-444a-814b-bbe6a34b1313, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:04.641 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d5837e0e-8631-49ed-88a5-8be8eb55fe62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:04.641 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12323951-1705-444a-814b-bbe6a34b1313 namespace which is not needed anymore
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Oct 02 13:09:04 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000c3.scope: Consumed 14.906s CPU time.
Oct 02 13:09:04 compute-0 systemd-machined[210927]: Machine qemu-99-instance-000000c3 terminated.
Oct 02 13:09:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2060950700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [NOTICE]   (384921) : haproxy version is 2.8.14-c23fe91
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [NOTICE]   (384921) : path to executable is /usr/sbin/haproxy
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [WARNING]  (384921) : Exiting Master process...
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [WARNING]  (384921) : Exiting Master process...
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [ALERT]    (384921) : Current worker (384923) exited with code 143 (Terminated)
Oct 02 13:09:04 compute-0 neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313[384885]: [WARNING]  (384921) : All workers exited. Exiting... (0)
Oct 02 13:09:04 compute-0 systemd[1]: libpod-2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003.scope: Deactivated successfully.
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.811 2 DEBUG nova.compute.manager [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-unplugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.812 2 DEBUG oslo_concurrency.lockutils [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.812 2 DEBUG oslo_concurrency.lockutils [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.813 2 DEBUG oslo_concurrency.lockutils [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.813 2 DEBUG nova.compute.manager [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] No waiting events found dispatching network-vif-unplugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.813 2 DEBUG nova.compute.manager [req-4923035b-8978-4dd8-b0f7-5b98f53c6169 req-6147712e-40b9-4eb1-befd-f56f43474cc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-unplugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:09:04 compute-0 podman[385103]: 2025-10-02 13:09:04.813789257 +0000 UTC m=+0.073848517 container died 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.904 2 INFO nova.virt.libvirt.driver [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Instance destroyed successfully.
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.905 2 DEBUG nova.objects.instance [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 45fff123-eb86-4b5c-9b28-8f189f2572be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.923 2 DEBUG nova.virt.libvirt.vif [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-696514786',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=195,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEHw7PEQ7N7bXjDjSfRKoTb4nWUP084IQpuvwF5FbfzUH26lrV0N8N1vzXLjxsn8v4sNh+yZH8fUAI8S5o8XhexxRTKfPyB6YAq5YoCku9Nv7lnzYWq7xii8dO6U/gw+g==',key_name='tempest-TestSecurityGroupsBasicOps-1482693592',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-04ryifyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:38Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=45fff123-eb86-4b5c-9b28-8f189f2572be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.924 2 DEBUG nova.network.os_vif_util [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.926 2 DEBUG nova.network.os_vif_util [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.926 2 DEBUG os_vif [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.929 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4b6a62-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.936 2 INFO os_vif [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:8b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9,network=Network(12323951-1705-444a-814b-bbe6a34b1313),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac4b6a62-57')
Oct 02 13:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3dc96dd06567b3e4bc679f8fb6a0f2947bedd63e219b9c35a5846cb1d1de0f-merged.mount: Deactivated successfully.
Oct 02 13:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003-userdata-shm.mount: Deactivated successfully.
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.979 2 DEBUG nova.network.neutron [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updated VIF entry in instance network info cache for port ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:04 compute-0 nova_compute[256940]: 2025-10-02 13:09:04.980 2 DEBUG nova.network.neutron [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updating instance_info_cache with network_info: [{"id": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "address": "fa:16:3e:08:8b:bd", "network": {"id": "12323951-1705-444a-814b-bbe6a34b1313", "bridge": "br-int", "label": "tempest-network-smoke--1448739523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac4b6a62-57", "ovs_interfaceid": "ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:05 compute-0 nova_compute[256940]: 2025-10-02 13:09:05.009 2 DEBUG oslo_concurrency.lockutils [req-791c4759-e616-4884-a729-5fa32c9ed417 req-2153ed5c-c886-4d37-b3ab-c589b126b99c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-45fff123-eb86-4b5c-9b28-8f189f2572be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:05 compute-0 podman[385103]: 2025-10-02 13:09:05.025029751 +0000 UTC m=+0.285088981 container cleanup 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:05 compute-0 podman[385162]: 2025-10-02 13:09:05.148187758 +0000 UTC m=+0.103571880 container remove 2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.153 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c92129f6-63c5-471b-ba42-88e66661bc20]: (4, ('Thu Oct  2 01:09:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313 (2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003)\n2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003\nThu Oct  2 01:09:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-12323951-1705-444a-814b-bbe6a34b1313 (2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003)\n2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 systemd[1]: libpod-conmon-2a18b4578680d926e3bbea31a59e9a0cfe18d807905879bc74cb0ed4fb51e003.scope: Deactivated successfully.
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.155 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[45dab37c-897c-4541-827b-09b01f45c111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.156 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12323951-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:05 compute-0 kernel: tap12323951-10: left promiscuous mode
Oct 02 13:09:05 compute-0 nova_compute[256940]: 2025-10-02 13:09:05.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-0 nova_compute[256940]: 2025-10-02 13:09:05.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.176 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f465be23-4f94-479d-97c3-59aa0e5d9ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.205 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[60fe7b98-c9a2-41f4-b62e-939807aaa65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.206 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc479b3-d401-4d36-b5b7-60ea0349b4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.221 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[219bc05e-e8dc-4d0d-a125-4d285372ace8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841180, 'reachable_time': 23789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385175, 'error': None, 'target': 'ovnmeta-12323951-1705-444a-814b-bbe6a34b1313', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d12323951\x2d1705\x2d444a\x2d814b\x2dbbe6a34b1313.mount: Deactivated successfully.
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.224 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12323951-1705-444a-814b-bbe6a34b1313 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:09:05 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:05.224 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[3d68ce64-198c-418f-b1f1-ad45546ccafc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:05.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:05 compute-0 ceph-mon[73668]: pgmap v2893: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:09:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.256 2 INFO nova.virt.libvirt.driver [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Deleting instance files /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be_del
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.257 2 INFO nova.virt.libvirt.driver [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Deletion of /var/lib/nova/instances/45fff123-eb86-4b5c-9b28-8f189f2572be_del complete
Oct 02 13:09:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 481 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 4.6 MiB/s wr, 161 op/s
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.324 2 INFO nova.compute.manager [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Took 2.66 seconds to destroy the instance on the hypervisor.
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.325 2 DEBUG oslo.service.loopingcall [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.325 2 DEBUG nova.compute.manager [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.325 2 DEBUG nova.network.neutron [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.978 2 DEBUG nova.compute.manager [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.978 2 DEBUG oslo_concurrency.lockutils [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.978 2 DEBUG oslo_concurrency.lockutils [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.978 2 DEBUG oslo_concurrency.lockutils [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.979 2 DEBUG nova.compute.manager [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] No waiting events found dispatching network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:06 compute-0 nova_compute[256940]: 2025-10-02 13:09:06.979 2 WARNING nova.compute.manager [req-92cdbb1f-7259-418b-8abc-99b08a428211 req-72f6f380-9c9d-4200-a590-23eb4f2c7f32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received unexpected event network-vif-plugged-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 for instance with vm_state active and task_state deleting.
Oct 02 13:09:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.245 2 DEBUG nova.network.neutron [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.261 2 INFO nova.compute.manager [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Took 0.94 seconds to deallocate network for instance.
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.312 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.313 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.499 2 DEBUG oslo_concurrency.processutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.798 2 DEBUG nova.compute.manager [req-4df49915-0a73-4d63-8488-9b7614bc861b req-d8a6a6b8-113f-4203-93d0-62b233666281 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Received event network-vif-deleted-ac4b6a62-57e9-43a3-a88c-ecf33ad1c0c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403046575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-0 ceph-mon[73668]: pgmap v2894: 305 pgs: 305 active+clean; 481 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 4.6 MiB/s wr, 161 op/s
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.936 2 DEBUG oslo_concurrency.processutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.944 2 DEBUG nova.compute.provider_tree [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.960 2 DEBUG nova.scheduler.client.report [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:07 compute-0 nova_compute[256940]: 2025-10-02 13:09:07.984 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:08 compute-0 nova_compute[256940]: 2025-10-02 13:09:08.053 2 INFO nova.scheduler.client.report [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 45fff123-eb86-4b5c-9b28-8f189f2572be
Oct 02 13:09:08 compute-0 nova_compute[256940]: 2025-10-02 13:09:08.114 2 DEBUG oslo_concurrency.lockutils [None req-1c33ea3e-0288-4868-9b5e-5c95b93687d8 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "45fff123-eb86-4b5c-9b28-8f189f2572be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 966 KiB/s rd, 4.1 MiB/s wr, 201 op/s
Oct 02 13:09:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2403046575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2569559011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3346000006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:09 compute-0 nova_compute[256940]: 2025-10-02 13:09:09.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:09 compute-0 ceph-mon[73668]: pgmap v2895: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 966 KiB/s rd, 4.1 MiB/s wr, 201 op/s
Oct 02 13:09:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 471 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Oct 02 13:09:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:12 compute-0 ceph-mon[73668]: pgmap v2896: 305 pgs: 305 active+clean; 471 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Oct 02 13:09:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.3 MiB/s wr, 277 op/s
Oct 02 13:09:12 compute-0 sudo[385204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:12 compute-0 sudo[385204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:12 compute-0 sudo[385204]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:12 compute-0 sudo[385229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:12 compute-0 sudo[385229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:12 compute-0 sudo[385229]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:12 compute-0 nova_compute[256940]: 2025-10-02 13:09:12.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:13 compute-0 ceph-mon[73668]: pgmap v2897: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.3 MiB/s wr, 277 op/s
Oct 02 13:09:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:13.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Oct 02 13:09:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Oct 02 13:09:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Oct 02 13:09:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 6.6 MiB/s wr, 274 op/s
Oct 02 13:09:14 compute-0 nova_compute[256940]: 2025-10-02 13:09:14.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:15 compute-0 ceph-mon[73668]: osdmap e364: 3 total, 3 up, 3 in
Oct 02 13:09:15 compute-0 ceph-mon[73668]: pgmap v2899: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 6.6 MiB/s wr, 274 op/s
Oct 02 13:09:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:15.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 439 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 294 op/s
Oct 02 13:09:16 compute-0 podman[385256]: 2025-10-02 13:09:16.387304177 +0000 UTC m=+0.054837354 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:09:16 compute-0 podman[385255]: 2025-10-02 13:09:16.387469591 +0000 UTC m=+0.059926336 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:09:16 compute-0 nova_compute[256940]: 2025-10-02 13:09:16.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:17 compute-0 nova_compute[256940]: 2025-10-02 13:09:17.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:17.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:17 compute-0 ceph-mon[73668]: pgmap v2900: 305 pgs: 305 active+clean; 439 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 294 op/s
Oct 02 13:09:17 compute-0 nova_compute[256940]: 2025-10-02 13:09:17.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 268 op/s
Oct 02 13:09:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3267789736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:19.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:19 compute-0 ceph-mon[73668]: pgmap v2901: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 268 op/s
Oct 02 13:09:19 compute-0 nova_compute[256940]: 2025-10-02 13:09:19.902 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410544.9010684, 45fff123-eb86-4b5c-9b28-8f189f2572be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:19 compute-0 nova_compute[256940]: 2025-10-02 13:09:19.903 2 INFO nova.compute.manager [-] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] VM Stopped (Lifecycle Event)
Oct 02 13:09:19 compute-0 nova_compute[256940]: 2025-10-02 13:09:19.928 2 DEBUG nova.compute.manager [None req-a423829b-8802-44ef-8eb1-56bef69f9e0b - - - - - -] [instance: 45fff123-eb86-4b5c-9b28-8f189f2572be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:19 compute-0 nova_compute[256940]: 2025-10-02 13:09:19.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Oct 02 13:09:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.1 MiB/s wr, 227 op/s
Oct 02 13:09:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Oct 02 13:09:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Oct 02 13:09:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:21 compute-0 ceph-mon[73668]: pgmap v2902: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.1 MiB/s wr, 227 op/s
Oct 02 13:09:21 compute-0 ceph-mon[73668]: osdmap e365: 3 total, 3 up, 3 in
Oct 02 13:09:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:21.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 245 op/s
Oct 02 13:09:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1024354763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:22 compute-0 nova_compute[256940]: 2025-10-02 13:09:22.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:23 compute-0 nova_compute[256940]: 2025-10-02 13:09:23.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:23 compute-0 ceph-mon[73668]: pgmap v2904: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 245 op/s
Oct 02 13:09:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Oct 02 13:09:24 compute-0 nova_compute[256940]: 2025-10-02 13:09:24.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:25 compute-0 nova_compute[256940]: 2025-10-02 13:09:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:25 compute-0 nova_compute[256940]: 2025-10-02 13:09:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:25 compute-0 nova_compute[256940]: 2025-10-02 13:09:25.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:09:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:25 compute-0 ceph-mon[73668]: pgmap v2905: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Oct 02 13:09:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1465649545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/533345457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Oct 02 13:09:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3051948380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2098822834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:26.505 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:26.506 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:26.506 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:27 compute-0 ceph-mon[73668]: pgmap v2906: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Oct 02 13:09:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:27 compute-0 nova_compute[256940]: 2025-10-02 13:09:27.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.243 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.244 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.244 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 862 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191147255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.712 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.864 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.866 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4200MB free_disk=20.851539611816406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.867 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.867 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:09:28
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', 'backups', 'vms', '.mgr', 'default.rgw.control']
Oct 02 13:09:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.932 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.933 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:09:28 compute-0 nova_compute[256940]: 2025-10-02 13:09:28.952 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885161977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:29 compute-0 ceph-mon[73668]: pgmap v2907: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 862 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Oct 02 13:09:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/191147255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2885161977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.446 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.452 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.465 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.484 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.485 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:09:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:09:29 compute-0 nova_compute[256940]: 2025-10-02 13:09:29.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 805 KiB/s rd, 933 KiB/s wr, 101 op/s
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.485 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.486 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.796 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.797 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.836 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.905 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.905 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.918 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.918 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.920 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.925 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:09:30 compute-0 nova_compute[256940]: 2025-10-02 13:09:30.925 2 INFO nova.compute.claims [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.022 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.079 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:31 compute-0 ceph-mon[73668]: pgmap v2908: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 805 KiB/s rd, 933 KiB/s wr, 101 op/s
Oct 02 13:09:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476324504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.512 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.519 2 DEBUG nova.compute.provider_tree [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.533 2 DEBUG nova.scheduler.client.report [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.550 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.550 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.553 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.559 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.559 2 INFO nova.compute.claims [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:09:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.616 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.616 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.637 2 INFO nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.655 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.689 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.774 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.775 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.776 2 INFO nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Creating image(s)
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.803 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.830 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.851 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.856 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.918 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.919 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.919 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.920 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.945 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:31 compute-0 nova_compute[256940]: 2025-10-02 13:09:31.950 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 89e1718a-24fd-472a-bbae-aedba874e3a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164258632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.174 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.180 2 DEBUG nova.compute.provider_tree [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.194 2 DEBUG nova.scheduler.client.report [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.201 2 DEBUG nova.policy [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.215 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.216 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.248 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 89e1718a-24fd-472a-bbae-aedba874e3a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.285 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.286 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.324 2 INFO nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:09:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 802 KiB/s wr, 90 op/s
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.342 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.382 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:09:32 compute-0 podman[385521]: 2025-10-02 13:09:32.393834457 +0000 UTC m=+0.066278212 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:09:32 compute-0 podman[385525]: 2025-10-02 13:09:32.412234204 +0000 UTC m=+0.077526983 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:09:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3476324504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1164258632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.474 2 DEBUG nova.objects.instance [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 89e1718a-24fd-472a-bbae-aedba874e3a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.490 2 DEBUG nova.policy [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.498 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.499 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Ensure instance console log exists: /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.499 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.499 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.500 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.541 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.542 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.542 2 INFO nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Creating image(s)
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.569 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.596 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.622 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.626 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.688 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.689 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.690 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.690 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.720 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.723 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 469a928f-d7cb-4add-9410-629caac3f6f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:32 compute-0 nova_compute[256940]: 2025-10-02 13:09:32.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:32 compute-0 sudo[385700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:33 compute-0 sudo[385700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:33 compute-0 sudo[385700]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.013 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 469a928f-d7cb-4add-9410-629caac3f6f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:33 compute-0 sudo[385725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:33 compute-0 sudo[385725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:33 compute-0 sudo[385725]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.105 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.215 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.215 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.220 2 DEBUG nova.objects.instance [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.236 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.236 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Ensure instance console log exists: /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.236 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.237 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.237 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:33 compute-0 ceph-mon[73668]: pgmap v2909: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 802 KiB/s wr, 90 op/s
Oct 02 13:09:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.657 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Successfully created port: 8ac4874f-4e37-469f-8df1-1f424336b299 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:09:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:33.702 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:33 compute-0 nova_compute[256940]: 2025-10-02 13:09:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:33 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:33.704 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:09:34 compute-0 nova_compute[256940]: 2025-10-02 13:09:34.094 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Successfully created port: 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:09:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 39 KiB/s wr, 49 op/s
Oct 02 13:09:34 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:34.706 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:34 compute-0 nova_compute[256940]: 2025-10-02 13:09:34.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.295 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Successfully updated port: 8ac4874f-4e37-469f-8df1-1f424336b299 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.312 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.312 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.312 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.342932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575342986, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 695, "num_deletes": 258, "total_data_size": 831156, "memory_usage": 845288, "flush_reason": "Manual Compaction"}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575349410, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 821686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64600, "largest_seqno": 65294, "table_properties": {"data_size": 818091, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8425, "raw_average_key_size": 19, "raw_value_size": 810724, "raw_average_value_size": 1850, "num_data_blocks": 61, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410531, "oldest_key_time": 1759410531, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 6507 microseconds, and 2797 cpu microseconds.
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.349445) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 821686 bytes OK
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.349461) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.350725) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.350735) EVENT_LOG_v1 {"time_micros": 1759410575350732, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.350753) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 827533, prev total WAL file size 827533, number of live WAL files 2.
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.351231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(802KB)], [146(10173KB)]
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575351272, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11239615, "oldest_snapshot_seqno": -1}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9078 keys, 11112301 bytes, temperature: kUnknown
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575420715, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11112301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11054894, "index_size": 33614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 239836, "raw_average_key_size": 26, "raw_value_size": 10896986, "raw_average_value_size": 1200, "num_data_blocks": 1279, "num_entries": 9078, "num_filter_entries": 9078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.421546) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11112301 bytes
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.423170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 158.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(27.2) write-amplify(13.5) OK, records in: 9608, records dropped: 530 output_compression: NoCompression
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.423198) EVENT_LOG_v1 {"time_micros": 1759410575423186, "job": 90, "event": "compaction_finished", "compaction_time_micros": 70007, "compaction_time_cpu_micros": 27533, "output_level": 6, "num_output_files": 1, "total_output_size": 11112301, "num_input_records": 9608, "num_output_records": 9078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575423790, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575426785, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.351191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-0 ceph-mon[73668]: pgmap v2910: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 39 KiB/s wr, 49 op/s
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.515 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:09:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:35.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.851 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Successfully updated port: 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.916 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.916 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:35 compute-0 nova_compute[256940]: 2025-10-02 13:09:35.916 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.112 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.233 2 DEBUG nova.compute.manager [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.233 2 DEBUG nova.compute.manager [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.234 2 DEBUG oslo_concurrency.lockutils [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.284 2 DEBUG nova.network.neutron [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 437 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.375 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.376 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Instance network_info: |[{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.378 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Start _get_guest_xml network_info=[{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.382 2 WARNING nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.386 2 DEBUG nova.virt.libvirt.host [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.386 2 DEBUG nova.virt.libvirt.host [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.389 2 DEBUG nova.virt.libvirt.host [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.389 2 DEBUG nova.virt.libvirt.host [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.391 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.391 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.392 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.392 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.392 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.392 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.392 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.393 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.393 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.393 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.393 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.394 2 DEBUG nova.virt.hardware [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.396 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1801784853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.671 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.671 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing instance network info cache due to event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.671 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.672 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.672 2 DEBUG nova.network.neutron [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329906593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.878 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.902 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.905 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:36 compute-0 nova_compute[256940]: 2025-10-02 13:09:36.956 2 DEBUG nova.network.neutron [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.105 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.105 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance network_info: |[{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.106 2 DEBUG oslo_concurrency.lockutils [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.106 2 DEBUG nova.network.neutron [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.109 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start _get_guest_xml network_info=[{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.114 2 WARNING nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.117 2 DEBUG nova.virt.libvirt.host [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.118 2 DEBUG nova.virt.libvirt.host [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.121 2 DEBUG nova.virt.libvirt.host [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.121 2 DEBUG nova.virt.libvirt.host [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.123 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.123 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.124 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.124 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.124 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.125 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.125 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.125 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.126 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.126 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.126 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.126 2 DEBUG nova.virt.hardware [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.131 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:09:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451403253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.351 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.352 2 DEBUG nova.virt.libvirt.vif [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=198,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-h0ucgi2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:31Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=89e1718a-24fd-472a-bbae-aedba874e3a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.352 2 DEBUG nova.network.os_vif_util [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.353 2 DEBUG nova.network.os_vif_util [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.354 2 DEBUG nova.objects.instance [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89e1718a-24fd-472a-bbae-aedba874e3a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010470436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.548 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.549 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.549 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.565 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:37.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.588 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.596 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.631 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <uuid>89e1718a-24fd-472a-bbae-aedba874e3a0</uuid>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <name>instance-000000c6</name>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535</nova:name>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:09:36</nova:creationTime>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <nova:port uuid="8ac4874f-4e37-469f-8df1-1f424336b299">
Oct 02 13:09:37 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <system>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="serial">89e1718a-24fd-472a-bbae-aedba874e3a0</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="uuid">89e1718a-24fd-472a-bbae-aedba874e3a0</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </system>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <os>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </os>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <features>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </features>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/89e1718a-24fd-472a-bbae-aedba874e3a0_disk">
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </source>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config">
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </source>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:09:37 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:7e:95:7b"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <target dev="tap8ac4874f-4e"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/console.log" append="off"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <video>
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </video>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:09:37 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:09:37 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:09:37 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:09:37 compute-0 nova_compute[256940]: </domain>
Oct 02 13:09:37 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.632 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Preparing to wait for external event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.632 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.633 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.633 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.634 2 DEBUG nova.virt.libvirt.vif [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=198,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-h0ucgi2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:31Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=89e1718a-24fd-472a-bbae-aedba874e3a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.634 2 DEBUG nova.network.os_vif_util [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.635 2 DEBUG nova.network.os_vif_util [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.636 2 DEBUG os_vif [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.640 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ac4874f-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.641 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ac4874f-4e, col_values=(('external_ids', {'iface-id': '8ac4874f-4e37-469f-8df1-1f424336b299', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:95:7b', 'vm-uuid': '89e1718a-24fd-472a-bbae-aedba874e3a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:37 compute-0 NetworkManager[44981]: <info>  [1759410577.6430] manager: (tap8ac4874f-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.653 2 INFO os_vif [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e')
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.751 2 DEBUG nova.network.neutron [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updated VIF entry in instance network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.751 2 DEBUG nova.network.neutron [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:37 compute-0 ceph-mon[73668]: pgmap v2911: 305 pgs: 305 active+clean; 437 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Oct 02 13:09:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/329906593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2451403253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.972 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:37 compute-0 nova_compute[256940]: 2025-10-02 13:09:37.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224616448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.010 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.012 2 DEBUG nova.virt.libvirt.vif [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.012 2 DEBUG nova.network.os_vif_util [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.012 2 DEBUG nova.network.os_vif_util [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.013 2 DEBUG nova.objects.instance [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.076 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.077 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.077 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:7e:95:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.077 2 INFO nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Using config drive
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.106 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.181 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <uuid>469a928f-d7cb-4add-9410-629caac3f6f8</uuid>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <name>instance-000000c7</name>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:name>multiattach-server-0</nova:name>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:09:37</nova:creationTime>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <nova:port uuid="84c1a249-c4f5-48bf-835d-bbbc75fefeb0">
Oct 02 13:09:38 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <system>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="serial">469a928f-d7cb-4add-9410-629caac3f6f8</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="uuid">469a928f-d7cb-4add-9410-629caac3f6f8</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </system>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <os>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </os>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <features>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </features>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk">
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </source>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk.config">
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </source>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:09:38 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:67:d9:a1"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <target dev="tap84c1a249-c4"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/console.log" append="off"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <video>
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </video>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:09:38 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:09:38 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:09:38 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:09:38 compute-0 nova_compute[256940]: </domain>
Oct 02 13:09:38 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.183 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Preparing to wait for external event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.183 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.184 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.184 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.185 2 DEBUG nova.virt.libvirt.vif [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.185 2 DEBUG nova.network.os_vif_util [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.186 2 DEBUG nova.network.os_vif_util [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.186 2 DEBUG os_vif [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1a249-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84c1a249-c4, col_values=(('external_ids', {'iface-id': '84c1a249-c4f5-48bf-835d-bbbc75fefeb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:d9:a1', 'vm-uuid': '469a928f-d7cb-4add-9410-629caac3f6f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:38 compute-0 NetworkManager[44981]: <info>  [1759410578.1935] manager: (tap84c1a249-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.200 2 INFO os_vif [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4')
Oct 02 13:09:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.354 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.355 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.355 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:67:d9:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.356 2 INFO nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Using config drive
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.384 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4010470436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1224616448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.940 2 DEBUG nova.network.neutron [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.940 2 DEBUG nova.network.neutron [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:38 compute-0 nova_compute[256940]: 2025-10-02 13:09:38.962 2 DEBUG oslo_concurrency.lockutils [req-ddea5d5c-c446-4269-9aa1-888e766c61d8 req-29f59b4b-6826-4d4d-a6e7-e66907fe2b5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.087 2 INFO nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Creating config drive at /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.093 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8y0bzfms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.135 2 INFO nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Creating config drive at /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.140 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxmybaajz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.227 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8y0bzfms" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.258 2 DEBUG nova.storage.rbd_utils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 469a928f-d7cb-4add-9410-629caac3f6f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.263 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config 469a928f-d7cb-4add-9410-629caac3f6f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.303 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxmybaajz" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.332 2 DEBUG nova.storage.rbd_utils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.335 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config 89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.874 2 DEBUG oslo_concurrency.processutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config 469a928f-d7cb-4add-9410-629caac3f6f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.875 2 INFO nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Deleting local config drive /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/disk.config because it was imported into RBD.
Oct 02 13:09:39 compute-0 ceph-mon[73668]: pgmap v2912: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 02 13:09:39 compute-0 kernel: tap84c1a249-c4: entered promiscuous mode
Oct 02 13:09:39 compute-0 NetworkManager[44981]: <info>  [1759410579.9355] manager: (tap84c1a249-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.937 2 DEBUG oslo_concurrency.processutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config 89e1718a-24fd-472a-bbae-aedba874e3a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.937 2 INFO nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Deleting local config drive /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0/disk.config because it was imported into RBD.
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:39 compute-0 ovn_controller[148123]: 2025-10-02T13:09:39Z|00982|binding|INFO|Claiming lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for this chassis.
Oct 02 13:09:39 compute-0 ovn_controller[148123]: 2025-10-02T13:09:39Z|00983|binding|INFO|84c1a249-c4f5-48bf-835d-bbbc75fefeb0: Claiming fa:16:3e:67:d9:a1 10.100.0.9
Oct 02 13:09:39 compute-0 nova_compute[256940]: 2025-10-02 13:09:39.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:39 compute-0 NetworkManager[44981]: <info>  [1759410579.9603] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Oct 02 13:09:39 compute-0 NetworkManager[44981]: <info>  [1759410579.9608] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.962 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:d9:a1 10.100.0.9'], port_security=['fa:16:3e:67:d9:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '469a928f-d7cb-4add-9410-629caac3f6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=84c1a249-c4f5-48bf-835d-bbbc75fefeb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.963 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.965 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.976 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c25f8ca9-f140-4f6e-a958-d3209770dff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.977 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9001b9c-b1 in ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:09:39 compute-0 systemd-udevd[386108]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.980 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9001b9c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:09:39 compute-0 systemd-machined[210927]: New machine qemu-100-instance-000000c7.
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.981 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[749530de-5c85-4f89-b4c5-7f4bfe58ade4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.982 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[508366ff-4b34-45d9-9187-b865f1942922]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:39 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:39.992 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a577cd15-564d-4273-b878-7d3b70ed2ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:39 compute-0 NetworkManager[44981]: <info>  [1759410579.9995] device (tap84c1a249-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.0002] device (tap84c1a249-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-000000c7.
Oct 02 13:09:40 compute-0 sudo[386075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-0 sudo[386075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 sudo[386075]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.022 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2e073c7b-832a-4c15-873c-281a9f16711d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.0321] manager: (tap8ac4874f-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.053 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8847a479-fd6e-46cc-8ed5-0ca9cef243ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.060 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6150a15-a271-4be3-ac8e-f2d5c635ace0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.0615] manager: (tapd9001b9c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Oct 02 13:09:40 compute-0 sudo[386123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:40 compute-0 sudo[386123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 sudo[386123]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 systemd-machined[210927]: New machine qemu-101-instance-000000c6.
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.104 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6abb0b-1234-42a4-9847-bbe29dbade6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.107 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d640ba-d487-4315-b87b-233c96493702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-000000c6.
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.1345] device (tapd9001b9c-b0): carrier: link connected
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.134 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[3de7ac34-d1d6-47d7-b53d-78a0ac477ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 sudo[386177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-0 sudo[386177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 sudo[386177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.156 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2db2ba34-6f93-4550-9f47-5fb7f8054593]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847561, 'reachable_time': 35591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386208, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[05ec5421-2530-4800-b19b-adba2b18daeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847561, 'tstamp': 847561}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386219, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 sudo[386211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:09:40 compute-0 sudo[386211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 kernel: tap8ac4874f-4e: entered promiscuous mode
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.2050] device (tap8ac4874f-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.201 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[06c3febf-7c63-44be-a705-2493e3f599fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847561, 'reachable_time': 35591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386236, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.2066] device (tap8ac4874f-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00984|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 up in Southbound
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00985|binding|INFO|Claiming lport 8ac4874f-4e37-469f-8df1-1f424336b299 for this chassis.
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00986|binding|INFO|8ac4874f-4e37-469f-8df1-1f424336b299: Claiming fa:16:3e:7e:95:7b 10.100.0.12
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00987|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 ovn-installed in OVS
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.276 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:95:7b 10.100.0.12'], port_security=['fa:16:3e:7e:95:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89e1718a-24fd-472a-bbae-aedba874e3a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e682aa5-16aa-4884-9ae4-6ca813b9baae 70184290-30f9-436f-8b75-f26eb92f4dbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8ac4874f-4e37-469f-8df1-1f424336b299) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00988|binding|INFO|Setting lport 8ac4874f-4e37-469f-8df1-1f424336b299 ovn-installed in OVS
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00989|binding|INFO|Setting lport 8ac4874f-4e37-469f-8df1-1f424336b299 up in Southbound
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.297 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[78e4aa8c-533c-4d9a-94b0-8fa8d3a5c1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 02 13:09:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.365 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d621842-48f7-409f-9163-900252d4a71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.366 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.366 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.366 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 kernel: tapd9001b9c-b0: entered promiscuous mode
Oct 02 13:09:40 compute-0 NetworkManager[44981]: <info>  [1759410580.3690] manager: (tapd9001b9c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.375 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 ovn_controller[148123]: 2025-10-02T13:09:40Z|00990|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.379 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.380 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8dc378-0b0a-44c2-b5e1-ab9af983b656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.380 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:09:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:40.382 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'env', 'PROCESS_TAG=haproxy-d9001b9c-bca6-4085-a954-1414269e31bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9001b9c-bca6-4085-a954-1414269e31bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.428 2 DEBUG nova.compute.manager [req-6e162b16-a8e7-469f-b2a4-970a96071d9a req-d5ce87f4-abd1-4f11-befa-c381ba0865a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.429 2 DEBUG oslo_concurrency.lockutils [req-6e162b16-a8e7-469f-b2a4-970a96071d9a req-d5ce87f4-abd1-4f11-befa-c381ba0865a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.429 2 DEBUG oslo_concurrency.lockutils [req-6e162b16-a8e7-469f-b2a4-970a96071d9a req-d5ce87f4-abd1-4f11-befa-c381ba0865a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.429 2 DEBUG oslo_concurrency.lockutils [req-6e162b16-a8e7-469f-b2a4-970a96071d9a req-d5ce87f4-abd1-4f11-befa-c381ba0865a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.430 2 DEBUG nova.compute.manager [req-6e162b16-a8e7-469f-b2a4-970a96071d9a req-d5ce87f4-abd1-4f11-befa-c381ba0865a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Processing event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:40 compute-0 sudo[386211]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:09:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.483 2 DEBUG nova.compute.manager [req-717638f8-56b9-4557-9dc7-f5ecf49728f8 req-c9a6c14c-3250-4974-9bda-de4456909555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.486 2 DEBUG oslo_concurrency.lockutils [req-717638f8-56b9-4557-9dc7-f5ecf49728f8 req-c9a6c14c-3250-4974-9bda-de4456909555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.486 2 DEBUG oslo_concurrency.lockutils [req-717638f8-56b9-4557-9dc7-f5ecf49728f8 req-c9a6c14c-3250-4974-9bda-de4456909555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.487 2 DEBUG oslo_concurrency.lockutils [req-717638f8-56b9-4557-9dc7-f5ecf49728f8 req-c9a6c14c-3250-4974-9bda-de4456909555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:40 compute-0 nova_compute[256940]: 2025-10-02 13:09:40.487 2 DEBUG nova.compute.manager [req-717638f8-56b9-4557-9dc7-f5ecf49728f8 req-c9a6c14c-3250-4974-9bda-de4456909555 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Processing event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:09:40 compute-0 sudo[386344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-0 sudo[386344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 sudo[386344]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 sudo[386375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:40 compute-0 sudo[386375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:40 compute-0 sudo[386375]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006327208961474037 of space, bias 1.0, pg target 1.898162688442211 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004320284873441281 of space, bias 1.0, pg target 1.291765177158943 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:09:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Oct 02 13:09:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:40 compute-0 sudo[386411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-0 sudo[386411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 sudo[386411]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-0 sudo[386459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:09:40 compute-0 sudo[386459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-0 podman[386445]: 2025-10-02 13:09:40.771740446 +0000 UTC m=+0.024758083 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:09:40 compute-0 podman[386445]: 2025-10-02 13:09:40.936304438 +0000 UTC m=+0.189322065 container create 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.022 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.0218952, 89e1718a-24fd-472a-bbae-aedba874e3a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.023 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] VM Started (Lifecycle Event)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.024 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.028 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.033 2 INFO nova.virt.libvirt.driver [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Instance spawned successfully.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.033 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.050 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.059 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.064 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.067 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.067 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.068 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.068 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.068 2 DEBUG nova.virt.libvirt.driver [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.092 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.093 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.0221398, 89e1718a-24fd-472a-bbae-aedba874e3a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.093 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] VM Paused (Lifecycle Event)
Oct 02 13:09:41 compute-0 systemd[1]: Started libpod-conmon-6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771.scope.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.109 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.112 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.116 2 INFO nova.virt.libvirt.driver [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance spawned successfully.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.116 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c148f8bc9180b04142a99c992bbffdb152c02c9808de1d1171d2beea23ae55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.125 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.128 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.030078, 89e1718a-24fd-472a-bbae-aedba874e3a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.141 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] VM Resumed (Lifecycle Event)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.150 2 INFO nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Took 9.38 seconds to spawn the instance on the hypervisor.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.151 2 DEBUG nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.163 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.164 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.165 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.165 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.166 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.166 2 DEBUG nova.virt.libvirt.driver [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.170 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.178 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:41 compute-0 podman[386445]: 2025-10-02 13:09:41.201930183 +0000 UTC m=+0.454947800 container init 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:09:41 compute-0 podman[386445]: 2025-10-02 13:09:41.207784565 +0000 UTC m=+0.460802192 container start 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:09:41 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [NOTICE]   (386507) : New worker (386509) forked
Oct 02 13:09:41 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [NOTICE]   (386507) : Loading success.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.234 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.235 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.10832, 469a928f-d7cb-4add-9410-629caac3f6f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.235 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Started (Lifecycle Event)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.264 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.274 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.276 2 INFO nova.compute.manager [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Took 10.38 seconds to build instance.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.279 2 INFO nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Took 8.74 seconds to spawn the instance on the hypervisor.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.279 2 DEBUG nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.303 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.304 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.1084168, 469a928f-d7cb-4add-9410-629caac3f6f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.304 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Paused (Lifecycle Event)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.306 2 DEBUG oslo_concurrency.lockutils [None req-a93a38cc-ab8d-4ae4-b110-fe23daa87105 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.333 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.346 2 INFO nova.compute.manager [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Took 10.35 seconds to build instance.
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.361 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410581.1114771, 469a928f-d7cb-4add-9410-629caac3f6f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.362 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Resumed (Lifecycle Event)
Oct 02 13:09:41 compute-0 sudo[386459]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.372 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8ac4874f-4e37-469f-8df1-1f424336b299 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 unbound from our chassis
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.377 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.377 2 DEBUG oslo_concurrency.lockutils [None req-61f5bca9-a844-4654-87a6-275f9cb457a6 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.387 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.389 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4778ba-25b8-4a8f-8253-887f4056e514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.390 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54a08602-f1 in ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.392 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54a08602-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.392 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aab80c39-485d-4eb3-91a8-96cfe6dffbee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.393 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cfcec7-d89f-4953-bb36-b75ace68e33d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.397 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.407 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[b7609a87-1243-421d-8573-6816a070f6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.432 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[35ac7ff3-e828-4ddf-8f9c-c7bfafc5e217]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.468 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e77545e4-4eb7-43b6-9e2d-b4c1e066c7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 NetworkManager[44981]: <info>  [1759410581.4770] manager: (tap54a08602-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Oct 02 13:09:41 compute-0 systemd-udevd[386170]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.480 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9885b7c2-b575-422e-89ed-0a2c1b755192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.514 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2b88dc-6e8d-4fbe-a275-39dd890d0903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.518 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b32ee81a-7e1b-44f5-8c74-d21e426529a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 NetworkManager[44981]: <info>  [1759410581.5386] device (tap54a08602-f0): carrier: link connected
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.547 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[7a44355b-99a7-4c4a-bae6-9f6b0bee2e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.563 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd8afb9-17b7-489a-857c-93aa742044bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847702, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386542, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ceph-mon[73668]: pgmap v2913: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 02 13:09:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.579 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5725b1-d867-475b-8602-96028041bbc7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:cf34'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847702, 'tstamp': 847702}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386543, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.596 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[04a2e3b3-0d6e-4ff1-9b5e-2ed3c56989a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847702, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386544, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.626 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e33f196c-4176-479b-8ab3-d204b6e5dfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.710 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d7b581-8a0f-4d0e-b96f-15f5b760712c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.712 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.712 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.712 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54a08602-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:41 compute-0 NetworkManager[44981]: <info>  [1759410581.7152] manager: (tap54a08602-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 02 13:09:41 compute-0 kernel: tap54a08602-f0: entered promiscuous mode
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.717 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54a08602-f0, col_values=(('external_ids', {'iface-id': '0e7b3164-07ea-4170-8a8f-05633e14550f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:41 compute-0 ovn_controller[148123]: 2025-10-02T13:09:41Z|00991|binding|INFO|Releasing lport 0e7b3164-07ea-4170-8a8f-05633e14550f from this chassis (sb_readonly=0)
Oct 02 13:09:41 compute-0 nova_compute[256940]: 2025-10-02 13:09:41.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.736 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.737 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[057c8bd2-5b88-46c1-89a4-8ff2fb90bcbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.738 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:09:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:09:41.740 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'env', 'PROCESS_TAG=haproxy-54a08602-f5b6-41e1-816c-2c122542a2b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54a08602-f5b6-41e1-816c-2c122542a2b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 17e6dd9e-14f2-43c2-a142-5e7f7867e263 does not exist
Oct 02 13:09:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6854221b-abec-4179-8173-5018faf5f98c does not exist
Oct 02 13:09:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 114baa33-ebdb-48b4-9986-72ba207cd3e8 does not exist
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:09:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:09:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:41 compute-0 sudo[386555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:41 compute-0 sudo[386555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:41 compute-0 sudo[386555]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:41 compute-0 sudo[386580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:41 compute-0 sudo[386580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:41 compute-0 sudo[386580]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:42 compute-0 sudo[386605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:42 compute-0 sudo[386605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:42 compute-0 sudo[386605]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:42 compute-0 sudo[386642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:09:42 compute-0 sudo[386642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:42 compute-0 podman[386671]: 2025-10-02 13:09:42.143082573 +0000 UTC m=+0.073387906 container create acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:09:42 compute-0 podman[386671]: 2025-10-02 13:09:42.089214055 +0000 UTC m=+0.019519418 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:09:42 compute-0 systemd[1]: Started libpod-conmon-acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8.scope.
Oct 02 13:09:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a781b0ec9f339006cf4a0b78d2c056c1737fe57197d50e06cc80d7b1a4860d32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:42 compute-0 podman[386671]: 2025-10-02 13:09:42.241048796 +0000 UTC m=+0.171354149 container init acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:09:42 compute-0 podman[386671]: 2025-10-02 13:09:42.248991102 +0000 UTC m=+0.179296425 container start acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:09:42 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [NOTICE]   (386705) : New worker (386714) forked
Oct 02 13:09:42 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [NOTICE]   (386705) : Loading success.
Oct 02 13:09:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 612 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.47887672 +0000 UTC m=+0.059938827 container create b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:42 compute-0 systemd[1]: Started libpod-conmon-b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9.scope.
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.523 2 DEBUG nova.compute.manager [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.523 2 DEBUG oslo_concurrency.lockutils [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.523 2 DEBUG oslo_concurrency.lockutils [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.524 2 DEBUG oslo_concurrency.lockutils [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.524 2 DEBUG nova.compute.manager [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.524 2 WARNING nova.compute.manager [req-906edf96-63cf-430d-9f12-e13ea50233df req-9ca47cb3-b489-4669-adc1-5052c01aff96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state None.
Oct 02 13:09:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.441329725 +0000 UTC m=+0.022391852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.582 2 DEBUG nova.compute.manager [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.582 2 DEBUG oslo_concurrency.lockutils [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.582 2 DEBUG oslo_concurrency.lockutils [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.582 2 DEBUG oslo_concurrency.lockutils [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.583 2 DEBUG nova.compute.manager [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] No waiting events found dispatching network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.583 2 WARNING nova.compute.manager [req-4bc74f97-f6e0-48ae-85ce-363e6e4b6a86 req-75586fbb-ece2-41c3-b836-f7065a2221b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received unexpected event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 for instance with vm_state active and task_state None.
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.598816833 +0000 UTC m=+0.179878960 container init b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:09:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.608813353 +0000 UTC m=+0.189875450 container start b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:09:42 compute-0 nice_kepler[386760]: 167 167
Oct 02 13:09:42 compute-0 systemd[1]: libpod-b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9.scope: Deactivated successfully.
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.616215665 +0000 UTC m=+0.197277782 container attach b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.616566634 +0000 UTC m=+0.197628741 container died b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b36745314ccce2c6b4db3976317cb881e90f67f59c77beff7d01ebddf91844bb-merged.mount: Deactivated successfully.
Oct 02 13:09:42 compute-0 podman[386743]: 2025-10-02 13:09:42.777577583 +0000 UTC m=+0.358639690 container remove b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:09:42 compute-0 systemd[1]: libpod-conmon-b20109cd444027b02219f856cb4892a61e234acabe4f413068f6b3e496dc71f9.scope: Deactivated successfully.
Oct 02 13:09:42 compute-0 nova_compute[256940]: 2025-10-02 13:09:42.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:42 compute-0 podman[386786]: 2025-10-02 13:09:42.983446887 +0000 UTC m=+0.058576101 container create 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:43 compute-0 podman[386786]: 2025-10-02 13:09:42.95157722 +0000 UTC m=+0.026706454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:43 compute-0 systemd[1]: Started libpod-conmon-8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17.scope.
Oct 02 13:09:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:43 compute-0 podman[386786]: 2025-10-02 13:09:43.180870282 +0000 UTC m=+0.255999516 container init 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:09:43 compute-0 podman[386786]: 2025-10-02 13:09:43.188373597 +0000 UTC m=+0.263502821 container start 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:09:43 compute-0 nova_compute[256940]: 2025-10-02 13:09:43.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:43 compute-0 podman[386786]: 2025-10-02 13:09:43.289746218 +0000 UTC m=+0.364875482 container attach 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:09:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:43.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:43 compute-0 objective_kepler[386802]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:09:43 compute-0 objective_kepler[386802]: --> relative data size: 1.0
Oct 02 13:09:43 compute-0 objective_kepler[386802]: --> All data devices are unavailable
Oct 02 13:09:44 compute-0 systemd[1]: libpod-8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17.scope: Deactivated successfully.
Oct 02 13:09:44 compute-0 podman[386786]: 2025-10-02 13:09:44.023293049 +0000 UTC m=+1.098422263 container died 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:09:44 compute-0 ceph-mon[73668]: pgmap v2914: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 612 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 13:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c3b84e45b99496ef2bbfed8e2880d85dda91be7df77c7ce2a274bd938288e0-merged.mount: Deactivated successfully.
Oct 02 13:09:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Oct 02 13:09:44 compute-0 podman[386786]: 2025-10-02 13:09:44.363624083 +0000 UTC m=+1.438753317 container remove 8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:09:44 compute-0 systemd[1]: libpod-conmon-8b1f5d311a842604282d44541afa33240bbb4a2c98a6836149453586fc5e1e17.scope: Deactivated successfully.
Oct 02 13:09:44 compute-0 sudo[386642]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:44 compute-0 sudo[386829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:44 compute-0 sudo[386829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:44 compute-0 sudo[386829]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:44 compute-0 sudo[386854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:44 compute-0 sudo[386854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:44 compute-0 sudo[386854]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:44 compute-0 sudo[386879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:44 compute-0 sudo[386879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:44 compute-0 sudo[386879]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:44 compute-0 sudo[386904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:09:44 compute-0 sudo[386904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.00994042 +0000 UTC m=+0.069088805 container create cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:09:45 compute-0 systemd[1]: Started libpod-conmon-cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca.scope.
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:44.969966182 +0000 UTC m=+0.029114597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.105956662 +0000 UTC m=+0.165105087 container init cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.114846143 +0000 UTC m=+0.173994548 container start cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:09:45 compute-0 determined_pascal[386979]: 167 167
Oct 02 13:09:45 compute-0 systemd[1]: libpod-cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca.scope: Deactivated successfully.
Oct 02 13:09:45 compute-0 conmon[386979]: conmon cade785cc647cd1a6fef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca.scope/container/memory.events
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.123235801 +0000 UTC m=+0.182384196 container attach cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.123940669 +0000 UTC m=+0.183089064 container died cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b578bd14d42b5ecc6842074c8eadd9378233318beed6ef9e379ec858a394fe38-merged.mount: Deactivated successfully.
Oct 02 13:09:45 compute-0 podman[386965]: 2025-10-02 13:09:45.165241941 +0000 UTC m=+0.224390326 container remove cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:09:45 compute-0 systemd[1]: libpod-conmon-cade785cc647cd1a6fef545af384e9a61bca2db292fc9c9ff9ddab9e7675ffca.scope: Deactivated successfully.
Oct 02 13:09:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:45 compute-0 podman[387002]: 2025-10-02 13:09:45.360712275 +0000 UTC m=+0.041898929 container create 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:09:45 compute-0 systemd[1]: Started libpod-conmon-33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b.scope.
Oct 02 13:09:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b5c6459a08fd0875de86c44dc852b809d00cb6666c2b2edd2b5dfb361df6fbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:45 compute-0 podman[387002]: 2025-10-02 13:09:45.343547899 +0000 UTC m=+0.024734583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b5c6459a08fd0875de86c44dc852b809d00cb6666c2b2edd2b5dfb361df6fbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b5c6459a08fd0875de86c44dc852b809d00cb6666c2b2edd2b5dfb361df6fbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b5c6459a08fd0875de86c44dc852b809d00cb6666c2b2edd2b5dfb361df6fbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:45 compute-0 podman[387002]: 2025-10-02 13:09:45.465061434 +0000 UTC m=+0.146248118 container init 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:09:45 compute-0 podman[387002]: 2025-10-02 13:09:45.471612574 +0000 UTC m=+0.152799228 container start 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:09:45 compute-0 podman[387002]: 2025-10-02 13:09:45.475288299 +0000 UTC m=+0.156474983 container attach 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:45.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:46 compute-0 ceph-mon[73668]: pgmap v2915: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]: {
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:     "1": [
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:         {
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "devices": [
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "/dev/loop3"
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             ],
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "lv_name": "ceph_lv0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "lv_size": "7511998464",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "name": "ceph_lv0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "tags": {
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.cluster_name": "ceph",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.crush_device_class": "",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.encrypted": "0",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.osd_id": "1",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.type": "block",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:                 "ceph.vdo": "0"
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             },
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "type": "block",
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:             "vg_name": "ceph_vg0"
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:         }
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]:     ]
Oct 02 13:09:46 compute-0 xenodochial_mendel[387018]: }
Oct 02 13:09:46 compute-0 systemd[1]: libpod-33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b.scope: Deactivated successfully.
Oct 02 13:09:46 compute-0 podman[387027]: 2025-10-02 13:09:46.262073232 +0000 UTC m=+0.025650027 container died 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b5c6459a08fd0875de86c44dc852b809d00cb6666c2b2edd2b5dfb361df6fbe-merged.mount: Deactivated successfully.
Oct 02 13:09:46 compute-0 podman[387027]: 2025-10-02 13:09:46.317002248 +0000 UTC m=+0.080579043 container remove 33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:09:46 compute-0 systemd[1]: libpod-conmon-33466a2599456b710c4e4929982983d24b16ccfbd9ee612ba3848f9941f9974b.scope: Deactivated successfully.
Oct 02 13:09:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 213 op/s
Oct 02 13:09:46 compute-0 sudo[386904]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 sudo[387042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:46 compute-0 sudo[387042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:46 compute-0 sudo[387042]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 podman[387066]: 2025-10-02 13:09:46.524639578 +0000 UTC m=+0.066061246 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:46 compute-0 sudo[387080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:46 compute-0 sudo[387080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:46 compute-0 podman[387067]: 2025-10-02 13:09:46.532767609 +0000 UTC m=+0.071782634 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:09:46 compute-0 sudo[387080]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 sudo[387128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:46 compute-0 sudo[387128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:46 compute-0 sudo[387128]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:46 compute-0 sudo[387153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:09:46 compute-0 sudo[387153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:47 compute-0 podman[387219]: 2025-10-02 13:09:47.024179884 +0000 UTC m=+0.052288968 container create 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:09:47 compute-0 systemd[1]: Started libpod-conmon-09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4.scope.
Oct 02 13:09:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:47 compute-0 podman[387219]: 2025-10-02 13:09:47.002673986 +0000 UTC m=+0.030783120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:47 compute-0 podman[387219]: 2025-10-02 13:09:47.111661305 +0000 UTC m=+0.139770409 container init 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:09:47 compute-0 podman[387219]: 2025-10-02 13:09:47.119134819 +0000 UTC m=+0.147243913 container start 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:09:47 compute-0 podman[387219]: 2025-10-02 13:09:47.122538747 +0000 UTC m=+0.150647841 container attach 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:09:47 compute-0 musing_poincare[387235]: 167 167
Oct 02 13:09:47 compute-0 systemd[1]: libpod-09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4.scope: Deactivated successfully.
Oct 02 13:09:47 compute-0 podman[387240]: 2025-10-02 13:09:47.165699537 +0000 UTC m=+0.026694383 container died 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 13:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c86226ae08597c238ca36324f8fc21ce19c701be265551f648dcbcaf57e03af-merged.mount: Deactivated successfully.
Oct 02 13:09:47 compute-0 podman[387240]: 2025-10-02 13:09:47.19546532 +0000 UTC m=+0.056460146 container remove 09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poincare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:09:47 compute-0 systemd[1]: libpod-conmon-09266f43c2b09a48cc56c58ae01cc1eea673dfbae352c55d319eb6c97c7caea4.scope: Deactivated successfully.
Oct 02 13:09:47 compute-0 nova_compute[256940]: 2025-10-02 13:09:47.328 2 DEBUG nova.compute.manager [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:47 compute-0 nova_compute[256940]: 2025-10-02 13:09:47.331 2 DEBUG nova.compute.manager [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:47 compute-0 nova_compute[256940]: 2025-10-02 13:09:47.331 2 DEBUG oslo_concurrency.lockutils [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:47 compute-0 nova_compute[256940]: 2025-10-02 13:09:47.332 2 DEBUG oslo_concurrency.lockutils [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:47 compute-0 nova_compute[256940]: 2025-10-02 13:09:47.332 2 DEBUG nova.network.neutron [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:47 compute-0 podman[387263]: 2025-10-02 13:09:47.405571064 +0000 UTC m=+0.047605267 container create b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:47 compute-0 systemd[1]: Started libpod-conmon-b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e.scope.
Oct 02 13:09:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:09:47 compute-0 podman[387263]: 2025-10-02 13:09:47.385806131 +0000 UTC m=+0.027840354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587667dcc190f80200ef87abb00f58f4ec01af9bbeec3fbd385b618e4b8331e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587667dcc190f80200ef87abb00f58f4ec01af9bbeec3fbd385b618e4b8331e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587667dcc190f80200ef87abb00f58f4ec01af9bbeec3fbd385b618e4b8331e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587667dcc190f80200ef87abb00f58f4ec01af9bbeec3fbd385b618e4b8331e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:47 compute-0 podman[387263]: 2025-10-02 13:09:47.504652926 +0000 UTC m=+0.146687139 container init b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:47 compute-0 podman[387263]: 2025-10-02 13:09:47.511973056 +0000 UTC m=+0.154007259 container start b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:09:47 compute-0 podman[387263]: 2025-10-02 13:09:47.514730598 +0000 UTC m=+0.156764831 container attach b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:09:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:47.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:48 compute-0 nova_compute[256940]: 2025-10-02 13:09:48.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:48 compute-0 ceph-mon[73668]: pgmap v2916: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 213 op/s
Oct 02 13:09:48 compute-0 nova_compute[256940]: 2025-10-02 13:09:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.1 MiB/s wr, 194 op/s
Oct 02 13:09:48 compute-0 determined_franklin[387279]: {
Oct 02 13:09:48 compute-0 determined_franklin[387279]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:09:48 compute-0 determined_franklin[387279]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:09:48 compute-0 determined_franklin[387279]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:09:48 compute-0 determined_franklin[387279]:         "osd_id": 1,
Oct 02 13:09:48 compute-0 determined_franklin[387279]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:09:48 compute-0 determined_franklin[387279]:         "type": "bluestore"
Oct 02 13:09:48 compute-0 determined_franklin[387279]:     }
Oct 02 13:09:48 compute-0 determined_franklin[387279]: }
Oct 02 13:09:48 compute-0 systemd[1]: libpod-b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e.scope: Deactivated successfully.
Oct 02 13:09:48 compute-0 podman[387263]: 2025-10-02 13:09:48.406785233 +0000 UTC m=+1.048819436 container died b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-587667dcc190f80200ef87abb00f58f4ec01af9bbeec3fbd385b618e4b8331e4-merged.mount: Deactivated successfully.
Oct 02 13:09:48 compute-0 podman[387263]: 2025-10-02 13:09:48.833720066 +0000 UTC m=+1.475754259 container remove b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:09:48 compute-0 systemd[1]: libpod-conmon-b6dfacc3620be14b872da52cba98e242a01e91d6c7ac51ee0b16445049f5152e.scope: Deactivated successfully.
Oct 02 13:09:48 compute-0 sudo[387153]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:09:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:09:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 239b32eb-dbae-47c9-8944-32870c09e575 does not exist
Oct 02 13:09:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 94c7c5a2-5fea-474b-8970-8ac99a7c40ea does not exist
Oct 02 13:09:48 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3a710f16-2ccf-4911-b5e4-c60ab5d91b95 does not exist
Oct 02 13:09:48 compute-0 sudo[387315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:48 compute-0 sudo[387315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:48 compute-0 sudo[387315]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:49 compute-0 sudo[387340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:09:49 compute-0 sudo[387340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:49 compute-0 sudo[387340]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.221 2 DEBUG nova.network.neutron [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.223 2 DEBUG nova.network.neutron [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.259 2 DEBUG oslo_concurrency.lockutils [req-cb9d268c-09c3-4ada-90ad-196cc07379d6 req-2a5e4bbb-ed9a-44fe-af0f-38ea054efbe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.421 2 DEBUG nova.compute.manager [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.421 2 DEBUG nova.compute.manager [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing instance network info cache due to event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.422 2 DEBUG oslo_concurrency.lockutils [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.422 2 DEBUG oslo_concurrency.lockutils [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:49 compute-0 nova_compute[256940]: 2025-10-02 13:09:49.422 2 DEBUG nova.network.neutron [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:09:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:49.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:09:49 compute-0 ceph-mon[73668]: pgmap v2917: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.1 MiB/s wr, 194 op/s
Oct 02 13:09:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/977708829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Oct 02 13:09:50 compute-0 nova_compute[256940]: 2025-10-02 13:09:50.632 2 DEBUG nova.network.neutron [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updated VIF entry in instance network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:50 compute-0 nova_compute[256940]: 2025-10-02 13:09:50.634 2 DEBUG nova.network.neutron [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:50 compute-0 nova_compute[256940]: 2025-10-02 13:09:50.650 2 DEBUG oslo_concurrency.lockutils [req-4580cd58-5010-4552-8ec0-08c633ffa21a req-55345b46-43f9-4fa0-91a0-5221cb917295 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:52 compute-0 ceph-mon[73668]: pgmap v2918: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Oct 02 13:09:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 203 op/s
Oct 02 13:09:53 compute-0 nova_compute[256940]: 2025-10-02 13:09:53.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:53 compute-0 ceph-mon[73668]: pgmap v2919: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 203 op/s
Oct 02 13:09:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3761765269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:53 compute-0 sudo[387367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:53 compute-0 sudo[387367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:53 compute-0 sudo[387367]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:53 compute-0 nova_compute[256940]: 2025-10-02 13:09:53.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-0 sudo[387392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:53 compute-0 sudo[387392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:53 compute-0 sudo[387392]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/527186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2259096278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 179 op/s
Oct 02 13:09:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2287145047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-0 ovn_controller[148123]: 2025-10-02T13:09:54Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:95:7b 10.100.0.12
Oct 02 13:09:54 compute-0 ovn_controller[148123]: 2025-10-02T13:09:54Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:95:7b 10.100.0.12
Oct 02 13:09:54 compute-0 ovn_controller[148123]: 2025-10-02T13:09:54Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:d9:a1 10.100.0.9
Oct 02 13:09:54 compute-0 ovn_controller[148123]: 2025-10-02T13:09:54Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:d9:a1 10.100.0.9
Oct 02 13:09:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:55 compute-0 ceph-mon[73668]: pgmap v2920: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 179 op/s
Oct 02 13:09:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2287145047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 581 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Oct 02 13:09:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:57 compute-0 ceph-mon[73668]: pgmap v2921: 305 pgs: 305 active+clean; 581 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Oct 02 13:09:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:58 compute-0 nova_compute[256940]: 2025-10-02 13:09:58.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:58 compute-0 nova_compute[256940]: 2025-10-02 13:09:58.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 603 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Oct 02 13:09:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3080379018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:09:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:09:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:09:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:09:59 compute-0 nova_compute[256940]: 2025-10-02 13:09:59.397 2 DEBUG nova.compute.manager [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:59 compute-0 nova_compute[256940]: 2025-10-02 13:09:59.397 2 DEBUG nova.compute.manager [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:59 compute-0 nova_compute[256940]: 2025-10-02 13:09:59.397 2 DEBUG oslo_concurrency.lockutils [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:59 compute-0 nova_compute[256940]: 2025-10-02 13:09:59.398 2 DEBUG oslo_concurrency.lockutils [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:59 compute-0 nova_compute[256940]: 2025-10-02 13:09:59.398 2 DEBUG nova.network.neutron [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:59 compute-0 ceph-mon[73668]: pgmap v2922: 305 pgs: 305 active+clean; 603 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Oct 02 13:09:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:09:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:59.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:10:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 608 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 250 op/s
Oct 02 13:10:00 compute-0 nova_compute[256940]: 2025-10-02 13:10:00.496 2 DEBUG nova.network.neutron [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:00 compute-0 nova_compute[256940]: 2025-10-02 13:10:00.497 2 DEBUG nova.network.neutron [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 13:10:00 compute-0 nova_compute[256940]: 2025-10-02 13:10:00.521 2 DEBUG oslo_concurrency.lockutils [req-ae3f7b8a-b370-4957-b7a8-a25d1159f2c4 req-a5e85572-85c6-4116-9ca6-32f4fa4079ee 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:00 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:10:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:01.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:01 compute-0 ceph-mon[73668]: pgmap v2923: 305 pgs: 305 active+clean; 608 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 250 op/s
Oct 02 13:10:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:01 compute-0 nova_compute[256940]: 2025-10-02 13:10:01.805 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:01 compute-0 nova_compute[256940]: 2025-10-02 13:10:01.805 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:01 compute-0 nova_compute[256940]: 2025-10-02 13:10:01.820 2 DEBUG nova.objects.instance [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:01 compute-0 nova_compute[256940]: 2025-10-02 13:10:01.872 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.133 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.134 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.135 2 INFO nova.compute.manager [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Attaching volume 8347daf9-f32f-4c50-b89e-df9e913044db to /dev/vdb
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.270 2 DEBUG os_brick.utils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.272 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.289 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.289 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[c51ec3bb-54cb-4f77-bc68-1065c0542ff2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.291 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.301 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.302 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[95e792ab-8b86-4c01-a641-dd94b3452db9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.304 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.315 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.315 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[203bfb5f-687e-4a68-b741-77aed71c0299]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.317 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbdb0cc-3094-4728-a1d4-8c9332606f33]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.318 2 DEBUG oslo_concurrency.processutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 266 op/s
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.355 2 DEBUG oslo_concurrency.processutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.360 2 DEBUG os_brick.initiator.connectors.lightos [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.360 2 DEBUG os_brick.initiator.connectors.lightos [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.361 2 DEBUG os_brick.initiator.connectors.lightos [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.361 2 DEBUG os_brick.utils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:10:02 compute-0 nova_compute[256940]: 2025-10-02 13:10:02.362 2 DEBUG nova.virt.block_device [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating existing volume attachment record: 966c0e7d-a9f2-4cd9-848d-e4828fb91485 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.177 2 DEBUG nova.objects.instance [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.201 2 DEBUG nova.virt.libvirt.driver [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Attempting to attach volume 8347daf9-f32f-4c50-b89e-df9e913044db with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.206 2 DEBUG nova.virt.libvirt.guest [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct 02 13:10:03 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   </source>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:10:03 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct 02 13:10:03 compute-0 nova_compute[256940]:   <shareable/>
Oct 02 13:10:03 compute-0 nova_compute[256940]: </disk>
Oct 02 13:10:03 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.388 2 DEBUG nova.virt.libvirt.driver [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.389 2 DEBUG nova.virt.libvirt.driver [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.389 2 DEBUG nova.virt.libvirt.driver [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.390 2 DEBUG nova.virt.libvirt.driver [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:67:d9:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:10:03 compute-0 podman[387449]: 2025-10-02 13:10:03.421020732 +0000 UTC m=+0.086659550 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:10:03 compute-0 podman[387450]: 2025-10-02 13:10:03.462233922 +0000 UTC m=+0.119198895 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:10:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:03.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:03 compute-0 nova_compute[256940]: 2025-10-02 13:10:03.623 2 DEBUG oslo_concurrency.lockutils [None req-068e105a-2462-4a90-8fc9-a17d67c82803 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:03 compute-0 ceph-mon[73668]: pgmap v2924: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 266 op/s
Oct 02 13:10:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3483817046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 232 op/s
Oct 02 13:10:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:05.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:06 compute-0 ceph-mon[73668]: pgmap v2925: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 232 op/s
Oct 02 13:10:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:10:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:10:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.0 MiB/s wr, 296 op/s
Oct 02 13:10:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1997719586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:07.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:08 compute-0 nova_compute[256940]: 2025-10-02 13:10:08.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:08 compute-0 nova_compute[256940]: 2025-10-02 13:10:08.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:08 compute-0 ceph-mon[73668]: pgmap v2926: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.0 MiB/s wr, 296 op/s
Oct 02 13:10:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.2 MiB/s wr, 221 op/s
Oct 02 13:10:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:09 compute-0 ceph-mon[73668]: pgmap v2927: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.2 MiB/s wr, 221 op/s
Oct 02 13:10:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 620 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 712 KiB/s wr, 176 op/s
Oct 02 13:10:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/918586300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:11 compute-0 nova_compute[256940]: 2025-10-02 13:10:11.021 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:11 compute-0 nova_compute[256940]: 2025-10-02 13:10:11.022 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:11 compute-0 nova_compute[256940]: 2025-10-02 13:10:11.022 2 DEBUG nova.network.neutron [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:11 compute-0 ceph-mon[73668]: pgmap v2928: 305 pgs: 305 active+clean; 620 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 712 KiB/s wr, 176 op/s
Oct 02 13:10:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1018353601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:11.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.386 2 DEBUG nova.network.neutron [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.402 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.496 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.496 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating file /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/15af494b79204c3585bd67030a58f95b.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.496 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/15af494b79204c3585bd67030a58f95b.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.902 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/15af494b79204c3585bd67030a58f95b.tmp" returned: 1 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.904 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/15af494b79204c3585bd67030a58f95b.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.904 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating directory /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 13:10:12 compute-0 nova_compute[256940]: 2025-10-02 13:10:12.904 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:13 compute-0 nova_compute[256940]: 2025-10-02 13:10:13.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:13 compute-0 nova_compute[256940]: 2025-10-02 13:10:13.119 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:13 compute-0 nova_compute[256940]: 2025-10-02 13:10:13.123 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:10:13 compute-0 nova_compute[256940]: 2025-10-02 13:10:13.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:13 compute-0 sudo[387499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:13 compute-0 sudo[387499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:13 compute-0 sudo[387499]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:13 compute-0 sudo[387524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:13 compute-0 sudo[387524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:13 compute-0 sudo[387524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:13 compute-0 ceph-mon[73668]: pgmap v2929: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 13:10:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:13.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Oct 02 13:10:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1576777115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2097443980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:15.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:15 compute-0 ceph-mon[73668]: pgmap v2930: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Oct 02 13:10:15 compute-0 kernel: tap84c1a249-c4 (unregistering): left promiscuous mode
Oct 02 13:10:15 compute-0 NetworkManager[44981]: <info>  [1759410615.7818] device (tap84c1a249-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:10:15 compute-0 nova_compute[256940]: 2025-10-02 13:10:15.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-0 ovn_controller[148123]: 2025-10-02T13:10:15Z|00992|binding|INFO|Releasing lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 from this chassis (sb_readonly=0)
Oct 02 13:10:15 compute-0 ovn_controller[148123]: 2025-10-02T13:10:15Z|00993|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 down in Southbound
Oct 02 13:10:15 compute-0 ovn_controller[148123]: 2025-10-02T13:10:15Z|00994|binding|INFO|Removing iface tap84c1a249-c4 ovn-installed in OVS
Oct 02 13:10:15 compute-0 nova_compute[256940]: 2025-10-02 13:10:15.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:15.804 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:d9:a1 10.100.0.9'], port_security=['fa:16:3e:67:d9:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '469a928f-d7cb-4add-9410-629caac3f6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=84c1a249-c4f5-48bf-835d-bbbc75fefeb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:15.807 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis
Oct 02 13:10:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:15.810 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9001b9c-bca6-4085-a954-1414269e31bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:10:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:15.812 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1992e20c-df62-44ec-9225-1a73710e1a7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:15.813 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace which is not needed anymore
Oct 02 13:10:15 compute-0 nova_compute[256940]: 2025-10-02 13:10:15.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Oct 02 13:10:15 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000c7.scope: Consumed 15.074s CPU time.
Oct 02 13:10:15 compute-0 systemd-machined[210927]: Machine qemu-100-instance-000000c7 terminated.
Oct 02 13:10:16 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [NOTICE]   (386507) : haproxy version is 2.8.14-c23fe91
Oct 02 13:10:16 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [NOTICE]   (386507) : path to executable is /usr/sbin/haproxy
Oct 02 13:10:16 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [WARNING]  (386507) : Exiting Master process...
Oct 02 13:10:16 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [ALERT]    (386507) : Current worker (386509) exited with code 143 (Terminated)
Oct 02 13:10:16 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[386500]: [WARNING]  (386507) : All workers exited. Exiting... (0)
Oct 02 13:10:16 compute-0 systemd[1]: libpod-6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771.scope: Deactivated successfully.
Oct 02 13:10:16 compute-0 podman[387573]: 2025-10-02 13:10:16.019884034 +0000 UTC m=+0.066142867 container died 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 13:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771-userdata-shm.mount: Deactivated successfully.
Oct 02 13:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6c148f8bc9180b04142a99c992bbffdb152c02c9808de1d1171d2beea23ae55-merged.mount: Deactivated successfully.
Oct 02 13:10:16 compute-0 podman[387573]: 2025-10-02 13:10:16.071416052 +0000 UTC m=+0.117674885 container cleanup 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.078 2 DEBUG nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.079 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.079 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.079 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.079 2 DEBUG nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.080 2 WARNING nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_migrating.
Oct 02 13:10:16 compute-0 systemd[1]: libpod-conmon-6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771.scope: Deactivated successfully.
Oct 02 13:10:16 compute-0 podman[387612]: 2025-10-02 13:10:16.135179867 +0000 UTC m=+0.041899298 container remove 6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.138 2 INFO nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance shutdown successfully after 3 seconds.
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.141 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c946e6e4-4b30-4af5-b575-0f65d0031da5]: (4, ('Thu Oct  2 01:10:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771)\n6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771\nThu Oct  2 01:10:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771)\n6f048d0db1dc6252a38d47ce284b870b600fb5de95ee6980ff42b63df7514771\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.143 2 INFO nova.virt.libvirt.driver [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance destroyed successfully.
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.143 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c534ec9-3196-4b4f-8950-ed3e44cd4626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.143 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.143 2 DEBUG nova.virt.libvirt.vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_
input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.144 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.144 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.145 2 DEBUG os_vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:10:16 compute-0 kernel: tapd9001b9c-b0: left promiscuous mode
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1a249-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.172 2 INFO os_vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4')
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1f626e44-9d0d-4c75-8dac-daee4303699a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.210 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83953ec6-7122-4ef0-956a-6af3fe560ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.211 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[10d40326-b8dc-4c27-8c20-0680a1b8dc6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.226 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8d38f33f-0209-41e5-a6d3-d6ea24f38503]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847553, 'reachable_time': 22522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387628, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dd9001b9c\x2dbca6\x2d4085\x2da954\x2d1414269e31bc.mount: Deactivated successfully.
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.228 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:10:16 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:16.228 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b34d30-b3ed-44c8-8a12-c1c6388f1388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.343 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.344 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.344 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 715 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 222 op/s
Oct 02 13:10:16 compute-0 nova_compute[256940]: 2025-10-02 13:10:16.911 2 DEBUG neutronclient.v2_0.client [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:10:17 compute-0 nova_compute[256940]: 2025-10-02 13:10:17.037 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:17 compute-0 nova_compute[256940]: 2025-10-02 13:10:17.038 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:17 compute-0 nova_compute[256940]: 2025-10-02 13:10:17.038 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:17 compute-0 podman[387630]: 2025-10-02 13:10:17.386499128 +0000 UTC m=+0.057957905 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:10:17 compute-0 podman[387631]: 2025-10-02 13:10:17.416880687 +0000 UTC m=+0.085991563 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:10:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:17.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:17 compute-0 ceph-mon[73668]: pgmap v2931: 305 pgs: 305 active+clean; 715 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 222 op/s
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 6.1 MiB/s wr, 163 op/s
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.469 2 DEBUG nova.compute.manager [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.470 2 DEBUG nova.compute.manager [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.470 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.470 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.470 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.487 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.488 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.488 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.488 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.488 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:18 compute-0 nova_compute[256940]: 2025-10-02 13:10:18.488 2 WARNING nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:10:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:19.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:19 compute-0 ceph-mon[73668]: pgmap v2932: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 6.1 MiB/s wr, 163 op/s
Oct 02 13:10:20 compute-0 nova_compute[256940]: 2025-10-02 13:10:20.201 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:20 compute-0 nova_compute[256940]: 2025-10-02 13:10:20.201 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:20 compute-0 nova_compute[256940]: 2025-10-02 13:10:20.239 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Oct 02 13:10:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:21 compute-0 nova_compute[256940]: 2025-10-02 13:10:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:21.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Oct 02 13:10:21 compute-0 ceph-mon[73668]: pgmap v2933: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Oct 02 13:10:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1814106570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Oct 02 13:10:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Oct 02 13:10:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:22 compute-0 ceph-mon[73668]: osdmap e366: 3 total, 3 up, 3 in
Oct 02 13:10:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/153590829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:23 compute-0 nova_compute[256940]: 2025-10-02 13:10:23.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:23.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:23 compute-0 ceph-mon[73668]: pgmap v2935: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1563581413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3536817018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.605 2 DEBUG nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.605 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.606 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.606 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.606 2 DEBUG nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:24 compute-0 nova_compute[256940]: 2025-10-02 13:10:24.606 2 WARNING nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_finish.
Oct 02 13:10:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3536817018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:25 compute-0 nova_compute[256940]: 2025-10-02 13:10:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:25 compute-0 nova_compute[256940]: 2025-10-02 13:10:25.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:25.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:26 compute-0 ceph-mon[73668]: pgmap v2936: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2201128665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:10:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 97 KiB/s wr, 136 op/s
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:26.506 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:26.506 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:26.507 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.691 2 DEBUG nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.691 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.692 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.692 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.692 2 DEBUG nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:26 compute-0 nova_compute[256940]: 2025-10-02 13:10:26.692 2 WARNING nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state resized and task_state None.
Oct 02 13:10:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:27.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:27.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:28 compute-0 ceph-mon[73668]: pgmap v2937: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 97 KiB/s wr, 136 op/s
Oct 02 13:10:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/293687028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.245 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.246 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.247 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 79 KiB/s wr, 156 op/s
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.439 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.440 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.440 2 DEBUG nova.compute.manager [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Going to confirm migration 23 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239067814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.684 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.775 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.775 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.775 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.780 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.780 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:10:28
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta']
Oct 02 13:10:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.933 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.934 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=20.73917007446289GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.934 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.935 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:28 compute-0 nova_compute[256940]: 2025-10-02 13:10:28.991 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration for instance 469a928f-d7cb-4add-9410-629caac3f6f8 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.018 2 INFO nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating resource usage from migration 316381c1-6621-421f-914a-993647bbe4a1
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.018 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Starting to track outgoing migration 316381c1-6621-421f-914a-993647bbe4a1 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.040 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 89e1718a-24fd-472a-bbae-aedba874e3a0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.041 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Migration 316381c1-6621-421f-914a-993647bbe4a1 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.041 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.041 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.101 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1239067814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2272454097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.327 2 DEBUG neutronclient.v2_0.client [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.328 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.328 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.328 2 DEBUG nova.network.neutron [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.328 2 DEBUG nova.objects.instance [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'info_cache' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.361140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629361190, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 817, "num_deletes": 251, "total_data_size": 1076066, "memory_usage": 1101808, "flush_reason": "Manual Compaction"}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629369065, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1052493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65295, "largest_seqno": 66111, "table_properties": {"data_size": 1048384, "index_size": 1760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9722, "raw_average_key_size": 19, "raw_value_size": 1039964, "raw_average_value_size": 2135, "num_data_blocks": 77, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410575, "oldest_key_time": 1759410575, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7970 microseconds, and 3149 cpu microseconds.
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.369098) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1052493 bytes OK
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.369130) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.370862) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.370875) EVENT_LOG_v1 {"time_micros": 1759410629370871, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.370889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1072040, prev total WAL file size 1072040, number of live WAL files 2.
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.371444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1027KB)], [149(10MB)]
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629371490, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12164794, "oldest_snapshot_seqno": -1}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9045 keys, 10295431 bytes, temperature: kUnknown
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629443562, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10295431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10238969, "index_size": 32733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239904, "raw_average_key_size": 26, "raw_value_size": 10082304, "raw_average_value_size": 1114, "num_data_blocks": 1237, "num_entries": 9045, "num_filter_entries": 9045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.443843) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10295431 bytes
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.445043) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.6 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(21.3) write-amplify(9.8) OK, records in: 9565, records dropped: 520 output_compression: NoCompression
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.445060) EVENT_LOG_v1 {"time_micros": 1759410629445052, "job": 92, "event": "compaction_finished", "compaction_time_micros": 72155, "compaction_time_cpu_micros": 28448, "output_level": 6, "num_output_files": 1, "total_output_size": 10295431, "num_input_records": 9565, "num_output_records": 9045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629445308, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629447095, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.371312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.447206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.447209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.447210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.447212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:10:29.447213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514685641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.537 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.543 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.558 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:10:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:29.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:10:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.762 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:10:29 compute-0 nova_compute[256940]: 2025-10-02 13:10:29.763 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 54 KiB/s wr, 154 op/s
Oct 02 13:10:30 compute-0 ceph-mon[73668]: pgmap v2938: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 79 KiB/s wr, 156 op/s
Oct 02 13:10:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1514685641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:30 compute-0 nova_compute[256940]: 2025-10-02 13:10:30.762 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.048 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410616.046662, 469a928f-d7cb-4add-9410-629caac3f6f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.048 2 INFO nova.compute.manager [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Stopped (Lifecycle Event)
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.068 2 DEBUG nova.compute.manager [None req-0ea8ef72-3716-4035-936b-0f7bf23fd255 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.071 2 DEBUG nova.compute.manager [None req-0ea8ef72-3716-4035-936b-0f7bf23fd255 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.092 2 INFO nova.compute.manager [None req-0ea8ef72-3716-4035-936b-0f7bf23fd255 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Oct 02 13:10:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:31.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.449 2 DEBUG nova.network.neutron [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.529 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:31 compute-0 nova_compute[256940]: 2025-10-02 13:10:31.530 2 DEBUG nova.objects.instance [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:31.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:32 compute-0 ceph-mon[73668]: pgmap v2939: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 54 KiB/s wr, 154 op/s
Oct 02 13:10:32 compute-0 nova_compute[256940]: 2025-10-02 13:10:32.141 2 DEBUG nova.storage.rbd_utils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] removing snapshot(nova-resize) on rbd image(469a928f-d7cb-4add-9410-629caac3f6f8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:10:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 683 KiB/s wr, 151 op/s
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Oct 02 13:10:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:33.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Oct 02 13:10:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Oct 02 13:10:33 compute-0 sudo[387759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:33 compute-0 sudo[387759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:33 compute-0 sudo[387759]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:33 compute-0 podman[387783]: 2025-10-02 13:10:33.528945505 +0000 UTC m=+0.052713739 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:10:33 compute-0 sudo[387790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:33 compute-0 sudo[387790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:33 compute-0 sudo[387790]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:33 compute-0 ceph-mon[73668]: pgmap v2940: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 683 KiB/s wr, 151 op/s
Oct 02 13:10:33 compute-0 podman[387826]: 2025-10-02 13:10:33.628961731 +0000 UTC m=+0.077117463 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:10:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:33.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.944 2 DEBUG nova.virt.libvirt.vif [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.944 2 DEBUG nova.network.os_vif_util [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.945 2 DEBUG nova.network.os_vif_util [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.946 2 DEBUG os_vif [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1a249-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.952 2 INFO os_vif [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4')
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.952 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:33 compute-0 nova_compute[256940]: 2025-10-02 13:10:33.952 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.053 2 DEBUG oslo_concurrency.processutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 713 KiB/s wr, 130 op/s
Oct 02 13:10:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208984774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.497 2 DEBUG oslo_concurrency.processutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.507 2 DEBUG nova.compute.provider_tree [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.532 2 DEBUG nova.scheduler.client.report [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.581 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:34 compute-0 ceph-mon[73668]: osdmap e367: 3 total, 3 up, 3 in
Oct 02 13:10:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3208984774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.684 2 INFO nova.scheduler.client.report [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocation for migration 316381c1-6621-421f-914a-993647bbe4a1
Oct 02 13:10:34 compute-0 nova_compute[256940]: 2025-10-02 13:10:34.746 2 DEBUG oslo_concurrency.lockutils [None req-95c7ba91-d95b-4177-86e9-96ed7e282202 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:35 compute-0 nova_compute[256940]: 2025-10-02 13:10:35.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:35.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:35 compute-0 ceph-mon[73668]: pgmap v2942: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 713 KiB/s wr, 130 op/s
Oct 02 13:10:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1219523401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:36 compute-0 nova_compute[256940]: 2025-10-02 13:10:36.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 743 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 02 13:10:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:37.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:37.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.651 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.652 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.652 2 INFO nova.compute.manager [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Unshelving
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.732 2 INFO nova.virt.block_device [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Booting with volume 23ece991-964a-4523-b231-9590440c3d93 at /dev/vda
Oct 02 13:10:37 compute-0 ceph-mon[73668]: pgmap v2943: 305 pgs: 305 active+clean; 743 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.942 2 DEBUG os_brick.utils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.943 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.953 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.953 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8705ea-ae43-442b-9205-708c7403a45d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.954 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.963 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.963 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1615884d-5551-4942-a459-001f59fd669c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.964 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.975 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.975 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[efc8ac74-c67e-42dc-8f42-170b48b048d4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.977 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[46889450-2572-4efd-b056-beb90e4db345]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:37 compute-0 nova_compute[256940]: 2025-10-02 13:10:37.977 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.009 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.011 2 DEBUG os_brick.initiator.connectors.lightos [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.011 2 DEBUG os_brick.initiator.connectors.lightos [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.011 2 DEBUG os_brick.initiator.connectors.lightos [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.011 2 DEBUG os_brick.utils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.012 2 DEBUG nova.virt.block_device [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating existing volume attachment record: 1c69fd61-d690-4888-9608-6cf9793b48cc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 752 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.908 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.909 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1461054822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.913 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_requests' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.929 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.978 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:10:38 compute-0 nova_compute[256940]: 2025-10-02 13:10:38.979 2 INFO nova.compute.claims [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.091 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.376 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.377 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.377 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.377 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 89e1718a-24fd-472a-bbae-aedba874e3a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953460216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.509 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.516 2 DEBUG nova.compute.provider_tree [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.539 2 DEBUG nova.scheduler.client.report [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.565 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:39 compute-0 nova_compute[256940]: 2025-10-02 13:10:39.738 2 INFO nova.network.neutron [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating port f2d2820c-dd48-47a4-ab94-dd6136c8e314 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:10:39 compute-0 ceph-mon[73668]: pgmap v2944: 305 pgs: 305 active+clean; 752 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Oct 02 13:10:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1953460216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 755 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1018 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Oct 02 13:10:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Oct 02 13:10:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.621 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.622 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.623 2 DEBUG nova.network.neutron [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.714 2 DEBUG nova.compute.manager [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.714 2 DEBUG nova.compute.manager [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing instance network info cache due to event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.715 2 DEBUG oslo_concurrency.lockutils [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013031375046528942 of space, bias 1.0, pg target 3.9094125139586824 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00648842447189652 of space, bias 1.0, pg target 1.9270620681532664 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:10:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.802 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.815 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:40 compute-0 nova_compute[256940]: 2025-10-02 13:10:40.815 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:10:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:41 compute-0 nova_compute[256940]: 2025-10-02 13:10:41.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:41.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:41 compute-0 ceph-mon[73668]: pgmap v2945: 305 pgs: 305 active+clean; 755 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1018 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Oct 02 13:10:41 compute-0 ceph-mon[73668]: osdmap e368: 3 total, 3 up, 3 in
Oct 02 13:10:41 compute-0 nova_compute[256940]: 2025-10-02 13:10:41.810 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.206 2 DEBUG nova.network.neutron [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.228 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.229 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.230 2 INFO nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating image(s)
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.230 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.230 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Ensure instance console log exists: /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.230 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.230 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.231 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.233 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start _get_guest_xml network_info=[{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) 
rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-23ece991-964a-4523-b231-9590440c3d93', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '23ece991-964a-4523-b231-9590440c3d93', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '79964143-e208-4552-8380-513c3adf09ac', 'attached_at': '', 'detached_at': '', 'volume_id': '23ece991-964a-4523-b231-9590440c3d93', 'serial': '23ece991-964a-4523-b231-9590440c3d93'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '1c69fd61-d690-4888-9608-6cf9793b48cc', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.233 2 DEBUG oslo_concurrency.lockutils [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.233 2 DEBUG nova.network.neutron [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.239 2 WARNING nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.242 2 DEBUG nova.virt.libvirt.host [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.243 2 DEBUG nova.virt.libvirt.host [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.247 2 DEBUG nova.virt.libvirt.host [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.247 2 DEBUG nova.virt.libvirt.host [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.248 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.248 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.249 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.249 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.249 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.249 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.250 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.250 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.250 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.250 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.250 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.251 2 DEBUG nova.virt.hardware [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.251 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.293 2 DEBUG nova.storage.rbd_utils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.296 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct 02 13:10:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:10:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613403924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.739 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.766 2 DEBUG nova.virt.libvirt.vif [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2040419602',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:37Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.767 2 DEBUG nova.network.os_vif_util [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.768 2 DEBUG nova.network.os_vif_util [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.769 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.786 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <uuid>79964143-e208-4552-8380-513c3adf09ac</uuid>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <name>instance-000000c9</name>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:name>tempest-TestShelveInstance-server-838477094</nova:name>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:10:42</nova:creationTime>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <nova:port uuid="f2d2820c-dd48-47a4-ab94-dd6136c8e314">
Oct 02 13:10:42 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <system>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="serial">79964143-e208-4552-8380-513c3adf09ac</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="uuid">79964143-e208-4552-8380-513c3adf09ac</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </system>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <os>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </os>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <features>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </features>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/79964143-e208-4552-8380-513c3adf09ac_disk.config">
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </source>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-23ece991-964a-4523-b231-9590440c3d93">
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </source>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:10:42 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <serial>23ece991-964a-4523-b231-9590440c3d93</serial>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:47:17:41"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <target dev="tapf2d2820c-dd"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/console.log" append="off"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <video>
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </video>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:10:42 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:10:42 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:10:42 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:10:42 compute-0 nova_compute[256940]: </domain>
Oct 02 13:10:42 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.788 2 DEBUG nova.compute.manager [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Preparing to wait for external event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.788 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.789 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.789 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.790 2 DEBUG nova.virt.libvirt.vif [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2040419602',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:37Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.790 2 DEBUG nova.network.os_vif_util [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.791 2 DEBUG nova.network.os_vif_util [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.791 2 DEBUG os_vif [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2d2820c-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2d2820c-dd, col_values=(('external_ids', {'iface-id': 'f2d2820c-dd48-47a4-ab94-dd6136c8e314', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:17:41', 'vm-uuid': '79964143-e208-4552-8380-513c3adf09ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:42 compute-0 NetworkManager[44981]: <info>  [1759410642.7989] manager: (tapf2d2820c-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.806 2 INFO os_vif [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd')
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.899 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.900 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.900 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:47:17:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:10:42 compute-0 nova_compute[256940]: 2025-10-02 13:10:42.900 2 INFO nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Using config drive
Oct 02 13:10:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/613403924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.043 2 DEBUG nova.storage.rbd_utils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.068 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.104 2 DEBUG nova.objects.instance [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'keypairs' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.429 2 INFO nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating config drive at /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.434 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs8h4k4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.469 2 DEBUG nova.network.neutron [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updated VIF entry in instance network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.470 2 DEBUG nova.network.neutron [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.485 2 DEBUG oslo_concurrency.lockutils [req-9b59d26a-dafc-4c84-83ce-f314fe550672 req-a204cfb3-6457-406f-8c7a-814483caacad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.575 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs8h4k4n" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.603 2 DEBUG nova.storage.rbd_utils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.607 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config 79964143-e208-4552-8380-513c3adf09ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:43.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.817 2 DEBUG oslo_concurrency.processutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config 79964143-e208-4552-8380-513c3adf09ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.818 2 INFO nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deleting local config drive /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config because it was imported into RBD.
Oct 02 13:10:43 compute-0 kernel: tapf2d2820c-dd: entered promiscuous mode
Oct 02 13:10:43 compute-0 NetworkManager[44981]: <info>  [1759410643.8845] manager: (tapf2d2820c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:43 compute-0 ovn_controller[148123]: 2025-10-02T13:10:43Z|00995|binding|INFO|Claiming lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 for this chassis.
Oct 02 13:10:43 compute-0 ovn_controller[148123]: 2025-10-02T13:10:43Z|00996|binding|INFO|f2d2820c-dd48-47a4-ab94-dd6136c8e314: Claiming fa:16:3e:47:17:41 10.100.0.10
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.893 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:17:41 10.100.0.10'], port_security=['fa:16:3e:47:17:41 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79964143-e208-4552-8380-513c3adf09ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0ecabba0-db02-4a2b-8a99-c435db80e5c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f2d2820c-dd48-47a4-ab94-dd6136c8e314) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.895 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f2d2820c-dd48-47a4-ab94-dd6136c8e314 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.897 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:10:43 compute-0 ovn_controller[148123]: 2025-10-02T13:10:43Z|00997|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 ovn-installed in OVS
Oct 02 13:10:43 compute-0 ovn_controller[148123]: 2025-10-02T13:10:43Z|00998|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 up in Southbound
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.908 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef2a08c-b8ef-46da-8b3a-10e482669244]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.909 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.911 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.911 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[21721340-533f-4e26-9d9e-60ee7fa8fc70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.912 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[09867bdb-916d-4791-b505-a50c45fb4845]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 nova_compute[256940]: 2025-10-02 13:10:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:43 compute-0 systemd-udevd[388027]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:10:43 compute-0 systemd-machined[210927]: New machine qemu-102-instance-000000c9.
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.928 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[49527a5d-5699-4d44-ab1b-df08c5601625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 NetworkManager[44981]: <info>  [1759410643.9362] device (tapf2d2820c-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:10:43 compute-0 NetworkManager[44981]: <info>  [1759410643.9370] device (tapf2d2820c-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:10:43 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-000000c9.
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.953 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[56fe058f-b843-421a-852d-4870b324d7dd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.989 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[087794e6-3448-4f48-945e-3aba0fef54d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:43 compute-0 systemd-udevd[388030]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:10:43 compute-0 NetworkManager[44981]: <info>  [1759410643.9953] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Oct 02 13:10:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:43.994 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b4745178-c541-4d26-a8a4-81e9bf54b640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.028 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0e60c52e-4a41-4a1a-b6d2-6e85a1dc73bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.030 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a7be4492-412e-4482-8a18-30331866e65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 NetworkManager[44981]: <info>  [1759410644.0541] device (tap4223a8cc-f0): carrier: link connected
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.060 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[08f1d16a-be91-4ed4-8417-9a2b37bda2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.078 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebf9c60-b4c5-46dc-b79e-0aeb3daa261f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853953, 'reachable_time': 30461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388058, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.094 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdc44b5-0aa3-48fc-9fd4-4145c20d7f1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 853953, 'tstamp': 853953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388059, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ceph-mon[73668]: pgmap v2947: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.115 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a1377829-1261-45b9-bbe9-6bf96209bc03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853953, 'reachable_time': 30461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388060, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.149 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[79c08791-6a0b-47de-8803-a5d2cf2fd4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.215 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a12567f3-b4d8-415b-8121-a31f72eae484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.216 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.217 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.217 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:44 compute-0 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 NetworkManager[44981]: <info>  [1759410644.2640] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.269 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:10:44 compute-0 ovn_controller[148123]: 2025-10-02T13:10:44Z|00999|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.270 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[46a3379e-656e-4634-a402-9e96d294144a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.270 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.271 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.352 2 DEBUG nova.compute.manager [req-4d97a599-37da-4857-8f7c-215640fe3e7f req-a947fd7a-f790-46ee-94e2-314998757f85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.353 2 DEBUG oslo_concurrency.lockutils [req-4d97a599-37da-4857-8f7c-215640fe3e7f req-a947fd7a-f790-46ee-94e2-314998757f85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.353 2 DEBUG oslo_concurrency.lockutils [req-4d97a599-37da-4857-8f7c-215640fe3e7f req-a947fd7a-f790-46ee-94e2-314998757f85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.353 2 DEBUG oslo_concurrency.lockutils [req-4d97a599-37da-4857-8f7c-215640fe3e7f req-a947fd7a-f790-46ee-94e2-314998757f85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.354 2 DEBUG nova.compute.manager [req-4d97a599-37da-4857-8f7c-215640fe3e7f req-a947fd7a-f790-46ee-94e2-314998757f85 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Processing event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:10:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 146 op/s
Oct 02 13:10:44 compute-0 podman[388134]: 2025-10-02 13:10:44.609558382 +0000 UTC m=+0.027354992 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:10:44 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:44.758 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.946 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410644.9462001, 79964143-e208-4552-8380-513c3adf09ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.947 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Started (Lifecycle Event)
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.949 2 DEBUG nova.compute.manager [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.953 2 DEBUG nova.virt.libvirt.driver [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.956 2 INFO nova.virt.libvirt.driver [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance spawned successfully.
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.957 2 DEBUG nova.compute.manager [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.982 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:44 compute-0 nova_compute[256940]: 2025-10-02 13:10:44.985 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.018 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.018 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410644.947076, 79964143-e208-4552-8380-513c3adf09ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.019 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Paused (Lifecycle Event)
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.023 2 DEBUG oslo_concurrency.lockutils [None req-4cf76f8c-c905-4bd7-b222-fd4de4251a84 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 7.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.041 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.044 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410644.9521263, 79964143-e208-4552-8380-513c3adf09ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.045 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Resumed (Lifecycle Event)
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.062 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:45 compute-0 nova_compute[256940]: 2025-10-02 13:10:45.064 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:10:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:45.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:45 compute-0 ceph-mon[73668]: pgmap v2948: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 146 op/s
Oct 02 13:10:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:45.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:45 compute-0 podman[388134]: 2025-10-02 13:10:45.804266646 +0000 UTC m=+1.222063246 container create 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:10:45 compute-0 systemd[1]: Started libpod-conmon-9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996.scope.
Oct 02 13:10:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d9545cb820050a9d2f138c761a8584334f6d3c5faea5c868ca2a2fa111cbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:46 compute-0 podman[388134]: 2025-10-02 13:10:46.228729289 +0000 UTC m=+1.646525919 container init 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:46 compute-0 podman[388134]: 2025-10-02 13:10:46.236984384 +0000 UTC m=+1.654781014 container start 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:10:46 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [NOTICE]   (388155) : New worker (388157) forked
Oct 02 13:10:46 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [NOTICE]   (388155) : Loading success.
Oct 02 13:10:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:46.338 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:10:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 395 KiB/s wr, 139 op/s
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.603 2 DEBUG nova.compute.manager [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.604 2 DEBUG oslo_concurrency.lockutils [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.604 2 DEBUG oslo_concurrency.lockutils [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.604 2 DEBUG oslo_concurrency.lockutils [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.605 2 DEBUG nova.compute.manager [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:46 compute-0 nova_compute[256940]: 2025-10-02 13:10:46.605 2 WARNING nova.compute.manager [req-74be129c-065c-4e40-840c-2b2ee89e895c req-93a7332f-fb0f-459d-a34d-40e19d1f4c19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received unexpected event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with vm_state active and task_state None.
Oct 02 13:10:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1341714694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.575 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.575 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.575 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.575 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.576 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.576 2 INFO nova.compute.manager [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Terminating instance
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.577 2 DEBUG nova.compute.manager [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:10:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:47.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:47 compute-0 kernel: tap8ac4874f-4e (unregistering): left promiscuous mode
Oct 02 13:10:47 compute-0 NetworkManager[44981]: <info>  [1759410647.6839] device (tap8ac4874f-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:10:47 compute-0 ovn_controller[148123]: 2025-10-02T13:10:47Z|01000|binding|INFO|Releasing lport 8ac4874f-4e37-469f-8df1-1f424336b299 from this chassis (sb_readonly=0)
Oct 02 13:10:47 compute-0 ovn_controller[148123]: 2025-10-02T13:10:47Z|01001|binding|INFO|Setting lport 8ac4874f-4e37-469f-8df1-1f424336b299 down in Southbound
Oct 02 13:10:47 compute-0 ovn_controller[148123]: 2025-10-02T13:10:47Z|01002|binding|INFO|Removing iface tap8ac4874f-4e ovn-installed in OVS
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:47.708 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:95:7b 10.100.0.12'], port_security=['fa:16:3e:7e:95:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89e1718a-24fd-472a-bbae-aedba874e3a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e682aa5-16aa-4884-9ae4-6ca813b9baae 70184290-30f9-436f-8b75-f26eb92f4dbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=8ac4874f-4e37-469f-8df1-1f424336b299) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:47.710 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 8ac4874f-4e37-469f-8df1-1f424336b299 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 unbound from our chassis
Oct 02 13:10:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:47.712 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54a08602-f5b6-41e1-816c-2c122542a2b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:10:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:47.713 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2a84b737-0294-4364-9be8-a9e5b4089a7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:47.713 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace which is not needed anymore
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Oct 02 13:10:47 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000c6.scope: Consumed 17.070s CPU time.
Oct 02 13:10:47 compute-0 systemd-machined[210927]: Machine qemu-101-instance-000000c6 terminated.
Oct 02 13:10:47 compute-0 podman[388167]: 2025-10-02 13:10:47.802581957 +0000 UTC m=+0.093663285 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.850 2 INFO nova.virt.libvirt.driver [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Instance destroyed successfully.
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.851 2 DEBUG nova.objects.instance [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 89e1718a-24fd-472a-bbae-aedba874e3a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.869 2 DEBUG nova.virt.libvirt.vif [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1851097535',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=198,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-h0ucgi2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:41Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=89e1718a-24fd-472a-bbae-aedba874e3a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.870 2 DEBUG nova.network.os_vif_util [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.871 2 DEBUG nova.network.os_vif_util [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.871 2 DEBUG os_vif [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.873 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ac4874f-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:47 compute-0 nova_compute[256940]: 2025-10-02 13:10:47.878 2 INFO os_vif [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:95:7b,bridge_name='br-int',has_traffic_filtering=True,id=8ac4874f-4e37-469f-8df1-1f424336b299,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac4874f-4e')
Oct 02 13:10:47 compute-0 ceph-mon[73668]: pgmap v2949: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 395 KiB/s wr, 139 op/s
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [NOTICE]   (386705) : haproxy version is 2.8.14-c23fe91
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [NOTICE]   (386705) : path to executable is /usr/sbin/haproxy
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [WARNING]  (386705) : Exiting Master process...
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [WARNING]  (386705) : Exiting Master process...
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [ALERT]    (386705) : Current worker (386714) exited with code 143 (Terminated)
Oct 02 13:10:47 compute-0 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[386690]: [WARNING]  (386705) : All workers exited. Exiting... (0)
Oct 02 13:10:47 compute-0 systemd[1]: libpod-acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8.scope: Deactivated successfully.
Oct 02 13:10:47 compute-0 podman[388231]: 2025-10-02 13:10:47.993460458 +0000 UTC m=+0.180679637 container died acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 140 KiB/s wr, 83 op/s
Oct 02 13:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8-userdata-shm.mount: Deactivated successfully.
Oct 02 13:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a781b0ec9f339006cf4a0b78d2c056c1737fe57197d50e06cc80d7b1a4860d32-merged.mount: Deactivated successfully.
Oct 02 13:10:48 compute-0 podman[388170]: 2025-10-02 13:10:48.596182095 +0000 UTC m=+0.882840378 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.675 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.675 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing instance network info cache due to event network-changed-8ac4874f-4e37-469f-8df1-1f424336b299. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.676 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.676 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.676 2 DEBUG nova.network.neutron [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Refreshing network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.772 2 DEBUG nova.compute.manager [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.863 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.864 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.896 2 DEBUG nova.objects.instance [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_requests' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.922 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.922 2 INFO nova.compute.claims [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.923 2 DEBUG nova.objects.instance [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:48 compute-0 nova_compute[256940]: 2025-10-02 13:10:48.978 2 DEBUG nova.objects.instance [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.033 2 INFO nova.compute.resource_tracker [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating resource usage from migration 3176e491-7bce-457b-bb5e-0b1a709da887
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.034 2 DEBUG nova.compute.resource_tracker [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting to track incoming migration 3176e491-7bce-457b-bb5e-0b1a709da887 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 13:10:49 compute-0 podman[388231]: 2025-10-02 13:10:49.052057694 +0000 UTC m=+1.239276873 container cleanup acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:10:49 compute-0 systemd[1]: libpod-conmon-acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8.scope: Deactivated successfully.
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.106 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:49 compute-0 podman[388287]: 2025-10-02 13:10:49.118363768 +0000 UTC m=+0.041824508 container remove acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.125 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[923e4735-4fb4-47ec-8f29-6957e92fe279]: (4, ('Thu Oct  2 01:10:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8)\nacec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8\nThu Oct  2 01:10:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (acec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8)\nacec945dab45bf7f29f8ac79ed4d3fcd070d098e69e4bec2901870119b7650a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.127 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa20fb4-5ba1-41b4-8247-f224496d5b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.128 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:49 compute-0 kernel: tap54a08602-f0: left promiscuous mode
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.137 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e950810c-b995-4b0b-bd38-1493e673e71d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000078s ======
Oct 02 13:10:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:49.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.166 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[26d594f6-1009-434f-a047-4230e8af4ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.167 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[06244df1-417c-4355-a913-3e420b4ddaa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.181 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7b10d9ac-e233-4d26-9464-ea8d49c9d5bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847694, 'reachable_time': 33670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388304, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d54a08602\x2df5b6\x2d41e1\x2d816c\x2d2c122542a2b7.mount: Deactivated successfully.
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.186 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:10:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:49.186 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[034d45e0-5a47-415b-9779-74088e2109ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.316 2 INFO nova.virt.libvirt.driver [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Deleting instance files /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0_del
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.318 2 INFO nova.virt.libvirt.driver [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Deletion of /var/lib/nova/instances/89e1718a-24fd-472a-bbae-aedba874e3a0_del complete
Oct 02 13:10:49 compute-0 sudo[388324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:49 compute-0 sudo[388324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-0 sudo[388324]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.389 2 INFO nova.compute.manager [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Took 1.81 seconds to destroy the instance on the hypervisor.
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.391 2 DEBUG oslo.service.loopingcall [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.391 2 DEBUG nova.compute.manager [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.391 2 DEBUG nova.network.neutron [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:10:49 compute-0 sudo[388349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:49 compute-0 sudo[388349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-0 sudo[388349]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-0 sudo[388374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:49 compute-0 sudo[388374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-0 sudo[388374]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-0 sudo[388399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:10:49 compute-0 sudo[388399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847886454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.605 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.615 2 DEBUG nova.compute.provider_tree [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.644 2 DEBUG nova.scheduler.client.report [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:49.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.669 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.670 2 INFO nova.compute.manager [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Migrating
Oct 02 13:10:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:10:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:10:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.899 2 DEBUG nova.network.neutron [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updated VIF entry in instance network info cache for port 8ac4874f-4e37-469f-8df1-1f424336b299. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.901 2 DEBUG nova.network.neutron [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [{"id": "8ac4874f-4e37-469f-8df1-1f424336b299", "address": "fa:16:3e:7e:95:7b", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac4874f-4e", "ovs_interfaceid": "8ac4874f-4e37-469f-8df1-1f424336b299", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.937 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-89e1718a-24fd-472a-bbae-aedba874e3a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.939 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-unplugged-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.941 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.942 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.942 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.943 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] No waiting events found dispatching network-vif-unplugged-8ac4874f-4e37-469f-8df1-1f424336b299 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.943 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-unplugged-8ac4874f-4e37-469f-8df1-1f424336b299 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.943 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.944 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.944 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.945 2 DEBUG oslo_concurrency.lockutils [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.946 2 DEBUG nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] No waiting events found dispatching network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:49 compute-0 nova_compute[256940]: 2025-10-02 13:10:49.948 2 WARNING nova.compute.manager [req-61e5d119-e301-435e-9ea8-0e589efcbd9a req-955ea21e-5951-4add-af14-2887cd18c6c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received unexpected event network-vif-plugged-8ac4874f-4e37-469f-8df1-1f424336b299 for instance with vm_state active and task_state deleting.
Oct 02 13:10:49 compute-0 sudo[388399]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: pgmap v2950: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 140 KiB/s wr, 83 op/s
Oct 02 13:10:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2847886454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.258 2 DEBUG nova.network.neutron [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.273 2 INFO nova.compute.manager [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Took 0.88 seconds to deallocate network for instance.
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.317 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.317 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.339 2 DEBUG nova.compute.manager [req-fd904459-3e4a-44e2-8342-51bf4ab2f2c6 req-dae736ca-6b87-496b-b0ee-42724ac9da8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Received event network-vif-deleted-8ac4874f-4e37-469f-8df1-1f424336b299 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 643 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 57 KiB/s wr, 124 op/s
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.403 2 DEBUG oslo_concurrency.processutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 34a02d26-a3d7-4c91-b203-e6d671e7849b does not exist
Oct 02 13:10:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0a25d03c-b298-4d77-84d3-ae22074e4403 does not exist
Oct 02 13:10:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 771a057d-f269-4468-95cb-9bcd3b6b18ad does not exist
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:50 compute-0 sudo[388477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:50 compute-0 sudo[388477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:50 compute-0 sudo[388477]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:50 compute-0 sudo[388502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:50 compute-0 sudo[388502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:50 compute-0 sudo[388502]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324039356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:50 compute-0 sudo[388528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:50 compute-0 sudo[388528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:50 compute-0 sudo[388528]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.846 2 DEBUG oslo_concurrency.processutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.853 2 DEBUG nova.compute.provider_tree [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.876 2 DEBUG nova.scheduler.client.report [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:50 compute-0 sudo[388555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:10:50 compute-0 sudo[388555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.901 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:50 compute-0 nova_compute[256940]: 2025-10-02 13:10:50.936 2 INFO nova.scheduler.client.report [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 89e1718a-24fd-472a-bbae-aedba874e3a0
Oct 02 13:10:51 compute-0 nova_compute[256940]: 2025-10-02 13:10:51.025 2 DEBUG oslo_concurrency.lockutils [None req-9894de78-c570-49f2-ac6c-08198d51b34c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "89e1718a-24fd-472a-bbae-aedba874e3a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1324039356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:51 compute-0 sshd-session[388580]: Accepted publickey for nova from 192.168.122.101 port 40002 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 13:10:51 compute-0 systemd-logind[820]: New session 72 of user nova.
Oct 02 13:10:51 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 13:10:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 13:10:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:51 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 13:10:51 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 02 13:10:51 compute-0 systemd[388610]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.250619581 +0000 UTC m=+0.036090699 container create 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:10:51 compute-0 systemd[388610]: Queued start job for default target Main User Target.
Oct 02 13:10:51 compute-0 systemd[1]: Started libpod-conmon-93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6.scope.
Oct 02 13:10:51 compute-0 systemd[388610]: Created slice User Application Slice.
Oct 02 13:10:51 compute-0 systemd[388610]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 13:10:51 compute-0 systemd[388610]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 13:10:51 compute-0 systemd[388610]: Reached target Paths.
Oct 02 13:10:51 compute-0 systemd[388610]: Reached target Timers.
Oct 02 13:10:51 compute-0 systemd[388610]: Starting D-Bus User Message Bus Socket...
Oct 02 13:10:51 compute-0 systemd[388610]: Starting Create User's Volatile Files and Directories...
Oct 02 13:10:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:51 compute-0 systemd[388610]: Finished Create User's Volatile Files and Directories.
Oct 02 13:10:51 compute-0 systemd[388610]: Listening on D-Bus User Message Bus Socket.
Oct 02 13:10:51 compute-0 systemd[388610]: Reached target Sockets.
Oct 02 13:10:51 compute-0 systemd[388610]: Reached target Basic System.
Oct 02 13:10:51 compute-0 systemd[388610]: Reached target Main User Target.
Oct 02 13:10:51 compute-0 systemd[388610]: Startup finished in 144ms.
Oct 02 13:10:51 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 02 13:10:51 compute-0 systemd[1]: Started Session 72 of User nova.
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.323576487 +0000 UTC m=+0.109047615 container init 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:10:51 compute-0 sshd-session[388580]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.233230369 +0000 UTC m=+0.018701517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.330494997 +0000 UTC m=+0.115966115 container start 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:10:51 compute-0 angry_shamir[388649]: 167 167
Oct 02 13:10:51 compute-0 systemd[1]: libpod-93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6.scope: Deactivated successfully.
Oct 02 13:10:51 compute-0 conmon[388649]: conmon 93103e2b114cc10c2427 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6.scope/container/memory.events
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.336156534 +0000 UTC m=+0.121627682 container attach 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.336490243 +0000 UTC m=+0.121961361 container died 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aaba1a5ea97060509ea322fd84411a74b37e47863363cef4bda455f675ad221-merged.mount: Deactivated successfully.
Oct 02 13:10:51 compute-0 podman[388627]: 2025-10-02 13:10:51.37720181 +0000 UTC m=+0.162672928 container remove 93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:10:51 compute-0 sshd-session[388655]: Received disconnect from 192.168.122.101 port 40002:11: disconnected by user
Oct 02 13:10:51 compute-0 sshd-session[388655]: Disconnected from user nova 192.168.122.101 port 40002
Oct 02 13:10:51 compute-0 sshd-session[388580]: pam_unix(sshd:session): session closed for user nova
Oct 02 13:10:51 compute-0 systemd-logind[820]: Session 72 logged out. Waiting for processes to exit.
Oct 02 13:10:51 compute-0 systemd[1]: session-72.scope: Deactivated successfully.
Oct 02 13:10:51 compute-0 systemd-logind[820]: Removed session 72.
Oct 02 13:10:51 compute-0 systemd[1]: libpod-conmon-93103e2b114cc10c2427a1cce950234d1c8580d8e2fee53a2b3ad1636d4a7bd6.scope: Deactivated successfully.
Oct 02 13:10:51 compute-0 sshd-session[388671]: Accepted publickey for nova from 192.168.122.101 port 40004 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 13:10:51 compute-0 podman[388678]: 2025-10-02 13:10:51.545667659 +0000 UTC m=+0.042310751 container create dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:51 compute-0 systemd-logind[820]: New session 74 of user nova.
Oct 02 13:10:51 compute-0 systemd[1]: Started Session 74 of User nova.
Oct 02 13:10:51 compute-0 sshd-session[388671]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 13:10:51 compute-0 systemd[1]: Started libpod-conmon-dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d.scope.
Oct 02 13:10:51 compute-0 podman[388678]: 2025-10-02 13:10:51.5241522 +0000 UTC m=+0.020795302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:51 compute-0 sshd-session[388693]: Received disconnect from 192.168.122.101 port 40004:11: disconnected by user
Oct 02 13:10:51 compute-0 sshd-session[388693]: Disconnected from user nova 192.168.122.101 port 40004
Oct 02 13:10:51 compute-0 sshd-session[388671]: pam_unix(sshd:session): session closed for user nova
Oct 02 13:10:51 compute-0 podman[388678]: 2025-10-02 13:10:51.643586144 +0000 UTC m=+0.140229246 container init dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:51 compute-0 systemd-logind[820]: Session 74 logged out. Waiting for processes to exit.
Oct 02 13:10:51 compute-0 systemd[1]: session-74.scope: Deactivated successfully.
Oct 02 13:10:51 compute-0 systemd-logind[820]: Removed session 74.
Oct 02 13:10:51 compute-0 podman[388678]: 2025-10-02 13:10:51.657766833 +0000 UTC m=+0.154409915 container start dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:10:51 compute-0 podman[388678]: 2025-10-02 13:10:51.661036878 +0000 UTC m=+0.157679960 container attach dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:10:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:52 compute-0 ceph-mon[73668]: pgmap v2951: 305 pgs: 305 active+clean; 643 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 57 KiB/s wr, 124 op/s
Oct 02 13:10:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:10:52.340 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 152 op/s
Oct 02 13:10:52 compute-0 competent_kepler[388696]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:10:52 compute-0 competent_kepler[388696]: --> relative data size: 1.0
Oct 02 13:10:52 compute-0 competent_kepler[388696]: --> All data devices are unavailable
Oct 02 13:10:52 compute-0 systemd[1]: libpod-dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d.scope: Deactivated successfully.
Oct 02 13:10:52 compute-0 podman[388678]: 2025-10-02 13:10:52.477636694 +0000 UTC m=+0.974279816 container died dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fff445111f3a52278c3ac4f8fc84687236b4f144f6955dbb95916a94e1b883f-merged.mount: Deactivated successfully.
Oct 02 13:10:52 compute-0 podman[388678]: 2025-10-02 13:10:52.537345316 +0000 UTC m=+1.033988398 container remove dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:10:52 compute-0 systemd[1]: libpod-conmon-dfc9ed2dc985d43961813241f016d262d07c3ce562cfea42b6b8e5284366408d.scope: Deactivated successfully.
Oct 02 13:10:52 compute-0 sudo[388555]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:52 compute-0 sudo[388723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:52 compute-0 sudo[388723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:52 compute-0 sudo[388723]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:52 compute-0 sudo[388748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:52 compute-0 sudo[388748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:52 compute-0 sudo[388748]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:52 compute-0 sudo[388774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:52 compute-0 sudo[388774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:52 compute-0 sudo[388774]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:52 compute-0 sudo[388799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:10:52 compute-0 sudo[388799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:52 compute-0 nova_compute[256940]: 2025-10-02 13:10:52.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:53 compute-0 nova_compute[256940]: 2025-10-02 13:10:53.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:53 compute-0 ceph-mon[73668]: pgmap v2952: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 152 op/s
Oct 02 13:10:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:53.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.194863396 +0000 UTC m=+0.046023877 container create ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:10:53 compute-0 systemd[1]: Started libpod-conmon-ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51.scope.
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.172366741 +0000 UTC m=+0.023527192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.295972414 +0000 UTC m=+0.147132915 container init ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.303697825 +0000 UTC m=+0.154858296 container start ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:10:53 compute-0 optimistic_lalande[388879]: 167 167
Oct 02 13:10:53 compute-0 systemd[1]: libpod-ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51.scope: Deactivated successfully.
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.310911333 +0000 UTC m=+0.162071794 container attach ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.311883718 +0000 UTC m=+0.163044169 container died ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba9bca6e86bd3886b799f11c639c55d5387677d0541f3c849fa99918475c988-merged.mount: Deactivated successfully.
Oct 02 13:10:53 compute-0 podman[388863]: 2025-10-02 13:10:53.350260105 +0000 UTC m=+0.201420556 container remove ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:10:53 compute-0 systemd[1]: libpod-conmon-ac16c414413865a7c2334f2551877ab1df29e993269226e89bccd58968054f51.scope: Deactivated successfully.
Oct 02 13:10:53 compute-0 podman[388902]: 2025-10-02 13:10:53.51240219 +0000 UTC m=+0.039675712 container create 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:53 compute-0 systemd[1]: Started libpod-conmon-69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232.scope.
Oct 02 13:10:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3229ba8a9108f73a67f919a23ffdc8c5fd169b8756dcf8d939836392e5f7cba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:53 compute-0 podman[388902]: 2025-10-02 13:10:53.496349273 +0000 UTC m=+0.023622805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3229ba8a9108f73a67f919a23ffdc8c5fd169b8756dcf8d939836392e5f7cba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3229ba8a9108f73a67f919a23ffdc8c5fd169b8756dcf8d939836392e5f7cba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3229ba8a9108f73a67f919a23ffdc8c5fd169b8756dcf8d939836392e5f7cba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:53 compute-0 podman[388902]: 2025-10-02 13:10:53.608637311 +0000 UTC m=+0.135910853 container init 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:10:53 compute-0 sudo[388919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:53 compute-0 podman[388902]: 2025-10-02 13:10:53.616983458 +0000 UTC m=+0.144256980 container start 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:10:53 compute-0 sudo[388919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:53 compute-0 sudo[388919]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:53 compute-0 podman[388902]: 2025-10-02 13:10:53.622132262 +0000 UTC m=+0.149405784 container attach 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:10:53 compute-0 sudo[388949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:53 compute-0 sudo[388949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:53 compute-0 sudo[388949]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 128 op/s
Oct 02 13:10:54 compute-0 loving_napier[388920]: {
Oct 02 13:10:54 compute-0 loving_napier[388920]:     "1": [
Oct 02 13:10:54 compute-0 loving_napier[388920]:         {
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "devices": [
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "/dev/loop3"
Oct 02 13:10:54 compute-0 loving_napier[388920]:             ],
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "lv_name": "ceph_lv0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "lv_size": "7511998464",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "name": "ceph_lv0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "tags": {
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.cluster_name": "ceph",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.crush_device_class": "",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.encrypted": "0",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.osd_id": "1",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.type": "block",
Oct 02 13:10:54 compute-0 loving_napier[388920]:                 "ceph.vdo": "0"
Oct 02 13:10:54 compute-0 loving_napier[388920]:             },
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "type": "block",
Oct 02 13:10:54 compute-0 loving_napier[388920]:             "vg_name": "ceph_vg0"
Oct 02 13:10:54 compute-0 loving_napier[388920]:         }
Oct 02 13:10:54 compute-0 loving_napier[388920]:     ]
Oct 02 13:10:54 compute-0 loving_napier[388920]: }
Oct 02 13:10:54 compute-0 systemd[1]: libpod-69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232.scope: Deactivated successfully.
Oct 02 13:10:54 compute-0 podman[388902]: 2025-10-02 13:10:54.420840873 +0000 UTC m=+0.948114405 container died 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.553 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.554 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.555 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.555 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.555 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:54 compute-0 nova_compute[256940]: 2025-10-02 13:10:54.555 2 WARNING nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrating.
Oct 02 13:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3229ba8a9108f73a67f919a23ffdc8c5fd169b8756dcf8d939836392e5f7cba2-merged.mount: Deactivated successfully.
Oct 02 13:10:55 compute-0 podman[388902]: 2025-10-02 13:10:55.010895939 +0000 UTC m=+1.538169461 container remove 69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:10:55 compute-0 sudo[388799]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:55 compute-0 systemd[1]: libpod-conmon-69f3ac94f7b4723683fa4dd6380306348176b44d293cda12e3102947118ca232.scope: Deactivated successfully.
Oct 02 13:10:55 compute-0 sudo[388990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:55 compute-0 sudo[388990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:55 compute-0 sudo[388990]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:55 compute-0 sudo[389015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:55 compute-0 sudo[389015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:55 compute-0 sudo[389015]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:55 compute-0 sudo[389040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:55 compute-0 sudo[389040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:55 compute-0 sudo[389040]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:55 compute-0 sudo[389065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:10:55 compute-0 sudo[389065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:55 compute-0 ceph-mon[73668]: pgmap v2953: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 128 op/s
Oct 02 13:10:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.649298063 +0000 UTC m=+0.041827649 container create 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:10:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:55 compute-0 systemd[1]: Started libpod-conmon-13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19.scope.
Oct 02 13:10:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.630184436 +0000 UTC m=+0.022714002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.735137514 +0000 UTC m=+0.127667120 container init 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.742658239 +0000 UTC m=+0.135187785 container start 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.746021677 +0000 UTC m=+0.138551243 container attach 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:10:55 compute-0 thirsty_yalow[389149]: 167 167
Oct 02 13:10:55 compute-0 systemd[1]: libpod-13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19.scope: Deactivated successfully.
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.748416409 +0000 UTC m=+0.140945955 container died 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f558290bf7677816add6548b9e58b2ca429c6bce7c2b0d058032533bd690583f-merged.mount: Deactivated successfully.
Oct 02 13:10:55 compute-0 podman[389132]: 2025-10-02 13:10:55.785734039 +0000 UTC m=+0.178263585 container remove 13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:10:55 compute-0 systemd[1]: libpod-conmon-13b965ba72ca5531dd742be0614dfadffd5115e89468fc77c7a7a7f9d579da19.scope: Deactivated successfully.
Oct 02 13:10:55 compute-0 podman[389172]: 2025-10-02 13:10:55.943636493 +0000 UTC m=+0.040300808 container create 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:10:55 compute-0 systemd[1]: Started libpod-conmon-22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c.scope.
Oct 02 13:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e634ddb473b46992df4a9a07e3edfe596f9e9ab624fedd0aed436080ca2e035c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e634ddb473b46992df4a9a07e3edfe596f9e9ab624fedd0aed436080ca2e035c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e634ddb473b46992df4a9a07e3edfe596f9e9ab624fedd0aed436080ca2e035c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e634ddb473b46992df4a9a07e3edfe596f9e9ab624fedd0aed436080ca2e035c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:56 compute-0 podman[389172]: 2025-10-02 13:10:55.925636816 +0000 UTC m=+0.022301151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:10:56 compute-0 podman[389172]: 2025-10-02 13:10:56.028590682 +0000 UTC m=+0.125255017 container init 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:10:56 compute-0 podman[389172]: 2025-10-02 13:10:56.040238414 +0000 UTC m=+0.136902729 container start 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:10:56 compute-0 podman[389172]: 2025-10-02 13:10:56.045698476 +0000 UTC m=+0.142362821 container attach 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.303 2 INFO nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating port 513c3d66-613d-4626-8ab0-58520113de32 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:10:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 48 KiB/s wr, 130 op/s
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.700 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.701 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.701 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.702 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.702 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:56 compute-0 nova_compute[256940]: 2025-10-02 13:10:56.702 2 WARNING nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:10:56 compute-0 loving_montalcini[389188]: {
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:         "osd_id": 1,
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:         "type": "bluestore"
Oct 02 13:10:56 compute-0 loving_montalcini[389188]:     }
Oct 02 13:10:56 compute-0 loving_montalcini[389188]: }
Oct 02 13:10:56 compute-0 systemd[1]: libpod-22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c.scope: Deactivated successfully.
Oct 02 13:10:56 compute-0 podman[389172]: 2025-10-02 13:10:56.910075674 +0000 UTC m=+1.006740009 container died 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e634ddb473b46992df4a9a07e3edfe596f9e9ab624fedd0aed436080ca2e035c-merged.mount: Deactivated successfully.
Oct 02 13:10:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:57.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:57 compute-0 podman[389172]: 2025-10-02 13:10:57.36158712 +0000 UTC m=+1.458251425 container remove 22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_montalcini, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:10:57 compute-0 systemd[1]: libpod-conmon-22e212acedaef05aad85fda8930c1755eb2119421eb9376249394631b14f681c.scope: Deactivated successfully.
Oct 02 13:10:57 compute-0 sudo[389065]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:10:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:10:57 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 946ff474-0e33-4c68-80d3-3c13de7bf451 does not exist
Oct 02 13:10:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 05964738-93b2-401d-8d65-35e351a7a75d does not exist
Oct 02 13:10:57 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 26b8c2d2-8c2d-41fb-8ee0-927758628baf does not exist
Oct 02 13:10:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:57.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:57 compute-0 sudo[389225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:57 compute-0 sudo[389225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:57 compute-0 sudo[389225]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:57 compute-0 sudo[389250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:10:57 compute-0 sudo[389250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:57 compute-0 sudo[389250]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:57 compute-0 nova_compute[256940]: 2025-10-02 13:10:57.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:57 compute-0 ceph-mon[73668]: pgmap v2954: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 48 KiB/s wr, 130 op/s
Oct 02 13:10:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:58 compute-0 nova_compute[256940]: 2025-10-02 13:10:58.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 101 op/s
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:10:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:10:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:59.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.274 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.274 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.274 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.388 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.388 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.388 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:59 compute-0 ovn_controller[148123]: 2025-10-02T13:10:59Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:17:41 10.100.0.10
Oct 02 13:10:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:10:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:10:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:59.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:10:59 compute-0 ovn_controller[148123]: 2025-10-02T13:10:59Z|01003|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:10:59 compute-0 nova_compute[256940]: 2025-10-02 13:10:59.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:00 compute-0 ceph-mon[73668]: pgmap v2955: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 101 op/s
Oct 02 13:11:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 110 op/s
Oct 02 13:11:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:01.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:01 compute-0 ceph-mon[73668]: pgmap v2956: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 110 op/s
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.617 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:01.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.704 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.709 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:11:01 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 13:11:01 compute-0 systemd[388610]: Activating special unit Exit the Session...
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped target Main User Target.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped target Basic System.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped target Paths.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped target Sockets.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped target Timers.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 13:11:01 compute-0 systemd[388610]: Closed D-Bus User Message Bus Socket.
Oct 02 13:11:01 compute-0 systemd[388610]: Stopped Create User's Volatile Files and Directories.
Oct 02 13:11:01 compute-0 systemd[388610]: Removed slice User Application Slice.
Oct 02 13:11:01 compute-0 systemd[388610]: Reached target Shutdown.
Oct 02 13:11:01 compute-0 systemd[388610]: Finished Exit the Session.
Oct 02 13:11:01 compute-0 systemd[388610]: Reached target Exit the Session.
Oct 02 13:11:01 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 13:11:01 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 13:11:01 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 13:11:01 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 13:11:01 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 13:11:01 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 13:11:01 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.945 2 DEBUG os_brick.utils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.946 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.965 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.965 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[346fe909-b3e1-4e4b-b1fd-295e922bfd9a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.966 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.978 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.978 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[d3056917-4c52-4395-95d9-ed7d4c6b32d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.980 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.988 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.988 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[ff948382-3b4d-4695-a7fa-4ed8576f1bcb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.989 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[c01e4310-6894-470c-afd0-c4b5a506b3c4]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:01 compute-0 nova_compute[256940]: 2025-10-02 13:11:01.990 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.024 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.026 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.026 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.026 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.026 2 DEBUG os_brick.utils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:11:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 41 KiB/s wr, 94 op/s
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.849 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410647.8477216, 89e1718a-24fd-472a-bbae-aedba874e3a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.850 2 INFO nova.compute.manager [-] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] VM Stopped (Lifecycle Event)
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:02 compute-0 nova_compute[256940]: 2025-10-02 13:11:02.886 2 DEBUG nova.compute.manager [None req-3c3f7bfa-10f2-4a53-8be6-e6e0cf8707f1 - - - - - -] [instance: 89e1718a-24fd-472a-bbae-aedba874e3a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:03.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.359 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.361 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.362 2 INFO nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating image(s)
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.415 2 DEBUG nova.storage.rbd_utils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] creating snapshot(nova-resize) on rbd image(de995ad8-07bb-4097-899b-5c79d62a1f4c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:11:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:03.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.725 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.726 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:03 compute-0 nova_compute[256940]: 2025-10-02 13:11:03.752 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Oct 02 13:11:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Oct 02 13:11:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 643 KiB/s rd, 33 KiB/s wr, 56 op/s
Oct 02 13:11:04 compute-0 podman[389322]: 2025-10-02 13:11:04.434617465 +0000 UTC m=+0.099138307 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:11:04 compute-0 podman[389323]: 2025-10-02 13:11:04.471875744 +0000 UTC m=+0.131248613 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:11:04 compute-0 ceph-mon[73668]: pgmap v2957: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 41 KiB/s wr, 94 op/s
Oct 02 13:11:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1043214721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:04 compute-0 nova_compute[256940]: 2025-10-02 13:11:04.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:04 compute-0 nova_compute[256940]: 2025-10-02 13:11:04.664 2 DEBUG nova.objects.instance [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'trusted_certs' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:05.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.239 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.239 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Ensure instance console log exists: /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.240 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.240 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.240 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.243 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start _get_guest_xml network_info=[{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8347daf9-f32f-4c50-b89e-df9e913044db', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'attached_at': '2025-10-02T13:11:02.000000', 'detached_at': '', 'volume_id': '8347daf9-f32f-4c50-b89e-df9e913044db', 'multiattach': True, 'serial': '8347daf9-f32f-4c50-b89e-df9e913044db'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '2f0a1f52-338f-461a-ba0a-e3bd32637ff8', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.249 2 WARNING nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.256 2 DEBUG nova.virt.libvirt.host [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.257 2 DEBUG nova.virt.libvirt.host [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.260 2 DEBUG nova.virt.libvirt.host [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.260 2 DEBUG nova.virt.libvirt.host [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.262 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.262 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.263 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.263 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.264 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.264 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.264 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.265 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.265 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.265 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.266 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.266 2 DEBUG nova.virt.hardware [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.266 2 DEBUG nova.objects.instance [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.284 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300418728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:05.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.706 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:05 compute-0 nova_compute[256940]: 2025-10-02 13:11:05.759 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:05 compute-0 ceph-mon[73668]: osdmap e369: 3 total, 3 up, 3 in
Oct 02 13:11:05 compute-0 ceph-mon[73668]: pgmap v2959: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 643 KiB/s rd, 33 KiB/s wr, 56 op/s
Oct 02 13:11:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 37 KiB/s wr, 62 op/s
Oct 02 13:11:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:06 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767376829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.445 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.614 2 DEBUG nova.virt.libvirt.vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.614 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.616 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.619 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <uuid>de995ad8-07bb-4097-899b-5c79d62a1f4c</uuid>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <name>instance-000000c8</name>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <memory>196608</memory>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:name>multiattach-server-1</nova:name>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:11:05</nova:creationTime>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:flavor name="m1.micro">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:memory>192</nova:memory>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <nova:port uuid="513c3d66-613d-4626-8ab0-58520113de32">
Oct 02 13:11:06 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <system>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="serial">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="uuid">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </system>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <os>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </os>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <features>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </features>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </source>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </source>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:11:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </source>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:11:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <shareable/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9a:bc:4e"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <target dev="tap513c3d66-61"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log" append="off"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <video>
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </video>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:11:06 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:11:06 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:11:06 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:11:06 compute-0 nova_compute[256940]: </domain>
Oct 02 13:11:06 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.620 2 DEBUG nova.virt.libvirt.vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',imag
e_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.620 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.621 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.621 2 DEBUG os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513c3d66-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap513c3d66-61, col_values=(('external_ids', {'iface-id': '513c3d66-613d-4626-8ab0-58520113de32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:bc:4e', 'vm-uuid': 'de995ad8-07bb-4097-899b-5c79d62a1f4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:06 compute-0 NetworkManager[44981]: <info>  [1759410666.6316] manager: (tap513c3d66-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.638 2 INFO os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.822 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.822 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.823 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.823 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9a:bc:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:11:06 compute-0 nova_compute[256940]: 2025-10-02 13:11:06.823 2 INFO nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Using config drive
Oct 02 13:11:07 compute-0 kernel: tap513c3d66-61: entered promiscuous mode
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.1249] manager: (tap513c3d66-61): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Oct 02 13:11:07 compute-0 ovn_controller[148123]: 2025-10-02T13:11:07Z|01004|binding|INFO|Claiming lport 513c3d66-613d-4626-8ab0-58520113de32 for this chassis.
Oct 02 13:11:07 compute-0 ovn_controller[148123]: 2025-10-02T13:11:07Z|01005|binding|INFO|513c3d66-613d-4626-8ab0-58520113de32: Claiming fa:16:3e:9a:bc:4e 10.100.0.4
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.138 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.139 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.141 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:11:07 compute-0 ovn_controller[148123]: 2025-10-02T13:11:07Z|01006|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 ovn-installed in OVS
Oct 02 13:11:07 compute-0 ovn_controller[148123]: 2025-10-02T13:11:07Z|01007|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 up in Southbound
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.154 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1256a433-1068-4254-8070-095205611819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.155 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9001b9c-b1 in ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.156 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9001b9c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.156 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[cf850177-6e1a-4831-bd99-d13e2e587638]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.157 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e0aa49-3a5c-486f-9bbe-0054a7788e57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:07 compute-0 systemd-udevd[389502]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.168 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[831d5c40-20c1-44ba-b579-36cb750ff28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 systemd-machined[210927]: New machine qemu-103-instance-000000c8.
Oct 02 13:11:07 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-000000c8.
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.1820] device (tap513c3d66-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.1831] device (tap513c3d66-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.190 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[baa19dd2-18e3-43c9-a5a6-624e724176c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1300418728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/767376829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.221 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d52f63d8-25e7-4ae2-91e4-3f3ee6165361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 systemd-udevd[389507]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.2270] manager: (tapd9001b9c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.227 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9a558fa8-de92-43e8-8a6d-be7af6f4264e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.262 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e3118b-ba01-4782-ae89-7a702f031431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.265 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f79f5033-a06d-41b1-a7bf-344c9078edac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.2898] device (tapd9001b9c-b0): carrier: link connected
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.295 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a2942ff9-7a64-48a8-9adf-e7201499546b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.312 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c07048ea-f6b1-4199-b394-7c30d2dc03b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856277, 'reachable_time': 15140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389535, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.330 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[661fd27a-dc0c-446d-9254-2dac12645207]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856277, 'tstamp': 856277}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389536, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.347 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[83d34bcd-7dcf-4ef0-ad0e-6bf51d19593d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856277, 'reachable_time': 15140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389537, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.376 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f6b7e9-6e3e-4eec-b96c-77069f9c0309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.454 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c433055d-817c-4f87-b0b3-7e5d7afd63f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.457 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.457 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.457 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:07 compute-0 kernel: tapd9001b9c-b0: entered promiscuous mode
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 NetworkManager[44981]: <info>  [1759410667.4627] manager: (tapd9001b9c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.466 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 ovn_controller[148123]: 2025-10-02T13:11:07Z|01008|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:11:07 compute-0 nova_compute[256940]: 2025-10-02 13:11:07.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.492 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.494 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1835fe47-c006-42bc-acac-03843a9b6bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.495 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:11:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:07.496 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'env', 'PROCESS_TAG=haproxy-d9001b9c-bca6-4085-a954-1414269e31bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9001b9c-bca6-4085-a954-1414269e31bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:11:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:07.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:07 compute-0 podman[389623]: 2025-10-02 13:11:07.831648693 +0000 UTC m=+0.027770203 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 35 KiB/s wr, 60 op/s
Oct 02 13:11:08 compute-0 ceph-mon[73668]: pgmap v2960: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 37 KiB/s wr, 62 op/s
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.477 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.477 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.477 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.477 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.477 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.478 2 WARNING nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_finish.
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.612 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410668.6118414, de995ad8-07bb-4097-899b-5c79d62a1f4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.613 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Resumed (Lifecycle Event)
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.614 2 DEBUG nova.compute.manager [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.618 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance running successfully.
Oct 02 13:11:08 compute-0 virtqemud[257589]: argument unsupported: QEMU guest agent is not configured
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.620 2 DEBUG nova.virt.libvirt.guest [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.620 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.637 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.641 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.676 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.676 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410668.6151688, de995ad8-07bb-4097-899b-5c79d62a1f4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.677 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Started (Lifecycle Event)
Oct 02 13:11:08 compute-0 podman[389623]: 2025-10-02 13:11:08.683432623 +0000 UTC m=+0.879554053 container create c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.708 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.711 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:11:08 compute-0 nova_compute[256940]: 2025-10-02 13:11:08.729 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 13:11:08 compute-0 systemd[1]: Started libpod-conmon-c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e.scope.
Oct 02 13:11:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059cbb43b1007ea210d8a040b16e1716dc0c3d474a27aead0df1e76bc8c145c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:09.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:09 compute-0 podman[389623]: 2025-10-02 13:11:09.248911761 +0000 UTC m=+1.445033211 container init c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:11:09 compute-0 podman[389623]: 2025-10-02 13:11:09.261768715 +0000 UTC m=+1.457890145 container start c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:11:09 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [NOTICE]   (389649) : New worker (389651) forked
Oct 02 13:11:09 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [NOTICE]   (389649) : Loading success.
Oct 02 13:11:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:09.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:10 compute-0 ceph-mon[73668]: pgmap v2961: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 35 KiB/s wr, 60 op/s
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 967 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct 02 13:11:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.622 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.623 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.623 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.623 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.623 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:10 compute-0 nova_compute[256940]: 2025-10-02 13:11:10.623 2 WARNING nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state resized and task_state None.
Oct 02 13:11:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:11 compute-0 nova_compute[256940]: 2025-10-02 13:11:11.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:11 compute-0 ceph-mon[73668]: pgmap v2962: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 967 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct 02 13:11:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 97 op/s
Oct 02 13:11:13 compute-0 nova_compute[256940]: 2025-10-02 13:11:13.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:11:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:13.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:11:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:13 compute-0 sudo[389663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:13 compute-0 sudo[389663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:13 compute-0 sudo[389663]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:13 compute-0 sudo[389688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:13 compute-0 sudo[389688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:13 compute-0 sudo[389688]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:14 compute-0 ceph-mon[73668]: pgmap v2963: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 97 op/s
Oct 02 13:11:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 02 13:11:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:15 compute-0 ceph-mon[73668]: pgmap v2964: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 02 13:11:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 02 13:11:16 compute-0 nova_compute[256940]: 2025-10-02 13:11:16.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:17.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:17.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Oct 02 13:11:18 compute-0 nova_compute[256940]: 2025-10-02 13:11:18.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:18 compute-0 ceph-mon[73668]: pgmap v2965: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 02 13:11:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 85 op/s
Oct 02 13:11:18 compute-0 podman[389715]: 2025-10-02 13:11:18.389977349 +0000 UTC m=+0.062286220 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:11:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Oct 02 13:11:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Oct 02 13:11:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:19 compute-0 podman[389736]: 2025-10-02 13:11:19.404806678 +0000 UTC m=+0.074443896 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:11:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:19.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 8.6 KiB/s wr, 81 op/s
Oct 02 13:11:20 compute-0 ceph-mon[73668]: pgmap v2966: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 85 op/s
Oct 02 13:11:20 compute-0 ceph-mon[73668]: osdmap e370: 3 total, 3 up, 3 in
Oct 02 13:11:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:21.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:21 compute-0 ovn_controller[148123]: 2025-10-02T13:11:21Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:bc:4e 10.100.0.4
Oct 02 13:11:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1781545665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:21 compute-0 ceph-mon[73668]: pgmap v2968: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 8.6 KiB/s wr, 81 op/s
Oct 02 13:11:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:21.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.804 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.805 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.805 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.806 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.806 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.807 2 INFO nova.compute.manager [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Terminating instance
Oct 02 13:11:21 compute-0 nova_compute[256940]: 2025-10-02 13:11:21.808 2 DEBUG nova.compute.manager [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.034 2 DEBUG nova.compute.manager [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.035 2 DEBUG nova.compute.manager [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing instance network info cache due to event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.035 2 DEBUG oslo_concurrency.lockutils [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.035 2 DEBUG oslo_concurrency.lockutils [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.035 2 DEBUG nova.network.neutron [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:11:22 compute-0 kernel: tapf2d2820c-dd (unregistering): left promiscuous mode
Oct 02 13:11:22 compute-0 NetworkManager[44981]: <info>  [1759410682.3448] device (tapf2d2820c-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:11:22 compute-0 ovn_controller[148123]: 2025-10-02T13:11:22Z|01009|binding|INFO|Releasing lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 from this chassis (sb_readonly=0)
Oct 02 13:11:22 compute-0 ovn_controller[148123]: 2025-10-02T13:11:22Z|01010|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 down in Southbound
Oct 02 13:11:22 compute-0 ovn_controller[148123]: 2025-10-02T13:11:22Z|01011|binding|INFO|Removing iface tapf2d2820c-dd ovn-installed in OVS
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Oct 02 13:11:22 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c9.scope: Consumed 14.838s CPU time.
Oct 02 13:11:22 compute-0 systemd-machined[210927]: Machine qemu-102-instance-000000c9 terminated.
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.447 2 INFO nova.virt.libvirt.driver [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance destroyed successfully.
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.447 2 DEBUG nova.objects.instance [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.521 2 DEBUG nova.virt.libvirt.vif [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVidC3ivoi0/BSVTNr+Q25H86pwCavGSEVSNuXxP+lg72xfVuGJwhfG7zkX1TfYRkx+B8B4fFFME+SiQB5nTIMxK1PVVykpxHJMksdNCrSHtdTo6A7G7KE0+4LlyrKslw==',key_name='tempest-TestShelveInstance-2040419602',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:44Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:22.520 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:17:41 10.100.0.10'], port_security=['fa:16:3e:47:17:41 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79964143-e208-4552-8380-513c3adf09ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '9', 'neutron:security_group_ids': '0ecabba0-db02-4a2b-8a99-c435db80e5c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f2d2820c-dd48-47a4-ab94-dd6136c8e314) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.521 2 DEBUG nova.network.os_vif_util [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.522 2 DEBUG nova.network.os_vif_util [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.522 2 DEBUG os_vif [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:22.522 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f2d2820c-dd48-47a4-ab94-dd6136c8e314 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2d2820c-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:22.524 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:22.525 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[73685aa1-444e-440b-9d4e-02a358e283c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:22.527 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore
Oct 02 13:11:22 compute-0 nova_compute[256940]: 2025-10-02 13:11:22.529 2 INFO os_vif [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd')
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.051 2 DEBUG nova.compute.manager [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.052 2 DEBUG oslo_concurrency.lockutils [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.052 2 DEBUG oslo_concurrency.lockutils [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.053 2 DEBUG oslo_concurrency.lockutils [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.053 2 DEBUG nova.compute.manager [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.053 2 DEBUG nova.compute.manager [req-149fe430-5eda-4c5a-acac-414156f1a3f6 req-85bd8fb3-e85b-4f3e-a899-34f2375c9202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:11:23 compute-0 nova_compute[256940]: 2025-10-02 13:11:23.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [NOTICE]   (388155) : haproxy version is 2.8.14-c23fe91
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [NOTICE]   (388155) : path to executable is /usr/sbin/haproxy
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [WARNING]  (388155) : Exiting Master process...
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [WARNING]  (388155) : Exiting Master process...
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [ALERT]    (388155) : Current worker (388157) exited with code 143 (Terminated)
Oct 02 13:11:23 compute-0 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[388150]: [WARNING]  (388155) : All workers exited. Exiting... (0)
Oct 02 13:11:23 compute-0 systemd[1]: libpod-9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996.scope: Deactivated successfully.
Oct 02 13:11:23 compute-0 podman[389808]: 2025-10-02 13:11:23.111155085 +0000 UTC m=+0.494789582 container died 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:11:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2051422908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996-userdata-shm.mount: Deactivated successfully.
Oct 02 13:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04d9545cb820050a9d2f138c761a8584334f6d3c5faea5c868ca2a2fa111cbe-merged.mount: Deactivated successfully.
Oct 02 13:11:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:23.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:24 compute-0 podman[389808]: 2025-10-02 13:11:24.158335063 +0000 UTC m=+1.541969560 container cleanup 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:11:24 compute-0 systemd[1]: libpod-conmon-9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996.scope: Deactivated successfully.
Oct 02 13:11:24 compute-0 nova_compute[256940]: 2025-10-02 13:11:24.323 2 DEBUG nova.network.neutron [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updated VIF entry in instance network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:11:24 compute-0 nova_compute[256940]: 2025-10-02 13:11:24.323 2 DEBUG nova.network.neutron [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:24 compute-0 nova_compute[256940]: 2025-10-02 13:11:24.363 2 DEBUG oslo_concurrency.lockutils [req-a7beeaef-8539-48bd-ac29-4b73cb03945e req-02a47580-7f5e-4317-93ae-9f622b61f1fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:24 compute-0 ceph-mon[73668]: pgmap v2969: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:24 compute-0 podman[389840]: 2025-10-02 13:11:24.800007352 +0000 UTC m=+0.622658116 container remove 9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.808 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[595b8a23-ae23-4ee8-bbc1-16423ec8c55c]: (4, ('Thu Oct  2 01:11:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996)\n9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996\nThu Oct  2 01:11:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996)\n9769cee5a6824dc489c19daf7b6177af333f40bba3bcd64edf50ee684b46a996\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.811 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3a15a53f-9781-4a9c-880d-2b3f05127c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.812 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:24 compute-0 nova_compute[256940]: 2025-10-02 13:11:24.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:24 compute-0 kernel: tap4223a8cc-f0: left promiscuous mode
Oct 02 13:11:24 compute-0 nova_compute[256940]: 2025-10-02 13:11:24.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.848 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[76f0be59-1a53-471c-ad47-0991f6363ab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.874 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2f26dd12-ba8d-4cd0-9a10-d33f26ac2c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.876 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[71ea48c3-94e7-4b94-8833-a262c5223aef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.897 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfe8216-8f36-4f18-bf94-ca9bec4a88b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853946, 'reachable_time': 32623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389857, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.899 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:11:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:24.900 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d87d50-d6ce-4641-b12f-9dbdfb88ec0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.165 2 DEBUG nova.compute.manager [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.165 2 DEBUG oslo_concurrency.lockutils [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.166 2 DEBUG oslo_concurrency.lockutils [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.166 2 DEBUG oslo_concurrency.lockutils [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.166 2 DEBUG nova.compute.manager [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:25 compute-0 nova_compute[256940]: 2025-10-02 13:11:25.167 2 WARNING nova.compute.manager [req-92070a9a-79a2-4dae-a0da-131bf90ac95b req-de283aa4-2390-40c7-8787-069f9199d852 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received unexpected event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with vm_state active and task_state deleting.
Oct 02 13:11:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:25.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:25.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:25 compute-0 ceph-mon[73668]: pgmap v2970: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2097975945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Oct 02 13:11:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Oct 02 13:11:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Oct 02 13:11:26 compute-0 nova_compute[256940]: 2025-10-02 13:11:26.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 632 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:26.507 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:26.508 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:26.508 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:26 compute-0 nova_compute[256940]: 2025-10-02 13:11:26.713 2 INFO nova.virt.libvirt.driver [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deleting instance files /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac_del
Oct 02 13:11:26 compute-0 nova_compute[256940]: 2025-10-02 13:11:26.714 2 INFO nova.virt.libvirt.driver [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deletion of /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac_del complete
Oct 02 13:11:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:27 compute-0 nova_compute[256940]: 2025-10-02 13:11:27.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:27 compute-0 ceph-mon[73668]: osdmap e371: 3 total, 3 up, 3 in
Oct 02 13:11:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2413210923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:27 compute-0 nova_compute[256940]: 2025-10-02 13:11:27.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:27.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:11:28 compute-0 ceph-mon[73668]: pgmap v2972: 305 pgs: 305 active+clean; 632 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 964 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.720 2 DEBUG nova.compute.manager [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.721 2 DEBUG nova.compute.manager [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.721 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.721 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.721 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.835 2 INFO nova.compute.manager [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Took 7.03 seconds to destroy the instance on the hypervisor.
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.835 2 DEBUG oslo.service.loopingcall [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.836 2 DEBUG nova.compute.manager [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:11:28 compute-0 nova_compute[256940]: 2025-10-02 13:11:28.836 2 DEBUG nova.network.neutron [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:11:28
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Oct 02 13:11:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:11:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:29.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.263 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.263 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:29 compute-0 ceph-mon[73668]: pgmap v2973: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 964 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 02 13:11:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/351148488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:11:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:11:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727687055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.718 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:29.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.892 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.892 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:11:29 compute-0 nova_compute[256940]: 2025-10-02 13:11:29.893 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.069 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.070 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3950MB free_disk=20.784896850585938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.071 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.071 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.338 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 79964143-e208-4552-8380-513c3adf09ac actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.338 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance de995ad8-07bb-4097-899b-5c79d62a1f4c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.338 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.339 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.363 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:11:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 839 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.383 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.384 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.400 2 DEBUG nova.network.neutron [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.413 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.439 2 INFO nova.compute.manager [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Took 1.60 seconds to deallocate network for instance.
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.461 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.544 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/727687055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2587128955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.847 2 INFO nova.compute.manager [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Took 0.41 seconds to detach 1 volumes for instance.
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.851 2 DEBUG nova.compute.manager [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deleting volume: 23ece991-964a-4523-b231-9590440c3d93 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 13:11:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.874 2 DEBUG nova.compute.manager [req-1f32bd72-8f37-4b2b-b81a-895c4f733d2f req-52527059-0f75-48be-8375-8483325e36e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-deleted-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083973286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:30 compute-0 nova_compute[256940]: 2025-10-02 13:11:30.996 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.003 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.035 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.094 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.095 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:31.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.230 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.230 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.338 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.365 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.366 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:31 compute-0 nova_compute[256940]: 2025-10-02 13:11:31.460 2 DEBUG oslo_concurrency.processutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:31 compute-0 ceph-mon[73668]: pgmap v2974: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 839 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct 02 13:11:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4083973286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116925894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.006 2 DEBUG oslo_concurrency.processutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.012 2 DEBUG nova.compute.provider_tree [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.045 2 DEBUG nova.scheduler.client.report [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.080 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.117 2 INFO nova.scheduler.client.report [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance 79964143-e208-4552-8380-513c3adf09ac
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.200 2 DEBUG oslo_concurrency.lockutils [None req-f8095582-0480-41f1-96b3-7e131feea103 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.359 2 DEBUG oslo_concurrency.lockutils [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.360 2 DEBUG oslo_concurrency.lockutils [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.394 2 INFO nova.compute.manager [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Detaching volume 8347daf9-f32f-4c50-b89e-df9e913044db
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.662 2 INFO nova.virt.block_device [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attempting to driver detach volume 8347daf9-f32f-4c50-b89e-df9e913044db from mountpoint /dev/vdb
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.671 2 DEBUG nova.virt.libvirt.driver [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Attempting to detach device vdb from instance de995ad8-07bb-4097-899b-5c79d62a1f4c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.672 2 DEBUG nova.virt.libvirt.guest [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   </source>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <shareable/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]: </disk>
Oct 02 13:11:32 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.679 2 INFO nova.virt.libvirt.driver [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance de995ad8-07bb-4097-899b-5c79d62a1f4c from the persistent domain config.
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.679 2 DEBUG nova.virt.libvirt.driver [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance de995ad8-07bb-4097-899b-5c79d62a1f4c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.680 2 DEBUG nova.virt.libvirt.guest [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   </source>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <shareable/>
Oct 02 13:11:32 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 13:11:32 compute-0 nova_compute[256940]: </disk>
Oct 02 13:11:32 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.777 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759410692.7772639, de995ad8-07bb-4097-899b-5c79d62a1f4c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.779 2 DEBUG nova.virt.libvirt.driver [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance de995ad8-07bb-4097-899b-5c79d62a1f4c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:11:32 compute-0 nova_compute[256940]: 2025-10-02 13:11:32.781 2 INFO nova.virt.libvirt.driver [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance de995ad8-07bb-4097-899b-5c79d62a1f4c from the live domain config.
Oct 02 13:11:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:11:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:11:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1116925894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2188314948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:33 compute-0 nova_compute[256940]: 2025-10-02 13:11:33.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:33 compute-0 nova_compute[256940]: 2025-10-02 13:11:33.169 2 DEBUG nova.objects.instance [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:33.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:33 compute-0 nova_compute[256940]: 2025-10-02 13:11:33.268 2 DEBUG oslo_concurrency.lockutils [None req-4b3cc54f-9ef9-4366-828b-703439820911 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:33.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:33 compute-0 sudo[389932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:33 compute-0 sudo[389932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:33 compute-0 sudo[389932]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:33 compute-0 sudo[389957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:33 compute-0 sudo[389957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:33 compute-0 sudo[389957]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:33 compute-0 ceph-mon[73668]: pgmap v2975: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1694941547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:34 compute-0 nova_compute[256940]: 2025-10-02 13:11:34.096 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:35 compute-0 nova_compute[256940]: 2025-10-02 13:11:35.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:35 compute-0 podman[389983]: 2025-10-02 13:11:35.403043271 +0000 UTC m=+0.068367098 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:11:35 compute-0 podman[389984]: 2025-10-02 13:11:35.434250843 +0000 UTC m=+0.097513216 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Oct 02 13:11:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:35.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:36 compute-0 ceph-mon[73668]: pgmap v2976: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 586 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Oct 02 13:11:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:37.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:37 compute-0 nova_compute[256940]: 2025-10-02 13:11:37.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:37 compute-0 nova_compute[256940]: 2025-10-02 13:11:37.446 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410682.4448664, 79964143-e208-4552-8380-513c3adf09ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:37 compute-0 nova_compute[256940]: 2025-10-02 13:11:37.447 2 INFO nova.compute.manager [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Stopped (Lifecycle Event)
Oct 02 13:11:37 compute-0 nova_compute[256940]: 2025-10-02 13:11:37.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:37 compute-0 nova_compute[256940]: 2025-10-02 13:11:37.595 2 DEBUG nova.compute.manager [None req-eb127b04-9782-4ff7-a8db-1d854f04ea79 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:37.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:38 compute-0 nova_compute[256940]: 2025-10-02 13:11:38.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:38 compute-0 ceph-mon[73668]: pgmap v2977: 305 pgs: 305 active+clean; 586 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Oct 02 13:11:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 567 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.1 MiB/s wr, 38 op/s
Oct 02 13:11:39 compute-0 ceph-mon[73668]: pgmap v2978: 305 pgs: 305 active+clean; 567 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.1 MiB/s wr, 38 op/s
Oct 02 13:11:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:39.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 27 KiB/s wr, 48 op/s
Oct 02 13:11:40 compute-0 ovn_controller[148123]: 2025-10-02T13:11:40Z|01012|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:11:40 compute-0 nova_compute[256940]: 2025-10-02 13:11:40.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009698194386283268 of space, bias 1.0, pg target 2.9094583158849803 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004320466627117067 of space, bias 1.0, pg target 1.287499054880886 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:11:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 13:11:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:41 compute-0 nova_compute[256940]: 2025-10-02 13:11:41.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:41 compute-0 nova_compute[256940]: 2025-10-02 13:11:41.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:11:41 compute-0 nova_compute[256940]: 2025-10-02 13:11:41.318 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:11:41 compute-0 ceph-mon[73668]: pgmap v2979: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 27 KiB/s wr, 48 op/s
Oct 02 13:11:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:41.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 101 op/s
Oct 02 13:11:42 compute-0 nova_compute[256940]: 2025-10-02 13:11:42.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:42 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:11:43 compute-0 nova_compute[256940]: 2025-10-02 13:11:43.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:43 compute-0 ceph-mon[73668]: pgmap v2980: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 101 op/s
Oct 02 13:11:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:43.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 97 op/s
Oct 02 13:11:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:45.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:45 compute-0 ceph-mon[73668]: pgmap v2981: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 97 op/s
Oct 02 13:11:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:46.337 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:46 compute-0 nova_compute[256940]: 2025-10-02 13:11:46.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:46.338 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:11:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 26 KiB/s wr, 98 op/s
Oct 02 13:11:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:47 compute-0 nova_compute[256940]: 2025-10-02 13:11:47.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:11:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:11:47 compute-0 ceph-mon[73668]: pgmap v2982: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 26 KiB/s wr, 98 op/s
Oct 02 13:11:48 compute-0 nova_compute[256940]: 2025-10-02 13:11:48.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 23 KiB/s wr, 91 op/s
Oct 02 13:11:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:49 compute-0 podman[390032]: 2025-10-02 13:11:49.397042601 +0000 UTC m=+0.064167099 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:11:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:49.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:50 compute-0 ceph-mon[73668]: pgmap v2983: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 23 KiB/s wr, 91 op/s
Oct 02 13:11:50 compute-0 podman[390052]: 2025-10-02 13:11:50.384333464 +0000 UTC m=+0.058767569 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:11:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 572 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 739 KiB/s wr, 102 op/s
Oct 02 13:11:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:51 compute-0 ceph-mon[73668]: pgmap v2984: 305 pgs: 305 active+clean; 572 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 739 KiB/s wr, 102 op/s
Oct 02 13:11:51 compute-0 ovn_controller[148123]: 2025-10-02T13:11:51Z|01013|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:11:51 compute-0 nova_compute[256940]: 2025-10-02 13:11:51.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.231 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.231 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.232 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:11:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:11:52.339 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Oct 02 13:11:52 compute-0 nova_compute[256940]: 2025-10-02 13:11:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:53 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct 02 13:11:53 compute-0 nova_compute[256940]: 2025-10-02 13:11:53.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:53.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:54 compute-0 sudo[390075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:54 compute-0 sudo[390075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:54 compute-0 sudo[390075]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:54 compute-0 ceph-mon[73668]: pgmap v2985: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Oct 02 13:11:54 compute-0 sudo[390100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:54 compute-0 sudo[390100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:54 compute-0 sudo[390100]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct 02 13:11:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:55.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:56 compute-0 ceph-mon[73668]: pgmap v2986: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct 02 13:11:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 638 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 3.8 MiB/s wr, 78 op/s
Oct 02 13:11:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:57 compute-0 ceph-mon[73668]: pgmap v2987: 305 pgs: 305 active+clean; 638 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 3.8 MiB/s wr, 78 op/s
Oct 02 13:11:57 compute-0 nova_compute[256940]: 2025-10-02 13:11:57.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:58 compute-0 nova_compute[256940]: 2025-10-02 13:11:58.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:58 compute-0 sudo[390127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:58 compute-0 sudo[390127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-0 sudo[390127]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-0 sudo[390152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:58 compute-0 sudo[390152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-0 sudo[390152]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-0 sudo[390177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:58 compute-0 sudo[390177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-0 sudo[390177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-0 sudo[390202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:11:58 compute-0 sudo[390202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 315 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:11:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:11:58 compute-0 sudo[390202]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.065 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.065 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.100 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:11:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.214 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.215 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.226 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.227 2 INFO nova.compute.claims [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:11:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ff1b6645-4fcb-4263-aea6-7b19815660cf does not exist
Oct 02 13:11:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 02fb6184-3056-4bda-89ff-91f434ce7614 does not exist
Oct 02 13:11:59 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e10f40d2-7a04-45e3-8f45-b39e925b2d0c does not exist
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-0 sudo[390259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:59 compute-0 sudo[390259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:59 compute-0 sudo[390259]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:59 compute-0 sudo[390284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:59 compute-0 sudo[390284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:59 compute-0 sudo[390284]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.431 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:59 compute-0 sudo[390309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:59 compute-0 sudo[390309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:59 compute-0 sudo[390309]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:59 compute-0 ceph-mon[73668]: pgmap v2988: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 315 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:11:59 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-0 sudo[390335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:11:59 compute-0 sudo[390335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:11:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:59.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.835998135 +0000 UTC m=+0.048805190 container create 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:11:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238194769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:59 compute-0 systemd[1]: Started libpod-conmon-6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21.scope.
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.881 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.896 2 DEBUG nova.compute.provider_tree [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.816472328 +0000 UTC m=+0.029279403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:11:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.939 2 DEBUG nova.scheduler.client.report [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.941574729 +0000 UTC m=+0.154381784 container init 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.950802999 +0000 UTC m=+0.163610054 container start 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.954611538 +0000 UTC m=+0.167418623 container attach 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:11:59 compute-0 strange_hugle[390437]: 167 167
Oct 02 13:11:59 compute-0 systemd[1]: libpod-6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21.scope: Deactivated successfully.
Oct 02 13:11:59 compute-0 podman[390419]: 2025-10-02 13:11:59.961039905 +0000 UTC m=+0.173846960 container died 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-96fb8a8e70809981df2cfd70afb8d5a70e40c0c4aee2459dc98a5c20bfb594b0-merged.mount: Deactivated successfully.
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.987 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:59 compute-0 nova_compute[256940]: 2025-10-02 13:11:59.988 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:12:00 compute-0 podman[390419]: 2025-10-02 13:12:00.004210367 +0000 UTC m=+0.217017422 container remove 6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:12:00 compute-0 systemd[1]: libpod-conmon-6fde7f3b81d0e58cd4e110d1e0c8f7d34270517480c179683dc01c466cdded21.scope: Deactivated successfully.
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.060 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.061 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.106 2 INFO nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.142 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:12:00 compute-0 podman[390460]: 2025-10-02 13:12:00.190857109 +0000 UTC m=+0.043000019 container create 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.221 2 INFO nova.virt.block_device [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Booting with volume 19b06699-b688-462a-890e-57ac19f2d9e6 at /dev/vda
Oct 02 13:12:00 compute-0 systemd[1]: Started libpod-conmon-2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152.scope.
Oct 02 13:12:00 compute-0 podman[390460]: 2025-10-02 13:12:00.172640475 +0000 UTC m=+0.024783405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:00 compute-0 podman[390460]: 2025-10-02 13:12:00.295290093 +0000 UTC m=+0.147433023 container init 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:12:00 compute-0 podman[390460]: 2025-10-02 13:12:00.304448781 +0000 UTC m=+0.156591691 container start 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:12:00 compute-0 podman[390460]: 2025-10-02 13:12:00.308275531 +0000 UTC m=+0.160418441 container attach 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 13:12:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/238194769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.584 2 DEBUG nova.policy [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.606 2 DEBUG os_brick.utils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.608 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.622 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.623 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2e960f-46fb-48af-b969-938d831c9900]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.625 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.634 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.634 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[81250b94-6192-4cb1-8a0d-8f292168902d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.636 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.644 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.644 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49f4e0-25e0-425b-9447-2f11a9523b84]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.645 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[7b657341-6216-4604-b771-2549b38306c1]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.646 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.676 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.679 2 DEBUG os_brick.initiator.connectors.lightos [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.679 2 DEBUG os_brick.initiator.connectors.lightos [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.680 2 DEBUG os_brick.initiator.connectors.lightos [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.680 2 DEBUG os_brick.utils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:12:00 compute-0 nova_compute[256940]: 2025-10-02 13:12:00.681 2 DEBUG nova.virt.block_device [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Updating existing volume attachment record: 0d0e9cb5-44a1-4c98-87a9-040f5b2c2dc4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:12:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:01 compute-0 funny_clarke[390476]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:12:01 compute-0 funny_clarke[390476]: --> relative data size: 1.0
Oct 02 13:12:01 compute-0 funny_clarke[390476]: --> All data devices are unavailable
Oct 02 13:12:01 compute-0 systemd[1]: libpod-2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152.scope: Deactivated successfully.
Oct 02 13:12:01 compute-0 conmon[390476]: conmon 2115ea7b73b7bad6bc70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152.scope/container/memory.events
Oct 02 13:12:01 compute-0 podman[390460]: 2025-10-02 13:12:01.200383249 +0000 UTC m=+1.052526159 container died 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:12:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f6dabebc58c604f81214f9073125e7d50f28cd0725228ec69ce6932f90e902c-merged.mount: Deactivated successfully.
Oct 02 13:12:01 compute-0 podman[390460]: 2025-10-02 13:12:01.266641661 +0000 UTC m=+1.118784571 container remove 2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:12:01 compute-0 systemd[1]: libpod-conmon-2115ea7b73b7bad6bc70c0b030df4be9f582639bab88ed3959dde081bf30b152.scope: Deactivated successfully.
Oct 02 13:12:01 compute-0 sudo[390335]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:01 compute-0 sudo[390511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:01 compute-0 sudo[390511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:01 compute-0 sudo[390511]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:01 compute-0 sudo[390536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:01 compute-0 sudo[390536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:01 compute-0 sudo[390536]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:01 compute-0 sudo[390561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:01 compute-0 sudo[390561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:01 compute-0 sudo[390561]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:01 compute-0 ceph-mon[73668]: pgmap v2989: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 13:12:01 compute-0 sudo[390586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:12:01 compute-0 sudo[390586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794547524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:01.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.854982284 +0000 UTC m=+0.043420760 container create 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:12:01 compute-0 systemd[1]: Started libpod-conmon-112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc.scope.
Oct 02 13:12:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.837644773 +0000 UTC m=+0.026083269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.93562851 +0000 UTC m=+0.124067006 container init 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.941738809 +0000 UTC m=+0.130177285 container start 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.944553792 +0000 UTC m=+0.132992268 container attach 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:12:01 compute-0 stoic_joliot[390667]: 167 167
Oct 02 13:12:01 compute-0 systemd[1]: libpod-112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc.scope: Deactivated successfully.
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.946334798 +0000 UTC m=+0.134773284 container died 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6897bfdf9fe7d8df03303f5b86107891fc8022057484622ac70ac93df53bc35-merged.mount: Deactivated successfully.
Oct 02 13:12:01 compute-0 podman[390651]: 2025-10-02 13:12:01.979411798 +0000 UTC m=+0.167850274 container remove 112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:01 compute-0 systemd[1]: libpod-conmon-112efcf926ec3063877de8d944e1bc0c510f94b4824f8939046c0e5cc94a3bbc.scope: Deactivated successfully.
Oct 02 13:12:02 compute-0 podman[390691]: 2025-10-02 13:12:02.138123214 +0000 UTC m=+0.038199484 container create a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:02 compute-0 systemd[1]: Started libpod-conmon-a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0.scope.
Oct 02 13:12:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc7e6d4a91ab45be8ad2986000325cdaa268c1ab2be582f029117db5003ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc7e6d4a91ab45be8ad2986000325cdaa268c1ab2be582f029117db5003ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:02 compute-0 podman[390691]: 2025-10-02 13:12:02.122638781 +0000 UTC m=+0.022715071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc7e6d4a91ab45be8ad2986000325cdaa268c1ab2be582f029117db5003ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc7e6d4a91ab45be8ad2986000325cdaa268c1ab2be582f029117db5003ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:02 compute-0 podman[390691]: 2025-10-02 13:12:02.230007852 +0000 UTC m=+0.130084142 container init a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:12:02 compute-0 podman[390691]: 2025-10-02 13:12:02.236302416 +0000 UTC m=+0.136378686 container start a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:12:02 compute-0 podman[390691]: 2025-10-02 13:12:02.239841408 +0000 UTC m=+0.139917678 container attach a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:12:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Oct 02 13:12:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2794547524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.704 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.707 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.708 2 INFO nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Creating image(s)
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.709 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.710 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Ensure instance console log exists: /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.710 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.711 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.712 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:02 compute-0 nova_compute[256940]: 2025-10-02 13:12:02.714 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Successfully created port: f86e0bfd-2b02-404a-8737-81d2fb076a10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:12:02 compute-0 nifty_wiles[390707]: {
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:     "1": [
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:         {
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "devices": [
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "/dev/loop3"
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             ],
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "lv_name": "ceph_lv0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "lv_size": "7511998464",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "name": "ceph_lv0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "tags": {
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.cluster_name": "ceph",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.crush_device_class": "",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.encrypted": "0",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.osd_id": "1",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.type": "block",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:                 "ceph.vdo": "0"
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             },
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "type": "block",
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:             "vg_name": "ceph_vg0"
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:         }
Oct 02 13:12:02 compute-0 nifty_wiles[390707]:     ]
Oct 02 13:12:02 compute-0 nifty_wiles[390707]: }
Oct 02 13:12:03 compute-0 systemd[1]: libpod-a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0.scope: Deactivated successfully.
Oct 02 13:12:03 compute-0 podman[390691]: 2025-10-02 13:12:03.005722374 +0000 UTC m=+0.905798654 container died a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4cc7e6d4a91ab45be8ad2986000325cdaa268c1ab2be582f029117db5003ba2-merged.mount: Deactivated successfully.
Oct 02 13:12:03 compute-0 podman[390691]: 2025-10-02 13:12:03.064470871 +0000 UTC m=+0.964547141 container remove a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:03 compute-0 systemd[1]: libpod-conmon-a69b5e4d4002fd996f1db61fed23c49a9bcef8e7ada06602727282ad9bb471f0.scope: Deactivated successfully.
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:03 compute-0 sudo[390586]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:03 compute-0 sudo[390729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:03 compute-0 sudo[390729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:03 compute-0 sudo[390729]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:03 compute-0 sudo[390754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:12:03 compute-0 sudo[390754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:03.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:03 compute-0 sudo[390754]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:03 compute-0 sudo[390779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:03 compute-0 sudo[390779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:03 compute-0 sudo[390779]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:03 compute-0 sudo[390804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:12:03 compute-0 sudo[390804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.519 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Successfully updated port: f86e0bfd-2b02-404a-8737-81d2fb076a10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.549 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.549 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.550 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.665 2 DEBUG nova.compute.manager [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-changed-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.666 2 DEBUG nova.compute.manager [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Refreshing instance network info cache due to event network-changed-f86e0bfd-2b02-404a-8737-81d2fb076a10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:12:03 compute-0 nova_compute[256940]: 2025-10-02 13:12:03.666 2 DEBUG oslo_concurrency.lockutils [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:03 compute-0 ceph-mon[73668]: pgmap v2990: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.69000474 +0000 UTC m=+0.050462092 container create 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:12:03 compute-0 systemd[1]: Started libpod-conmon-92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023.scope.
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.662001762 +0000 UTC m=+0.022459124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:03.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.795477522 +0000 UTC m=+0.155934894 container init 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.802122435 +0000 UTC m=+0.162579777 container start 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.806530379 +0000 UTC m=+0.166987721 container attach 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:12:03 compute-0 hungry_saha[390885]: 167 167
Oct 02 13:12:03 compute-0 systemd[1]: libpod-92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023.scope: Deactivated successfully.
Oct 02 13:12:03 compute-0 conmon[390885]: conmon 92f5ea7abdcb11aa8fcc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023.scope/container/memory.events
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.809593159 +0000 UTC m=+0.170050531 container died 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b47d7129b3d74330b81220a773e43567ebc00bf5f36e687ac05c803373ea30-merged.mount: Deactivated successfully.
Oct 02 13:12:03 compute-0 podman[390869]: 2025-10-02 13:12:03.862415892 +0000 UTC m=+0.222873244 container remove 92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:03 compute-0 systemd[1]: libpod-conmon-92f5ea7abdcb11aa8fcc9410353eb40e99743f7bf69c42acb2499d781796f023.scope: Deactivated successfully.
Oct 02 13:12:04 compute-0 podman[390909]: 2025-10-02 13:12:04.065776028 +0000 UTC m=+0.054110938 container create 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:04 compute-0 systemd[1]: Started libpod-conmon-99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d.scope.
Oct 02 13:12:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef0532a3eb01198da02cd51412bc9a00c1b8402e85e71ccd1855388df166d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef0532a3eb01198da02cd51412bc9a00c1b8402e85e71ccd1855388df166d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef0532a3eb01198da02cd51412bc9a00c1b8402e85e71ccd1855388df166d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef0532a3eb01198da02cd51412bc9a00c1b8402e85e71ccd1855388df166d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:04 compute-0 podman[390909]: 2025-10-02 13:12:04.042184824 +0000 UTC m=+0.030519794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:12:04 compute-0 podman[390909]: 2025-10-02 13:12:04.15937334 +0000 UTC m=+0.147708250 container init 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:12:04 compute-0 podman[390909]: 2025-10-02 13:12:04.165685425 +0000 UTC m=+0.154020315 container start 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:12:04 compute-0 podman[390909]: 2025-10-02 13:12:04.170446228 +0000 UTC m=+0.158781168 container attach 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:12:04 compute-0 nova_compute[256940]: 2025-10-02 13:12:04.320 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:12:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Oct 02 13:12:05 compute-0 clever_buck[390926]: {
Oct 02 13:12:05 compute-0 clever_buck[390926]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:12:05 compute-0 clever_buck[390926]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:12:05 compute-0 clever_buck[390926]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:12:05 compute-0 clever_buck[390926]:         "osd_id": 1,
Oct 02 13:12:05 compute-0 clever_buck[390926]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:12:05 compute-0 clever_buck[390926]:         "type": "bluestore"
Oct 02 13:12:05 compute-0 clever_buck[390926]:     }
Oct 02 13:12:05 compute-0 clever_buck[390926]: }
Oct 02 13:12:05 compute-0 systemd[1]: libpod-99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d.scope: Deactivated successfully.
Oct 02 13:12:05 compute-0 podman[390909]: 2025-10-02 13:12:05.077601658 +0000 UTC m=+1.065936548 container died 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fef0532a3eb01198da02cd51412bc9a00c1b8402e85e71ccd1855388df166d0-merged.mount: Deactivated successfully.
Oct 02 13:12:05 compute-0 podman[390909]: 2025-10-02 13:12:05.132063123 +0000 UTC m=+1.120398013 container remove 99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:12:05 compute-0 systemd[1]: libpod-conmon-99815444ac2074a7e52177e6c0dd02e9482b34bb518170750427e1798438210d.scope: Deactivated successfully.
Oct 02 13:12:05 compute-0 sudo[390804]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:12:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:12:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.355 2 DEBUG nova.network.neutron [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Updating instance_info_cache with network_info: [{"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 64e028b9-d671-4c4d-9a71-fb233af7dc78 does not exist
Oct 02 13:12:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a6536c09-50fe-401a-877a-fae465f79140 does not exist
Oct 02 13:12:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e241ffce-dc2c-46f7-8353-f55efba8ae40 does not exist
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.374 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.375 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Instance network_info: |[{"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.375 2 DEBUG oslo_concurrency.lockutils [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.375 2 DEBUG nova.network.neutron [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Refreshing network info cache for port f86e0bfd-2b02-404a-8737-81d2fb076a10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.378 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Start _get_guest_xml network_info=[{"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-19b06699-b688-462a-890e-57ac19f2d9e6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '19b06699-b688-462a-890e-57ac19f2d9e6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '59697a8e-b10c-4ae7-a721-478fb1551925', 'attached_at': '', 'detached_at': '', 'volume_id': '19b06699-b688-462a-890e-57ac19f2d9e6', 'serial': '19b06699-b688-462a-890e-57ac19f2d9e6', 'multiattach': True}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '0d0e9cb5-44a1-4c98-87a9-040f5b2c2dc4', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.383 2 WARNING nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.388 2 DEBUG nova.virt.libvirt.host [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.389 2 DEBUG nova.virt.libvirt.host [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.392 2 DEBUG nova.virt.libvirt.host [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.393 2 DEBUG nova.virt.libvirt.host [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.394 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.395 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.395 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.396 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.396 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.396 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.397 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.397 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.398 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.398 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.398 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.399 2 DEBUG nova.virt.hardware [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:12:05 compute-0 sudo[390960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:05 compute-0 sudo[390960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.435 2 DEBUG nova.storage.rbd_utils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 59697a8e-b10c-4ae7-a721-478fb1551925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:05 compute-0 sudo[390960]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.439 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:05 compute-0 podman[390998]: 2025-10-02 13:12:05.500034688 +0000 UTC m=+0.059301493 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:12:05 compute-0 sudo[391009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:12:05 compute-0 sudo[391009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:05 compute-0 sudo[391009]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:05 compute-0 podman[391048]: 2025-10-02 13:12:05.627587223 +0000 UTC m=+0.098255695 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:12:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:05.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040018228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.884 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.942 2 DEBUG nova.virt.libvirt.vif [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-5686468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-5686468',id=204,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-2it4ltgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:1
2:00Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=59697a8e-b10c-4ae7-a721-478fb1551925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.943 2 DEBUG nova.network.os_vif_util [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.944 2 DEBUG nova.network.os_vif_util [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:05 compute-0 nova_compute[256940]: 2025-10-02 13:12:05.945 2 DEBUG nova.objects.instance [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59697a8e-b10c-4ae7-a721-478fb1551925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:05 compute-0 ceph-mon[73668]: pgmap v2991: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Oct 02 13:12:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:05 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1040018228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.006 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <uuid>59697a8e-b10c-4ae7-a721-478fb1551925</uuid>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <name>instance-000000cc</name>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-5686468</nova:name>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:12:05</nova:creationTime>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <nova:port uuid="f86e0bfd-2b02-404a-8737-81d2fb076a10">
Oct 02 13:12:06 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <system>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="serial">59697a8e-b10c-4ae7-a721-478fb1551925</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="uuid">59697a8e-b10c-4ae7-a721-478fb1551925</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </system>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <os>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </os>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <features>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </features>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/59697a8e-b10c-4ae7-a721-478fb1551925_disk.config">
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </source>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-19b06699-b688-462a-890e-57ac19f2d9e6">
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </source>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:12:06 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <serial>19b06699-b688-462a-890e-57ac19f2d9e6</serial>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <shareable/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:9d:89:e8"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <target dev="tapf86e0bfd-2b"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/console.log" append="off"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <video>
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </video>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:12:06 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:12:06 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:12:06 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:12:06 compute-0 nova_compute[256940]: </domain>
Oct 02 13:12:06 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.007 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Preparing to wait for external event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.007 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.007 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.008 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.008 2 DEBUG nova.virt.libvirt.vif [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-5686468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-5686468',id=204,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-2it4ltgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:00Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=59697a8e-b10c-4ae7-a721-478fb1551925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.009 2 DEBUG nova.network.os_vif_util [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.009 2 DEBUG nova.network.os_vif_util [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.009 2 DEBUG os_vif [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86e0bfd-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf86e0bfd-2b, col_values=(('external_ids', {'iface-id': 'f86e0bfd-2b02-404a-8737-81d2fb076a10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:89:e8', 'vm-uuid': '59697a8e-b10c-4ae7-a721-478fb1551925'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:06 compute-0 NetworkManager[44981]: <info>  [1759410726.0186] manager: (tapf86e0bfd-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.024 2 INFO os_vif [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b')
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.103 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.104 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.104 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9d:89:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.105 2 INFO nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Using config drive
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.146 2 DEBUG nova.storage.rbd_utils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 59697a8e-b10c-4ae7-a721-478fb1551925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.558 2 INFO nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Creating config drive at /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.563 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfq1ma2a5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.703 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfq1ma2a5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.751 2 DEBUG nova.storage.rbd_utils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 59697a8e-b10c-4ae7-a721-478fb1551925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:06 compute-0 nova_compute[256940]: 2025-10-02 13:12:06.756 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config 59697a8e-b10c-4ae7-a721-478fb1551925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.181 2 DEBUG oslo_concurrency.processutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config 59697a8e-b10c-4ae7-a721-478fb1551925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.182 2 INFO nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Deleting local config drive /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925/disk.config because it was imported into RBD.
Oct 02 13:12:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:07.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:07 compute-0 NetworkManager[44981]: <info>  [1759410727.2381] manager: (tapf86e0bfd-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Oct 02 13:12:07 compute-0 kernel: tapf86e0bfd-2b: entered promiscuous mode
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:07 compute-0 ovn_controller[148123]: 2025-10-02T13:12:07Z|01014|binding|INFO|Claiming lport f86e0bfd-2b02-404a-8737-81d2fb076a10 for this chassis.
Oct 02 13:12:07 compute-0 ovn_controller[148123]: 2025-10-02T13:12:07Z|01015|binding|INFO|f86e0bfd-2b02-404a-8737-81d2fb076a10: Claiming fa:16:3e:9d:89:e8 10.100.0.10
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.252 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:89:e8 10.100.0.10'], port_security=['fa:16:3e:9d:89:e8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59697a8e-b10c-4ae7-a721-478fb1551925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e0b78e6-81a7-466c-a6a5-7c1350a20a08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f86e0bfd-2b02-404a-8737-81d2fb076a10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.254 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f86e0bfd-2b02-404a-8737-81d2fb076a10 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.260 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:12:07 compute-0 ovn_controller[148123]: 2025-10-02T13:12:07Z|01016|binding|INFO|Setting lport f86e0bfd-2b02-404a-8737-81d2fb076a10 ovn-installed in OVS
Oct 02 13:12:07 compute-0 ovn_controller[148123]: 2025-10-02T13:12:07Z|01017|binding|INFO|Setting lport f86e0bfd-2b02-404a-8737-81d2fb076a10 up in Southbound
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:07 compute-0 systemd-udevd[391169]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:12:07 compute-0 systemd-machined[210927]: New machine qemu-104-instance-000000cc.
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.279 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ef942d58-a5b9-4d59-8eef-836f80480251]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 NetworkManager[44981]: <info>  [1759410727.2884] device (tapf86e0bfd-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:12:07 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-000000cc.
Oct 02 13:12:07 compute-0 NetworkManager[44981]: <info>  [1759410727.2907] device (tapf86e0bfd-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.320 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[12b57052-e7e8-4f9b-a757-a89ea1cbbcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.323 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[30f28aac-e050-4c11-bda9-286c484470d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.344 2 DEBUG nova.network.neutron [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Updated VIF entry in instance network info cache for port f86e0bfd-2b02-404a-8737-81d2fb076a10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.345 2 DEBUG nova.network.neutron [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Updating instance_info_cache with network_info: [{"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.351 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[461a2fc7-03f5-418f-88ce-ce17b3bf07a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.368 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[84a5e554-4a37-472b-8aa4-8e1949fbad56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856277, 'reachable_time': 15140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391183, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.372 2 DEBUG oslo_concurrency.lockutils [req-cd4ed0e8-30d7-428a-bf5a-541e8bf2d8e1 req-c44bcfe7-8c0b-4c10-9c14-a3ea02a591e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-59697a8e-b10c-4ae7-a721-478fb1551925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.383 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[314443db-067a-42ce-aaf4-864a85eb3b50]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856289, 'tstamp': 856289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391184, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856293, 'tstamp': 856293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391184, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.385 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.388 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.388 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.388 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:07 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:07.389 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.698 2 DEBUG nova.compute.manager [req-1e2833e4-a3be-412c-b117-8b9147e8e923 req-252a9ff1-c360-478e-b61c-b00ee020d7b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.698 2 DEBUG oslo_concurrency.lockutils [req-1e2833e4-a3be-412c-b117-8b9147e8e923 req-252a9ff1-c360-478e-b61c-b00ee020d7b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.699 2 DEBUG oslo_concurrency.lockutils [req-1e2833e4-a3be-412c-b117-8b9147e8e923 req-252a9ff1-c360-478e-b61c-b00ee020d7b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.699 2 DEBUG oslo_concurrency.lockutils [req-1e2833e4-a3be-412c-b117-8b9147e8e923 req-252a9ff1-c360-478e-b61c-b00ee020d7b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:07 compute-0 nova_compute[256940]: 2025-10-02 13:12:07.700 2 DEBUG nova.compute.manager [req-1e2833e4-a3be-412c-b117-8b9147e8e923 req-252a9ff1-c360-478e-b61c-b00ee020d7b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Processing event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:12:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:07.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:08 compute-0 ceph-mon[73668]: pgmap v2992: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.103 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410728.1028168, 59697a8e-b10c-4ae7-a721-478fb1551925 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.103 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] VM Started (Lifecycle Event)
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.106 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.109 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.112 2 INFO nova.virt.libvirt.driver [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Instance spawned successfully.
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.112 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.135 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.139 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.167 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.167 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.168 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.168 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.169 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.169 2 DEBUG nova.virt.libvirt.driver [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.184 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.185 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410728.1030486, 59697a8e-b10c-4ae7-a721-478fb1551925 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.185 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] VM Paused (Lifecycle Event)
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.227 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.230 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410728.1081653, 59697a8e-b10c-4ae7-a721-478fb1551925 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.231 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] VM Resumed (Lifecycle Event)
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.277 2 INFO nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Took 5.57 seconds to spawn the instance on the hypervisor.
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.277 2 DEBUG nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.286 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.289 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.331 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.381 2 INFO nova.compute.manager [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Took 9.20 seconds to build instance.
Oct 02 13:12:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 122 KiB/s wr, 24 op/s
Oct 02 13:12:08 compute-0 nova_compute[256940]: 2025-10-02 13:12:08.422 2 DEBUG oslo_concurrency.lockutils [None req-f4bd347d-fb93-434e-a57c-87c8ddc89fde 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:10 compute-0 ceph-mon[73668]: pgmap v2993: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 122 KiB/s wr, 24 op/s
Oct 02 13:12:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.652 2 DEBUG nova.compute.manager [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.653 2 DEBUG oslo_concurrency.lockutils [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.653 2 DEBUG oslo_concurrency.lockutils [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.653 2 DEBUG oslo_concurrency.lockutils [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.653 2 DEBUG nova.compute.manager [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] No waiting events found dispatching network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:10 compute-0 nova_compute[256940]: 2025-10-02 13:12:10.653 2 WARNING nova.compute.manager [req-8c4be5c7-5120-4c58-9d98-2fadb4634889 req-7b146680-a4c3-41e9-b150-42388a9ed973 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received unexpected event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 for instance with vm_state active and task_state None.
Oct 02 13:12:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:11 compute-0 nova_compute[256940]: 2025-10-02 13:12:11.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:11 compute-0 nova_compute[256940]: 2025-10-02 13:12:11.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Oct 02 13:12:12 compute-0 ceph-mon[73668]: pgmap v2994: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:12:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Oct 02 13:12:12 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Oct 02 13:12:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 89 op/s
Oct 02 13:12:13 compute-0 nova_compute[256940]: 2025-10-02 13:12:13.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Oct 02 13:12:13 compute-0 ceph-mon[73668]: osdmap e372: 3 total, 3 up, 3 in
Oct 02 13:12:13 compute-0 ceph-mon[73668]: pgmap v2996: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 89 op/s
Oct 02 13:12:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3655608364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Oct 02 13:12:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Oct 02 13:12:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:13.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:14 compute-0 sudo[391231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:14 compute-0 sudo[391231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:14 compute-0 sudo[391231]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:14 compute-0 sudo[391256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:14 compute-0 sudo[391256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:14 compute-0 sudo[391256]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 111 op/s
Oct 02 13:12:14 compute-0 ceph-mon[73668]: osdmap e373: 3 total, 3 up, 3 in
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.159 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.160 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.161 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.162 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.162 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.164 2 INFO nova.compute.manager [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Terminating instance
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.166 2 DEBUG nova.compute.manager [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:12:15 compute-0 kernel: tapf86e0bfd-2b (unregistering): left promiscuous mode
Oct 02 13:12:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:15 compute-0 NetworkManager[44981]: <info>  [1759410735.2328] device (tapf86e0bfd-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:12:15 compute-0 ovn_controller[148123]: 2025-10-02T13:12:15Z|01018|binding|INFO|Releasing lport f86e0bfd-2b02-404a-8737-81d2fb076a10 from this chassis (sb_readonly=0)
Oct 02 13:12:15 compute-0 ovn_controller[148123]: 2025-10-02T13:12:15Z|01019|binding|INFO|Setting lport f86e0bfd-2b02-404a-8737-81d2fb076a10 down in Southbound
Oct 02 13:12:15 compute-0 ovn_controller[148123]: 2025-10-02T13:12:15Z|01020|binding|INFO|Removing iface tapf86e0bfd-2b ovn-installed in OVS
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.253 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:89:e8 10.100.0.10'], port_security=['fa:16:3e:9d:89:e8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59697a8e-b10c-4ae7-a721-478fb1551925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e0b78e6-81a7-466c-a6a5-7c1350a20a08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f86e0bfd-2b02-404a-8737-81d2fb076a10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.257 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f86e0bfd-2b02-404a-8737-81d2fb076a10 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.259 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.284 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[144120d0-71fe-4ed3-8607-db4629dd1f8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Oct 02 13:12:15 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000cc.scope: Consumed 7.865s CPU time.
Oct 02 13:12:15 compute-0 systemd-machined[210927]: Machine qemu-104-instance-000000cc terminated.
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.315 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[455bc9b9-38f5-4fe5-8c12-4d60f8e4ef58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.318 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5627bc-c8bb-42de-9dc1-2747f8fd465f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.342 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[2a652a4f-6591-453d-9bae-380081796010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.366 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[063a7e2d-2c1e-4f27-addb-6af22e083f5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856277, 'reachable_time': 15140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391293, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.385 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[656462ba-2490-417a-a8b7-5f6b76170814]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856289, 'tstamp': 856289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391294, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856293, 'tstamp': 856293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391294, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.387 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.394 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.395 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.396 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:15.396 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.407 2 INFO nova.virt.libvirt.driver [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Instance destroyed successfully.
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.408 2 DEBUG nova.objects.instance [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 59697a8e-b10c-4ae7-a721-478fb1551925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.429 2 DEBUG nova.virt.libvirt.vif [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-5686468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-5686468',id=204,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:12:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-2it4ltgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:12:08Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=59697a8e-b10c-4ae7-a721-478fb1551925,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.430 2 DEBUG nova.network.os_vif_util [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "address": "fa:16:3e:9d:89:e8", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf86e0bfd-2b", "ovs_interfaceid": "f86e0bfd-2b02-404a-8737-81d2fb076a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.430 2 DEBUG nova.network.os_vif_util [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.431 2 DEBUG os_vif [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86e0bfd-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.438 2 INFO os_vif [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:89:e8,bridge_name='br-int',has_traffic_filtering=True,id=f86e0bfd-2b02-404a-8737-81d2fb076a10,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf86e0bfd-2b')
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.675 2 DEBUG nova.compute.manager [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-unplugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.676 2 DEBUG oslo_concurrency.lockutils [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.676 2 DEBUG oslo_concurrency.lockutils [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.677 2 DEBUG oslo_concurrency.lockutils [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.677 2 DEBUG nova.compute.manager [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] No waiting events found dispatching network-vif-unplugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:15 compute-0 nova_compute[256940]: 2025-10-02 13:12:15.677 2 DEBUG nova.compute.manager [req-d962023b-3fab-4c90-9fad-9774de28ded1 req-e62daff1-1974-490a-b8ab-aef44b0d6e52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-unplugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:12:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:15.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:15 compute-0 ceph-mon[73668]: pgmap v2998: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 111 op/s
Oct 02 13:12:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 13:12:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:17.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.387 2 INFO nova.virt.libvirt.driver [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Deleting instance files /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925_del
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.389 2 INFO nova.virt.libvirt.driver [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Deletion of /var/lib/nova/instances/59697a8e-b10c-4ae7-a721-478fb1551925_del complete
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.679 2 INFO nova.compute.manager [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Took 2.51 seconds to destroy the instance on the hypervisor.
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.680 2 DEBUG oslo.service.loopingcall [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.680 2 DEBUG nova.compute.manager [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.680 2 DEBUG nova.network.neutron [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:12:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:17.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.834 2 DEBUG nova.compute.manager [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.834 2 DEBUG oslo_concurrency.lockutils [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.835 2 DEBUG oslo_concurrency.lockutils [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.835 2 DEBUG oslo_concurrency.lockutils [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.835 2 DEBUG nova.compute.manager [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] No waiting events found dispatching network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:17 compute-0 nova_compute[256940]: 2025-10-02 13:12:17.835 2 WARNING nova.compute.manager [req-19a674e9-5e0c-4eec-ad47-10b5265c0ce7 req-57a31490-50e8-426f-9e7f-a296e68299af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received unexpected event network-vif-plugged-f86e0bfd-2b02-404a-8737-81d2fb076a10 for instance with vm_state active and task_state deleting.
Oct 02 13:12:17 compute-0 ceph-mon[73668]: pgmap v2999: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 13:12:18 compute-0 nova_compute[256940]: 2025-10-02 13:12:18.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 149 op/s
Oct 02 13:12:18 compute-0 nova_compute[256940]: 2025-10-02 13:12:18.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-0 nova_compute[256940]: 2025-10-02 13:12:18.723 2 DEBUG nova.network.neutron [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:18 compute-0 nova_compute[256940]: 2025-10-02 13:12:18.749 2 INFO nova.compute.manager [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Took 1.07 seconds to deallocate network for instance.
Oct 02 13:12:18 compute-0 nova_compute[256940]: 2025-10-02 13:12:18.988 2 INFO nova.compute.manager [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Took 0.24 seconds to detach 1 volumes for instance.
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.041 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.042 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.112 2 DEBUG oslo_concurrency.processutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3057999980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/8473852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979295640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.593 2 DEBUG oslo_concurrency.processutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.599 2 DEBUG nova.compute.provider_tree [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.640 2 DEBUG nova.scheduler.client.report [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.667 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.701 2 INFO nova.scheduler.client.report [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance 59697a8e-b10c-4ae7-a721-478fb1551925
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.779 2 DEBUG oslo_concurrency.lockutils [None req-750f8e57-4578-42e7-a44e-8f186cc11dbe 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "59697a8e-b10c-4ae7-a721-478fb1551925" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:19.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:19 compute-0 nova_compute[256940]: 2025-10-02 13:12:19.924 2 DEBUG nova.compute.manager [req-fbd61fa9-a413-4e0e-800f-2e8b50a0f042 req-75f814cf-7add-45bf-b040-57ed4aba9e63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Received event network-vif-deleted-f86e0bfd-2b02-404a-8737-81d2fb076a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:20 compute-0 podman[391350]: 2025-10-02 13:12:20.381856994 +0000 UTC m=+0.056617143 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 13:12:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 02 13:12:20 compute-0 nova_compute[256940]: 2025-10-02 13:12:20.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:20 compute-0 ceph-mon[73668]: pgmap v3000: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 149 op/s
Oct 02 13:12:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/979295640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:21.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:21 compute-0 podman[391372]: 2025-10-02 13:12:21.388317914 +0000 UTC m=+0.058667266 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 13:12:21 compute-0 nova_compute[256940]: 2025-10-02 13:12:21.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Oct 02 13:12:21 compute-0 ceph-mon[73668]: pgmap v3001: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 02 13:12:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Oct 02 13:12:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Oct 02 13:12:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:21.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 02 13:12:22 compute-0 ceph-mon[73668]: osdmap e374: 3 total, 3 up, 3 in
Oct 02 13:12:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:12:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:12:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:23 compute-0 nova_compute[256940]: 2025-10-02 13:12:23.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:23.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Oct 02 13:12:23 compute-0 ceph-mon[73668]: pgmap v3003: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 02 13:12:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Oct 02 13:12:23 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Oct 02 13:12:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 461 KiB/s wr, 53 op/s
Oct 02 13:12:24 compute-0 ceph-mon[73668]: osdmap e375: 3 total, 3 up, 3 in
Oct 02 13:12:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:25 compute-0 nova_compute[256940]: 2025-10-02 13:12:25.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:25.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:26 compute-0 ceph-mon[73668]: pgmap v3005: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 461 KiB/s wr, 53 op/s
Oct 02 13:12:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2888907482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 652 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:26.508 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:26.508 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:26.509 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:27 compute-0 nova_compute[256940]: 2025-10-02 13:12:27.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:27 compute-0 nova_compute[256940]: 2025-10-02 13:12:27.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:27 compute-0 ceph-mon[73668]: pgmap v3006: 305 pgs: 305 active+clean; 652 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct 02 13:12:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1570152893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:28 compute-0 nova_compute[256940]: 2025-10-02 13:12:28.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 76 op/s
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:12:28
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data']
Oct 02 13:12:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:12:29 compute-0 nova_compute[256940]: 2025-10-02 13:12:29.228 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:12:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:12:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:29.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:30 compute-0 ceph-mon[73668]: pgmap v3007: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 76 op/s
Oct 02 13:12:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/328279734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:12:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 117 op/s
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.405 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410735.4048069, 59697a8e-b10c-4ae7-a721-478fb1551925 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.406 2 INFO nova.compute.manager [-] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] VM Stopped (Lifecycle Event)
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:30 compute-0 nova_compute[256940]: 2025-10-02 13:12:30.443 2 DEBUG nova.compute.manager [None req-dcfe7acf-d471-47b1-a7e5-e83e454f3313 - - - - - -] [instance: 59697a8e-b10c-4ae7-a721-478fb1551925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:31.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.257 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.258 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.258 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512212279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.707 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Oct 02 13:12:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:31 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.982 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:31 compute-0 nova_compute[256940]: 2025-10-02 13:12:31.982 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.142 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.143 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=20.739154815673828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.144 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.144 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3156646001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.9 KiB/s wr, 170 op/s
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.452 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance de995ad8-07bb-4097-899b-5c79d62a1f4c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.452 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.453 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.499 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121352586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.946 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.952 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:32 compute-0 nova_compute[256940]: 2025-10-02 13:12:32.979 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:33 compute-0 nova_compute[256940]: 2025-10-02 13:12:33.063 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:12:33 compute-0 nova_compute[256940]: 2025-10-02 13:12:33.064 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:33 compute-0 nova_compute[256940]: 2025-10-02 13:12:33.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.063 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.063 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.254 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:12:34 compute-0 ceph-mon[73668]: pgmap v3008: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 117 op/s
Oct 02 13:12:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2512212279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:34 compute-0 ceph-mon[73668]: osdmap e376: 3 total, 3 up, 3 in
Oct 02 13:12:34 compute-0 ceph-mon[73668]: pgmap v3010: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.9 KiB/s wr, 170 op/s
Oct 02 13:12:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2121352586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:34 compute-0 sudo[391444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:34 compute-0 sudo[391444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:34 compute-0 sudo[391444]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:34 compute-0 sudo[391469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.9 KiB/s wr, 145 op/s
Oct 02 13:12:34 compute-0 sudo[391469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:34 compute-0 sudo[391469]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.458 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.458 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.464 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.465 2 INFO nova.compute.claims [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:12:34 compute-0 nova_compute[256940]: 2025-10-02 13:12:34.620 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156979982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.048 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.055 2 DEBUG nova.compute.provider_tree [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.075 2 DEBUG nova.scheduler.client.report [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.111 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.112 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.183 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.184 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.202 2 INFO nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.217 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:12:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:35.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.406 2 DEBUG nova.policy [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81db307ac1f846188ce19b644ebcc396', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.469 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.470 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.470 2 INFO nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Creating image(s)
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.494 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.521 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.549 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.553 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.637 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.638 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.639 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.639 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.665 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:35 compute-0 nova_compute[256940]: 2025-10-02 13:12:35.669 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 97dd449c-87a7-4278-a819-3d412f587a4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:36 compute-0 nova_compute[256940]: 2025-10-02 13:12:36.064 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:36 compute-0 podman[391608]: 2025-10-02 13:12:36.382945473 +0000 UTC m=+0.053973464 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:12:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 114 op/s
Oct 02 13:12:36 compute-0 podman[391609]: 2025-10-02 13:12:36.42593476 +0000 UTC m=+0.094267201 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:36 compute-0 nova_compute[256940]: 2025-10-02 13:12:36.775 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Successfully created port: bff5fdd5-ec7d-45cf-94ea-2264da79c91d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:12:37 compute-0 nova_compute[256940]: 2025-10-02 13:12:37.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:37 compute-0 ceph-mon[73668]: pgmap v3011: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.9 KiB/s wr, 145 op/s
Oct 02 13:12:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2156979982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:37.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 89 op/s
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.551 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Successfully updated port: bff5fdd5-ec7d-45cf-94ea-2264da79c91d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.570 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.571 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.571 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.671 2 DEBUG nova.compute.manager [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.671 2 DEBUG nova.compute.manager [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing instance network info cache due to event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.671 2 DEBUG oslo_concurrency.lockutils [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:38 compute-0 nova_compute[256940]: 2025-10-02 13:12:38.701 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:39 compute-0 ceph-mon[73668]: pgmap v3012: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 114 op/s
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.429 2 DEBUG nova.network.neutron [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.451 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.452 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Instance network_info: |[{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.452 2 DEBUG oslo_concurrency.lockutils [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:39 compute-0 nova_compute[256940]: 2025-10-02 13:12:39.452 2 DEBUG nova.network.neutron [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:12:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:12:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:12:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.090 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.091 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.091 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.091 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.091 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.093 2 INFO nova.compute.manager [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Terminating instance
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.094 2 DEBUG nova.compute.manager [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 KiB/s wr, 50 op/s
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011871241333984739 of space, bias 1.0, pg target 3.5613724001954217 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004320648380792853 of space, bias 1.0, pg target 1.2832325690954773 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:12:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.805 2 DEBUG nova.network.neutron [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updated VIF entry in instance network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.806 2 DEBUG nova.network.neutron [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:40 compute-0 nova_compute[256940]: 2025-10-02 13:12:40.826 2 DEBUG oslo_concurrency.lockutils [req-b86f930a-d30d-470b-9df8-cb8e11cb074f req-8ed9ecca-23c3-4f1a-b4a1-6bc0e34169ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 3.4 KiB/s wr, 11 op/s
Oct 02 13:12:42 compute-0 kernel: tap513c3d66-61 (unregistering): left promiscuous mode
Oct 02 13:12:42 compute-0 NetworkManager[44981]: <info>  [1759410762.7762] device (tap513c3d66-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:12:42 compute-0 ovn_controller[148123]: 2025-10-02T13:12:42Z|01021|binding|INFO|Releasing lport 513c3d66-613d-4626-8ab0-58520113de32 from this chassis (sb_readonly=0)
Oct 02 13:12:42 compute-0 ovn_controller[148123]: 2025-10-02T13:12:42Z|01022|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 down in Southbound
Oct 02 13:12:42 compute-0 ovn_controller[148123]: 2025-10-02T13:12:42Z|01023|binding|INFO|Removing iface tap513c3d66-61 ovn-installed in OVS
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:42.801 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:42.802 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis
Oct 02 13:12:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:42.804 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9001b9c-bca6-4085-a954-1414269e31bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:12:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:42.805 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ded3955a-0dfa-46ca-9db4-64f336cd6bde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:42.806 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace which is not needed anymore
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Oct 02 13:12:42 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c8.scope: Consumed 16.683s CPU time.
Oct 02 13:12:42 compute-0 systemd-machined[210927]: Machine qemu-103-instance-000000c8 terminated.
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.939 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance destroyed successfully.
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.940 2 DEBUG nova.objects.instance [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:42 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [NOTICE]   (389649) : haproxy version is 2.8.14-c23fe91
Oct 02 13:12:42 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [NOTICE]   (389649) : path to executable is /usr/sbin/haproxy
Oct 02 13:12:42 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [WARNING]  (389649) : Exiting Master process...
Oct 02 13:12:42 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [ALERT]    (389649) : Current worker (389651) exited with code 143 (Terminated)
Oct 02 13:12:42 compute-0 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[389645]: [WARNING]  (389649) : All workers exited. Exiting... (0)
Oct 02 13:12:42 compute-0 systemd[1]: libpod-c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e.scope: Deactivated successfully.
Oct 02 13:12:42 compute-0 podman[391681]: 2025-10-02 13:12:42.955297974 +0000 UTC m=+0.049951569 container died c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.954 2 DEBUG nova.virt.libvirt.vif [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.955 2 DEBUG nova.network.os_vif_util [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.956 2 DEBUG nova.network.os_vif_util [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.957 2 DEBUG os_vif [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513c3d66-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:12:42 compute-0 nova_compute[256940]: 2025-10-02 13:12:42.966 2 INFO os_vif [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')
Oct 02 13:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e-userdata-shm.mount: Deactivated successfully.
Oct 02 13:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-059cbb43b1007ea210d8a040b16e1716dc0c3d474a27aead0df1e76bc8c145c5-merged.mount: Deactivated successfully.
Oct 02 13:12:43 compute-0 podman[391681]: 2025-10-02 13:12:43.005409407 +0000 UTC m=+0.100062992 container cleanup c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:43 compute-0 systemd[1]: libpod-conmon-c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e.scope: Deactivated successfully.
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.030 2 DEBUG nova.compute.manager [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.030 2 DEBUG oslo_concurrency.lockutils [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.030 2 DEBUG oslo_concurrency.lockutils [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.031 2 DEBUG oslo_concurrency.lockutils [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.031 2 DEBUG nova.compute.manager [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.031 2 DEBUG nova.compute.manager [req-f95be5da-1158-4943-81d9-a67803330cb4 req-5f637d94-2cbf-424e-a5bc-515e14568145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:12:43 compute-0 podman[391740]: 2025-10-02 13:12:43.075981031 +0000 UTC m=+0.050054352 container remove c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.081 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d46e6d97-2911-472b-82e7-0fd2acd199a7]: (4, ('Thu Oct  2 01:12:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e)\nc502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e\nThu Oct  2 01:12:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e)\nc502f25622297345edb9cf8ffc9f72205dd2790aea6ae1d48191a6842ee6534e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.083 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f888b8b2-7d3b-4548-aade-3139353b527f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.084 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:43 compute-0 kernel: tapd9001b9c-b0: left promiscuous mode
Oct 02 13:12:43 compute-0 ceph-mon[73668]: pgmap v3013: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 89 op/s
Oct 02 13:12:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.113 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c64c5567-fd5c-4556-8e54-4ffcf0af0624]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.135 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7fda8690-3cb2-4bfd-a9ce-4b948dba71f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.136 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a69ebdb-27e3-415f-a051-babc8a0b6104]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.154 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[64c753a2-3fa5-4077-a65b-12ac952d19b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856270, 'reachable_time': 20941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391757, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.158 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:12:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:43.158 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb3771c-c0bb-433b-a0b8-d8de532f11d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dd9001b9c\x2dbca6\x2d4085\x2da954\x2d1414269e31bc.mount: Deactivated successfully.
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.235 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.236 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:12:43 compute-0 nova_compute[256940]: 2025-10-02 13:12:43.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:12:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:44 compute-0 ceph-mon[73668]: pgmap v3014: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 KiB/s wr, 50 op/s
Oct 02 13:12:44 compute-0 ceph-mon[73668]: pgmap v3015: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 3.4 KiB/s wr, 11 op/s
Oct 02 13:12:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 650 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 13:12:44 compute-0 nova_compute[256940]: 2025-10-02 13:12:44.517 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 97dd449c-87a7-4278-a819-3d412f587a4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:44 compute-0 nova_compute[256940]: 2025-10-02 13:12:44.619 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] resizing rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.164 2 DEBUG nova.compute.manager [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.165 2 DEBUG oslo_concurrency.lockutils [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.165 2 DEBUG oslo_concurrency.lockutils [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.166 2 DEBUG oslo_concurrency.lockutils [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.166 2 DEBUG nova.compute.manager [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.166 2 WARNING nova.compute.manager [req-31356cd5-fc23-42eb-9f3d-1e7af0d381b5 req-f76ff880-105f-4631-86c2-3fad2635a3a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state deleting.
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:45 compute-0 ceph-mon[73668]: pgmap v3016: 305 pgs: 305 active+clean; 650 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.695 2 DEBUG nova.objects.instance [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'migration_context' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.718 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.718 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Ensure instance console log exists: /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.719 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.719 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.719 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.721 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Start _get_guest_xml network_info=[{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.726 2 WARNING nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.740 2 DEBUG nova.virt.libvirt.host [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.741 2 DEBUG nova.virt.libvirt.host [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.746 2 DEBUG nova.virt.libvirt.host [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.746 2 DEBUG nova.virt.libvirt.host [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.747 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.748 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.748 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.748 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.749 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.749 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.749 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.749 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.750 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.750 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.750 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.750 2 DEBUG nova.virt.hardware [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:12:45 compute-0 nova_compute[256940]: 2025-10-02 13:12:45.753 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:45.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/28757893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.206 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.236 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.241 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 2.4 MiB/s wr, 46 op/s
Oct 02 13:12:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/28757893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:12:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907190954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.710 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.713 2 DEBUG nova.virt.libvirt.vif [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1400642785',display_name='tempest-TestStampPattern-server-1400642785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1400642785',id=206,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-ze3olpw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:35Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=97dd449c-87a7-4278-a819-3d412f587a4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.713 2 DEBUG nova.network.os_vif_util [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.714 2 DEBUG nova.network.os_vif_util [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.715 2 DEBUG nova.objects.instance [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.742 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <uuid>97dd449c-87a7-4278-a819-3d412f587a4c</uuid>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <name>instance-000000ce</name>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:name>tempest-TestStampPattern-server-1400642785</nova:name>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:12:45</nova:creationTime>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:user uuid="81db307ac1f846188ce19b644ebcc396">tempest-TestStampPattern-1060565162-project-member</nova:user>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:project uuid="cbaefa5c700c4ed495a5244732eed7e3">tempest-TestStampPattern-1060565162</nova:project>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <nova:port uuid="bff5fdd5-ec7d-45cf-94ea-2264da79c91d">
Oct 02 13:12:46 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <system>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="serial">97dd449c-87a7-4278-a819-3d412f587a4c</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="uuid">97dd449c-87a7-4278-a819-3d412f587a4c</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </system>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <os>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </os>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <features>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </features>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/97dd449c-87a7-4278-a819-3d412f587a4c_disk">
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </source>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/97dd449c-87a7-4278-a819-3d412f587a4c_disk.config">
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </source>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:12:46 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:99:b8:83"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <target dev="tapbff5fdd5-ec"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/console.log" append="off"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <video>
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </video>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:12:46 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:12:46 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:12:46 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:12:46 compute-0 nova_compute[256940]: </domain>
Oct 02 13:12:46 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.743 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Preparing to wait for external event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.744 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.744 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.744 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.745 2 DEBUG nova.virt.libvirt.vif [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1400642785',display_name='tempest-TestStampPattern-server-1400642785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1400642785',id=206,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-ze3olpw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:35Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=97dd449c-87a7-4278-a819-3d412f587a4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.745 2 DEBUG nova.network.os_vif_util [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.746 2 DEBUG nova.network.os_vif_util [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.746 2 DEBUG os_vif [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbff5fdd5-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbff5fdd5-ec, col_values=(('external_ids', {'iface-id': 'bff5fdd5-ec7d-45cf-94ea-2264da79c91d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:b8:83', 'vm-uuid': '97dd449c-87a7-4278-a819-3d412f587a4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:46 compute-0 NetworkManager[44981]: <info>  [1759410766.7538] manager: (tapbff5fdd5-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.760 2 INFO os_vif [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec')
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.853 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.853 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.854 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No VIF found with MAC fa:16:3e:99:b8:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.854 2 INFO nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Using config drive
Oct 02 13:12:46 compute-0 nova_compute[256940]: 2025-10-02 13:12:46.880 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.478 2 INFO nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Creating config drive at /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.483 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptz0g_1ex execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.523 2 INFO nova.virt.libvirt.driver [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Deleting instance files /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c_del
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.525 2 INFO nova.virt.libvirt.driver [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Deletion of /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c_del complete
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.620 2 INFO nova.compute.manager [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 7.53 seconds to destroy the instance on the hypervisor.
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.621 2 DEBUG oslo.service.loopingcall [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.621 2 DEBUG nova.compute.manager [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.621 2 DEBUG nova.network.neutron [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.624 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptz0g_1ex" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.654 2 DEBUG nova.storage.rbd_utils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image 97dd449c-87a7-4278-a819-3d412f587a4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:12:47 compute-0 nova_compute[256940]: 2025-10-02 13:12:47.657 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config 97dd449c-87a7-4278-a819-3d412f587a4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:47 compute-0 ceph-mon[73668]: pgmap v3017: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 2.4 MiB/s wr, 46 op/s
Oct 02 13:12:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3907190954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:47.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.364 2 DEBUG oslo_concurrency.processutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config 97dd449c-87a7-4278-a819-3d412f587a4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.365 2 INFO nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Deleting local config drive /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c/disk.config because it was imported into RBD.
Oct 02 13:12:48 compute-0 kernel: tapbff5fdd5-ec: entered promiscuous mode
Oct 02 13:12:48 compute-0 ovn_controller[148123]: 2025-10-02T13:12:48Z|01024|binding|INFO|Claiming lport bff5fdd5-ec7d-45cf-94ea-2264da79c91d for this chassis.
Oct 02 13:12:48 compute-0 ovn_controller[148123]: 2025-10-02T13:12:48Z|01025|binding|INFO|bff5fdd5-ec7d-45cf-94ea-2264da79c91d: Claiming fa:16:3e:99:b8:83 10.100.0.9
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.4188] manager: (tapbff5fdd5-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 694 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 179 KiB/s rd, 3.8 MiB/s wr, 85 op/s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.424 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b8:83 10.100.0.9'], port_security=['fa:16:3e:99:b8:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dd449c-87a7-4278-a819-3d412f587a4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-059f5861-22ab-45f3-a914-fb801f3c71f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '858bdb72-cf27-4a78-a9f7-c4548894dc59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a904a34-19d9-4790-850b-39af4c509e92, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=bff5fdd5-ec7d-45cf-94ea-2264da79c91d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.425 158104 INFO neutron.agent.ovn.metadata.agent [-] Port bff5fdd5-ec7d-45cf-94ea-2264da79c91d in datapath 059f5861-22ab-45f3-a914-fb801f3c71f9 bound to our chassis
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.427 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 059f5861-22ab-45f3-a914-fb801f3c71f9
Oct 02 13:12:48 compute-0 ovn_controller[148123]: 2025-10-02T13:12:48Z|01026|binding|INFO|Setting lport bff5fdd5-ec7d-45cf-94ea-2264da79c91d ovn-installed in OVS
Oct 02 13:12:48 compute-0 ovn_controller[148123]: 2025-10-02T13:12:48Z|01027|binding|INFO|Setting lport bff5fdd5-ec7d-45cf-94ea-2264da79c91d up in Southbound
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.441 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[04036b3b-2f16-4f46-9595-109dac9a9abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.442 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap059f5861-21 in ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.444 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap059f5861-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.444 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3f18a776-0597-4192-9288-f7cfee1f10c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.446 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[33c61056-58bb-4038-a04d-588237870611]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 systemd-udevd[391971]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:12:48 compute-0 systemd-machined[210927]: New machine qemu-105-instance-000000ce.
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.458 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[963ec2c3-2fa9-47ba-beb2-8937a263409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-000000ce.
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.4615] device (tapbff5fdd5-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.4625] device (tapbff5fdd5-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.474 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[087a822b-3471-440e-9f59-360debc6491e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.504 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d8b9f4-dbb6-4c6f-98fa-7b5726d42331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.5157] manager: (tap059f5861-20): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.515 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4305ecb0-270f-4622-95a5-30f01bf52a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.547 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b40933-678e-4e29-b1fe-cd1825c3cd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.551 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[c419ee53-2a49-4db3-8cd2-f586c73e27db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.5719] device (tap059f5861-20): carrier: link connected
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.578 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[885ccfba-910a-4954-9528-09a42bd59282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.597 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0f9484-f812-43d0-a8b0-50accc86d159]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap059f5861-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 866405, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392003, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.613 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[107df631-a8d9-4c40-af91-1017382dbd43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:e72a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 866405, 'tstamp': 866405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392004, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.630 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e1aa90-52d8-44cb-9253-5bd356b7188a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap059f5861-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 866405, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392005, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.661 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3d59d055-40e8-4c1e-a910-ae839f16b6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.711 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[58963954-724b-4b65-b91d-a86ff9dfec4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.713 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap059f5861-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.713 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.714 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap059f5861-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 NetworkManager[44981]: <info>  [1759410768.7163] manager: (tap059f5861-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Oct 02 13:12:48 compute-0 kernel: tap059f5861-20: entered promiscuous mode
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.720 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap059f5861-20, col_values=(('external_ids', {'iface-id': 'd7b1128a-bc65-448f-ac61-7bb6414ffd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:48 compute-0 ovn_controller[148123]: 2025-10-02T13:12:48Z|01028|binding|INFO|Releasing lport d7b1128a-bc65-448f-ac61-7bb6414ffd02 from this chassis (sb_readonly=0)
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.723 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.724 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11ddc501-9043-43bf-b650-6edfef5f9d3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.725 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-059f5861-22ab-45f3-a914-fb801f3c71f9
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 059f5861-22ab-45f3-a914-fb801f3c71f9
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:12:48 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:48.726 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'env', 'PROCESS_TAG=haproxy-059f5861-22ab-45f3-a914-fb801f3c71f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/059f5861-22ab-45f3-a914-fb801f3c71f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.746 2 DEBUG nova.compute.manager [req-d29bff44-3d1f-476a-8c79-52340344c8f9 req-50978893-b11e-46f3-9808-85159cb86a62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.747 2 DEBUG oslo_concurrency.lockutils [req-d29bff44-3d1f-476a-8c79-52340344c8f9 req-50978893-b11e-46f3-9808-85159cb86a62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.747 2 DEBUG oslo_concurrency.lockutils [req-d29bff44-3d1f-476a-8c79-52340344c8f9 req-50978893-b11e-46f3-9808-85159cb86a62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.747 2 DEBUG oslo_concurrency.lockutils [req-d29bff44-3d1f-476a-8c79-52340344c8f9 req-50978893-b11e-46f3-9808-85159cb86a62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.748 2 DEBUG nova.compute.manager [req-d29bff44-3d1f-476a-8c79-52340344c8f9 req-50978893-b11e-46f3-9808-85159cb86a62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Processing event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.889 2 DEBUG nova.network.neutron [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.922 2 INFO nova.compute.manager [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 1.30 seconds to deallocate network for instance.
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.998 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:48 compute-0 nova_compute[256940]: 2025-10-02 13:12:48.999 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.001 2 DEBUG nova.compute.manager [req-bd50ad1c-3143-4f34-8994-0d3f73fb40c1 req-b2f5ea5e-77be-441c-be44-2d52fd2ef03d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-deleted-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.081 2 DEBUG oslo_concurrency.processutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:49 compute-0 podman[392038]: 2025-10-02 13:12:49.199337493 +0000 UTC m=+0.123248895 container create 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:12:49 compute-0 podman[392038]: 2025-10-02 13:12:49.108650405 +0000 UTC m=+0.032561837 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:12:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:49.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:49 compute-0 systemd[1]: Started libpod-conmon-85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc.scope.
Oct 02 13:12:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bf821f50cb94b5b50d5c5410a6ec4971e20a976fc7f6085cb1e91670c554faa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:12:49 compute-0 podman[392038]: 2025-10-02 13:12:49.389977158 +0000 UTC m=+0.313888590 container init 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:12:49 compute-0 podman[392038]: 2025-10-02 13:12:49.397148774 +0000 UTC m=+0.321060176 container start 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:12:49 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [NOTICE]   (392094) : New worker (392096) forked
Oct 02 13:12:49 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [NOTICE]   (392094) : Loading success.
Oct 02 13:12:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046206427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.624 2 DEBUG oslo_concurrency.processutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.632 2 DEBUG nova.compute.provider_tree [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.647 2 DEBUG nova.scheduler.client.report [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.671 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.700 2 INFO nova.scheduler.client.report [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance de995ad8-07bb-4097-899b-5c79d62a1f4c
Oct 02 13:12:49 compute-0 nova_compute[256940]: 2025-10-02 13:12:49.769 2 DEBUG oslo_concurrency.lockutils [None req-231e1771-d692-48ab-89cb-733fd6f960da 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.106 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.107 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410770.106004, 97dd449c-87a7-4278-a819-3d412f587a4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.107 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] VM Started (Lifecycle Event)
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.110 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.114 2 INFO nova.virt.libvirt.driver [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Instance spawned successfully.
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.115 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.134 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.143 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.143 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.144 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.144 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.144 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.145 2 DEBUG nova.virt.libvirt.driver [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.148 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:50 compute-0 ceph-mon[73668]: pgmap v3018: 305 pgs: 305 active+clean; 694 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 179 KiB/s rd, 3.8 MiB/s wr, 85 op/s
Oct 02 13:12:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4046206427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.191 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.192 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410770.106348, 97dd449c-87a7-4278-a819-3d412f587a4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.192 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] VM Paused (Lifecycle Event)
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.221 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.225 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410770.1096091, 97dd449c-87a7-4278-a819-3d412f587a4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.225 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] VM Resumed (Lifecycle Event)
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.228 2 INFO nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Took 14.76 seconds to spawn the instance on the hypervisor.
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.229 2 DEBUG nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.241 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.245 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.293 2 INFO nova.compute.manager [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Took 15.85 seconds to build instance.
Oct 02 13:12:50 compute-0 nova_compute[256940]: 2025-10-02 13:12:50.346 2 DEBUG oslo_concurrency.lockutils [None req-6ceb13d9-38b2-400f-9015-8a5ff46360bc 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 666 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Oct 02 13:12:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.094 2 DEBUG nova.compute.manager [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.095 2 DEBUG oslo_concurrency.lockutils [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.095 2 DEBUG oslo_concurrency.lockutils [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.095 2 DEBUG oslo_concurrency.lockutils [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.096 2 DEBUG nova.compute.manager [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] No waiting events found dispatching network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.096 2 WARNING nova.compute.manager [req-be687792-0f7c-419a-bf60-22662ee52d56 req-57bc08b9-bc5b-4f15-8801-f60098d68927 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received unexpected event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d for instance with vm_state active and task_state None.
Oct 02 13:12:51 compute-0 ceph-mon[73668]: pgmap v3019: 305 pgs: 305 active+clean; 666 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Oct 02 13:12:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:51 compute-0 podman[392132]: 2025-10-02 13:12:51.394577672 +0000 UTC m=+0.059665432 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:12:51 compute-0 podman[392152]: 2025-10-02 13:12:51.487857837 +0000 UTC m=+0.053385259 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:51.652 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:51.653 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:12:51 compute-0 nova_compute[256940]: 2025-10-02 13:12:51.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 02 13:12:53 compute-0 nova_compute[256940]: 2025-10-02 13:12:53.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:53.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:53 compute-0 ceph-mon[73668]: pgmap v3020: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 02 13:12:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:54 compute-0 nova_compute[256940]: 2025-10-02 13:12:54.020 2 DEBUG nova.compute.manager [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:12:54 compute-0 nova_compute[256940]: 2025-10-02 13:12:54.021 2 DEBUG nova.compute.manager [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing instance network info cache due to event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:12:54 compute-0 nova_compute[256940]: 2025-10-02 13:12:54.021 2 DEBUG oslo_concurrency.lockutils [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:54 compute-0 nova_compute[256940]: 2025-10-02 13:12:54.021 2 DEBUG oslo_concurrency.lockutils [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:54 compute-0 nova_compute[256940]: 2025-10-02 13:12:54.021 2 DEBUG nova.network.neutron [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:12:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 953 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 02 13:12:54 compute-0 sudo[392174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:54 compute-0 sudo[392174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:54 compute-0 sudo[392174]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:54 compute-0 sudo[392199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:54 compute-0 sudo[392199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:54 compute-0 sudo[392199]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:55.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:56 compute-0 ceph-mon[73668]: pgmap v3021: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 953 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 02 13:12:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 596 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Oct 02 13:12:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:12:56.654 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:56 compute-0 nova_compute[256940]: 2025-10-02 13:12:56.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:57 compute-0 ceph-mon[73668]: pgmap v3022: 305 pgs: 305 active+clean; 596 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Oct 02 13:12:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:12:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:12:57 compute-0 nova_compute[256940]: 2025-10-02 13:12:57.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410762.9366052, de995ad8-07bb-4097-899b-5c79d62a1f4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:12:57 compute-0 nova_compute[256940]: 2025-10-02 13:12:57.938 2 INFO nova.compute.manager [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Stopped (Lifecycle Event)
Oct 02 13:12:57 compute-0 nova_compute[256940]: 2025-10-02 13:12:57.960 2 DEBUG nova.compute.manager [None req-59b3c547-b393-47e0-8e22-4d722787c24e - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:12:58 compute-0 nova_compute[256940]: 2025-10-02 13:12:58.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 178 op/s
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:12:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:12:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:59 compute-0 nova_compute[256940]: 2025-10-02 13:12:59.302 2 DEBUG nova.network.neutron [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updated VIF entry in instance network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:12:59 compute-0 nova_compute[256940]: 2025-10-02 13:12:59.302 2 DEBUG nova.network.neutron [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:59 compute-0 nova_compute[256940]: 2025-10-02 13:12:59.327 2 DEBUG oslo_concurrency.lockutils [req-29321bd4-7189-4478-a4b8-97a8fff49c5b req-636de084-09a1-41e8-9307-aea5ceabd049 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:59 compute-0 ceph-mon[73668]: pgmap v3023: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 178 op/s
Oct 02 13:12:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:12:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 533 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 124 KiB/s wr, 151 op/s
Oct 02 13:13:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3468991839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:01 compute-0 nova_compute[256940]: 2025-10-02 13:13:01.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:02 compute-0 ceph-mon[73668]: pgmap v3024: 305 pgs: 305 active+clean; 533 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 124 KiB/s wr, 151 op/s
Oct 02 13:13:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 504 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 66 KiB/s wr, 142 op/s
Oct 02 13:13:03 compute-0 nova_compute[256940]: 2025-10-02 13:13:03.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:03.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:03 compute-0 ceph-mon[73668]: pgmap v3025: 305 pgs: 305 active+clean; 504 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 66 KiB/s wr, 142 op/s
Oct 02 13:13:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:03.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 485 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 170 KiB/s wr, 129 op/s
Oct 02 13:13:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3318170492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:05 compute-0 ceph-mon[73668]: pgmap v3026: 305 pgs: 305 active+clean; 485 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 170 KiB/s wr, 129 op/s
Oct 02 13:13:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:05 compute-0 sudo[392230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:05 compute-0 sudo[392230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:05 compute-0 sudo[392230]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-0 sudo[392255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:06 compute-0 sudo[392255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-0 sudo[392255]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-0 sudo[392280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:06 compute-0 sudo[392280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-0 sudo[392280]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-0 sudo[392305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:13:06 compute-0 sudo[392305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 501 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Oct 02 13:13:06 compute-0 podman[392372]: 2025-10-02 13:13:06.533695905 +0000 UTC m=+0.097551536 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 13:13:06 compute-0 podman[392380]: 2025-10-02 13:13:06.568528151 +0000 UTC m=+0.110215026 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:13:06 compute-0 podman[392448]: 2025-10-02 13:13:06.641523698 +0000 UTC m=+0.074963479 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:13:06 compute-0 podman[392448]: 2025-10-02 13:13:06.75971596 +0000 UTC m=+0.193155751 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:06 compute-0 nova_compute[256940]: 2025-10-02 13:13:06.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:07 compute-0 ovn_controller[148123]: 2025-10-02T13:13:07Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:b8:83 10.100.0.9
Oct 02 13:13:07 compute-0 ovn_controller[148123]: 2025-10-02T13:13:07Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:b8:83 10.100.0.9
Oct 02 13:13:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:07 compute-0 podman[392589]: 2025-10-02 13:13:07.364626234 +0000 UTC m=+0.069266732 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:13:07 compute-0 podman[392589]: 2025-10-02 13:13:07.371900973 +0000 UTC m=+0.076541451 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:13:07 compute-0 podman[392654]: 2025-10-02 13:13:07.577875506 +0000 UTC m=+0.054028736 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, architecture=x86_64)
Oct 02 13:13:07 compute-0 podman[392654]: 2025-10-02 13:13:07.589497928 +0000 UTC m=+0.065651148 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, release=1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Oct 02 13:13:07 compute-0 sudo[392305]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:13:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:13:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:08 compute-0 sudo[392707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:08 compute-0 sudo[392707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 sudo[392707]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:08 compute-0 ceph-mon[73668]: pgmap v3027: 305 pgs: 305 active+clean; 501 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Oct 02 13:13:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:08 compute-0 sudo[392732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:08 compute-0 sudo[392732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 sudo[392732]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:08 compute-0 sudo[392757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:08 compute-0 sudo[392757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 sudo[392757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:08 compute-0 nova_compute[256940]: 2025-10-02 13:13:08.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-0 sudo[392782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:13:08 compute-0 sudo[392782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 13:13:08 compute-0 sudo[392782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:09 compute-0 ceph-mon[73668]: pgmap v3028: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 13:13:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:09.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:13:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 438 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 13:13:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:13:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:11.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: pgmap v3029: 305 pgs: 305 active+clean; 438 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 13:13:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:11 compute-0 nova_compute[256940]: 2025-10-02 13:13:11.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 195af50a-cb4e-4905-a553-0e80e568ec07 does not exist
Oct 02 13:13:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d5b33b63-85ff-4e24-987c-338a503e1bf8 does not exist
Oct 02 13:13:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b0426d96-8f81-4ce7-a0aa-e4e7436d7052 does not exist
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:13:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:13:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:11 compute-0 sudo[392840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:11 compute-0 sudo[392840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:11 compute-0 sudo[392840]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:11 compute-0 sudo[392865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:11 compute-0 sudo[392865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:11 compute-0 sudo[392865]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:11 compute-0 sudo[392890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:11 compute-0 sudo[392890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:11 compute-0 sudo[392890]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:12 compute-0 sudo[392915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:13:12 compute-0 sudo[392915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.390718294 +0000 UTC m=+0.038493652 container create 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:12 compute-0 systemd[1]: Started libpod-conmon-6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687.scope.
Oct 02 13:13:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 02 13:13:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.373407374 +0000 UTC m=+0.021182762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.49058493 +0000 UTC m=+0.138360388 container init 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.497604202 +0000 UTC m=+0.145379600 container start 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:12 compute-0 zealous_kepler[392997]: 167 167
Oct 02 13:13:12 compute-0 systemd[1]: libpod-6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687.scope: Deactivated successfully.
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.513506795 +0000 UTC m=+0.161282153 container attach 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.513788743 +0000 UTC m=+0.161564101 container died 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-07093fd2d6af6de57c0a8a2f99ba47bd22df4f60c02c46b92ca504b7e5a1c886-merged.mount: Deactivated successfully.
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:13:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:12 compute-0 podman[392981]: 2025-10-02 13:13:12.702606911 +0000 UTC m=+0.350382309 container remove 6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:12 compute-0 systemd[1]: libpod-conmon-6fb17f4c55fc01bde415fb1d4d354af71db2bf91730a1af8ccb7356bc3db5687.scope: Deactivated successfully.
Oct 02 13:13:13 compute-0 podman[393023]: 2025-10-02 13:13:12.904852138 +0000 UTC m=+0.022960768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:13 compute-0 podman[393023]: 2025-10-02 13:13:13.009348384 +0000 UTC m=+0.127456964 container create 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:13:13 compute-0 systemd[1]: Started libpod-conmon-9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c.scope.
Oct 02 13:13:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:13 compute-0 nova_compute[256940]: 2025-10-02 13:13:13.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:13 compute-0 podman[393023]: 2025-10-02 13:13:13.179022364 +0000 UTC m=+0.297130964 container init 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:13 compute-0 podman[393023]: 2025-10-02 13:13:13.189553158 +0000 UTC m=+0.307661748 container start 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:13 compute-0 podman[393023]: 2025-10-02 13:13:13.222904135 +0000 UTC m=+0.341012715 container attach 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:13:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:13.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:13.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:14 compute-0 vigilant_bhabha[393039]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:13:14 compute-0 vigilant_bhabha[393039]: --> relative data size: 1.0
Oct 02 13:13:14 compute-0 vigilant_bhabha[393039]: --> All data devices are unavailable
Oct 02 13:13:14 compute-0 systemd[1]: libpod-9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c.scope: Deactivated successfully.
Oct 02 13:13:14 compute-0 podman[393023]: 2025-10-02 13:13:14.030078125 +0000 UTC m=+1.148186735 container died 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:13:14 compute-0 ceph-mon[73668]: pgmap v3030: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 02 13:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c912a937d672af98888bfc75d57212a85d0b0a5a9f7d61451b771527b4dbc5c-merged.mount: Deactivated successfully.
Oct 02 13:13:14 compute-0 podman[393023]: 2025-10-02 13:13:14.403780309 +0000 UTC m=+1.521888889 container remove 9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:13:14 compute-0 sudo[392915]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:14 compute-0 sudo[393067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:14 compute-0 sudo[393067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393067]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:14 compute-0 systemd[1]: libpod-conmon-9595095e8555843b277cb846e0a76910f4fa9f82d3e5bfcf7e4fe56acb5ca46c.scope: Deactivated successfully.
Oct 02 13:13:14 compute-0 sudo[393092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:14 compute-0 sudo[393092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393092]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:14 compute-0 sudo[393117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:14 compute-0 sudo[393117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393117]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:14 compute-0 sudo[393142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:14 compute-0 sudo[393142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393142]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:14 compute-0 sudo[393143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:13:14 compute-0 sudo[393143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:14 compute-0 sudo[393192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:14 compute-0 sudo[393192]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.031773441 +0000 UTC m=+0.044544649 container create e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:15 compute-0 systemd[1]: Started libpod-conmon-e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb.scope.
Oct 02 13:13:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.012440899 +0000 UTC m=+0.025212137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.116547785 +0000 UTC m=+0.129319013 container init e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.123848395 +0000 UTC m=+0.136619603 container start e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.12674082 +0000 UTC m=+0.139512048 container attach e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:15 compute-0 romantic_joliot[393274]: 167 167
Oct 02 13:13:15 compute-0 systemd[1]: libpod-e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb.scope: Deactivated successfully.
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.129981254 +0000 UTC m=+0.142752472 container died e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e7adef8171bcc5cd2b09767e091a537c31254656f4fbe568d583a86a4b313d-merged.mount: Deactivated successfully.
Oct 02 13:13:15 compute-0 podman[393258]: 2025-10-02 13:13:15.170556789 +0000 UTC m=+0.183327997 container remove e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:13:15 compute-0 systemd[1]: libpod-conmon-e71146ee68accc5785b81ab00941041f2a60db292783e29c5a6efb5659200abb.scope: Deactivated successfully.
Oct 02 13:13:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1903692744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:15 compute-0 ceph-mon[73668]: pgmap v3031: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:13:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3948110413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:15.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:15 compute-0 podman[393297]: 2025-10-02 13:13:15.350414554 +0000 UTC m=+0.042861105 container create 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:13:15 compute-0 systemd[1]: Started libpod-conmon-886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59.scope.
Oct 02 13:13:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815f8124c559e5f1a052ebc62e5182bff6bf9c07aec52072c94c220717ea227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815f8124c559e5f1a052ebc62e5182bff6bf9c07aec52072c94c220717ea227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815f8124c559e5f1a052ebc62e5182bff6bf9c07aec52072c94c220717ea227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815f8124c559e5f1a052ebc62e5182bff6bf9c07aec52072c94c220717ea227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:15 compute-0 podman[393297]: 2025-10-02 13:13:15.334386587 +0000 UTC m=+0.026833158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:15 compute-0 podman[393297]: 2025-10-02 13:13:15.437918038 +0000 UTC m=+0.130364599 container init 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:15 compute-0 podman[393297]: 2025-10-02 13:13:15.444743086 +0000 UTC m=+0.137189637 container start 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:13:15 compute-0 podman[393297]: 2025-10-02 13:13:15.448781611 +0000 UTC m=+0.141228182 container attach 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:16 compute-0 funny_mestorf[393314]: {
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:     "1": [
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:         {
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "devices": [
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "/dev/loop3"
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             ],
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "lv_name": "ceph_lv0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "lv_size": "7511998464",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "name": "ceph_lv0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "tags": {
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.cluster_name": "ceph",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.crush_device_class": "",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.encrypted": "0",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.osd_id": "1",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.type": "block",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:                 "ceph.vdo": "0"
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             },
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "type": "block",
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:             "vg_name": "ceph_vg0"
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:         }
Oct 02 13:13:16 compute-0 funny_mestorf[393314]:     ]
Oct 02 13:13:16 compute-0 funny_mestorf[393314]: }
Oct 02 13:13:16 compute-0 systemd[1]: libpod-886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59.scope: Deactivated successfully.
Oct 02 13:13:16 compute-0 podman[393297]: 2025-10-02 13:13:16.286656569 +0000 UTC m=+0.979103120 container died 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2815f8124c559e5f1a052ebc62e5182bff6bf9c07aec52072c94c220717ea227-merged.mount: Deactivated successfully.
Oct 02 13:13:16 compute-0 podman[393297]: 2025-10-02 13:13:16.343787794 +0000 UTC m=+1.036234345 container remove 886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:16 compute-0 systemd[1]: libpod-conmon-886d765621bc4723942f786a6405937cc5da7d15a01089c2c8ca195242f50a59.scope: Deactivated successfully.
Oct 02 13:13:16 compute-0 sudo[393143]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.0 MiB/s wr, 123 op/s
Oct 02 13:13:16 compute-0 sudo[393337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:16 compute-0 sudo[393337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:16 compute-0 sudo[393337]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:16 compute-0 sudo[393362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:16 compute-0 sudo[393362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:16 compute-0 sudo[393362]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:16 compute-0 sudo[393387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:16 compute-0 sudo[393387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:16 compute-0 sudo[393387]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:16 compute-0 sudo[393412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:13:16 compute-0 sudo[393412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:16 compute-0 nova_compute[256940]: 2025-10-02 13:13:16.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:16 compute-0 podman[393478]: 2025-10-02 13:13:16.948674897 +0000 UTC m=+0.045319969 container create bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:13:16 compute-0 systemd[1]: Started libpod-conmon-bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99.scope.
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:16.930867704 +0000 UTC m=+0.027512786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:17.046599962 +0000 UTC m=+0.143245044 container init bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:17.053266825 +0000 UTC m=+0.149911887 container start bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:17.056791837 +0000 UTC m=+0.153436899 container attach bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:13:17 compute-0 laughing_driscoll[393495]: 167 167
Oct 02 13:13:17 compute-0 systemd[1]: libpod-bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99.scope: Deactivated successfully.
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:17.060612556 +0000 UTC m=+0.157257618 container died bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f55b71be02e94e2964bc106226238457466e2d0f1eeb6c005f6c0735412905-merged.mount: Deactivated successfully.
Oct 02 13:13:17 compute-0 podman[393478]: 2025-10-02 13:13:17.110161784 +0000 UTC m=+0.206806846 container remove bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:17 compute-0 systemd[1]: libpod-conmon-bcf1a1f9ddc077e34582c4681d83f2fd646ca3b3f7d85e086fe63c14f7029c99.scope: Deactivated successfully.
Oct 02 13:13:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:17.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:17 compute-0 podman[393519]: 2025-10-02 13:13:17.290247015 +0000 UTC m=+0.048874121 container create a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:13:17 compute-0 systemd[1]: Started libpod-conmon-a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e.scope.
Oct 02 13:13:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f910e4dc60bead907b38fed5fc45e6506b2de7e76a380184921e37685b3075d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f910e4dc60bead907b38fed5fc45e6506b2de7e76a380184921e37685b3075d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f910e4dc60bead907b38fed5fc45e6506b2de7e76a380184921e37685b3075d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f910e4dc60bead907b38fed5fc45e6506b2de7e76a380184921e37685b3075d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:13:17 compute-0 podman[393519]: 2025-10-02 13:13:17.360661826 +0000 UTC m=+0.119288942 container init a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:13:17 compute-0 podman[393519]: 2025-10-02 13:13:17.264842815 +0000 UTC m=+0.023469971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:13:17 compute-0 podman[393519]: 2025-10-02 13:13:17.374161736 +0000 UTC m=+0.132788842 container start a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:17 compute-0 podman[393519]: 2025-10-02 13:13:17.378300804 +0000 UTC m=+0.136927910 container attach a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:13:17 compute-0 ceph-mon[73668]: pgmap v3032: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.0 MiB/s wr, 123 op/s
Oct 02 13:13:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:18 compute-0 nova_compute[256940]: 2025-10-02 13:13:18.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]: {
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:         "osd_id": 1,
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:         "type": "bluestore"
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]:     }
Oct 02 13:13:18 compute-0 dreamy_lewin[393535]: }
Oct 02 13:13:18 compute-0 systemd[1]: libpod-a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e.scope: Deactivated successfully.
Oct 02 13:13:18 compute-0 conmon[393535]: conmon a8c0a3e22aacc6a12106 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e.scope/container/memory.events
Oct 02 13:13:18 compute-0 podman[393519]: 2025-10-02 13:13:18.239492318 +0000 UTC m=+0.998119424 container died a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f910e4dc60bead907b38fed5fc45e6506b2de7e76a380184921e37685b3075d7-merged.mount: Deactivated successfully.
Oct 02 13:13:18 compute-0 podman[393519]: 2025-10-02 13:13:18.307506486 +0000 UTC m=+1.066133602 container remove a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 13:13:18 compute-0 systemd[1]: libpod-conmon-a8c0a3e22aacc6a12106cedd4d08d6d54add9262f0cab7a5cf93a45fd440154e.scope: Deactivated successfully.
Oct 02 13:13:18 compute-0 sudo[393412]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:13:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:13:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 658 KiB/s wr, 97 op/s
Oct 02 13:13:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 27e2c7d8-080f-4dd1-831e-a5ee7338594f does not exist
Oct 02 13:13:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2f0e822a-4764-4612-a4ce-4e163f5a926e does not exist
Oct 02 13:13:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 63292ba8-a32f-4ca6-8450-37ef658138bd does not exist
Oct 02 13:13:18 compute-0 sudo[393570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:18 compute-0 sudo[393570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:18 compute-0 sudo[393570]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:18 compute-0 sudo[393595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:13:18 compute-0 sudo[393595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:18 compute-0 sudo[393595]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:19 compute-0 ovn_controller[148123]: 2025-10-02T13:13:19Z|01029|binding|INFO|Releasing lport d7b1128a-bc65-448f-ac61-7bb6414ffd02 from this chassis (sb_readonly=0)
Oct 02 13:13:19 compute-0 nova_compute[256940]: 2025-10-02 13:13:19.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:19 compute-0 ceph-mon[73668]: pgmap v3033: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 658 KiB/s wr, 97 op/s
Oct 02 13:13:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 314 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 99 KiB/s wr, 72 op/s
Oct 02 13:13:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:21 compute-0 nova_compute[256940]: 2025-10-02 13:13:21.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:21 compute-0 ceph-mon[73668]: pgmap v3034: 305 pgs: 305 active+clean; 314 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 99 KiB/s wr, 72 op/s
Oct 02 13:13:22 compute-0 podman[393623]: 2025-10-02 13:13:22.417751532 +0000 UTC m=+0.073913162 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:13:22 compute-0 podman[393622]: 2025-10-02 13:13:22.417786183 +0000 UTC m=+0.073894072 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:13:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 98 KiB/s rd, 95 KiB/s wr, 63 op/s
Oct 02 13:13:23 compute-0 nova_compute[256940]: 2025-10-02 13:13:23.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:23 compute-0 ceph-mon[73668]: pgmap v3035: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 98 KiB/s rd, 95 KiB/s wr, 63 op/s
Oct 02 13:13:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1530611564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 15 KiB/s wr, 62 op/s
Oct 02 13:13:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:13:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:25.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:13:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:26 compute-0 ceph-mon[73668]: pgmap v3036: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 15 KiB/s wr, 62 op/s
Oct 02 13:13:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 15 KiB/s wr, 49 op/s
Oct 02 13:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:26.508 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:26.509 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:26.509 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:26 compute-0 nova_compute[256940]: 2025-10-02 13:13:26.596 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:26 compute-0 nova_compute[256940]: 2025-10-02 13:13:26.597 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:26 compute-0 nova_compute[256940]: 2025-10-02 13:13:26.625 2 DEBUG nova.objects.instance [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:26 compute-0 nova_compute[256940]: 2025-10-02 13:13:26.668 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:26 compute-0 nova_compute[256940]: 2025-10-02 13:13:26.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.180 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.181 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.181 2 INFO nova.compute.manager [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Attaching volume d96c5d9f-6560-48dd-8bf0-55b9a4da462b to /dev/vdb
Oct 02 13:13:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1372799750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:27 compute-0 ceph-mon[73668]: pgmap v3037: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 15 KiB/s wr, 49 op/s
Oct 02 13:13:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:27.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.484 2 DEBUG os_brick.utils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.486 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.499 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.500 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c4312f-5904-4497-af0b-49845d8b9809]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.501 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.511 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.511 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0213f6-2e94-41e4-a054-2d7e34dd7c4d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.513 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.522 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.523 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2ee32e-43a1-492b-b0af-bd1044730674]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.524 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[007d82e3-5f39-4e23-bd5b-8eeaf3b56cb8]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.525 2 DEBUG oslo_concurrency.processutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.558 2 DEBUG oslo_concurrency.processutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.561 2 DEBUG os_brick.initiator.connectors.lightos [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.561 2 DEBUG os_brick.initiator.connectors.lightos [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.562 2 DEBUG os_brick.initiator.connectors.lightos [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.562 2 DEBUG os_brick.utils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:13:27 compute-0 nova_compute[256940]: 2025-10-02 13:13:27.562 2 DEBUG nova.virt.block_device [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating existing volume attachment record: b26ac390-5800-4d5f-9659-a22fb1a48ad5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:13:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:13:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928997528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3093970101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/57139510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.449 2 DEBUG nova.objects.instance [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.479 2 DEBUG nova.virt.libvirt.driver [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Attempting to attach volume d96c5d9f-6560-48dd-8bf0-55b9a4da462b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.483 2 DEBUG nova.virt.libvirt.guest [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:13:28 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-d96c5d9f-6560-48dd-8bf0-55b9a4da462b">
Oct 02 13:13:28 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   </source>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:13:28 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:13:28 compute-0 nova_compute[256940]:   <serial>d96c5d9f-6560-48dd-8bf0-55b9a4da462b</serial>
Oct 02 13:13:28 compute-0 nova_compute[256940]: </disk>
Oct 02 13:13:28 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.619 2 DEBUG nova.virt.libvirt.driver [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.620 2 DEBUG nova.virt.libvirt.driver [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.620 2 DEBUG nova.virt.libvirt.driver [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.620 2 DEBUG nova.virt.libvirt.driver [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No VIF found with MAC fa:16:3e:99:b8:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:13:28 compute-0 nova_compute[256940]: 2025-10-02 13:13:28.872 2 DEBUG oslo_concurrency.lockutils [None req-82d726c8-6454-4e6c-8cfb-5331b8947a25 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:13:28
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images']
Oct 02 13:13:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:13:29 compute-0 nova_compute[256940]: 2025-10-02 13:13:29.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:29.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2928997528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:29 compute-0 ceph-mon[73668]: pgmap v3038: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Oct 02 13:13:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1871557515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:13:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:13:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:30 compute-0 nova_compute[256940]: 2025-10-02 13:13:30.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:30 compute-0 nova_compute[256940]: 2025-10-02 13:13:30.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:13:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Oct 02 13:13:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/532331378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:31 compute-0 nova_compute[256940]: 2025-10-02 13:13:31.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:31 compute-0 nova_compute[256940]: 2025-10-02 13:13:31.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:31 compute-0 ceph-mon[73668]: pgmap v3039: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Oct 02 13:13:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:31 compute-0 nova_compute[256940]: 2025-10-02 13:13:31.997 2 DEBUG oslo_concurrency.lockutils [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:31 compute-0 nova_compute[256940]: 2025-10-02 13:13:31.998 2 DEBUG oslo_concurrency.lockutils [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.022 2 INFO nova.compute.manager [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Detaching volume d96c5d9f-6560-48dd-8bf0-55b9a4da462b
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.180 2 INFO nova.virt.block_device [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Attempting to driver detach volume d96c5d9f-6560-48dd-8bf0-55b9a4da462b from mountpoint /dev/vdb
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.188 2 DEBUG nova.virt.libvirt.driver [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Attempting to detach device vdb from instance 97dd449c-87a7-4278-a819-3d412f587a4c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.189 2 DEBUG nova.virt.libvirt.guest [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-d96c5d9f-6560-48dd-8bf0-55b9a4da462b">
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   </source>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <serial>d96c5d9f-6560-48dd-8bf0-55b9a4da462b</serial>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]: </disk>
Oct 02 13:13:32 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.202 2 INFO nova.virt.libvirt.driver [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully detached device vdb from instance 97dd449c-87a7-4278-a819-3d412f587a4c from the persistent domain config.
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.203 2 DEBUG nova.virt.libvirt.driver [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 97dd449c-87a7-4278-a819-3d412f587a4c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.204 2 DEBUG nova.virt.libvirt.guest [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-d96c5d9f-6560-48dd-8bf0-55b9a4da462b">
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   </source>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <serial>d96c5d9f-6560-48dd-8bf0-55b9a4da462b</serial>
Oct 02 13:13:32 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:13:32 compute-0 nova_compute[256940]: </disk>
Oct 02 13:13:32 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.247 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.317 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759410812.3170798, 97dd449c-87a7-4278-a819-3d412f587a4c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.319 2 DEBUG nova.virt.libvirt.driver [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 97dd449c-87a7-4278-a819-3d412f587a4c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.322 2 INFO nova.virt.libvirt.driver [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully detached device vdb from instance 97dd449c-87a7-4278-a819-3d412f587a4c from the live domain config.
Oct 02 13:13:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.663 2 DEBUG nova.objects.instance [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306169587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.694 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.708 2 DEBUG oslo_concurrency.lockutils [None req-8cfd9da4-2c50-44b2-9684-9218be281920 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.774 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.775 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:13:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/306169587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.918 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.919 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3919MB free_disk=20.94274139404297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.919 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:32 compute-0 nova_compute[256940]: 2025-10-02 13:13:32.919 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.006 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 97dd449c-87a7-4278-a819-3d412f587a4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.007 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.040 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391005584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.464 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.470 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.486 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.535 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:13:33 compute-0 nova_compute[256940]: 2025-10-02 13:13:33.535 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Oct 02 13:13:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Oct 02 13:13:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Oct 02 13:13:33 compute-0 ceph-mon[73668]: pgmap v3040: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Oct 02 13:13:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/391005584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:33.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 237 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 210 KiB/s wr, 35 op/s
Oct 02 13:13:34 compute-0 sudo[393745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:34 compute-0 sudo[393745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:34 compute-0 sudo[393745]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:34 compute-0 sudo[393770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:34 compute-0 sudo[393770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:34 compute-0 sudo[393770]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:34 compute-0 ceph-mon[73668]: osdmap e377: 3 total, 3 up, 3 in
Oct 02 13:13:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:35 compute-0 ovn_controller[148123]: 2025-10-02T13:13:35Z|01030|binding|INFO|Releasing lport d7b1128a-bc65-448f-ac61-7bb6414ffd02 from this chassis (sb_readonly=0)
Oct 02 13:13:35 compute-0 nova_compute[256940]: 2025-10-02 13:13:35.399 2 DEBUG nova.compute.manager [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:13:35 compute-0 nova_compute[256940]: 2025-10-02 13:13:35.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:35 compute-0 nova_compute[256940]: 2025-10-02 13:13:35.453 2 INFO nova.compute.manager [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] instance snapshotting
Oct 02 13:13:35 compute-0 nova_compute[256940]: 2025-10-02 13:13:35.869 2 INFO nova.virt.libvirt.driver [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Beginning live snapshot process
Oct 02 13:13:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:36 compute-0 nova_compute[256940]: 2025-10-02 13:13:36.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:36 compute-0 nova_compute[256940]: 2025-10-02 13:13:36.256 2 DEBUG nova.virt.libvirt.imagebackend [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 13:13:36 compute-0 ceph-mon[73668]: pgmap v3042: 305 pgs: 305 active+clean; 237 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 210 KiB/s wr, 35 op/s
Oct 02 13:13:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 233 KiB/s wr, 42 op/s
Oct 02 13:13:36 compute-0 nova_compute[256940]: 2025-10-02 13:13:36.497 2 DEBUG nova.storage.rbd_utils [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] creating snapshot(05c3a90a2f68470d8acfd713fc9f53f3) on rbd image(97dd449c-87a7-4278-a819-3d412f587a4c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:13:36 compute-0 nova_compute[256940]: 2025-10-02 13:13:36.627 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:36 compute-0 nova_compute[256940]: 2025-10-02 13:13:36.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Oct 02 13:13:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Oct 02 13:13:37 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Oct 02 13:13:37 compute-0 podman[393847]: 2025-10-02 13:13:37.38273075 +0000 UTC m=+0.057791114 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:13:37 compute-0 podman[393848]: 2025-10-02 13:13:37.413846338 +0000 UTC m=+0.086292134 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:13:37 compute-0 ceph-mon[73668]: pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 233 KiB/s wr, 42 op/s
Oct 02 13:13:37 compute-0 nova_compute[256940]: 2025-10-02 13:13:37.579 2 DEBUG nova.storage.rbd_utils [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] cloning vms/97dd449c-87a7-4278-a819-3d412f587a4c_disk@05c3a90a2f68470d8acfd713fc9f53f3 to images/23101426-fbb2-4946-b9ce-fdcd0b2d5391 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:13:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:37.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:37 compute-0 nova_compute[256940]: 2025-10-02 13:13:37.964 2 DEBUG nova.storage.rbd_utils [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] flattening images/23101426-fbb2-4946-b9ce-fdcd0b2d5391 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:13:38 compute-0 nova_compute[256940]: 2025-10-02 13:13:38.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:38 compute-0 nova_compute[256940]: 2025-10-02 13:13:38.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 291 KiB/s wr, 76 op/s
Oct 02 13:13:38 compute-0 ceph-mon[73668]: osdmap e378: 3 total, 3 up, 3 in
Oct 02 13:13:38 compute-0 nova_compute[256940]: 2025-10-02 13:13:38.993 2 DEBUG nova.storage.rbd_utils [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] removing snapshot(05c3a90a2f68470d8acfd713fc9f53f3) on rbd image(97dd449c-87a7-4278-a819-3d412f587a4c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:13:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:13:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 67K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1635 writes, 6981 keys, 1635 commit groups, 1.0 writes per commit group, ingest: 10.62 MB, 0.02 MB/s
                                           Interval WAL: 1635 writes, 1635 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.3      2.37              0.27        46    0.051       0      0       0.0       0.0
                                             L6      1/0    9.82 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1     76.0     64.8      6.98              1.31        45    0.155    317K    24K       0.0       0.0
                                            Sum      1/0    9.82 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     56.8     57.8      9.34              1.58        91    0.103    317K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     47.0     45.9      1.30              0.18        10    0.130     47K   2560       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     76.0     64.8      6.98              1.31        45    0.155    317K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     37.3      2.36              0.27        45    0.053       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.086, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.53 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 9.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 57.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000391 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3358,55.33 MB,18.1996%) FilterBlock(92,886.61 KB,0.284812%) IndexBlock(92,1.46 MB,0.480431%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:13:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:39.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Oct 02 13:13:39 compute-0 ceph-mon[73668]: pgmap v3045: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 291 KiB/s wr, 76 op/s
Oct 02 13:13:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Oct 02 13:13:39 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Oct 02 13:13:39 compute-0 nova_compute[256940]: 2025-10-02 13:13:39.882 2 DEBUG nova.storage.rbd_utils [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] creating snapshot(snap) on rbd image(23101426-fbb2-4946-b9ce-fdcd0b2d5391) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:13:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:39.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:40 compute-0 nova_compute[256940]: 2025-10-02 13:13:40.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021745009771077612 of space, bias 1.0, pg target 0.6523502931323284 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002255563116508468 of space, bias 1.0, pg target 0.6766689349525404 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002561454552856877 of space, bias 1.0, pg target 0.7684363658570631 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:13:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:13:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Oct 02 13:13:40 compute-0 ceph-mon[73668]: osdmap e379: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:41 compute-0 nova_compute[256940]: 2025-10-02 13:13:41.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:42 compute-0 ceph-mon[73668]: pgmap v3047: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 13:13:42 compute-0 ceph-mon[73668]: osdmap e380: 3 total, 3 up, 3 in
Oct 02 13:13:42 compute-0 nova_compute[256940]: 2025-10-02 13:13:42.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 88 op/s
Oct 02 13:13:42 compute-0 nova_compute[256940]: 2025-10-02 13:13:42.928 2 INFO nova.virt.libvirt.driver [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Snapshot image upload complete
Oct 02 13:13:42 compute-0 nova_compute[256940]: 2025-10-02 13:13:42.929 2 INFO nova.compute.manager [None req-b04664e4-8579-402c-8c74-2d1cf0c2ccbb 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Took 7.47 seconds to snapshot the instance on the hypervisor.
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.435 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.435 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.436 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:13:43 compute-0 nova_compute[256940]: 2025-10-02 13:13:43.436 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:44 compute-0 ceph-mon[73668]: pgmap v3049: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 88 op/s
Oct 02 13:13:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.6 MiB/s wr, 144 op/s
Oct 02 13:13:44 compute-0 nova_compute[256940]: 2025-10-02 13:13:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:45 compute-0 nova_compute[256940]: 2025-10-02 13:13:45.027 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:45 compute-0 nova_compute[256940]: 2025-10-02 13:13:45.048 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:13:45 compute-0 nova_compute[256940]: 2025-10-02 13:13:45.048 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:13:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:45 compute-0 ceph-mon[73668]: pgmap v3050: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.6 MiB/s wr, 144 op/s
Oct 02 13:13:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:45.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 120 op/s
Oct 02 13:13:46 compute-0 nova_compute[256940]: 2025-10-02 13:13:46.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:46 compute-0 nova_compute[256940]: 2025-10-02 13:13:46.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:47 compute-0 ceph-mon[73668]: pgmap v3051: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 120 op/s
Oct 02 13:13:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:47.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:48 compute-0 nova_compute[256940]: 2025-10-02 13:13:48.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 126 op/s
Oct 02 13:13:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4100119741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.743629) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828743665, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2212, "num_deletes": 255, "total_data_size": 3804065, "memory_usage": 3864696, "flush_reason": "Manual Compaction"}
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828803268, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3723786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66112, "largest_seqno": 68323, "table_properties": {"data_size": 3713812, "index_size": 6339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21332, "raw_average_key_size": 20, "raw_value_size": 3693614, "raw_average_value_size": 3614, "num_data_blocks": 274, "num_entries": 1022, "num_filter_entries": 1022, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410630, "oldest_key_time": 1759410630, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 59679 microseconds, and 9233 cpu microseconds.
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.803307) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3723786 bytes OK
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.803329) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.810929) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.810981) EVENT_LOG_v1 {"time_micros": 1759410828810970, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.811009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3794998, prev total WAL file size 3794998, number of live WAL files 2.
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.812685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3636KB)], [152(10054KB)]
Oct 02 13:13:48 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828812712, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 14019217, "oldest_snapshot_seqno": -1}
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9537 keys, 12075047 bytes, temperature: kUnknown
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410829241412, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12075047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12013722, "index_size": 36380, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 251121, "raw_average_key_size": 26, "raw_value_size": 11846966, "raw_average_value_size": 1242, "num_data_blocks": 1385, "num_entries": 9537, "num_filter_entries": 9537, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:49 compute-0 nova_compute[256940]: 2025-10-02 13:13:49.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.241725) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12075047 bytes
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.283514) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.7 rd, 28.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 10067, records dropped: 530 output_compression: NoCompression
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.283540) EVENT_LOG_v1 {"time_micros": 1759410829283529, "job": 94, "event": "compaction_finished", "compaction_time_micros": 428791, "compaction_time_cpu_micros": 29763, "output_level": 6, "num_output_files": 1, "total_output_size": 12075047, "num_input_records": 10067, "num_output_records": 9537, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410829284385, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410829286221, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:48.812556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.286301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.286306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.286308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.286309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:13:49.286311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:49 compute-0 ceph-mon[73668]: pgmap v3052: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 126 op/s
Oct 02 13:13:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:49.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct 02 13:13:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Oct 02 13:13:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Oct 02 13:13:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Oct 02 13:13:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:51 compute-0 nova_compute[256940]: 2025-10-02 13:13:51.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:51.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:51 compute-0 ceph-mon[73668]: pgmap v3053: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct 02 13:13:51 compute-0 ceph-mon[73668]: osdmap e381: 3 total, 3 up, 3 in
Oct 02 13:13:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2946218505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 107 op/s
Oct 02 13:13:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2028186615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:53 compute-0 nova_compute[256940]: 2025-10-02 13:13:53.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:53 compute-0 podman[393989]: 2025-10-02 13:13:53.403018508 +0000 UTC m=+0.064268081 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:13:53 compute-0 podman[393988]: 2025-10-02 13:13:53.427937776 +0000 UTC m=+0.091078068 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:13:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:54 compute-0 ceph-mon[73668]: pgmap v3055: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 107 op/s
Oct 02 13:13:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct 02 13:13:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:54.773 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:54 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:54.774 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:13:54 compute-0 nova_compute[256940]: 2025-10-02 13:13:54.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:54 compute-0 sudo[394026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:54 compute-0 sudo[394026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:54 compute-0 sudo[394026]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:55 compute-0 sudo[394051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:55 compute-0 sudo[394051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:55 compute-0 sudo[394051]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:55.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:55 compute-0 ceph-mon[73668]: pgmap v3056: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct 02 13:13:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 KiB/s wr, 72 op/s
Oct 02 13:13:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4156363450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:57 compute-0 nova_compute[256940]: 2025-10-02 13:13:57.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:57.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:57 compute-0 ceph-mon[73668]: pgmap v3057: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 KiB/s wr, 72 op/s
Oct 02 13:13:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:58 compute-0 nova_compute[256940]: 2025-10-02 13:13:58.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:13:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:13:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:59.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:13:59 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:13:59.776 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:59 compute-0 ceph-mon[73668]: pgmap v3058: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 02 13:13:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:13:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:13:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:14:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Oct 02 13:14:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Oct 02 13:14:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3992521573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3938846710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Oct 02 13:14:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:01.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Oct 02 13:14:02 compute-0 nova_compute[256940]: 2025-10-02 13:14:02.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-0 ceph-mon[73668]: pgmap v3059: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:14:02 compute-0 ceph-mon[73668]: osdmap e382: 3 total, 3 up, 3 in
Oct 02 13:14:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3755106332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Oct 02 13:14:02 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Oct 02 13:14:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 209 op/s
Oct 02 13:14:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Oct 02 13:14:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Oct 02 13:14:03 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Oct 02 13:14:03 compute-0 nova_compute[256940]: 2025-10-02 13:14:03.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:03 compute-0 ceph-mon[73668]: osdmap e383: 3 total, 3 up, 3 in
Oct 02 13:14:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:03.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Oct 02 13:14:04 compute-0 ceph-mon[73668]: pgmap v3062: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 209 op/s
Oct 02 13:14:04 compute-0 ceph-mon[73668]: osdmap e384: 3 total, 3 up, 3 in
Oct 02 13:14:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 165 op/s
Oct 02 13:14:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Oct 02 13:14:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Oct 02 13:14:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:05 compute-0 ceph-mon[73668]: pgmap v3064: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 165 op/s
Oct 02 13:14:05 compute-0 ceph-mon[73668]: osdmap e385: 3 total, 3 up, 3 in
Oct 02 13:14:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 410 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 152 op/s
Oct 02 13:14:07 compute-0 nova_compute[256940]: 2025-10-02 13:14:07.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:07.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:07 compute-0 ceph-mon[73668]: pgmap v3066: 305 pgs: 305 active+clean; 410 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 152 op/s
Oct 02 13:14:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:08 compute-0 nova_compute[256940]: 2025-10-02 13:14:08.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:08 compute-0 podman[394082]: 2025-10-02 13:14:08.386802805 +0000 UTC m=+0.054578500 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:14:08 compute-0 podman[394083]: 2025-10-02 13:14:08.425144301 +0000 UTC m=+0.080391420 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:14:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 420 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.4 MiB/s wr, 203 op/s
Oct 02 13:14:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Oct 02 13:14:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4268729036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Oct 02 13:14:08 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Oct 02 13:14:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:09 compute-0 ceph-mon[73668]: pgmap v3067: 305 pgs: 305 active+clean; 420 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.4 MiB/s wr, 203 op/s
Oct 02 13:14:09 compute-0 ceph-mon[73668]: osdmap e386: 3 total, 3 up, 3 in
Oct 02 13:14:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Oct 02 13:14:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Oct 02 13:14:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Oct 02 13:14:10 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Oct 02 13:14:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:11 compute-0 ceph-mon[73668]: pgmap v3069: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Oct 02 13:14:11 compute-0 ceph-mon[73668]: osdmap e387: 3 total, 3 up, 3 in
Oct 02 13:14:12 compute-0 nova_compute[256940]: 2025-10-02 13:14:12.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 456 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Oct 02 13:14:13 compute-0 nova_compute[256940]: 2025-10-02 13:14:13.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:13.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:13.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:14 compute-0 ceph-mon[73668]: pgmap v3071: 305 pgs: 305 active+clean; 456 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Oct 02 13:14:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.603 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.603 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.618 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.738 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.739 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.746 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.747 2 INFO nova.compute.claims [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:14:14 compute-0 nova_compute[256940]: 2025-10-02 13:14:14.927 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:15 compute-0 sudo[394133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:15 compute-0 sudo[394133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:15 compute-0 sudo[394133]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:15 compute-0 sudo[394177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:15 compute-0 sudo[394177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:15 compute-0 sudo[394177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:15.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549993522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.841 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.851 2 DEBUG nova.compute.provider_tree [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.871 2 DEBUG nova.scheduler.client.report [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.896 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.898 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.948 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.948 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:14:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:15 compute-0 nova_compute[256940]: 2025-10-02 13:14:15.981 2 INFO nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:14:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:15.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:15 compute-0 ceph-mon[73668]: pgmap v3072: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.010 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.082 2 INFO nova.virt.block_device [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Booting with volume 322bbc88-53a2-4cfe-95f7-44d52cc69f65 at /dev/vda
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.214 2 DEBUG os_brick.utils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.216 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.233 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.234 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[717b90c9-59fe-43a3-bc54-e8789aa3091d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.237 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.245 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.246 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[6948904c-3542-4c6b-8f9a-5919f7f26839]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.248 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.260 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.261 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[75afd6be-f106-45b6-a91d-88f4fd2c8f1a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.263 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[b17cc494-db79-4112-93e7-4fb023215551]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.264 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.313 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "nvme version" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.319 2 DEBUG os_brick.utils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.319 2 DEBUG nova.virt.block_device [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating existing volume attachment record: 9a16231b-feb9-4c30-8006-da5d7d1ee8d4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:14:16 compute-0 nova_compute[256940]: 2025-10-02 13:14:16.378 2 DEBUG nova.policy [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '216ca241b0c548969467dd331a0d31ef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f75652686f7d404a965745c02c9bb8e0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:14:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 495 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 149 op/s
Oct 02 13:14:17 compute-0 nova_compute[256940]: 2025-10-02 13:14:17.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2092745978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1549993522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:17.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.064 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.066 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.066 2 INFO nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Creating image(s)
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.067 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.068 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Ensure instance console log exists: /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.068 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.069 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.069 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:18 compute-0 nova_compute[256940]: 2025-10-02 13:14:18.353 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Successfully created port: 0600d755-522a-4419-877c-9a865a692edc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:14:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 499 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.7 MiB/s wr, 152 op/s
Oct 02 13:14:18 compute-0 ceph-mon[73668]: pgmap v3073: 305 pgs: 305 active+clean; 495 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 149 op/s
Oct 02 13:14:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2092745978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:18 compute-0 sudo[394213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:18 compute-0 sudo[394213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:18 compute-0 sudo[394213]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-0 sudo[394238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:19 compute-0 sudo[394238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-0 sudo[394238]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-0 sudo[394263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:19 compute-0 sudo[394263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-0 sudo[394263]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-0 sudo[394288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:14:19 compute-0 sudo[394288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:19.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:19 compute-0 sudo[394288]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-0 nova_compute[256940]: 2025-10-02 13:14:19.729 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Successfully updated port: 0600d755-522a-4419-877c-9a865a692edc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:14:19 compute-0 nova_compute[256940]: 2025-10-02 13:14:19.818 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:19 compute-0 nova_compute[256940]: 2025-10-02 13:14:19.819 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:19 compute-0 nova_compute[256940]: 2025-10-02 13:14:19.819 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:14:19 compute-0 ceph-mon[73668]: pgmap v3074: 305 pgs: 305 active+clean; 499 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.7 MiB/s wr, 152 op/s
Oct 02 13:14:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:14:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:14:19 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:14:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:14:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:19.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:20 compute-0 nova_compute[256940]: 2025-10-02 13:14:20.014 2 DEBUG nova.compute.manager [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-changed-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:20 compute-0 nova_compute[256940]: 2025-10-02 13:14:20.014 2 DEBUG nova.compute.manager [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing instance network info cache due to event network-changed-0600d755-522a-4419-877c-9a865a692edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:20 compute-0 nova_compute[256940]: 2025-10-02 13:14:20.014 2 DEBUG oslo_concurrency.lockutils [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:20 compute-0 nova_compute[256940]: 2025-10-02 13:14:20.148 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:14:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3a2aea1a-2ea7-4482-8398-c949f28c4662 does not exist
Oct 02 13:14:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1f07743d-92b2-47bb-872a-6d48357c9078 does not exist
Oct 02 13:14:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b4d5f2f6-f03e-4454-b69e-50b4282643eb does not exist
Oct 02 13:14:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:14:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:14:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:14:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:14:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:14:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:20 compute-0 sudo[394344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:20 compute-0 sudo[394344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:20 compute-0 sudo[394344]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:20 compute-0 sudo[394369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:20 compute-0 sudo[394369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Oct 02 13:14:20 compute-0 sudo[394369]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:20 compute-0 sudo[394394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:20 compute-0 sudo[394394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:20 compute-0 sudo[394394]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:20 compute-0 sudo[394419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:14:20 compute-0 sudo[394419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:21 compute-0 podman[394484]: 2025-10-02 13:14:20.94018502 +0000 UTC m=+0.032665070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:14:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:21.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:21 compute-0 podman[394484]: 2025-10-02 13:14:21.359443827 +0000 UTC m=+0.451923897 container create eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:14:21 compute-0 systemd[1]: Started libpod-conmon-eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61.scope.
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.680 2 DEBUG nova.network.neutron [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.715 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.716 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Instance network_info: |[{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.717 2 DEBUG oslo_concurrency.lockutils [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.718 2 DEBUG nova.network.neutron [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing network info cache for port 0600d755-522a-4419-877c-9a865a692edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.727 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Start _get_guest_xml network_info=[{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-322bbc88-53a2-4cfe-95f7-44d52cc69f65', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '322bbc88-53a2-4cfe-95f7-44d52cc69f65', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9e8e18b4-2722-4e75-ac1a-35426a38b20b', 'attached_at': '', 'detached_at': '', 'volume_id': '322bbc88-53a2-4cfe-95f7-44d52cc69f65', 'serial': '322bbc88-53a2-4cfe-95f7-44d52cc69f65'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '9a16231b-feb9-4c30-8006-da5d7d1ee8d4', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.734 2 WARNING nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.740 2 DEBUG nova.virt.libvirt.host [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.742 2 DEBUG nova.virt.libvirt.host [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.762 2 DEBUG nova.virt.libvirt.host [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.763 2 DEBUG nova.virt.libvirt.host [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.764 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.765 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.765 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.765 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.766 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.766 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.766 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.766 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.767 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.767 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.767 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.768 2 DEBUG nova.virt.hardware [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.804 2 DEBUG nova.storage.rbd_utils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] rbd image 9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:21 compute-0 nova_compute[256940]: 2025-10-02 13:14:21.808 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:21 compute-0 podman[394484]: 2025-10-02 13:14:21.994451403 +0000 UTC m=+1.086931473 container init eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:14:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:21.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:22 compute-0 podman[394484]: 2025-10-02 13:14:22.006822885 +0000 UTC m=+1.099302915 container start eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:22 compute-0 systemd[1]: libpod-eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61.scope: Deactivated successfully.
Oct 02 13:14:22 compute-0 ecstatic_bose[394500]: 167 167
Oct 02 13:14:22 compute-0 conmon[394500]: conmon eef95dd7454fff37ba52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61.scope/container/memory.events
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 926 KiB/s rd, 3.2 MiB/s wr, 83 op/s
Oct 02 13:14:22 compute-0 podman[394484]: 2025-10-02 13:14:22.512122969 +0000 UTC m=+1.604603009 container attach eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:14:22 compute-0 podman[394484]: 2025-10-02 13:14:22.513568866 +0000 UTC m=+1.606048926 container died eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:14:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760415651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.559 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.624 2 DEBUG nova.virt.libvirt.vif [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1869138478',display_name='tempest-TestVolumeBackupRestore-server-1869138478',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1869138478',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmRJfyjlSS96oOq4tf8HkYeaCrzKBF7hhHE1nB7GzbQCMs9fO8+UMR8IyyaXtTa1QR52U1+zTKX5Ci38KogRNGOtcWD7LZEijjoALoMSkvvwH+cJwFjJMHDexEflNoXow==',key_name='tempest-TestVolumeBackupRestore-1220833504',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f75652686f7d404a965745c02c9bb8e0',ramdisk_id='',reservation_id='r-dd4dl9vh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1986811919',owner_user_name='tempest-TestVolumeBackupRestore-1986811919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:16Z,user_data=None,user_id='216ca241b0c548969467dd331a0d31ef',uuid=9e8e18b4-2722-4e75-ac1a-35426a38b20b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.625 2 DEBUG nova.network.os_vif_util [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converting VIF {"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.626 2 DEBUG nova.network.os_vif_util [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.628 2 DEBUG nova.objects.instance [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e8e18b4-2722-4e75-ac1a-35426a38b20b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.651 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <uuid>9e8e18b4-2722-4e75-ac1a-35426a38b20b</uuid>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <name>instance-000000d1</name>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:name>tempest-TestVolumeBackupRestore-server-1869138478</nova:name>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:14:21</nova:creationTime>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:user uuid="216ca241b0c548969467dd331a0d31ef">tempest-TestVolumeBackupRestore-1986811919-project-member</nova:user>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:project uuid="f75652686f7d404a965745c02c9bb8e0">tempest-TestVolumeBackupRestore-1986811919</nova:project>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <nova:port uuid="0600d755-522a-4419-877c-9a865a692edc">
Oct 02 13:14:22 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <system>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="serial">9e8e18b4-2722-4e75-ac1a-35426a38b20b</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="uuid">9e8e18b4-2722-4e75-ac1a-35426a38b20b</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </system>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <os>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </os>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <features>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </features>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config">
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </source>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-322bbc88-53a2-4cfe-95f7-44d52cc69f65">
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </source>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:14:22 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <serial>322bbc88-53a2-4cfe-95f7-44d52cc69f65</serial>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:24:3d:eb"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <target dev="tap0600d755-52"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/console.log" append="off"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <video>
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </video>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:14:22 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:14:22 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:14:22 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:14:22 compute-0 nova_compute[256940]: </domain>
Oct 02 13:14:22 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.652 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Preparing to wait for external event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.652 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.653 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.654 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.655 2 DEBUG nova.virt.libvirt.vif [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1869138478',display_name='tempest-TestVolumeBackupRestore-server-1869138478',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1869138478',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmRJfyjlSS96oOq4tf8HkYeaCrzKBF7hhHE1nB7GzbQCMs9fO8+UMR8IyyaXtTa1QR52U1+zTKX5Ci38KogRNGOtcWD7LZEijjoALoMSkvvwH+cJwFjJMHDexEflNoXow==',key_name='tempest-TestVolumeBackupRestore-1220833504',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f75652686f7d404a965745c02c9bb8e0',ramdisk_id='',reservation_id='r-dd4dl9vh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1986811919',owner_user_name='tempest-TestVolumeBackupRestore-1986811919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:16Z,user_data=None,user_id='216ca241b0c548969467dd331a0d31ef',uuid=9e8e18b4-2722-4e75-ac1a-35426a38b20b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.656 2 DEBUG nova.network.os_vif_util [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converting VIF {"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.657 2 DEBUG nova.network.os_vif_util [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.658 2 DEBUG os_vif [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0600d755-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0600d755-52, col_values=(('external_ids', {'iface-id': '0600d755-522a-4419-877c-9a865a692edc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:3d:eb', 'vm-uuid': '9e8e18b4-2722-4e75-ac1a-35426a38b20b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:22 compute-0 NetworkManager[44981]: <info>  [1759410862.6713] manager: (tap0600d755-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.679 2 INFO os_vif [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52')
Oct 02 13:14:22 compute-0 ceph-mon[73668]: pgmap v3075: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.851 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.852 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.852 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] No VIF found with MAC fa:16:3e:24:3d:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:22 compute-0 nova_compute[256940]: 2025-10-02 13:14:22.853 2 INFO nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Using config drive
Oct 02 13:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9cea7d978f21e3dcdba6d202424e739cfdc772c2d07611f30143811584e3ea9-merged.mount: Deactivated successfully.
Oct 02 13:14:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:23.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:23 compute-0 nova_compute[256940]: 2025-10-02 13:14:23.520 2 DEBUG nova.storage.rbd_utils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] rbd image 9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:23 compute-0 nova_compute[256940]: 2025-10-02 13:14:23.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:23.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:24 compute-0 ceph-mon[73668]: pgmap v3076: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 926 KiB/s rd, 3.2 MiB/s wr, 83 op/s
Oct 02 13:14:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2760415651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:24 compute-0 podman[394484]: 2025-10-02 13:14:24.433836768 +0000 UTC m=+3.526316818 container remove eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:14:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 512 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 920 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Oct 02 13:14:24 compute-0 podman[394579]: 2025-10-02 13:14:24.500009078 +0000 UTC m=+0.167937976 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:24 compute-0 systemd[1]: libpod-conmon-eef95dd7454fff37ba524699ce9f1cdacb7d4ff70425e0c78019a81f307b7a61.scope: Deactivated successfully.
Oct 02 13:14:24 compute-0 podman[394578]: 2025-10-02 13:14:24.521993619 +0000 UTC m=+0.185564664 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:14:24 compute-0 podman[394624]: 2025-10-02 13:14:24.584312969 +0000 UTC m=+0.021672664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:24 compute-0 podman[394624]: 2025-10-02 13:14:24.883444645 +0000 UTC m=+0.320804320 container create 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:14:25 compute-0 systemd[1]: Started libpod-conmon-1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c.scope.
Oct 02 13:14:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:25 compute-0 podman[394624]: 2025-10-02 13:14:25.267657072 +0000 UTC m=+0.705016777 container init 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:25 compute-0 podman[394624]: 2025-10-02 13:14:25.277450826 +0000 UTC m=+0.714810521 container start 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:14:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:25.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:25 compute-0 podman[394624]: 2025-10-02 13:14:25.477082625 +0000 UTC m=+0.914442360 container attach 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:26.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:26 compute-0 sweet_matsumoto[394642]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:14:26 compute-0 sweet_matsumoto[394642]: --> relative data size: 1.0
Oct 02 13:14:26 compute-0 sweet_matsumoto[394642]: --> All data devices are unavailable
Oct 02 13:14:26 compute-0 systemd[1]: libpod-1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c.scope: Deactivated successfully.
Oct 02 13:14:26 compute-0 podman[394624]: 2025-10-02 13:14:26.087789719 +0000 UTC m=+1.525149404 container died 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 517 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:26.509 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:26.511 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:26.512 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:26 compute-0 ceph-mon[73668]: pgmap v3077: 305 pgs: 305 active+clean; 512 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 920 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Oct 02 13:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e64bc2d8cce61200c3244657811b8274f2829355c73c6b9609f16467a4579a88-merged.mount: Deactivated successfully.
Oct 02 13:14:27 compute-0 podman[394624]: 2025-10-02 13:14:27.294431682 +0000 UTC m=+2.731791367 container remove 1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:27 compute-0 sudo[394419]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:27 compute-0 systemd[1]: libpod-conmon-1acabc872974d93d0b2c9514b367f408547210019c1bce45f5f16f56ea11966c.scope: Deactivated successfully.
Oct 02 13:14:27 compute-0 sudo[394670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:27 compute-0 sudo[394670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:27 compute-0 sudo[394670]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:27 compute-0 sudo[394695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:27 compute-0 sudo[394695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:27 compute-0 sudo[394695]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:27 compute-0 sudo[394720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:27 compute-0 sudo[394720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:27 compute-0 sudo[394720]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:27 compute-0 sudo[394745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:14:27 compute-0 sudo[394745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:27 compute-0 nova_compute[256940]: 2025-10-02 13:14:27.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:27 compute-0 ceph-mon[73668]: pgmap v3078: 305 pgs: 305 active+clean; 517 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:14:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/577872724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.016918521 +0000 UTC m=+0.081822107 container create df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:27.963487353 +0000 UTC m=+0.028390959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:28 compute-0 systemd[1]: Started libpod-conmon-df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5.scope.
Oct 02 13:14:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.255747419 +0000 UTC m=+0.320651095 container init df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.271206881 +0000 UTC m=+0.336110507 container start df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:28 compute-0 quirky_haslett[394826]: 167 167
Oct 02 13:14:28 compute-0 systemd[1]: libpod-df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5.scope: Deactivated successfully.
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.29692765 +0000 UTC m=+0.361831256 container attach df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.297514425 +0000 UTC m=+0.362418041 container died df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.370 2 INFO nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Creating config drive at /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.376 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptawy2kg9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d21db9411ac4e8249ea825b525d2746a653827507746cd36268e8e292bd257-merged.mount: Deactivated successfully.
Oct 02 13:14:28 compute-0 podman[394810]: 2025-10-02 13:14:28.450386229 +0000 UTC m=+0.515289845 container remove df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_haslett, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:14:28 compute-0 systemd[1]: libpod-conmon-df662884521a4ff80d758d32699d2458ebbbca320a759a7b1312655a7a1122d5.scope: Deactivated successfully.
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 520 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 521 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.511 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptawy2kg9" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.561 2 DEBUG nova.storage.rbd_utils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] rbd image 9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.566 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config 9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:28 compute-0 podman[394877]: 2025-10-02 13:14:28.73092323 +0000 UTC m=+0.060266707 container create c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:28 compute-0 podman[394877]: 2025-10-02 13:14:28.700383456 +0000 UTC m=+0.029727003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:28 compute-0 systemd[1]: Started libpod-conmon-c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492.scope.
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.806 2 DEBUG oslo_concurrency.processutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config 9e8e18b4-2722-4e75-ac1a-35426a38b20b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.807 2 INFO nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Deleting local config drive /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b/disk.config because it was imported into RBD.
Oct 02 13:14:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78f4eb4063fcad5a80d79042dbc09a5b2e60f5f01a377f883160828a6deaaa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78f4eb4063fcad5a80d79042dbc09a5b2e60f5f01a377f883160828a6deaaa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78f4eb4063fcad5a80d79042dbc09a5b2e60f5f01a377f883160828a6deaaa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78f4eb4063fcad5a80d79042dbc09a5b2e60f5f01a377f883160828a6deaaa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:28 compute-0 kernel: tap0600d755-52: entered promiscuous mode
Oct 02 13:14:28 compute-0 NetworkManager[44981]: <info>  [1759410868.8727] manager: (tap0600d755-52): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 ovn_controller[148123]: 2025-10-02T13:14:28Z|01031|binding|INFO|Claiming lport 0600d755-522a-4419-877c-9a865a692edc for this chassis.
Oct 02 13:14:28 compute-0 ovn_controller[148123]: 2025-10-02T13:14:28Z|01032|binding|INFO|0600d755-522a-4419-877c-9a865a692edc: Claiming fa:16:3e:24:3d:eb 10.100.0.11
Oct 02 13:14:28 compute-0 podman[394877]: 2025-10-02 13:14:28.879571234 +0000 UTC m=+0.208914781 container init c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.880 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:3d:eb 10.100.0.11'], port_security=['fa:16:3e:24:3d:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9e8e18b4-2722-4e75-ac1a-35426a38b20b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f75652686f7d404a965745c02c9bb8e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e5e1a2c2-b954-46cd-992e-ed529525ec61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9763e87-88a5-4949-b5f5-9322b6512594, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=0600d755-522a-4419-877c-9a865a692edc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.882 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 0600d755-522a-4419-877c-9a865a692edc in datapath f44471b2-ae04-4be3-a1a2-3cab2033c3ef bound to our chassis
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.884 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f44471b2-ae04-4be3-a1a2-3cab2033c3ef
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:14:28
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'vms']
Oct 02 13:14:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:14:28 compute-0 podman[394877]: 2025-10-02 13:14:28.888954438 +0000 UTC m=+0.218297905 container start c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:14:28 compute-0 ovn_controller[148123]: 2025-10-02T13:14:28Z|01033|binding|INFO|Setting lport 0600d755-522a-4419-877c-9a865a692edc ovn-installed in OVS
Oct 02 13:14:28 compute-0 ovn_controller[148123]: 2025-10-02T13:14:28Z|01034|binding|INFO|Setting lport 0600d755-522a-4419-877c-9a865a692edc up in Southbound
Oct 02 13:14:28 compute-0 nova_compute[256940]: 2025-10-02 13:14:28.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.900 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5f92d981-e3e0-40bf-84b8-8b9e73b7c7af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.901 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf44471b2-a1 in ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.903 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf44471b2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.903 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0be44ccd-1963-4e8c-8d2b-57476f7f92be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.909 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf28be6-57da-495b-891e-f904ea86aa96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 podman[394877]: 2025-10-02 13:14:28.919866912 +0000 UTC m=+0.249210469 container attach c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.926 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[7289bbaf-1699-470b-b403-18d726e2031a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 systemd-machined[210927]: New machine qemu-106-instance-000000d1.
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.938 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b88a8d29-0a88-49a0-b615-eec192634dc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-000000d1.
Oct 02 13:14:28 compute-0 systemd-udevd[394932]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.964 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[42e7880f-509d-4de7-9cb6-648c77be00ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:28.970 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c54ef1a3-8b88-42a0-9283-fcbfe0bd24fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:28 compute-0 systemd-udevd[394936]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:28 compute-0 NetworkManager[44981]: <info>  [1759410868.9716] manager: (tapf44471b2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Oct 02 13:14:28 compute-0 NetworkManager[44981]: <info>  [1759410868.9763] device (tap0600d755-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:14:28 compute-0 NetworkManager[44981]: <info>  [1759410868.9771] device (tap0600d755-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.001 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[afe26a50-8c1e-4aad-9cf0-289f4f5b37c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.004 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[28ea58f7-0d67-40a2-83cf-b56c2481ceb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 NetworkManager[44981]: <info>  [1759410869.0295] device (tapf44471b2-a0): carrier: link connected
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.034 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[35ed94bf-98ae-4689-8dc2-1db45d86b116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.055 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a7242a-48bc-42cb-83fd-97d593be6a61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf44471b2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:62:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876451, 'reachable_time': 31051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394960, 'error': None, 'target': 'ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.076 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[57429181-40e2-404a-a5c8-7110010f1b6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:6285'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 876451, 'tstamp': 876451}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394961, 'error': None, 'target': 'ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.094 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[281c595a-e946-4fbb-8018-27e25df7bf88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf44471b2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:62:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876451, 'reachable_time': 31051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394962, 'error': None, 'target': 'ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.130 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f358be4b-6b8b-4103-b61d-7c74d40f3ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.193 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2c91d2b2-8caf-49c9-9d1f-4f5b79f6f9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.195 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf44471b2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.196 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.196 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf44471b2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:29 compute-0 NetworkManager[44981]: <info>  [1759410869.1997] manager: (tapf44471b2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Oct 02 13:14:29 compute-0 kernel: tapf44471b2-a0: entered promiscuous mode
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.203 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf44471b2-a0, col_values=(('external_ids', {'iface-id': 'bb907c76-8389-4be8-91ee-3be2e6313729'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:29 compute-0 ovn_controller[148123]: 2025-10-02T13:14:29Z|01035|binding|INFO|Releasing lport bb907c76-8389-4be8-91ee-3be2e6313729 from this chassis (sb_readonly=0)
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.228 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f44471b2-ae04-4be3-a1a2-3cab2033c3ef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f44471b2-ae04-4be3-a1a2-3cab2033c3ef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.229 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1c06e7ad-ee07-428c-be25-78c16d675720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.230 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-f44471b2-ae04-4be3-a1a2-3cab2033c3ef
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/f44471b2-ae04-4be3-a1a2-3cab2033c3ef.pid.haproxy
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID f44471b2-ae04-4be3-a1a2-3cab2033c3ef
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:14:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:29.232 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'env', 'PROCESS_TAG=haproxy-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f44471b2-ae04-4be3-a1a2-3cab2033c3ef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:14:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.462 2 DEBUG nova.compute.manager [req-0f7dc76e-f731-4f63-8d42-79177c316c2a req-a1928ab6-0e2f-4853-be98-0f15c2079286 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.462 2 DEBUG oslo_concurrency.lockutils [req-0f7dc76e-f731-4f63-8d42-79177c316c2a req-a1928ab6-0e2f-4853-be98-0f15c2079286 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.462 2 DEBUG oslo_concurrency.lockutils [req-0f7dc76e-f731-4f63-8d42-79177c316c2a req-a1928ab6-0e2f-4853-be98-0f15c2079286 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.463 2 DEBUG oslo_concurrency.lockutils [req-0f7dc76e-f731-4f63-8d42-79177c316c2a req-a1928ab6-0e2f-4853-be98-0f15c2079286 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.463 2 DEBUG nova.compute.manager [req-0f7dc76e-f731-4f63-8d42-79177c316c2a req-a1928ab6-0e2f-4853-be98-0f15c2079286 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Processing event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:14:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3626442609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]: {
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:     "1": [
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:         {
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "devices": [
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "/dev/loop3"
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             ],
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "lv_name": "ceph_lv0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "lv_size": "7511998464",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "name": "ceph_lv0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "tags": {
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.cluster_name": "ceph",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.crush_device_class": "",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.encrypted": "0",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.osd_id": "1",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.type": "block",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:                 "ceph.vdo": "0"
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             },
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "type": "block",
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:             "vg_name": "ceph_vg0"
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:         }
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]:     ]
Oct 02 13:14:29 compute-0 pedantic_mccarthy[394909]: }
Oct 02 13:14:29 compute-0 podman[395036]: 2025-10-02 13:14:29.595703108 +0000 UTC m=+0.026733075 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:14:29 compute-0 systemd[1]: libpod-c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492.scope: Deactivated successfully.
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:14:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:14:29 compute-0 podman[395036]: 2025-10-02 13:14:29.719217218 +0000 UTC m=+0.150247175 container create 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 13:14:29 compute-0 podman[394877]: 2025-10-02 13:14:29.726258751 +0000 UTC m=+1.055602248 container died c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:14:29 compute-0 systemd[1]: Started libpod-conmon-009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51.scope.
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.798 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410869.797899, 9e8e18b4-2722-4e75-ac1a-35426a38b20b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.799 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] VM Started (Lifecycle Event)
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.802 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.809 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.813 2 INFO nova.virt.libvirt.driver [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Instance spawned successfully.
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.813 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:14:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e3d786fbd8078ceab620964cfe4334b865f7600455709f0866c6d337b5eb18/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.855 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.860 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.861 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.861 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.861 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.862 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.862 2 DEBUG nova.virt.libvirt.driver [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a78f4eb4063fcad5a80d79042dbc09a5b2e60f5f01a377f883160828a6deaaa2-merged.mount: Deactivated successfully.
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.868 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.946 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.947 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410869.798012, 9e8e18b4-2722-4e75-ac1a-35426a38b20b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.947 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] VM Paused (Lifecycle Event)
Oct 02 13:14:29 compute-0 podman[395036]: 2025-10-02 13:14:29.951497745 +0000 UTC m=+0.382527782 container init 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:29 compute-0 podman[395036]: 2025-10-02 13:14:29.958669542 +0000 UTC m=+0.389699519 container start 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:14:29 compute-0 podman[395053]: 2025-10-02 13:14:29.975576111 +0000 UTC m=+0.253371087 container remove c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:14:29 compute-0 systemd[1]: libpod-conmon-c4330863d5b164fd4bc57a78039c8e971749276c7499e59edf21d657dc649492.scope: Deactivated successfully.
Oct 02 13:14:29 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [NOTICE]   (395073) : New worker (395075) forked
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.987 2 DEBUG nova.network.neutron [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updated VIF entry in instance network info cache for port 0600d755-522a-4419-877c-9a865a692edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:29 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [NOTICE]   (395073) : Loading success.
Oct 02 13:14:29 compute-0 nova_compute[256940]: 2025-10-02 13:14:29.989 2 DEBUG nova.network.neutron [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:30 compute-0 sudo[394745]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.069 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:30 compute-0 sudo[395084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:30 compute-0 sudo[395084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:30 compute-0 sudo[395084]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:30 compute-0 sudo[395109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:30 compute-0 sudo[395109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:30 compute-0 sudo[395109]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.219 2 INFO nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Took 12.15 seconds to spawn the instance on the hypervisor.
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.220 2 DEBUG nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.220 2 DEBUG oslo_concurrency.lockutils [req-471de6da-c45f-4c99-9d15-c059808e95cc req-67234d82-44d9-4fc7-806c-9ed94ca9ab9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.220 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.221 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759410869.8086154, 9e8e18b4-2722-4e75-ac1a-35426a38b20b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.221 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] VM Resumed (Lifecycle Event)
Oct 02 13:14:30 compute-0 sudo[395134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.255 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:30 compute-0 sudo[395134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.258 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:30 compute-0 sudo[395134]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.292 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.305 2 INFO nova.compute.manager [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Took 15.64 seconds to build instance.
Oct 02 13:14:30 compute-0 sudo[395159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:14:30 compute-0 sudo[395159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:30 compute-0 nova_compute[256940]: 2025-10-02 13:14:30.331 2 DEBUG oslo_concurrency.lockutils [None req-65fd04f1-9546-4acc-a486-a1728c7d7c63 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 351 KiB/s rd, 936 KiB/s wr, 42 op/s
Oct 02 13:14:30 compute-0 ceph-mon[73668]: pgmap v3079: 305 pgs: 305 active+clean; 520 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 521 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.81267154 +0000 UTC m=+0.113587514 container create 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.730634937 +0000 UTC m=+0.031551011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:30 compute-0 systemd[1]: Started libpod-conmon-380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e.scope.
Oct 02 13:14:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.967375371 +0000 UTC m=+0.268291435 container init 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.975323237 +0000 UTC m=+0.276239211 container start 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:14:30 compute-0 zen_burnell[395242]: 167 167
Oct 02 13:14:30 compute-0 systemd[1]: libpod-380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e.scope: Deactivated successfully.
Oct 02 13:14:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.998448309 +0000 UTC m=+0.299364373 container attach 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:30 compute-0 podman[395225]: 2025-10-02 13:14:30.999364382 +0000 UTC m=+0.300280356 container died 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eef020dd1467935acfb8e033138093e707ce67ed13327b6cfc139caa0365116-merged.mount: Deactivated successfully.
Oct 02 13:14:31 compute-0 podman[395225]: 2025-10-02 13:14:31.163428687 +0000 UTC m=+0.464344671 container remove 380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_burnell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:31 compute-0 systemd[1]: libpod-conmon-380bc2639fa78ed8c829f8d98a0c5bb241c0276094c61459fd4dbb6f7bb69d6e.scope: Deactivated successfully.
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:14:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:31 compute-0 podman[395269]: 2025-10-02 13:14:31.399708638 +0000 UTC m=+0.065127963 container create 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:14:31 compute-0 systemd[1]: Started libpod-conmon-642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e.scope.
Oct 02 13:14:31 compute-0 podman[395269]: 2025-10-02 13:14:31.359968035 +0000 UTC m=+0.025387380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:14:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81082f52b69a958367931a2749f5ab8cbac4ddcc9e0910620d26f122c23b0f3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81082f52b69a958367931a2749f5ab8cbac4ddcc9e0910620d26f122c23b0f3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81082f52b69a958367931a2749f5ab8cbac4ddcc9e0910620d26f122c23b0f3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81082f52b69a958367931a2749f5ab8cbac4ddcc9e0910620d26f122c23b0f3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:31 compute-0 podman[395269]: 2025-10-02 13:14:31.509255756 +0000 UTC m=+0.174675091 container init 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:31 compute-0 podman[395269]: 2025-10-02 13:14:31.517556032 +0000 UTC m=+0.182975347 container start 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:14:31 compute-0 podman[395269]: 2025-10-02 13:14:31.535615671 +0000 UTC m=+0.201034996 container attach 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.572 2 DEBUG nova.compute.manager [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.572 2 DEBUG oslo_concurrency.lockutils [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.573 2 DEBUG oslo_concurrency.lockutils [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.573 2 DEBUG oslo_concurrency.lockutils [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.573 2 DEBUG nova.compute.manager [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] No waiting events found dispatching network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:31 compute-0 nova_compute[256940]: 2025-10-02 13:14:31.573 2 WARNING nova.compute.manager [req-8d2fae84-28e9-4f76-9338-b3c30b6b0b73 req-0ee78aaf-b925-4163-ad7d-f730b2567acb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received unexpected event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc for instance with vm_state active and task_state None.
Oct 02 13:14:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:32 compute-0 ceph-mon[73668]: pgmap v3080: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 351 KiB/s rd, 936 KiB/s wr, 42 op/s
Oct 02 13:14:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/420625770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 610 KiB/s rd, 435 KiB/s wr, 51 op/s
Oct 02 13:14:32 compute-0 gracious_perlman[395287]: {
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:         "osd_id": 1,
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:         "type": "bluestore"
Oct 02 13:14:32 compute-0 gracious_perlman[395287]:     }
Oct 02 13:14:32 compute-0 gracious_perlman[395287]: }
Oct 02 13:14:32 compute-0 systemd[1]: libpod-642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e.scope: Deactivated successfully.
Oct 02 13:14:32 compute-0 podman[395328]: 2025-10-02 13:14:32.564524425 +0000 UTC m=+0.036869989 container died 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-81082f52b69a958367931a2749f5ab8cbac4ddcc9e0910620d26f122c23b0f3b-merged.mount: Deactivated successfully.
Oct 02 13:14:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849229435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.731 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.852 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.852 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.855 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:32 compute-0 nova_compute[256940]: 2025-10-02 13:14:32.855 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:32 compute-0 podman[395328]: 2025-10-02 13:14:32.894366469 +0000 UTC m=+0.366711983 container remove 642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:14:32 compute-0 systemd[1]: libpod-conmon-642889da7eadbbcf825a084e9660008b6cd28c5af40bc968aa58a308f28d5b8e.scope: Deactivated successfully.
Oct 02 13:14:32 compute-0 sudo[395159]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.071 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.072 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3667MB free_disk=20.888511657714844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.072 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.073 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.165 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 97dd449c-87a7-4278-a819-3d412f587a4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.166 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 9e8e18b4-2722-4e75-ac1a-35426a38b20b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.166 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.166 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:33.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 56a7daa4-6c5e-4fb1-8ea4-598f03eeba28 does not exist
Oct 02 13:14:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fc413952-22f5-46cf-ad3b-f3e998bc3f30 does not exist
Oct 02 13:14:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 22345ed4-d846-4de1-9f03-b92265dcbe45 does not exist
Oct 02 13:14:33 compute-0 sudo[395367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:33 compute-0 sudo[395367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 sudo[395367]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-0 sudo[395392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:14:33 compute-0 sudo[395392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-0 sudo[395392]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-0 ceph-mon[73668]: pgmap v3081: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 610 KiB/s rd, 435 KiB/s wr, 51 op/s
Oct 02 13:14:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/849229435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3782613241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484772399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.752 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.758 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.781 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.842 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:14:33 compute-0 nova_compute[256940]: 2025-10-02 13:14:33.842 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:34.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 305 KiB/s wr, 70 op/s
Oct 02 13:14:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3484772399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/725074288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:34 compute-0 nova_compute[256940]: 2025-10-02 13:14:34.843 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:35 compute-0 nova_compute[256940]: 2025-10-02 13:14:35.034 2 DEBUG nova.compute.manager [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-changed-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:35 compute-0 nova_compute[256940]: 2025-10-02 13:14:35.035 2 DEBUG nova.compute.manager [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing instance network info cache due to event network-changed-0600d755-522a-4419-877c-9a865a692edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:35 compute-0 nova_compute[256940]: 2025-10-02 13:14:35.035 2 DEBUG oslo_concurrency.lockutils [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:35 compute-0 nova_compute[256940]: 2025-10-02 13:14:35.036 2 DEBUG oslo_concurrency.lockutils [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:35 compute-0 nova_compute[256940]: 2025-10-02 13:14:35.036 2 DEBUG nova.network.neutron [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing network info cache for port 0600d755-522a-4419-877c-9a865a692edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:35 compute-0 sudo[395420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:35 compute-0 sudo[395420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 sudo[395420]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 sudo[395445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:35 compute-0 sudo[395445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:35 compute-0 sudo[395445]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:35.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:35 compute-0 ceph-mon[73668]: pgmap v3082: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 305 KiB/s wr, 70 op/s
Oct 02 13:14:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:36.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 269 KiB/s wr, 87 op/s
Oct 02 13:14:36 compute-0 nova_compute[256940]: 2025-10-02 13:14:36.731 2 DEBUG nova.compute.manager [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-changed-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:36 compute-0 nova_compute[256940]: 2025-10-02 13:14:36.732 2 DEBUG nova.compute.manager [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing instance network info cache due to event network-changed-0600d755-522a-4419-877c-9a865a692edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:36 compute-0 nova_compute[256940]: 2025-10-02 13:14:36.732 2 DEBUG oslo_concurrency.lockutils [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.345 2 DEBUG nova.network.neutron [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updated VIF entry in instance network info cache for port 0600d755-522a-4419-877c-9a865a692edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.346 2 DEBUG nova.network.neutron [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.373 2 DEBUG oslo_concurrency.lockutils [req-583944e0-5097-4337-ba7a-7925850046fd req-182afaed-2cab-41d8-a264-1ec2c29fcc49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.373 2 DEBUG oslo_concurrency.lockutils [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.374 2 DEBUG nova.network.neutron [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing network info cache for port 0600d755-522a-4419-877c-9a865a692edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:37 compute-0 nova_compute[256940]: 2025-10-02 13:14:37.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:38.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:38 compute-0 ceph-mon[73668]: pgmap v3083: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 269 KiB/s wr, 87 op/s
Oct 02 13:14:38 compute-0 nova_compute[256940]: 2025-10-02 13:14:38.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 86 op/s
Oct 02 13:14:38 compute-0 nova_compute[256940]: 2025-10-02 13:14:38.959 2 DEBUG nova.compute.manager [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-changed-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:38 compute-0 nova_compute[256940]: 2025-10-02 13:14:38.959 2 DEBUG nova.compute.manager [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing instance network info cache due to event network-changed-0600d755-522a-4419-877c-9a865a692edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:38 compute-0 nova_compute[256940]: 2025-10-02 13:14:38.960 2 DEBUG oslo_concurrency.lockutils [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:39 compute-0 nova_compute[256940]: 2025-10-02 13:14:39.200 2 DEBUG nova.network.neutron [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updated VIF entry in instance network info cache for port 0600d755-522a-4419-877c-9a865a692edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:39 compute-0 nova_compute[256940]: 2025-10-02 13:14:39.201 2 DEBUG nova.network.neutron [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:39 compute-0 nova_compute[256940]: 2025-10-02 13:14:39.229 2 DEBUG oslo_concurrency.lockutils [req-ab414962-565b-4524-91e5-b06b7e5aa9b7 req-a3d8f907-af7d-407a-a4ee-98d731840bd8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:39 compute-0 nova_compute[256940]: 2025-10-02 13:14:39.230 2 DEBUG oslo_concurrency.lockutils [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:39 compute-0 nova_compute[256940]: 2025-10-02 13:14:39.230 2 DEBUG nova.network.neutron [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing network info cache for port 0600d755-522a-4419-877c-9a865a692edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:39 compute-0 podman[395472]: 2025-10-02 13:14:39.391152676 +0000 UTC m=+0.056066578 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:14:39 compute-0 podman[395473]: 2025-10-02 13:14:39.44978583 +0000 UTC m=+0.115278657 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:14:39 compute-0 ceph-mon[73668]: pgmap v3084: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 86 op/s
Oct 02 13:14:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:40.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:40 compute-0 nova_compute[256940]: 2025-10-02 13:14:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 200 KiB/s wr, 88 op/s
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004755039665922204 of space, bias 1.0, pg target 1.426511899776661 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327191513121161 of space, bias 1.0, pg target 1.2938302624232272 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.294807006676903 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004070737076586637 of space, bias 1.0, pg target 1.2130796488228177 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:14:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Oct 02 13:14:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:41 compute-0 nova_compute[256940]: 2025-10-02 13:14:41.179 2 DEBUG nova.network.neutron [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updated VIF entry in instance network info cache for port 0600d755-522a-4419-877c-9a865a692edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:41 compute-0 nova_compute[256940]: 2025-10-02 13:14:41.179 2 DEBUG nova.network.neutron [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:41 compute-0 nova_compute[256940]: 2025-10-02 13:14:41.204 2 DEBUG oslo_concurrency.lockutils [req-f3958eeb-c1aa-4a29-a88f-dc8db2b39223 req-8f9142c9-9f74-4bb0-a1a0-4912fc97538d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:41 compute-0 nova_compute[256940]: 2025-10-02 13:14:41.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:41 compute-0 ceph-mon[73668]: pgmap v3085: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 200 KiB/s wr, 88 op/s
Oct 02 13:14:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3573521355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:42.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 176 KiB/s wr, 83 op/s
Oct 02 13:14:42 compute-0 nova_compute[256940]: 2025-10-02 13:14:42.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:43.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.376 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.377 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.377 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:14:43 compute-0 nova_compute[256940]: 2025-10-02 13:14:43.377 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:44.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:44 compute-0 ceph-mon[73668]: pgmap v3086: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 176 KiB/s wr, 83 op/s
Oct 02 13:14:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 529 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 596 KiB/s wr, 90 op/s
Oct 02 13:14:45 compute-0 nova_compute[256940]: 2025-10-02 13:14:45.282 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:45 compute-0 nova_compute[256940]: 2025-10-02 13:14:45.305 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:45 compute-0 nova_compute[256940]: 2025-10-02 13:14:45.305 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:14:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:45 compute-0 ceph-mon[73668]: pgmap v3087: 305 pgs: 305 active+clean; 529 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 596 KiB/s wr, 90 op/s
Oct 02 13:14:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct 02 13:14:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:47 compute-0 nova_compute[256940]: 2025-10-02 13:14:47.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:47 compute-0 ceph-mon[73668]: pgmap v3088: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct 02 13:14:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:48.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:48 compute-0 ovn_controller[148123]: 2025-10-02T13:14:48Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:3d:eb 10.100.0.11
Oct 02 13:14:48 compute-0 ovn_controller[148123]: 2025-10-02T13:14:48Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:3d:eb 10.100.0.11
Oct 02 13:14:48 compute-0 nova_compute[256940]: 2025-10-02 13:14:48.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 539 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 13:14:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:49.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:49 compute-0 ceph-mon[73668]: pgmap v3089: 305 pgs: 305 active+clean; 539 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 13:14:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:50.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:50 compute-0 nova_compute[256940]: 2025-10-02 13:14:50.301 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 02 13:14:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:52.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:52 compute-0 ceph-mon[73668]: pgmap v3090: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 02 13:14:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3137431195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1591108121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 375 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 02 13:14:52 compute-0 nova_compute[256940]: 2025-10-02 13:14:52.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:53 compute-0 nova_compute[256940]: 2025-10-02 13:14:53.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:53.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:54.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:54 compute-0 ceph-mon[73668]: pgmap v3091: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 375 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 02 13:14:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 549 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.5 MiB/s wr, 123 op/s
Oct 02 13:14:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Oct 02 13:14:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Oct 02 13:14:55 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Oct 02 13:14:55 compute-0 sudo[395523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:55 compute-0 sudo[395523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:55 compute-0 sudo[395523]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:55 compute-0 sudo[395577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:55 compute-0 sudo[395577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:55 compute-0 sudo[395577]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:55 compute-0 podman[395525]: 2025-10-02 13:14:55.425491588 +0000 UTC m=+0.090470433 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid)
Oct 02 13:14:55 compute-0 podman[395536]: 2025-10-02 13:14:55.426414252 +0000 UTC m=+0.091666574 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:14:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:56.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:56 compute-0 ceph-mon[73668]: pgmap v3092: 305 pgs: 305 active+clean; 549 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.5 MiB/s wr, 123 op/s
Oct 02 13:14:56 compute-0 ceph-mon[73668]: osdmap e388: 3 total, 3 up, 3 in
Oct 02 13:14:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4017726753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 588 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 3.2 MiB/s wr, 148 op/s
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.322 2 DEBUG nova.compute.manager [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-changed-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.322 2 DEBUG nova.compute.manager [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing instance network info cache due to event network-changed-0600d755-522a-4419-877c-9a865a692edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.323 2 DEBUG oslo_concurrency.lockutils [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.323 2 DEBUG oslo_concurrency.lockutils [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.324 2 DEBUG nova.network.neutron [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Refreshing network info cache for port 0600d755-522a-4419-877c-9a865a692edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2928502249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:57 compute-0 ceph-mon[73668]: pgmap v3094: 305 pgs: 305 active+clean; 588 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 3.2 MiB/s wr, 148 op/s
Oct 02 13:14:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:14:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.425 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.426 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.426 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.426 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.426 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.428 2 INFO nova.compute.manager [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Terminating instance
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.429 2 DEBUG nova.compute.manager [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:14:57 compute-0 kernel: tap0600d755-52 (unregistering): left promiscuous mode
Oct 02 13:14:57 compute-0 NetworkManager[44981]: <info>  [1759410897.6227] device (tap0600d755-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 ovn_controller[148123]: 2025-10-02T13:14:57Z|01036|binding|INFO|Releasing lport 0600d755-522a-4419-877c-9a865a692edc from this chassis (sb_readonly=0)
Oct 02 13:14:57 compute-0 ovn_controller[148123]: 2025-10-02T13:14:57Z|01037|binding|INFO|Setting lport 0600d755-522a-4419-877c-9a865a692edc down in Southbound
Oct 02 13:14:57 compute-0 ovn_controller[148123]: 2025-10-02T13:14:57Z|01038|binding|INFO|Removing iface tap0600d755-52 ovn-installed in OVS
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:57.649 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:3d:eb 10.100.0.11'], port_security=['fa:16:3e:24:3d:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9e8e18b4-2722-4e75-ac1a-35426a38b20b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f75652686f7d404a965745c02c9bb8e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e5e1a2c2-b954-46cd-992e-ed529525ec61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9763e87-88a5-4949-b5f5-9322b6512594, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=0600d755-522a-4419-877c-9a865a692edc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:57.651 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 0600d755-522a-4419-877c-9a865a692edc in datapath f44471b2-ae04-4be3-a1a2-3cab2033c3ef unbound from our chassis
Oct 02 13:14:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:57.653 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f44471b2-ae04-4be3-a1a2-3cab2033c3ef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:14:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:57.654 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[04d00cd5-330f-4cda-84ac-3b4b40cfe3e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:57 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:57.656 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef namespace which is not needed anymore
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Oct 02 13:14:57 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000d1.scope: Consumed 14.021s CPU time.
Oct 02 13:14:57 compute-0 systemd-machined[210927]: Machine qemu-106-instance-000000d1 terminated.
Oct 02 13:14:57 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [NOTICE]   (395073) : haproxy version is 2.8.14-c23fe91
Oct 02 13:14:57 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [NOTICE]   (395073) : path to executable is /usr/sbin/haproxy
Oct 02 13:14:57 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [WARNING]  (395073) : Exiting Master process...
Oct 02 13:14:57 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [ALERT]    (395073) : Current worker (395075) exited with code 143 (Terminated)
Oct 02 13:14:57 compute-0 neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef[395067]: [WARNING]  (395073) : All workers exited. Exiting... (0)
Oct 02 13:14:57 compute-0 systemd[1]: libpod-009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51.scope: Deactivated successfully.
Oct 02 13:14:57 compute-0 podman[395637]: 2025-10-02 13:14:57.846825025 +0000 UTC m=+0.069658132 container died 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.871 2 INFO nova.virt.libvirt.driver [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Instance destroyed successfully.
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.872 2 DEBUG nova.objects.instance [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lazy-loading 'resources' on Instance uuid 9e8e18b4-2722-4e75-ac1a-35426a38b20b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.896 2 DEBUG nova.virt.libvirt.vif [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1869138478',display_name='tempest-TestVolumeBackupRestore-server-1869138478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1869138478',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmRJfyjlSS96oOq4tf8HkYeaCrzKBF7hhHE1nB7GzbQCMs9fO8+UMR8IyyaXtTa1QR52U1+zTKX5Ci38KogRNGOtcWD7LZEijjoALoMSkvvwH+cJwFjJMHDexEflNoXow==',key_name='tempest-TestVolumeBackupRestore-1220833504',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f75652686f7d404a965745c02c9bb8e0',ramdisk_id='',reservation_id='r-dd4dl9vh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1986811919',owner_user_name='tempest-TestVolumeBackupRestore-1986811919-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:30Z,user_data=None,user_id='216ca241b0c548969467dd331a0d31ef',uuid=9e8e18b4-2722-4e75-ac1a-35426a38b20b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.897 2 DEBUG nova.network.os_vif_util [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converting VIF {"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.898 2 DEBUG nova.network.os_vif_util [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.899 2 DEBUG os_vif [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0600d755-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:57 compute-0 nova_compute[256940]: 2025-10-02 13:14:57.912 2 INFO os_vif [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:3d:eb,bridge_name='br-int',has_traffic_filtering=True,id=0600d755-522a-4419-877c-9a865a692edc,network=Network(f44471b2-ae04-4be3-a1a2-3cab2033c3ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0600d755-52')
Oct 02 13:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51-userdata-shm.mount: Deactivated successfully.
Oct 02 13:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4e3d786fbd8078ceab620964cfe4334b865f7600455709f0866c6d337b5eb18-merged.mount: Deactivated successfully.
Oct 02 13:14:58 compute-0 podman[395637]: 2025-10-02 13:14:58.000438908 +0000 UTC m=+0.223272015 container cleanup 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:58 compute-0 systemd[1]: libpod-conmon-009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51.scope: Deactivated successfully.
Oct 02 13:14:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:58.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:58 compute-0 podman[395696]: 2025-10-02 13:14:58.122510421 +0000 UTC m=+0.090663148 container remove 009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.131 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c59a22-f970-48f6-a521-d5fc8e22a2e6]: (4, ('Thu Oct  2 01:14:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef (009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51)\n009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51\nThu Oct  2 01:14:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef (009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51)\n009266b18e9cb25b06bf7266abe5b37b110957c2328bbfa73a647f5d0fa79e51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.133 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c88e3a33-b4d0-4dfa-b5a4-7ba531dc67cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.134 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf44471b2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:58 compute-0 kernel: tapf44471b2-a0: left promiscuous mode
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.156 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e545a338-beea-48ff-82a6-b2493b36c228]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.179 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e666bd7e-d2f1-42fe-9aa1-b1b2e65243a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.181 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf63ab1-29da-4028-8e09-42d421efca97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.202 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[23625d62-7c2a-4979-be48-46d8c62d732f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876444, 'reachable_time': 20156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395713, 'error': None, 'target': 'ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 systemd[1]: run-netns-ovnmeta\x2df44471b2\x2dae04\x2d4be3\x2da1a2\x2d3cab2033c3ef.mount: Deactivated successfully.
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.206 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f44471b2-ae04-4be3-a1a2-3cab2033c3ef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:14:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:14:58.206 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[a24e5fe3-6c2b-4bb1-a323-2b2583b86184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.328 2 INFO nova.virt.libvirt.driver [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Deleting instance files /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b_del
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.329 2 INFO nova.virt.libvirt.driver [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Deletion of /var/lib/nova/instances/9e8e18b4-2722-4e75-ac1a-35426a38b20b_del complete
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.432 2 INFO nova.compute.manager [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.432 2 DEBUG oslo.service.loopingcall [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.432 2 DEBUG nova.compute.manager [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.433 2 DEBUG nova.network.neutron [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:14:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 02 13:14:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.579 2 DEBUG nova.compute.manager [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-unplugged-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.580 2 DEBUG oslo_concurrency.lockutils [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.580 2 DEBUG oslo_concurrency.lockutils [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.580 2 DEBUG oslo_concurrency.lockutils [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.580 2 DEBUG nova.compute.manager [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] No waiting events found dispatching network-vif-unplugged-0600d755-522a-4419-877c-9a865a692edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:58 compute-0 nova_compute[256940]: 2025-10-02 13:14:58.581 2 DEBUG nova.compute.manager [req-997ecdc3-675a-4308-ada0-bbe958b156ce req-7dfe0243-a304-4b74-bf38-ffebd41ad0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-unplugged-0600d755-522a-4419-877c-9a865a692edc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:14:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Oct 02 13:14:58 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Oct 02 13:14:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:14:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:59.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:59 compute-0 ceph-mon[73668]: pgmap v3095: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 02 13:14:59 compute-0 ceph-mon[73668]: osdmap e389: 3 total, 3 up, 3 in
Oct 02 13:15:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:00.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:00.406 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:00.407 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:15:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:00.408 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 535 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 2.8 MiB/s wr, 159 op/s
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.499 2 DEBUG nova.network.neutron [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.537 2 INFO nova.compute.manager [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Took 2.10 seconds to deallocate network for instance.
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.629 2 DEBUG nova.compute.manager [req-12ef0d88-e90f-42f7-8a56-661fe89e5c3a req-0247c86e-b349-4416-a02c-f2b1b84ffcb5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-deleted-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.747 2 DEBUG nova.compute.manager [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.747 2 DEBUG oslo_concurrency.lockutils [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.748 2 DEBUG oslo_concurrency.lockutils [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.748 2 DEBUG oslo_concurrency.lockutils [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.748 2 DEBUG nova.compute.manager [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] No waiting events found dispatching network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.748 2 WARNING nova.compute.manager [req-9c485cfc-f23f-427e-a660-473a2d0760c6 req-d6469854-5571-498c-9d64-0e4c99984cc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Received unexpected event network-vif-plugged-0600d755-522a-4419-877c-9a865a692edc for instance with vm_state active and task_state deleting.
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.913 2 DEBUG nova.network.neutron [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updated VIF entry in instance network info cache for port 0600d755-522a-4419-877c-9a865a692edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.914 2 DEBUG nova.network.neutron [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Updating instance_info_cache with network_info: [{"id": "0600d755-522a-4419-877c-9a865a692edc", "address": "fa:16:3e:24:3d:eb", "network": {"id": "f44471b2-ae04-4be3-a1a2-3cab2033c3ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1985714310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f75652686f7d404a965745c02c9bb8e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0600d755-52", "ovs_interfaceid": "0600d755-522a-4419-877c-9a865a692edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:00 compute-0 nova_compute[256940]: 2025-10-02 13:15:00.961 2 INFO nova.compute.manager [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Took 0.42 seconds to detach 1 volumes for instance.
Oct 02 13:15:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.022 2 DEBUG oslo_concurrency.lockutils [req-c80f9aff-5851-4afc-9202-9a68937ae4ea req-7214c15a-d8df-45c6-93f3-2743c38a3d27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9e8e18b4-2722-4e75-ac1a-35426a38b20b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.072 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.073 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.160 2 DEBUG oslo_concurrency.processutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Oct 02 13:15:01 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.272 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.273 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.273 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.273 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.273 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.275 2 INFO nova.compute.manager [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Terminating instance
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.276 2 DEBUG nova.compute.manager [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:15:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:01.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749576690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.605 2 DEBUG oslo_concurrency.processutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.611 2 DEBUG nova.compute.provider_tree [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.631 2 DEBUG nova.scheduler.client.report [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.666 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.699 2 INFO nova.scheduler.client.report [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Deleted allocations for instance 9e8e18b4-2722-4e75-ac1a-35426a38b20b
Oct 02 13:15:01 compute-0 nova_compute[256940]: 2025-10-02 13:15:01.842 2 DEBUG oslo_concurrency.lockutils [None req-584513b6-d8e5-4e64-a11f-0d57b9b3a343 216ca241b0c548969467dd331a0d31ef f75652686f7d404a965745c02c9bb8e0 - - default default] Lock "9e8e18b4-2722-4e75-ac1a-35426a38b20b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:02.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:02 compute-0 ceph-mon[73668]: pgmap v3097: 305 pgs: 305 active+clean; 535 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 2.8 MiB/s wr, 159 op/s
Oct 02 13:15:02 compute-0 ceph-mon[73668]: osdmap e390: 3 total, 3 up, 3 in
Oct 02 13:15:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3749576690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 420 KiB/s rd, 846 KiB/s wr, 127 op/s
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.751 2 DEBUG nova.compute.manager [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.752 2 DEBUG nova.compute.manager [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing instance network info cache due to event network-changed-bff5fdd5-ec7d-45cf-94ea-2264da79c91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.753 2 DEBUG oslo_concurrency.lockutils [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.753 2 DEBUG oslo_concurrency.lockutils [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.754 2 DEBUG nova.network.neutron [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Refreshing network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:15:02 compute-0 nova_compute[256940]: 2025-10-02 13:15:02.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 kernel: tapbff5fdd5-ec (unregistering): left promiscuous mode
Oct 02 13:15:03 compute-0 NetworkManager[44981]: <info>  [1759410903.1435] device (tapbff5fdd5-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:15:03 compute-0 ovn_controller[148123]: 2025-10-02T13:15:03Z|01039|binding|INFO|Releasing lport bff5fdd5-ec7d-45cf-94ea-2264da79c91d from this chassis (sb_readonly=0)
Oct 02 13:15:03 compute-0 ovn_controller[148123]: 2025-10-02T13:15:03Z|01040|binding|INFO|Setting lport bff5fdd5-ec7d-45cf-94ea-2264da79c91d down in Southbound
Oct 02 13:15:03 compute-0 ovn_controller[148123]: 2025-10-02T13:15:03Z|01041|binding|INFO|Removing iface tapbff5fdd5-ec ovn-installed in OVS
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.187 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b8:83 10.100.0.9'], port_security=['fa:16:3e:99:b8:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dd449c-87a7-4278-a819-3d412f587a4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-059f5861-22ab-45f3-a914-fb801f3c71f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '858bdb72-cf27-4a78-a9f7-c4548894dc59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a904a34-19d9-4790-850b-39af4c509e92, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=bff5fdd5-ec7d-45cf-94ea-2264da79c91d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.188 158104 INFO neutron.agent.ovn.metadata.agent [-] Port bff5fdd5-ec7d-45cf-94ea-2264da79c91d in datapath 059f5861-22ab-45f3-a914-fb801f3c71f9 unbound from our chassis
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.190 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 059f5861-22ab-45f3-a914-fb801f3c71f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.191 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7a2b8f-50cc-4e41-b95e-fcd5f93de2a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.192 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 namespace which is not needed anymore
Oct 02 13:15:03 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Oct 02 13:15:03 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000ce.scope: Consumed 19.544s CPU time.
Oct 02 13:15:03 compute-0 systemd-machined[210927]: Machine qemu-105-instance-000000ce terminated.
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.318 2 INFO nova.virt.libvirt.driver [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Instance destroyed successfully.
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.319 2 DEBUG nova.objects.instance [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'resources' on Instance uuid 97dd449c-87a7-4278-a819-3d412f587a4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.340 2 DEBUG nova.virt.libvirt.vif [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:12:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1400642785',display_name='tempest-TestStampPattern-server-1400642785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1400642785',id=206,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:12:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-ze3olpw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:13:42Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=97dd449c-87a7-4278-a819-3d412f587a4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.342 2 DEBUG nova.network.os_vif_util [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.343 2 DEBUG nova.network.os_vif_util [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.343 2 DEBUG os_vif [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbff5fdd5-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.351 2 INFO os_vif [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b8:83,bridge_name='br-int',has_traffic_filtering=True,id=bff5fdd5-ec7d-45cf-94ea-2264da79c91d,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbff5fdd5-ec')
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [NOTICE]   (392094) : haproxy version is 2.8.14-c23fe91
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [NOTICE]   (392094) : path to executable is /usr/sbin/haproxy
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [WARNING]  (392094) : Exiting Master process...
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [WARNING]  (392094) : Exiting Master process...
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [ALERT]    (392094) : Current worker (392096) exited with code 143 (Terminated)
Oct 02 13:15:03 compute-0 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[392072]: [WARNING]  (392094) : All workers exited. Exiting... (0)
Oct 02 13:15:03 compute-0 systemd[1]: libpod-85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc.scope: Deactivated successfully.
Oct 02 13:15:03 compute-0 podman[395764]: 2025-10-02 13:15:03.373679151 +0000 UTC m=+0.072195157 container died 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 13:15:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:03.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc-userdata-shm.mount: Deactivated successfully.
Oct 02 13:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bf821f50cb94b5b50d5c5410a6ec4971e20a976fc7f6085cb1e91670c554faa-merged.mount: Deactivated successfully.
Oct 02 13:15:03 compute-0 podman[395764]: 2025-10-02 13:15:03.406664358 +0000 UTC m=+0.105180364 container cleanup 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:15:03 compute-0 systemd[1]: libpod-conmon-85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc.scope: Deactivated successfully.
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 podman[395817]: 2025-10-02 13:15:03.465540909 +0000 UTC m=+0.037511046 container remove 85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.471 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7743bc78-bb84-4ef2-b712-543ffa42506a]: (4, ('Thu Oct  2 01:15:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 (85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc)\n85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc\nThu Oct  2 01:15:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 (85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc)\n85459c2e24e795c1e97a55327c16fec2b3b99780e8e378f7e348da58b0444dbc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.472 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[37afb61d-d420-40e8-9f4f-b66fbe4694bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.473 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap059f5861-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 kernel: tap059f5861-20: left promiscuous mode
Oct 02 13:15:03 compute-0 ceph-mon[73668]: pgmap v3099: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 420 KiB/s rd, 846 KiB/s wr, 127 op/s
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.492 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8690683d-b469-418b-b299-81e3e841b42b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.518 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0e742dc6-e943-4d16-ae6d-515058ddfb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.519 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf87c52-0e96-4e69-b564-d77e6663c7d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.535 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a86e4b52-cdae-4897-9d41-0c01b07a43f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 866398, 'reachable_time': 37064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395831, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.537 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:15:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:03.537 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[24b53970-1727-4132-b1e2-dfe2f68dd87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d059f5861\x2d22ab\x2d45f3\x2da914\x2dfb801f3c71f9.mount: Deactivated successfully.
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.716 2 DEBUG nova.compute.manager [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-unplugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.718 2 DEBUG oslo_concurrency.lockutils [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.718 2 DEBUG oslo_concurrency.lockutils [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.718 2 DEBUG oslo_concurrency.lockutils [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.719 2 DEBUG nova.compute.manager [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] No waiting events found dispatching network-vif-unplugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:03 compute-0 nova_compute[256940]: 2025-10-02 13:15:03.719 2 DEBUG nova.compute.manager [req-e7bee4d7-c359-4d59-880f-2468f4985e6c req-174e084f-8380-457b-9d25-edd1188ccfa9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-unplugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:15:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:04.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 45 KiB/s wr, 105 op/s
Oct 02 13:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Oct 02 13:15:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Oct 02 13:15:04 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Oct 02 13:15:04 compute-0 nova_compute[256940]: 2025-10-02 13:15:04.824 2 DEBUG nova.network.neutron [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updated VIF entry in instance network info cache for port bff5fdd5-ec7d-45cf-94ea-2264da79c91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:15:04 compute-0 nova_compute[256940]: 2025-10-02 13:15:04.824 2 DEBUG nova.network.neutron [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [{"id": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "address": "fa:16:3e:99:b8:83", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbff5fdd5-ec", "ovs_interfaceid": "bff5fdd5-ec7d-45cf-94ea-2264da79c91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:04 compute-0 nova_compute[256940]: 2025-10-02 13:15:04.858 2 DEBUG oslo_concurrency.lockutils [req-19643e76-a91e-489b-b118-7ea1b3735307 req-250d0e53-4326-4173-920c-8b913f374ede 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-97dd449c-87a7-4278-a819-3d412f587a4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:15:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Oct 02 13:15:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.836 2 DEBUG nova.compute.manager [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.836 2 DEBUG oslo_concurrency.lockutils [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.837 2 DEBUG oslo_concurrency.lockutils [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.837 2 DEBUG oslo_concurrency.lockutils [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.837 2 DEBUG nova.compute.manager [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] No waiting events found dispatching network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:05 compute-0 nova_compute[256940]: 2025-10-02 13:15:05.837 2 WARNING nova.compute.manager [req-5b78e750-92cd-4ce6-b8a2-b5c3079566e6 req-bf7ecb79-f6c8-426c-acef-47284a19a0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received unexpected event network-vif-plugged-bff5fdd5-ec7d-45cf-94ea-2264da79c91d for instance with vm_state active and task_state deleting.
Oct 02 13:15:05 compute-0 ceph-mon[73668]: pgmap v3100: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 45 KiB/s wr, 105 op/s
Oct 02 13:15:05 compute-0 ceph-mon[73668]: osdmap e391: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Oct 02 13:15:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:06.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Oct 02 13:15:06 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Oct 02 13:15:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 460 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.1 KiB/s wr, 196 op/s
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.732 2 INFO nova.virt.libvirt.driver [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Deleting instance files /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c_del
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.734 2 INFO nova.virt.libvirt.driver [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Deletion of /var/lib/nova/instances/97dd449c-87a7-4278-a819-3d412f587a4c_del complete
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.829 2 INFO nova.compute.manager [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Took 5.55 seconds to destroy the instance on the hypervisor.
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.830 2 DEBUG oslo.service.loopingcall [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.830 2 DEBUG nova.compute.manager [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:15:06 compute-0 nova_compute[256940]: 2025-10-02 13:15:06.831 2 DEBUG nova.network.neutron [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:15:07 compute-0 ceph-mon[73668]: osdmap e392: 3 total, 3 up, 3 in
Oct 02 13:15:07 compute-0 ceph-mon[73668]: osdmap e393: 3 total, 3 up, 3 in
Oct 02 13:15:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.727 2 DEBUG nova.network.neutron [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.751 2 INFO nova.compute.manager [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Took 0.92 seconds to deallocate network for instance.
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.780 2 DEBUG nova.compute.manager [req-70f167a1-6e59-49de-83be-cf698da3e8f1 req-bce53a12-72c8-47c2-9250-a326684b2187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Received event network-vif-deleted-bff5fdd5-ec7d-45cf-94ea-2264da79c91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.837 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.837 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:07 compute-0 nova_compute[256940]: 2025-10-02 13:15:07.904 2 DEBUG oslo_concurrency.processutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:08 compute-0 ceph-mon[73668]: pgmap v3104: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 460 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.1 KiB/s wr, 196 op/s
Oct 02 13:15:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:08.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.366 2 DEBUG oslo_concurrency.processutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.373 2 DEBUG nova.compute.provider_tree [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.390 2 DEBUG nova.scheduler.client.report [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.417 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.462 2 INFO nova.scheduler.client.report [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Deleted allocations for instance 97dd449c-87a7-4278-a819-3d412f587a4c
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 356 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.2 KiB/s wr, 218 op/s
Oct 02 13:15:08 compute-0 nova_compute[256940]: 2025-10-02 13:15:08.548 2 DEBUG oslo_concurrency.lockutils [None req-10c10915-5e1f-4ce8-b89c-30dddf28141a 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "97dd449c-87a7-4278-a819-3d412f587a4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2214948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:09.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:10 compute-0 ceph-mon[73668]: pgmap v3105: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 356 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.2 KiB/s wr, 218 op/s
Oct 02 13:15:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:10.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:10 compute-0 podman[395858]: 2025-10-02 13:15:10.402701913 +0000 UTC m=+0.074559359 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, 
org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 13:15:10 compute-0 podman[395859]: 2025-10-02 13:15:10.434458429 +0000 UTC m=+0.105554305 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:15:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.8 KiB/s wr, 210 op/s
Oct 02 13:15:10 compute-0 nova_compute[256940]: 2025-10-02 13:15:10.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Oct 02 13:15:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Oct 02 13:15:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Oct 02 13:15:11 compute-0 nova_compute[256940]: 2025-10-02 13:15:11.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:12 compute-0 ceph-mon[73668]: pgmap v3106: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.8 KiB/s wr, 210 op/s
Oct 02 13:15:12 compute-0 ceph-mon[73668]: osdmap e394: 3 total, 3 up, 3 in
Oct 02 13:15:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:12.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 604 KiB/s rd, 7.2 KiB/s wr, 177 op/s
Oct 02 13:15:12 compute-0 nova_compute[256940]: 2025-10-02 13:15:12.869 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410897.8658226, 9e8e18b4-2722-4e75-ac1a-35426a38b20b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:15:12 compute-0 nova_compute[256940]: 2025-10-02 13:15:12.869 2 INFO nova.compute.manager [-] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] VM Stopped (Lifecycle Event)
Oct 02 13:15:12 compute-0 nova_compute[256940]: 2025-10-02 13:15:12.890 2 DEBUG nova.compute.manager [None req-894594c9-d2ff-452c-b28d-076353b4f8ef - - - - - -] [instance: 9e8e18b4-2722-4e75-ac1a-35426a38b20b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:15:13 compute-0 nova_compute[256940]: 2025-10-02 13:15:13.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:13.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:13 compute-0 nova_compute[256940]: 2025-10-02 13:15:13.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:14 compute-0 ceph-mon[73668]: pgmap v3108: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 604 KiB/s rd, 7.2 KiB/s wr, 177 op/s
Oct 02 13:15:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:14.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 254 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 679 KiB/s wr, 128 op/s
Oct 02 13:15:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:15 compute-0 sudo[395907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:15 compute-0 sudo[395907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:15 compute-0 sudo[395907]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:15 compute-0 sudo[395932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:15 compute-0 sudo[395932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:15 compute-0 sudo[395932]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Oct 02 13:15:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Oct 02 13:15:16 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Oct 02 13:15:16 compute-0 ceph-mon[73668]: pgmap v3109: 305 pgs: 305 active+clean; 254 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 679 KiB/s wr, 128 op/s
Oct 02 13:15:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 552 KiB/s rd, 3.2 MiB/s wr, 177 op/s
Oct 02 13:15:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:17 compute-0 ceph-mon[73668]: osdmap e395: 3 total, 3 up, 3 in
Oct 02 13:15:17 compute-0 ceph-mon[73668]: pgmap v3111: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 552 KiB/s rd, 3.2 MiB/s wr, 177 op/s
Oct 02 13:15:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:18.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:18 compute-0 nova_compute[256940]: 2025-10-02 13:15:18.318 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410903.3165042, 97dd449c-87a7-4278-a819-3d412f587a4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:15:18 compute-0 nova_compute[256940]: 2025-10-02 13:15:18.318 2 INFO nova.compute.manager [-] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] VM Stopped (Lifecycle Event)
Oct 02 13:15:18 compute-0 nova_compute[256940]: 2025-10-02 13:15:18.339 2 DEBUG nova.compute.manager [None req-f1e9a314-1ddc-47f4-a336-6dbcb9887205 - - - - - -] [instance: 97dd449c-87a7-4278-a819-3d412f587a4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:15:18 compute-0 nova_compute[256940]: 2025-10-02 13:15:18.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:18 compute-0 nova_compute[256940]: 2025-10-02 13:15:18.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 02 13:15:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:20.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:20 compute-0 ceph-mon[73668]: pgmap v3112: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 02 13:15:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 02 13:15:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:21 compute-0 ceph-mon[73668]: pgmap v3113: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 02 13:15:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 405 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Oct 02 13:15:23 compute-0 nova_compute[256940]: 2025-10-02 13:15:23.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:23 compute-0 nova_compute[256940]: 2025-10-02 13:15:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:23 compute-0 ceph-mon[73668]: pgmap v3114: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 405 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Oct 02 13:15:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 381 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Oct 02 13:15:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:25 compute-0 ceph-mon[73668]: pgmap v3115: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 381 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Oct 02 13:15:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/938968884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:26 compute-0 podman[395963]: 2025-10-02 13:15:26.391876793 +0000 UTC m=+0.059185279 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:15:26 compute-0 podman[395962]: 2025-10-02 13:15:26.412171321 +0000 UTC m=+0.074246631 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 13:15:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 25 KiB/s wr, 33 op/s
Oct 02 13:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:26.509 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:26.510 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:15:26.510 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:27 compute-0 ceph-mon[73668]: pgmap v3116: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 25 KiB/s wr, 33 op/s
Oct 02 13:15:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:28.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:28 compute-0 nova_compute[256940]: 2025-10-02 13:15:28.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:28 compute-0 nova_compute[256940]: 2025-10-02 13:15:28.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct 02 13:15:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2661508965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:15:28
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control']
Oct 02 13:15:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:15:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:15:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:15:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:30 compute-0 ceph-mon[73668]: pgmap v3117: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct 02 13:15:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3669883439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1886680935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:30 compute-0 nova_compute[256940]: 2025-10-02 13:15:30.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 22 KiB/s wr, 35 op/s
Oct 02 13:15:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:31 compute-0 ceph-mon[73668]: pgmap v3118: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 22 KiB/s wr, 35 op/s
Oct 02 13:15:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1328104633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.234 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.235 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 34 op/s
Oct 02 13:15:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777553421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.703 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.865 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.867 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4159MB free_disk=20.942684173583984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.867 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.867 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.925 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.926 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:15:32 compute-0 nova_compute[256940]: 2025-10-02 13:15:32.952 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2777553421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209803078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.430 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.435 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.450 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.475 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.476 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:33 compute-0 nova_compute[256940]: 2025-10-02 13:15:33.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:34 compute-0 sudo[396053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:34 compute-0 sudo[396053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-0 sudo[396053]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-0 sudo[396078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:34 compute-0 sudo[396078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-0 sudo[396078]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:34 compute-0 sudo[396103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:34 compute-0 sudo[396103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-0 sudo[396103]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-0 sudo[396128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:15:34 compute-0 sudo[396128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-0 ceph-mon[73668]: pgmap v3119: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 34 op/s
Oct 02 13:15:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3209803078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:34 compute-0 nova_compute[256940]: 2025-10-02 13:15:34.476 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:34 compute-0 nova_compute[256940]: 2025-10-02 13:15:34.476 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:34 compute-0 nova_compute[256940]: 2025-10-02 13:15:34.477 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:15:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 10 KiB/s wr, 40 op/s
Oct 02 13:15:34 compute-0 sudo[396128]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9272b99d-dcc7-43d7-aa8d-4201b958ddb1 does not exist
Oct 02 13:15:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7a31e8c4-28d3-4774-b7fc-34bc52cf8d3b does not exist
Oct 02 13:15:34 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5ec55248-5c28-4ff0-8432-cb43f2d4b69e does not exist
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:15:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:15:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:34 compute-0 sudo[396185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:34 compute-0 sudo[396185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-0 sudo[396185]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:35 compute-0 sudo[396210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:35 compute-0 sudo[396210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:35 compute-0 sudo[396210]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:35 compute-0 sudo[396235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:35 compute-0 sudo[396235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:35 compute-0 sudo[396235]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:35 compute-0 sudo[396260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:15:35 compute-0 sudo[396260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:35 compute-0 nova_compute[256940]: 2025-10-02 13:15:35.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:35 compute-0 ceph-mon[73668]: pgmap v3120: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 10 KiB/s wr, 40 op/s
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:15:35 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.504087312 +0000 UTC m=+0.042957458 container create b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:15:35 compute-0 systemd[1]: Started libpod-conmon-b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc.scope.
Oct 02 13:15:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.483217699 +0000 UTC m=+0.022087865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.587609353 +0000 UTC m=+0.126479519 container init b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.597171031 +0000 UTC m=+0.136041177 container start b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.603138416 +0000 UTC m=+0.142008602 container attach b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:35 compute-0 stoic_ishizaka[396341]: 167 167
Oct 02 13:15:35 compute-0 systemd[1]: libpod-b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc.scope: Deactivated successfully.
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.604464681 +0000 UTC m=+0.143334867 container died b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e94d0a2010739e6a84ff540994eacae4067339ff0ba680175916b583152b1d9b-merged.mount: Deactivated successfully.
Oct 02 13:15:35 compute-0 sudo[396347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:35 compute-0 sudo[396347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:35 compute-0 podman[396325]: 2025-10-02 13:15:35.66830629 +0000 UTC m=+0.207176436 container remove b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:15:35 compute-0 sudo[396347]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:35 compute-0 systemd[1]: libpod-conmon-b28535f69244300f063d8d9e128084e88c94d4de1bbdbc44489686bfebbaccdc.scope: Deactivated successfully.
Oct 02 13:15:35 compute-0 sudo[396385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:35 compute-0 sudo[396385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:35 compute-0 sudo[396385]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:35 compute-0 podman[396416]: 2025-10-02 13:15:35.823770231 +0000 UTC m=+0.044361124 container create 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:15:35 compute-0 systemd[1]: Started libpod-conmon-81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07.scope.
Oct 02 13:15:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:35 compute-0 podman[396416]: 2025-10-02 13:15:35.805568878 +0000 UTC m=+0.026159791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:35 compute-0 podman[396416]: 2025-10-02 13:15:35.911807979 +0000 UTC m=+0.132398892 container init 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:15:35 compute-0 podman[396416]: 2025-10-02 13:15:35.91990185 +0000 UTC m=+0.140492743 container start 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:35 compute-0 podman[396416]: 2025-10-02 13:15:35.927067326 +0000 UTC m=+0.147658219 container attach 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:15:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 41 op/s
Oct 02 13:15:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/188827119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:36 compute-0 blissful_jennings[396433]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:15:36 compute-0 blissful_jennings[396433]: --> relative data size: 1.0
Oct 02 13:15:36 compute-0 blissful_jennings[396433]: --> All data devices are unavailable
Oct 02 13:15:36 compute-0 systemd[1]: libpod-81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07.scope: Deactivated successfully.
Oct 02 13:15:36 compute-0 podman[396416]: 2025-10-02 13:15:36.830681534 +0000 UTC m=+1.051272437 container died 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-853423bd81c447370476dc20c4667f604c4adaf0f81fbcd28a61c0f61447ed5a-merged.mount: Deactivated successfully.
Oct 02 13:15:36 compute-0 podman[396416]: 2025-10-02 13:15:36.901447893 +0000 UTC m=+1.122038786 container remove 81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:15:36 compute-0 systemd[1]: libpod-conmon-81cfe5e491c3dab97d59f04430fac76abb139ef801d68d37386e725f68d1fa07.scope: Deactivated successfully.
Oct 02 13:15:36 compute-0 sudo[396260]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:36 compute-0 sudo[396461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:36 compute-0 sudo[396461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:36 compute-0 sudo[396461]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:37 compute-0 sudo[396486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:37 compute-0 sudo[396486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:37 compute-0 sudo[396486]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:37 compute-0 sudo[396511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:37 compute-0 sudo[396511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:37 compute-0 sudo[396511]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:37 compute-0 sudo[396536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:15:37 compute-0 sudo[396536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.606122559 +0000 UTC m=+0.085216966 container create 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:15:37 compute-0 ceph-mon[73668]: pgmap v3121: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 41 op/s
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.555819052 +0000 UTC m=+0.034913539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:37 compute-0 systemd[1]: Started libpod-conmon-1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e.scope.
Oct 02 13:15:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.71233796 +0000 UTC m=+0.191432397 container init 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.723008947 +0000 UTC m=+0.202103384 container start 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:15:37 compute-0 great_babbage[396617]: 167 167
Oct 02 13:15:37 compute-0 systemd[1]: libpod-1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e.scope: Deactivated successfully.
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.745957813 +0000 UTC m=+0.225052210 container attach 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:15:37 compute-0 podman[396601]: 2025-10-02 13:15:37.746331193 +0000 UTC m=+0.225425600 container died 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f9eb703c26eb678f10709485bf57b9f50a2eaf386173b1a58287d42b6ff2f4d-merged.mount: Deactivated successfully.
Oct 02 13:15:38 compute-0 podman[396601]: 2025-10-02 13:15:38.003866236 +0000 UTC m=+0.482960683 container remove 1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:15:38 compute-0 systemd[1]: libpod-conmon-1c1c664dffe2c6958d12559663e724a8c688c226a20cd3554c39864e2e53279e.scope: Deactivated successfully.
Oct 02 13:15:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:38 compute-0 nova_compute[256940]: 2025-10-02 13:15:38.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:38 compute-0 podman[396641]: 2025-10-02 13:15:38.255249161 +0000 UTC m=+0.077754883 container create e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:38 compute-0 podman[396641]: 2025-10-02 13:15:38.200475757 +0000 UTC m=+0.022981499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:38 compute-0 systemd[1]: Started libpod-conmon-e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d.scope.
Oct 02 13:15:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f536af0c58d648ac9e05bbb90e085a7a1e3f0ae3e58a87fa9c0359f2c804a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f536af0c58d648ac9e05bbb90e085a7a1e3f0ae3e58a87fa9c0359f2c804a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f536af0c58d648ac9e05bbb90e085a7a1e3f0ae3e58a87fa9c0359f2c804a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f536af0c58d648ac9e05bbb90e085a7a1e3f0ae3e58a87fa9c0359f2c804a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:38 compute-0 podman[396641]: 2025-10-02 13:15:38.401078621 +0000 UTC m=+0.223584413 container init e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:15:38 compute-0 podman[396641]: 2025-10-02 13:15:38.40758116 +0000 UTC m=+0.230086882 container start e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:38 compute-0 podman[396641]: 2025-10-02 13:15:38.421001549 +0000 UTC m=+0.243507311 container attach e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:15:38 compute-0 nova_compute[256940]: 2025-10-02 13:15:38.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:38 compute-0 nova_compute[256940]: 2025-10-02 13:15:38.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]: {
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:     "1": [
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:         {
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "devices": [
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "/dev/loop3"
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             ],
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "lv_name": "ceph_lv0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "lv_size": "7511998464",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "name": "ceph_lv0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "tags": {
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.cluster_name": "ceph",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.crush_device_class": "",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.encrypted": "0",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.osd_id": "1",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.type": "block",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:                 "ceph.vdo": "0"
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             },
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "type": "block",
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:             "vg_name": "ceph_vg0"
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:         }
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]:     ]
Oct 02 13:15:39 compute-0 wonderful_sanderson[396658]: }
Oct 02 13:15:39 compute-0 systemd[1]: libpod-e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d.scope: Deactivated successfully.
Oct 02 13:15:39 compute-0 podman[396641]: 2025-10-02 13:15:39.243446277 +0000 UTC m=+1.065952039 container died e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f536af0c58d648ac9e05bbb90e085a7a1e3f0ae3e58a87fa9c0359f2c804a9-merged.mount: Deactivated successfully.
Oct 02 13:15:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:15:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 54K writes, 215K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
                                           Cumulative WAL: 54K writes, 19K syncs, 2.88 writes per sync, written: 0.20 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7295 writes, 25K keys, 7295 commit groups, 1.0 writes per commit group, ingest: 25.84 MB, 0.04 MB/s
                                           Interval WAL: 7295 writes, 2838 syncs, 2.57 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:15:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:39.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:39 compute-0 podman[396641]: 2025-10-02 13:15:39.454572575 +0000 UTC m=+1.277078297 container remove e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:15:39 compute-0 systemd[1]: libpod-conmon-e9c6cee76eb0f7f38456435edbc8ad5730bacf6f1ec685721f81af6deb70c31d.scope: Deactivated successfully.
Oct 02 13:15:39 compute-0 sudo[396536]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 sudo[396682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:39 compute-0 sudo[396682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[396682]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 sudo[396707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:39 compute-0 sudo[396707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[396707]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 sudo[396732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:39 compute-0 sudo[396732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:39 compute-0 sudo[396732]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:39 compute-0 ceph-mon[73668]: pgmap v3122: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Oct 02 13:15:39 compute-0 sudo[396757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:15:39 compute-0 sudo[396757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.121405627 +0000 UTC m=+0.043782719 container create f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:15:40 compute-0 systemd[1]: Started libpod-conmon-f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22.scope.
Oct 02 13:15:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.100667348 +0000 UTC m=+0.023044460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:40 compute-0 nova_compute[256940]: 2025-10-02 13:15:40.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.233320256 +0000 UTC m=+0.155697378 container init f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.246796336 +0000 UTC m=+0.169173438 container start f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.254698772 +0000 UTC m=+0.177075894 container attach f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:40 compute-0 reverent_mclaren[396838]: 167 167
Oct 02 13:15:40 compute-0 systemd[1]: libpod-f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22.scope: Deactivated successfully.
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.257039563 +0000 UTC m=+0.179416665 container died f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f810746eeb45d745762673803d37ba3b652ce2632d94c3be1e163671eba817-merged.mount: Deactivated successfully.
Oct 02 13:15:40 compute-0 podman[396823]: 2025-10-02 13:15:40.313787318 +0000 UTC m=+0.236164420 container remove f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:15:40 compute-0 systemd[1]: libpod-conmon-f279170bf1e1c0d91a3231a90533b348e8607a5782b7293fd3ba104219342b22.scope: Deactivated successfully.
Oct 02 13:15:40 compute-0 podman[396862]: 2025-10-02 13:15:40.476533358 +0000 UTC m=+0.042718151 container create 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:15:40 compute-0 systemd[1]: Started libpod-conmon-4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54.scope.
Oct 02 13:15:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:15:40 compute-0 podman[396862]: 2025-10-02 13:15:40.458615642 +0000 UTC m=+0.024800435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163fed0663a3bca36257aedea1d8c9068b60ee201e0962e3d8274f91603726fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163fed0663a3bca36257aedea1d8c9068b60ee201e0962e3d8274f91603726fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163fed0663a3bca36257aedea1d8c9068b60ee201e0962e3d8274f91603726fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163fed0663a3bca36257aedea1d8c9068b60ee201e0962e3d8274f91603726fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:15:40 compute-0 podman[396862]: 2025-10-02 13:15:40.569199387 +0000 UTC m=+0.135384220 container init 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:15:40 compute-0 podman[396862]: 2025-10-02 13:15:40.577939344 +0000 UTC m=+0.144124147 container start 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:15:40 compute-0 podman[396862]: 2025-10-02 13:15:40.582596715 +0000 UTC m=+0.148781518 container attach 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:15:40 compute-0 podman[396876]: 2025-10-02 13:15:40.610550611 +0000 UTC m=+0.092773522 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:15:40 compute-0 podman[396880]: 2025-10-02 13:15:40.67396686 +0000 UTC m=+0.156479828 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:15:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:15:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:41 compute-0 silly_jones[396881]: {
Oct 02 13:15:41 compute-0 silly_jones[396881]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:15:41 compute-0 silly_jones[396881]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:15:41 compute-0 silly_jones[396881]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:15:41 compute-0 silly_jones[396881]:         "osd_id": 1,
Oct 02 13:15:41 compute-0 silly_jones[396881]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:15:41 compute-0 silly_jones[396881]:         "type": "bluestore"
Oct 02 13:15:41 compute-0 silly_jones[396881]:     }
Oct 02 13:15:41 compute-0 silly_jones[396881]: }
Oct 02 13:15:41 compute-0 systemd[1]: libpod-4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54.scope: Deactivated successfully.
Oct 02 13:15:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:41 compute-0 podman[396862]: 2025-10-02 13:15:41.423195473 +0000 UTC m=+0.989380296 container died 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-163fed0663a3bca36257aedea1d8c9068b60ee201e0962e3d8274f91603726fc-merged.mount: Deactivated successfully.
Oct 02 13:15:41 compute-0 podman[396862]: 2025-10-02 13:15:41.49423115 +0000 UTC m=+1.060415953 container remove 4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:15:41 compute-0 systemd[1]: libpod-conmon-4dc852b093b5a44db2cb24c5500211437ff88b6264843b01fb5fa234f3c2bc54.scope: Deactivated successfully.
Oct 02 13:15:41 compute-0 sudo[396757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:15:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:15:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9d0b36a7-7a16-4103-aa16-80ef977a9e36 does not exist
Oct 02 13:15:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bc8652d0-5335-4611-9805-cb1443f913d8 does not exist
Oct 02 13:15:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 26b3740b-3b1d-44ff-813d-1b66e1ce09fa does not exist
Oct 02 13:15:41 compute-0 sudo[396955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:41 compute-0 sudo[396955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:41 compute-0 sudo[396955]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:41 compute-0 sudo[396980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:15:41 compute-0 sudo[396980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:41 compute-0 sudo[396980]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:41 compute-0 ceph-mon[73668]: pgmap v3123: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:15:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:42 compute-0 nova_compute[256940]: 2025-10-02 13:15:42.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:15:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:43 compute-0 nova_compute[256940]: 2025-10-02 13:15:43.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:43 compute-0 ceph-mon[73668]: pgmap v3124: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:44.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:45 compute-0 ceph-mon[73668]: pgmap v3125: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/112591635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:15:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:15:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:15:47 compute-0 ceph-mon[73668]: pgmap v3126: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:15:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 13:15:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 255 B/s wr, 1 op/s
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:48 compute-0 nova_compute[256940]: 2025-10-02 13:15:48.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:49 compute-0 ceph-mon[73668]: pgmap v3127: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 255 B/s wr, 1 op/s
Oct 02 13:15:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:50.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 149 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 23 op/s
Oct 02 13:15:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:51.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:51 compute-0 ceph-mon[73668]: pgmap v3128: 305 pgs: 305 active+clean; 149 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 23 op/s
Oct 02 13:15:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1942650274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2329846800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:53.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:53 compute-0 nova_compute[256940]: 2025-10-02 13:15:53.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:15:53 compute-0 ceph-mon[73668]: pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:55 compute-0 sudo[397012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:55 compute-0 sudo[397012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:55 compute-0 sudo[397012]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:55 compute-0 sudo[397037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:55 compute-0 sudo[397037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:55 compute-0 sudo[397037]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:56 compute-0 ceph-mon[73668]: pgmap v3130: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:15:57 compute-0 podman[397063]: 2025-10-02 13:15:57.396004887 +0000 UTC m=+0.061303454 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:15:57 compute-0 podman[397064]: 2025-10-02 13:15:57.412831664 +0000 UTC m=+0.070340639 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct 02 13:15:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:15:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:58 compute-0 ceph-mon[73668]: pgmap v3131: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:15:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:15:58 compute-0 nova_compute[256940]: 2025-10-02 13:15:58.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:58 compute-0 nova_compute[256940]: 2025-10-02 13:15:58.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:15:59 compute-0 ceph-mon[73668]: pgmap v3132: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:15:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:15:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:15:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:00.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:01 compute-0 ceph-mon[73668]: pgmap v3133: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:02.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 76 op/s
Oct 02 13:16:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:03 compute-0 nova_compute[256940]: 2025-10-02 13:16:03.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:04 compute-0 ceph-mon[73668]: pgmap v3134: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 76 op/s
Oct 02 13:16:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:16:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:16:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:05.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:05 compute-0 ovn_controller[148123]: 2025-10-02T13:16:05Z|01042|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 13:16:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:06 compute-0 ceph-mon[73668]: pgmap v3135: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Oct 02 13:16:07 compute-0 ceph-mon[73668]: pgmap v3136: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Oct 02 13:16:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Oct 02 13:16:08 compute-0 nova_compute[256940]: 2025-10-02 13:16:08.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:09.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:09 compute-0 ceph-mon[73668]: pgmap v3137: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Oct 02 13:16:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:16:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.160054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971160089, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1649, "num_deletes": 262, "total_data_size": 2720186, "memory_usage": 2755360, "flush_reason": "Manual Compaction"}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971174825, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2675865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68324, "largest_seqno": 69972, "table_properties": {"data_size": 2668131, "index_size": 4611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16622, "raw_average_key_size": 20, "raw_value_size": 2652457, "raw_average_value_size": 3282, "num_data_blocks": 202, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410829, "oldest_key_time": 1759410829, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 14832 microseconds, and 6229 cpu microseconds.
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.174882) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2675865 bytes OK
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.174907) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.176970) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.176987) EVENT_LOG_v1 {"time_micros": 1759410971176981, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.177006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2713073, prev total WAL file size 2713073, number of live WAL files 2.
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.178227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303139' seq:0, type:0; will stop at (end)
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(2613KB)], [155(11MB)]
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971178315, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 14750912, "oldest_snapshot_seqno": -1}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9806 keys, 14611335 bytes, temperature: kUnknown
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971270830, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 14611335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14545209, "index_size": 40477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 257986, "raw_average_key_size": 26, "raw_value_size": 14370787, "raw_average_value_size": 1465, "num_data_blocks": 1556, "num_entries": 9806, "num_filter_entries": 9806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.271082) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14611335 bytes
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.273149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.3 rd, 157.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 10345, records dropped: 539 output_compression: NoCompression
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.273175) EVENT_LOG_v1 {"time_micros": 1759410971273163, "job": 96, "event": "compaction_finished", "compaction_time_micros": 92578, "compaction_time_cpu_micros": 35975, "output_level": 6, "num_output_files": 1, "total_output_size": 14611335, "num_input_records": 10345, "num_output_records": 9806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971273769, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971276095, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.178043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.276143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.276148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.276149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.276151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:16:11.276152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-0 podman[397107]: 2025-10-02 13:16:11.385725475 +0000 UTC m=+0.054924809 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:16:11 compute-0 podman[397108]: 2025-10-02 13:16:11.431916405 +0000 UTC m=+0.098479070 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:16:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:11.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:16:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:16:12 compute-0 ceph-mon[73668]: pgmap v3138: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:16:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:13 compute-0 nova_compute[256940]: 2025-10-02 13:16:13.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:14 compute-0 ceph-mon[73668]: pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:15 compute-0 ceph-mon[73668]: pgmap v3140: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:15.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:16:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/629256755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:15 compute-0 sudo[397152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:15 compute-0 sudo[397152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:15 compute-0 sudo[397152]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:16 compute-0 sudo[397177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:16 compute-0 sudo[397177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:16 compute-0 sudo[397177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/629256755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:16:17 compute-0 ceph-mon[73668]: pgmap v3141: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:16:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 34 op/s
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:18 compute-0 nova_compute[256940]: 2025-10-02 13:16:18.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:19 compute-0 ceph-mon[73668]: pgmap v3142: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 34 op/s
Oct 02 13:16:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:20.299 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:16:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:20.300 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:16:20 compute-0 nova_compute[256940]: 2025-10-02 13:16:20.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 1.0 MiB/s wr, 48 op/s
Oct 02 13:16:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:21.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:21 compute-0 ceph-mon[73668]: pgmap v3143: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 1.0 MiB/s wr, 48 op/s
Oct 02 13:16:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 29 KiB/s wr, 24 op/s
Oct 02 13:16:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3683690967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:23.302 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:23 compute-0 nova_compute[256940]: 2025-10-02 13:16:23.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:23 compute-0 ceph-mon[73668]: pgmap v3144: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 29 KiB/s wr, 24 op/s
Oct 02 13:16:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 02 13:16:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:25.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:26 compute-0 ceph-mon[73668]: pgmap v3145: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 02 13:16:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:26.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:26.511 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:26.511 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:16:26.511 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 17 KiB/s wr, 31 op/s
Oct 02 13:16:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:27.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:28 compute-0 ceph-mon[73668]: pgmap v3146: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 17 KiB/s wr, 31 op/s
Oct 02 13:16:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:28.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:28 compute-0 podman[397208]: 2025-10-02 13:16:28.383356235 +0000 UTC m=+0.057980628 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 13:16:28 compute-0 podman[397209]: 2025-10-02 13:16:28.39700139 +0000 UTC m=+0.068064680 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:28 compute-0 nova_compute[256940]: 2025-10-02 13:16:28.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:16:28
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 13:16:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:16:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:16:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:16:30 compute-0 ceph-mon[73668]: pgmap v3147: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/256858337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1660275554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:30.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:30 compute-0 nova_compute[256940]: 2025-10-02 13:16:30.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3079176688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:31.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:32 compute-0 ceph-mon[73668]: pgmap v3148: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1391326950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1023 B/s wr, 16 op/s
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.835 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.836 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.836 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.836 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:16:32 compute-0 nova_compute[256940]: 2025-10-02 13:16:32.836 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044782465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.283 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.428 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.429 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4170MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.430 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.430 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.504 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.505 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.522 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.542 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.542 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.566 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.586 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.607 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:33 compute-0 nova_compute[256940]: 2025-10-02 13:16:33.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:33 compute-0 ceph-mon[73668]: pgmap v3149: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1023 B/s wr, 16 op/s
Oct 02 13:16:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3044782465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539885036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:34 compute-0 nova_compute[256940]: 2025-10-02 13:16:34.025 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:34 compute-0 nova_compute[256940]: 2025-10-02 13:16:34.032 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:16:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:34 compute-0 nova_compute[256940]: 2025-10-02 13:16:34.194 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:16:34 compute-0 nova_compute[256940]: 2025-10-02 13:16:34.196 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:16:34 compute-0 nova_compute[256940]: 2025-10-02 13:16:34.196 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 170 B/s wr, 10 op/s
Oct 02 13:16:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3539885036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:35 compute-0 ceph-mon[73668]: pgmap v3150: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 170 B/s wr, 10 op/s
Oct 02 13:16:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/207042207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:36 compute-0 sudo[397297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:36 compute-0 sudo[397297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:36 compute-0 sudo[397297]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:36 compute-0 sudo[397322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:36 compute-0 sudo[397322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:36 compute-0 sudo[397322]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 910 KiB/s wr, 32 op/s
Oct 02 13:16:37 compute-0 nova_compute[256940]: 2025-10-02 13:16:37.198 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-0 nova_compute[256940]: 2025-10-02 13:16:37.198 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-0 nova_compute[256940]: 2025-10-02 13:16:37.198 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-0 nova_compute[256940]: 2025-10-02 13:16:37.199 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:16:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:37 compute-0 ceph-mon[73668]: pgmap v3151: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 910 KiB/s wr, 32 op/s
Oct 02 13:16:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:38 compute-0 nova_compute[256940]: 2025-10-02 13:16:38.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 910 KiB/s wr, 21 op/s
Oct 02 13:16:38 compute-0 nova_compute[256940]: 2025-10-02 13:16:38.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:39 compute-0 ceph-mon[73668]: pgmap v3152: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 910 KiB/s wr, 21 op/s
Oct 02 13:16:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6486970442490229 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:16:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:16:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/637609681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:41 compute-0 ceph-mon[73668]: pgmap v3153: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:16:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2286483105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:42 compute-0 sudo[397350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-0 sudo[397350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 sudo[397350]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 sudo[397382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:42 compute-0 sudo[397382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 sudo[397382]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:42.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:42 compute-0 podman[397374]: 2025-10-02 13:16:42.205157998 +0000 UTC m=+0.076560740 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:16:42 compute-0 nova_compute[256940]: 2025-10-02 13:16:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:42 compute-0 sudo[397436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-0 sudo[397436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 podman[397375]: 2025-10-02 13:16:42.261224626 +0000 UTC m=+0.119267722 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:16:42 compute-0 sudo[397436]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 sudo[397470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:16:42 compute-0 sudo[397470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:42 compute-0 sudo[397470]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 sudo[397527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-0 sudo[397527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 sudo[397527]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 sudo[397552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:42 compute-0 sudo[397552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 sudo[397552]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-0 sudo[397577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-0 sudo[397577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-0 sudo[397577]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:43 compute-0 sudo[397602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 13:16:43 compute-0 sudo[397602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:43 compute-0 sudo[397602]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:43 compute-0 nova_compute[256940]: 2025-10-02 13:16:43.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:16:43 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:44.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:44 compute-0 ceph-mon[73668]: pgmap v3154: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4581aaa5-d557-4444-bf56-f213ced2a3de does not exist
Oct 02 13:16:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7e17d611-ce7e-41db-84d1-007cdc1fdbfe does not exist
Oct 02 13:16:44 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 08acaaa0-ecdc-4edb-a170-5fcf40a87938 does not exist
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:16:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:16:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:44 compute-0 sudo[397645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:44 compute-0 sudo[397645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:44 compute-0 sudo[397645]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:44 compute-0 sudo[397670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:44 compute-0 sudo[397670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:44 compute-0 sudo[397670]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:44 compute-0 sudo[397696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:44 compute-0 sudo[397696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:44 compute-0 sudo[397696]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:44 compute-0 sudo[397721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:16:44 compute-0 sudo[397721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:45 compute-0 nova_compute[256940]: 2025-10-02 13:16:45.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:45 compute-0 nova_compute[256940]: 2025-10-02 13:16:45.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:16:45 compute-0 nova_compute[256940]: 2025-10-02 13:16:45.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:16:45 compute-0 nova_compute[256940]: 2025-10-02 13:16:45.227 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:16:45 compute-0 ceph-mon[73668]: pgmap v3155: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:16:45 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.273487613 +0000 UTC m=+0.042393623 container create 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:45 compute-0 systemd[1]: Started libpod-conmon-3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49.scope.
Oct 02 13:16:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.251629614 +0000 UTC m=+0.020535654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.350956686 +0000 UTC m=+0.119862726 container init 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.357354233 +0000 UTC m=+0.126260243 container start 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.360862014 +0000 UTC m=+0.129768044 container attach 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:45 compute-0 youthful_gauss[397802]: 167 167
Oct 02 13:16:45 compute-0 systemd[1]: libpod-3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49.scope: Deactivated successfully.
Oct 02 13:16:45 compute-0 conmon[397802]: conmon 3ceb40e90bdd0a32395d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49.scope/container/memory.events
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.363956934 +0000 UTC m=+0.132862984 container died 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-83195596e34ba35b8401940c6b287fd4247261a80d84bdd962e477558023a536-merged.mount: Deactivated successfully.
Oct 02 13:16:45 compute-0 podman[397786]: 2025-10-02 13:16:45.411760787 +0000 UTC m=+0.180666827 container remove 3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:16:45 compute-0 systemd[1]: libpod-conmon-3ceb40e90bdd0a32395d7b6ba691e04a87bdaa579017f486f775469264ba6e49.scope: Deactivated successfully.
Oct 02 13:16:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:45 compute-0 podman[397824]: 2025-10-02 13:16:45.575139503 +0000 UTC m=+0.041158790 container create efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:16:45 compute-0 systemd[1]: Started libpod-conmon-efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0.scope.
Oct 02 13:16:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:45 compute-0 podman[397824]: 2025-10-02 13:16:45.556574751 +0000 UTC m=+0.022594068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:45 compute-0 podman[397824]: 2025-10-02 13:16:45.659450185 +0000 UTC m=+0.125469502 container init efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:16:45 compute-0 podman[397824]: 2025-10-02 13:16:45.672041062 +0000 UTC m=+0.138060349 container start efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:16:45 compute-0 podman[397824]: 2025-10-02 13:16:45.679093695 +0000 UTC m=+0.145113072 container attach efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:16:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:46.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:46 compute-0 thirsty_heyrovsky[397840]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:16:46 compute-0 thirsty_heyrovsky[397840]: --> relative data size: 1.0
Oct 02 13:16:46 compute-0 thirsty_heyrovsky[397840]: --> All data devices are unavailable
Oct 02 13:16:46 compute-0 systemd[1]: libpod-efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0.scope: Deactivated successfully.
Oct 02 13:16:46 compute-0 podman[397824]: 2025-10-02 13:16:46.473927554 +0000 UTC m=+0.939946841 container died efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c31cbf6bc248a9a203108ea7cbf21d54fa6ab888b3e4bbc67f2e7be4a954993-merged.mount: Deactivated successfully.
Oct 02 13:16:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:46 compute-0 podman[397824]: 2025-10-02 13:16:46.532201989 +0000 UTC m=+0.998221276 container remove efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:16:46 compute-0 systemd[1]: libpod-conmon-efc9edd716b1a1cb095452ccdf64c2452a6a8b1caade3dc1608ee198eedb17b0.scope: Deactivated successfully.
Oct 02 13:16:46 compute-0 sudo[397721]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:46 compute-0 sudo[397866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:46 compute-0 sudo[397866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:46 compute-0 sudo[397866]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:46 compute-0 sudo[397891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:46 compute-0 sudo[397891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:46 compute-0 sudo[397891]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:46 compute-0 sudo[397916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:46 compute-0 sudo[397916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:46 compute-0 sudo[397916]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:46 compute-0 sudo[397941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:16:46 compute-0 sudo[397941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.150123231 +0000 UTC m=+0.047357702 container create fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:16:47 compute-0 systemd[1]: Started libpod-conmon-fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682.scope.
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.128683083 +0000 UTC m=+0.025917584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.248560429 +0000 UTC m=+0.145794920 container init fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.25821839 +0000 UTC m=+0.155452861 container start fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.262006489 +0000 UTC m=+0.159240960 container attach fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:47 compute-0 adoring_benz[398024]: 167 167
Oct 02 13:16:47 compute-0 systemd[1]: libpod-fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682.scope: Deactivated successfully.
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.270182871 +0000 UTC m=+0.167417342 container died fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdbde4061e94d9c016ed686149c68bbd1357d0444d5bab527e2f7cdeda91639f-merged.mount: Deactivated successfully.
Oct 02 13:16:47 compute-0 podman[398008]: 2025-10-02 13:16:47.307861181 +0000 UTC m=+0.205095652 container remove fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:16:47 compute-0 systemd[1]: libpod-conmon-fe1f0e3b1e8b97cd94242437c9e28e9bc2a7f1381f4be62610fbfbbf2c3d5682.scope: Deactivated successfully.
Oct 02 13:16:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:47 compute-0 podman[398048]: 2025-10-02 13:16:47.556857553 +0000 UTC m=+0.113606834 container create 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:16:47 compute-0 podman[398048]: 2025-10-02 13:16:47.468524647 +0000 UTC m=+0.025274028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:47 compute-0 ceph-mon[73668]: pgmap v3156: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:47 compute-0 systemd[1]: Started libpod-conmon-072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2.scope.
Oct 02 13:16:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db85397ec9d827174dfc2a4a4e4a3ed928fea2b192904e302aaaa67be761510b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db85397ec9d827174dfc2a4a4e4a3ed928fea2b192904e302aaaa67be761510b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db85397ec9d827174dfc2a4a4e4a3ed928fea2b192904e302aaaa67be761510b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db85397ec9d827174dfc2a4a4e4a3ed928fea2b192904e302aaaa67be761510b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:47 compute-0 podman[398048]: 2025-10-02 13:16:47.757835547 +0000 UTC m=+0.314584888 container init 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:16:47 compute-0 podman[398048]: 2025-10-02 13:16:47.767039756 +0000 UTC m=+0.323789037 container start 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:16:47 compute-0 podman[398048]: 2025-10-02 13:16:47.770008773 +0000 UTC m=+0.326758074 container attach 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 02 13:16:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:48 compute-0 nova_compute[256940]: 2025-10-02 13:16:48.223 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:48 compute-0 focused_jennings[398065]: {
Oct 02 13:16:48 compute-0 focused_jennings[398065]:     "1": [
Oct 02 13:16:48 compute-0 focused_jennings[398065]:         {
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "devices": [
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "/dev/loop3"
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             ],
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "lv_name": "ceph_lv0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "lv_size": "7511998464",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "name": "ceph_lv0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "tags": {
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.cluster_name": "ceph",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.crush_device_class": "",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.encrypted": "0",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.osd_id": "1",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.type": "block",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:                 "ceph.vdo": "0"
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             },
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "type": "block",
Oct 02 13:16:48 compute-0 focused_jennings[398065]:             "vg_name": "ceph_vg0"
Oct 02 13:16:48 compute-0 focused_jennings[398065]:         }
Oct 02 13:16:48 compute-0 focused_jennings[398065]:     ]
Oct 02 13:16:48 compute-0 focused_jennings[398065]: }
Oct 02 13:16:48 compute-0 systemd[1]: libpod-072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2.scope: Deactivated successfully.
Oct 02 13:16:48 compute-0 podman[398048]: 2025-10-02 13:16:48.578689193 +0000 UTC m=+1.135438474 container died 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-db85397ec9d827174dfc2a4a4e4a3ed928fea2b192904e302aaaa67be761510b-merged.mount: Deactivated successfully.
Oct 02 13:16:48 compute-0 podman[398048]: 2025-10-02 13:16:48.639559475 +0000 UTC m=+1.196308766 container remove 072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:16:48 compute-0 systemd[1]: libpod-conmon-072b2e0dab62daa57135a725cb1ca9bf46097dd22eadbace1d8f8be099a1f0f2.scope: Deactivated successfully.
Oct 02 13:16:48 compute-0 sudo[397941]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:48 compute-0 sudo[398089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:48 compute-0 sudo[398089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:48 compute-0 sudo[398089]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:48 compute-0 nova_compute[256940]: 2025-10-02 13:16:48.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:48 compute-0 sudo[398114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:48 compute-0 sudo[398114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:48 compute-0 sudo[398114]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:48 compute-0 sudo[398140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:48 compute-0 sudo[398140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:48 compute-0 sudo[398140]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:48 compute-0 sudo[398165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:16:48 compute-0 sudo[398165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.366866079 +0000 UTC m=+0.033592024 container create fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:16:49 compute-0 systemd[1]: Started libpod-conmon-fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089.scope.
Oct 02 13:16:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.438765338 +0000 UTC m=+0.105491293 container init fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.444382524 +0000 UTC m=+0.111108489 container start fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.448234914 +0000 UTC m=+0.114960859 container attach fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.352700471 +0000 UTC m=+0.019426406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:49 compute-0 awesome_sammet[398249]: 167 167
Oct 02 13:16:49 compute-0 systemd[1]: libpod-fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089.scope: Deactivated successfully.
Oct 02 13:16:49 compute-0 conmon[398249]: conmon fed05e1772fada6c219d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089.scope/container/memory.events
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.451009956 +0000 UTC m=+0.117735881 container died fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f56df23214de2a53961735ba3518fdf8799f94acd933c03d0718bbf1b384ac-merged.mount: Deactivated successfully.
Oct 02 13:16:49 compute-0 podman[398232]: 2025-10-02 13:16:49.485155984 +0000 UTC m=+0.151881919 container remove fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:16:49 compute-0 systemd[1]: libpod-conmon-fed05e1772fada6c219d587c8377c75f799ec6127c133c0445451fe0ea90b089.scope: Deactivated successfully.
Oct 02 13:16:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:49.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:49 compute-0 podman[398272]: 2025-10-02 13:16:49.641799345 +0000 UTC m=+0.044099067 container create 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:16:49 compute-0 systemd[1]: Started libpod-conmon-9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6.scope.
Oct 02 13:16:49 compute-0 podman[398272]: 2025-10-02 13:16:49.621337933 +0000 UTC m=+0.023637665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:16:49 compute-0 ceph-mon[73668]: pgmap v3157: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0935e13362308a4f18bdb780073ce2f93012f7ee43fe1fccadfa390e2cf8e04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0935e13362308a4f18bdb780073ce2f93012f7ee43fe1fccadfa390e2cf8e04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0935e13362308a4f18bdb780073ce2f93012f7ee43fe1fccadfa390e2cf8e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0935e13362308a4f18bdb780073ce2f93012f7ee43fe1fccadfa390e2cf8e04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:16:49 compute-0 podman[398272]: 2025-10-02 13:16:49.760251474 +0000 UTC m=+0.162551206 container init 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:16:49 compute-0 podman[398272]: 2025-10-02 13:16:49.766689491 +0000 UTC m=+0.168989203 container start 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:16:49 compute-0 podman[398272]: 2025-10-02 13:16:49.770710456 +0000 UTC m=+0.173010208 container attach 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:16:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:50 compute-0 eager_gauss[398288]: {
Oct 02 13:16:50 compute-0 eager_gauss[398288]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:16:50 compute-0 eager_gauss[398288]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:16:50 compute-0 eager_gauss[398288]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:16:50 compute-0 eager_gauss[398288]:         "osd_id": 1,
Oct 02 13:16:50 compute-0 eager_gauss[398288]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:16:50 compute-0 eager_gauss[398288]:         "type": "bluestore"
Oct 02 13:16:50 compute-0 eager_gauss[398288]:     }
Oct 02 13:16:50 compute-0 eager_gauss[398288]: }
Oct 02 13:16:50 compute-0 systemd[1]: libpod-9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6.scope: Deactivated successfully.
Oct 02 13:16:50 compute-0 podman[398272]: 2025-10-02 13:16:50.662852165 +0000 UTC m=+1.065151917 container died 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0935e13362308a4f18bdb780073ce2f93012f7ee43fe1fccadfa390e2cf8e04-merged.mount: Deactivated successfully.
Oct 02 13:16:50 compute-0 podman[398272]: 2025-10-02 13:16:50.728016049 +0000 UTC m=+1.130315771 container remove 9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:50 compute-0 systemd[1]: libpod-conmon-9f8a95115b564bf432f885d9b37a499ce1a54a370f9fb8b5ef199055ccbd6ed6.scope: Deactivated successfully.
Oct 02 13:16:50 compute-0 sudo[398165]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:16:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d38e92b4-6366-4734-ab6c-37feb93cf84c does not exist
Oct 02 13:16:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f9da2d61-022e-4abc-80f7-1740ccc8252d does not exist
Oct 02 13:16:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 10d6eb40-1564-406b-ad8d-b0d48c6372c8 does not exist
Oct 02 13:16:50 compute-0 sudo[398321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:50 compute-0 sudo[398321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[398321]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:50 compute-0 sudo[398346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:16:50 compute-0 sudo[398346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:50 compute-0 sudo[398346]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:51.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:51 compute-0 ceph-mon[73668]: pgmap v3158: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 02 13:16:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:53.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:53 compute-0 nova_compute[256940]: 2025-10-02 13:16:53.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:53 compute-0 ceph-mon[73668]: pgmap v3159: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 02 13:16:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:55 compute-0 ceph-mon[73668]: pgmap v3160: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:56.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:56 compute-0 sudo[398373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:56 compute-0 sudo[398373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:56 compute-0 sudo[398373]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:56 compute-0 sudo[398398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:56 compute-0 sudo[398398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:56 compute-0 sudo[398398]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Oct 02 13:16:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:57 compute-0 ceph-mon[73668]: pgmap v3161: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:16:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:16:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:58.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:16:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5034 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:16:58 compute-0 nova_compute[256940]: 2025-10-02 13:16:58.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:16:59 compute-0 podman[398425]: 2025-10-02 13:16:59.432789547 +0000 UTC m=+0.094516108 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:16:59 compute-0 podman[398426]: 2025-10-02 13:16:59.436214086 +0000 UTC m=+0.097717291 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:16:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:16:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:00 compute-0 ceph-mon[73668]: pgmap v3162: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 02 13:17:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:00.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:02 compute-0 ceph-mon[73668]: pgmap v3163: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:02 compute-0 nova_compute[256940]: 2025-10-02 13:17:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:02 compute-0 nova_compute[256940]: 2025-10-02 13:17:02.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:17:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:02.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:03 compute-0 ceph-mon[73668]: pgmap v3164: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:03 compute-0 nova_compute[256940]: 2025-10-02 13:17:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:05 compute-0 ceph-mon[73668]: pgmap v3165: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:06.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:17:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:07 compute-0 ceph-mon[73668]: pgmap v3166: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:17:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4015710855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:17:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 828 KiB/s wr, 13 op/s
Oct 02 13:17:08 compute-0 nova_compute[256940]: 2025-10-02 13:17:08.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:08 compute-0 nova_compute[256940]: 2025-10-02 13:17:08.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:09.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:09 compute-0 ceph-mon[73668]: pgmap v3167: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 828 KiB/s wr, 13 op/s
Oct 02 13:17:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:10.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 828 KiB/s wr, 15 op/s
Oct 02 13:17:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:12 compute-0 ceph-mon[73668]: pgmap v3168: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 828 KiB/s wr, 15 op/s
Oct 02 13:17:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:12.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:12 compute-0 podman[398466]: 2025-10-02 13:17:12.420958661 +0000 UTC m=+0.086518840 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:17:12 compute-0 podman[398467]: 2025-10-02 13:17:12.438051216 +0000 UTC m=+0.104556389 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:17:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 15 KiB/s wr, 11 op/s
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.178940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033178981, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 826, "num_deletes": 251, "total_data_size": 1231817, "memory_usage": 1254496, "flush_reason": "Manual Compaction"}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033187676, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 1207876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69974, "largest_seqno": 70798, "table_properties": {"data_size": 1203607, "index_size": 1984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9581, "raw_average_key_size": 19, "raw_value_size": 1195082, "raw_average_value_size": 2484, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410971, "oldest_key_time": 1759410971, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 8787 microseconds, and 3447 cpu microseconds.
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.187722) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 1207876 bytes OK
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.187746) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.190678) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.190690) EVENT_LOG_v1 {"time_micros": 1759411033190686, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.190707) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1227788, prev total WAL file size 1227788, number of live WAL files 2.
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.191339) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(1179KB)], [158(13MB)]
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033191414, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15819211, "oldest_snapshot_seqno": -1}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9768 keys, 13945844 bytes, temperature: kUnknown
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033313183, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 13945844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13880579, "index_size": 39744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257889, "raw_average_key_size": 26, "raw_value_size": 13707398, "raw_average_value_size": 1403, "num_data_blocks": 1520, "num_entries": 9768, "num_filter_entries": 9768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.313440) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13945844 bytes
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.316507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 114.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.9 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 10287, records dropped: 519 output_compression: NoCompression
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.316532) EVENT_LOG_v1 {"time_micros": 1759411033316521, "job": 98, "event": "compaction_finished", "compaction_time_micros": 121834, "compaction_time_cpu_micros": 41421, "output_level": 6, "num_output_files": 1, "total_output_size": 13945844, "num_input_records": 10287, "num_output_records": 9768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033316983, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033319760, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.191225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.319829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.319838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.319840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.319841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:13.319843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:13 compute-0 nova_compute[256940]: 2025-10-02 13:17:13.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:13 compute-0 nova_compute[256940]: 2025-10-02 13:17:13.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:14.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:14 compute-0 ceph-mon[73668]: pgmap v3169: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 15 KiB/s wr, 11 op/s
Oct 02 13:17:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 20 op/s
Oct 02 13:17:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:15 compute-0 ceph-mon[73668]: pgmap v3170: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 20 op/s
Oct 02 13:17:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3499506024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:16.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.377516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036377552, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 279, "num_deletes": 251, "total_data_size": 50067, "memory_usage": 55168, "flush_reason": "Manual Compaction"}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 02 13:17:16 compute-0 sudo[398514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:16 compute-0 sudo[398514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:16 compute-0 sudo[398514]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:16 compute-0 sudo[398539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:16 compute-0 sudo[398539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:16 compute-0 sudo[398539]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036483872, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 49349, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70799, "largest_seqno": 71077, "table_properties": {"data_size": 47462, "index_size": 115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5451, "raw_average_key_size": 20, "raw_value_size": 43715, "raw_average_value_size": 162, "num_data_blocks": 5, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411034, "oldest_key_time": 1759411034, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 106415 microseconds, and 884 cpu microseconds.
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 32 op/s
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.483925) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 49349 bytes OK
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.483949) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.545824) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.545858) EVENT_LOG_v1 {"time_micros": 1759411036545851, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.545879) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 47971, prev total WAL file size 47971, number of live WAL files 2.
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.546437) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(48KB)], [161(13MB)]
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036546462, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 13995193, "oldest_snapshot_seqno": -1}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9527 keys, 10147056 bytes, temperature: kUnknown
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036906303, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10147056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10088296, "index_size": 33815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 253080, "raw_average_key_size": 26, "raw_value_size": 9924176, "raw_average_value_size": 1041, "num_data_blocks": 1272, "num_entries": 9527, "num_filter_entries": 9527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.906644) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10147056 bytes
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.960637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.9 rd, 28.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(489.2) write-amplify(205.6) OK, records in: 10037, records dropped: 510 output_compression: NoCompression
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.960679) EVENT_LOG_v1 {"time_micros": 1759411036960662, "job": 100, "event": "compaction_finished", "compaction_time_micros": 359987, "compaction_time_cpu_micros": 25163, "output_level": 6, "num_output_files": 1, "total_output_size": 10147056, "num_input_records": 10037, "num_output_records": 9527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036960857, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036963634, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.546361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.963666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.963670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.963672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.963674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:17:16.963676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:17 compute-0 radosgw[92108]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 13:17:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:17.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:17 compute-0 ceph-mon[73668]: pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 32 op/s
Oct 02 13:17:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:18.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 31 op/s
Oct 02 13:17:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-0 nova_compute[256940]: 2025-10-02 13:17:18.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:18 compute-0 nova_compute[256940]: 2025-10-02 13:17:18.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000078s ======
Oct 02 13:17:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:19.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Oct 02 13:17:19 compute-0 ceph-mon[73668]: pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 31 op/s
Oct 02 13:17:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:20.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 2.8 KiB/s wr, 165 op/s
Oct 02 13:17:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:21 compute-0 ceph-mon[73668]: pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 2.8 KiB/s wr, 165 op/s
Oct 02 13:17:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 3.3 KiB/s wr, 219 op/s
Oct 02 13:17:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:23.466 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:17:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:23.467 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:17:23 compute-0 nova_compute[256940]: 2025-10-02 13:17:23.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:23 compute-0 nova_compute[256940]: 2025-10-02 13:17:23.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:23 compute-0 nova_compute[256940]: 2025-10-02 13:17:23.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:23 compute-0 ceph-mon[73668]: pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 3.3 KiB/s wr, 219 op/s
Oct 02 13:17:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:24.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 149 KiB/s rd, 1.6 KiB/s wr, 236 op/s
Oct 02 13:17:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:25.469 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:17:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:25.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:25 compute-0 ceph-mon[73668]: pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 149 KiB/s rd, 1.6 KiB/s wr, 236 op/s
Oct 02 13:17:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:26.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:26.512 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:26.513 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:17:26.513 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 1.8 KiB/s wr, 229 op/s
Oct 02 13:17:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:28 compute-0 ceph-mon[73668]: pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 1.8 KiB/s wr, 229 op/s
Oct 02 13:17:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:28.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:28 compute-0 nova_compute[256940]: 2025-10-02 13:17:28.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:28 compute-0 nova_compute[256940]: 2025-10-02 13:17:28.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:17:28
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 13:17:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:17:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:29.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:17:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:17:30 compute-0 ceph-mon[73668]: pgmap v3177: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:30.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:30 compute-0 podman[398571]: 2025-10-02 13:17:30.40457176 +0000 UTC m=+0.070358950 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:17:30 compute-0 podman[398572]: 2025-10-02 13:17:30.428894083 +0000 UTC m=+0.083401989 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:17:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/610322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2563700799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:31.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:32 compute-0 nova_compute[256940]: 2025-10-02 13:17:32.225 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:32.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:32 compute-0 ceph-mon[73668]: pgmap v3178: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/300480324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 938 B/s wr, 84 op/s
Oct 02 13:17:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Oct 02 13:17:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Oct 02 13:17:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.240 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1010626883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:33 compute-0 ceph-mon[73668]: pgmap v3179: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 938 B/s wr, 84 op/s
Oct 02 13:17:33 compute-0 ceph-mon[73668]: osdmap e396: 3 total, 3 up, 3 in
Oct 02 13:17:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367091814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.691 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.864 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.865 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4155MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.865 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.865 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.919 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.919 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:17:33 compute-0 nova_compute[256940]: 2025-10-02 13:17:33.934 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:34.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280913833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:34 compute-0 nova_compute[256940]: 2025-10-02 13:17:34.397 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:34 compute-0 nova_compute[256940]: 2025-10-02 13:17:34.404 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:17:34 compute-0 nova_compute[256940]: 2025-10-02 13:17:34.423 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:17:34 compute-0 nova_compute[256940]: 2025-10-02 13:17:34.426 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:17:34 compute-0 nova_compute[256940]: 2025-10-02 13:17:34.426 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Oct 02 13:17:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2367091814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1280913833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 614 B/s wr, 5 op/s
Oct 02 13:17:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Oct 02 13:17:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Oct 02 13:17:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:35 compute-0 ceph-mon[73668]: pgmap v3181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 614 B/s wr, 5 op/s
Oct 02 13:17:35 compute-0 ceph-mon[73668]: osdmap e397: 3 total, 3 up, 3 in
Oct 02 13:17:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:36 compute-0 nova_compute[256940]: 2025-10-02 13:17:36.427 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:36 compute-0 nova_compute[256940]: 2025-10-02 13:17:36.427 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:17:36 compute-0 sudo[398659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:36 compute-0 sudo[398659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:36 compute-0 sudo[398659]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:36 compute-0 sudo[398684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:36 compute-0 sudo[398684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:36 compute-0 sudo[398684]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:37 compute-0 nova_compute[256940]: 2025-10-02 13:17:37.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:37.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:37 compute-0 ceph-mon[73668]: pgmap v3183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:38 compute-0 nova_compute[256940]: 2025-10-02 13:17:38.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:38.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:38 compute-0 nova_compute[256940]: 2025-10-02 13:17:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:38 compute-0 nova_compute[256940]: 2025-10-02 13:17:38.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:39 compute-0 nova_compute[256940]: 2025-10-02 13:17:39.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:39.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:39 compute-0 ceph-mon[73668]: pgmap v3184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:40 compute-0 nova_compute[256940]: 2025-10-02 13:17:40.224 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:40.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:17:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:17:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Oct 02 13:17:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Oct 02 13:17:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Oct 02 13:17:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:41.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:42 compute-0 ceph-mon[73668]: pgmap v3185: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Oct 02 13:17:42 compute-0 ceph-mon[73668]: osdmap e398: 3 total, 3 up, 3 in
Oct 02 13:17:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:42.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:43 compute-0 ceph-mon[73668]: pgmap v3187: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Oct 02 13:17:43 compute-0 podman[398714]: 2025-10-02 13:17:43.403928445 +0000 UTC m=+0.078118711 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:17:43 compute-0 podman[398713]: 2025-10-02 13:17:43.403919215 +0000 UTC m=+0.077130756 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:43 compute-0 nova_compute[256940]: 2025-10-02 13:17:43.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:17:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:44.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:17:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Oct 02 13:17:45 compute-0 nova_compute[256940]: 2025-10-02 13:17:45.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:45 compute-0 ceph-mon[73668]: pgmap v3188: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Oct 02 13:17:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:46.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:47 compute-0 nova_compute[256940]: 2025-10-02 13:17:47.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:47 compute-0 nova_compute[256940]: 2025-10-02 13:17:47.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:17:47 compute-0 nova_compute[256940]: 2025-10-02 13:17:47.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:17:47 compute-0 nova_compute[256940]: 2025-10-02 13:17:47.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:17:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:47 compute-0 ceph-mon[73668]: pgmap v3189: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:48.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:48 compute-0 nova_compute[256940]: 2025-10-02 13:17:48.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:49 compute-0 ceph-mon[73668]: pgmap v3190: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:50.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:51 compute-0 sudo[398763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:51 compute-0 sudo[398763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-0 sudo[398763]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-0 sudo[398788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:51 compute-0 sudo[398788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-0 sudo[398788]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-0 sudo[398813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:51 compute-0 sudo[398813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-0 sudo[398813]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-0 sudo[398838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:17:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:51.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:51 compute-0 sudo[398838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-0 ceph-mon[73668]: pgmap v3191: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:52 compute-0 sudo[398838]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-0 sudo[398893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:52 compute-0 sudo[398893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-0 sudo[398893]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-0 sudo[398918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:52 compute-0 sudo[398918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-0 sudo[398918]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:52 compute-0 sudo[398943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:52 compute-0 sudo[398943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-0 sudo[398943]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:17:52 compute-0 sudo[398968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 13:17:52 compute-0 sudo[398968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:17:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.849737016 +0000 UTC m=+0.084434296 container create e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:17:52 compute-0 systemd[1]: Started libpod-conmon-e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c.scope.
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.79220608 +0000 UTC m=+0.026903380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.930672859 +0000 UTC m=+0.165370159 container init e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.93685143 +0000 UTC m=+0.171548750 container start e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:17:52 compute-0 magical_banzai[399052]: 167 167
Oct 02 13:17:52 compute-0 systemd[1]: libpod-e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c.scope: Deactivated successfully.
Oct 02 13:17:52 compute-0 conmon[399052]: conmon e28761a678247f8c2985 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c.scope/container/memory.events
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.949602051 +0000 UTC m=+0.184299351 container attach e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:17:52 compute-0 podman[399034]: 2025-10-02 13:17:52.950017752 +0000 UTC m=+0.184715042 container died e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-966ce370fa02a2d76f6e2b4f95181790f245384ec91c23dd8d43ca1d23e403ff-merged.mount: Deactivated successfully.
Oct 02 13:17:53 compute-0 podman[399034]: 2025-10-02 13:17:53.194392664 +0000 UTC m=+0.429089944 container remove e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:17:53 compute-0 systemd[1]: libpod-conmon-e28761a678247f8c298569676adf8f207d5014cad977cdcc3eb8a632623b338c.scope: Deactivated successfully.
Oct 02 13:17:53 compute-0 podman[399077]: 2025-10-02 13:17:53.370485781 +0000 UTC m=+0.026271234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:53 compute-0 podman[399077]: 2025-10-02 13:17:53.480344667 +0000 UTC m=+0.136130090 container create 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:17:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:53 compute-0 ceph-mon[73668]: pgmap v3192: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:53.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:53 compute-0 systemd[1]: Started libpod-conmon-43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc.scope.
Oct 02 13:17:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad328c8d0deb03f9fb5b937999c89d3ca9d722f3a3f91bd00fba20ac0ea1993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad328c8d0deb03f9fb5b937999c89d3ca9d722f3a3f91bd00fba20ac0ea1993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad328c8d0deb03f9fb5b937999c89d3ca9d722f3a3f91bd00fba20ac0ea1993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad328c8d0deb03f9fb5b937999c89d3ca9d722f3a3f91bd00fba20ac0ea1993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-0 podman[399077]: 2025-10-02 13:17:53.671071266 +0000 UTC m=+0.326856739 container init 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:53 compute-0 podman[399077]: 2025-10-02 13:17:53.679985577 +0000 UTC m=+0.335771010 container start 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:17:53 compute-0 podman[399077]: 2025-10-02 13:17:53.709062863 +0000 UTC m=+0.364848326 container attach 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:17:53 compute-0 nova_compute[256940]: 2025-10-02 13:17:53.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:54.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:17:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:17:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]: [
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:     {
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "available": false,
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "ceph_device": false,
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "lsm_data": {},
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "lvs": [],
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "path": "/dev/sr0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "rejected_reasons": [
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "Has a FileSystem",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "Insufficient space (<5GB)"
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         ],
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         "sys_api": {
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "actuators": null,
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "device_nodes": "sr0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "devname": "sr0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "human_readable_size": "482.00 KB",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "id_bus": "ata",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "model": "QEMU DVD-ROM",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "nr_requests": "2",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "parent": "/dev/sr0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "partitions": {},
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "path": "/dev/sr0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "removable": "1",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "rev": "2.5+",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "ro": "0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "rotational": "0",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "sas_address": "",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "sas_device_handle": "",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "scheduler_mode": "mq-deadline",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "sectors": 0,
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "sectorsize": "2048",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "size": 493568.0,
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "support_discard": "2048",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "type": "disk",
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:             "vendor": "QEMU"
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:         }
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]:     }
Oct 02 13:17:54 compute-0 crazy_elgamal[399093]: ]
Oct 02 13:17:55 compute-0 systemd[1]: libpod-43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc.scope: Deactivated successfully.
Oct 02 13:17:55 compute-0 systemd[1]: libpod-43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc.scope: Consumed 1.358s CPU time.
Oct 02 13:17:55 compute-0 podman[400341]: 2025-10-02 13:17:55.087850479 +0000 UTC m=+0.040577395 container died 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ad328c8d0deb03f9fb5b937999c89d3ca9d722f3a3f91bd00fba20ac0ea1993-merged.mount: Deactivated successfully.
Oct 02 13:17:55 compute-0 podman[400341]: 2025-10-02 13:17:55.146247287 +0000 UTC m=+0.098974163 container remove 43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:17:55 compute-0 systemd[1]: libpod-conmon-43439ab070abfeae9d18a18da4314ee5ffdef8c9a952a24b101a5370f3b7dacc.scope: Deactivated successfully.
Oct 02 13:17:55 compute-0 sudo[398968]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:17:55 compute-0 nova_compute[256940]: 2025-10-02 13:17:55.358 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4cf157e8-e1a1-4e06-bc95-5b66a6628855 does not exist
Oct 02 13:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4aaac17d-04ca-4076-83aa-547db6d7964c does not exist
Oct 02 13:17:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2050feab-a06e-4105-92be-a09d6ff6cf4d does not exist
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:17:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:55 compute-0 sudo[400356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:55 compute-0 sudo[400356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:55 compute-0 sudo[400356]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:55 compute-0 sudo[400381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:55 compute-0 sudo[400381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:55 compute-0 sudo[400381]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:55 compute-0 sudo[400406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:55 compute-0 sudo[400406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:55 compute-0 sudo[400406]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:55 compute-0 ceph-mon[73668]: pgmap v3193: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:17:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:55 compute-0 sudo[400431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:17:55 compute-0 sudo[400431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.16276624 +0000 UTC m=+0.051771917 container create 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:17:56 compute-0 systemd[1]: Started libpod-conmon-0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41.scope.
Oct 02 13:17:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.140533662 +0000 UTC m=+0.029539419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.257948174 +0000 UTC m=+0.146953881 container init 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.266047174 +0000 UTC m=+0.155052861 container start 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.269244127 +0000 UTC m=+0.158249844 container attach 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:56 compute-0 goofy_wescoff[400514]: 167 167
Oct 02 13:17:56 compute-0 systemd[1]: libpod-0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41.scope: Deactivated successfully.
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.271923657 +0000 UTC m=+0.160929354 container died 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6c9d20c2c0ad6d0d58b6cf02f88b0f247f73bdd765954436f6587f651591e73-merged.mount: Deactivated successfully.
Oct 02 13:17:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:56 compute-0 podman[400498]: 2025-10-02 13:17:56.319635257 +0000 UTC m=+0.208640944 container remove 0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wescoff, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:17:56 compute-0 systemd[1]: libpod-conmon-0cc3591408ddd997cfb6eb5a1c02e043c8f59a59864cba92d602dd8374c5eb41.scope: Deactivated successfully.
Oct 02 13:17:56 compute-0 podman[400540]: 2025-10-02 13:17:56.501200686 +0000 UTC m=+0.056558801 container create a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:56 compute-0 systemd[1]: Started libpod-conmon-a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104.scope.
Oct 02 13:17:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:56 compute-0 podman[400540]: 2025-10-02 13:17:56.485395466 +0000 UTC m=+0.040753591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:56 compute-0 podman[400540]: 2025-10-02 13:17:56.576061262 +0000 UTC m=+0.131419377 container init a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:17:56 compute-0 podman[400540]: 2025-10-02 13:17:56.589654646 +0000 UTC m=+0.145012781 container start a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:17:56 compute-0 podman[400540]: 2025-10-02 13:17:56.594558013 +0000 UTC m=+0.149916128 container attach a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:17:56 compute-0 sudo[400562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:56 compute-0 sudo[400562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:56 compute-0 sudo[400562]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:56 compute-0 sudo[400587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:56 compute-0 sudo[400587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:56 compute-0 sudo[400587]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 keen_feynman[400557]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:17:57 compute-0 keen_feynman[400557]: --> relative data size: 1.0
Oct 02 13:17:57 compute-0 keen_feynman[400557]: --> All data devices are unavailable
Oct 02 13:17:57 compute-0 systemd[1]: libpod-a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104.scope: Deactivated successfully.
Oct 02 13:17:57 compute-0 podman[400540]: 2025-10-02 13:17:57.418251852 +0000 UTC m=+0.973610007 container died a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe83f87a56046acb6effd8bc7c981ec99b7b8e4aa7139bfed8aabbc3332d5e4-merged.mount: Deactivated successfully.
Oct 02 13:17:57 compute-0 podman[400540]: 2025-10-02 13:17:57.492668296 +0000 UTC m=+1.048026411 container remove a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feynman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:17:57 compute-0 systemd[1]: libpod-conmon-a4c8ac3999fdfc13d086ea34565b8a7ae5f7c074eb5989ad3d625881aacc5104.scope: Deactivated successfully.
Oct 02 13:17:57 compute-0 sudo[400431]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[400636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:57 compute-0 sudo[400636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[400636]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[400661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:57 compute-0 sudo[400661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[400661]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[400686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:57 compute-0 sudo[400686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:57 compute-0 sudo[400686]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:57 compute-0 sudo[400711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:17:57 compute-0 ceph-mon[73668]: pgmap v3194: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:57 compute-0 sudo[400711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.112128778 +0000 UTC m=+0.037688221 container create dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:58 compute-0 systemd[1]: Started libpod-conmon-dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1.scope.
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.095168177 +0000 UTC m=+0.020727620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.207516367 +0000 UTC m=+0.133075860 container init dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.213622046 +0000 UTC m=+0.139181489 container start dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.217841046 +0000 UTC m=+0.143400499 container attach dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:17:58 compute-0 pensive_turing[400793]: 167 167
Oct 02 13:17:58 compute-0 systemd[1]: libpod-dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1.scope: Deactivated successfully.
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.222371883 +0000 UTC m=+0.147931596 container died dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f03543d4ad67b2506bdc7f7fc650883befd9553641b46c3317ac386f3072d100-merged.mount: Deactivated successfully.
Oct 02 13:17:58 compute-0 podman[400777]: 2025-10-02 13:17:58.271700396 +0000 UTC m=+0.197259879 container remove dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:17:58 compute-0 systemd[1]: libpod-conmon-dc7668d83631e4fc67fcb9994f2183a0cfc7cf0d37869f2509ade2803743bde1.scope: Deactivated successfully.
Oct 02 13:17:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:17:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:58.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:17:58 compute-0 podman[400818]: 2025-10-02 13:17:58.485870523 +0000 UTC m=+0.053335328 container create b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:17:58 compute-0 systemd[1]: Started libpod-conmon-b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c.scope.
Oct 02 13:17:58 compute-0 podman[400818]: 2025-10-02 13:17:58.464180839 +0000 UTC m=+0.031645664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2e4c3de31e7050a1bf26335ca5a53d3d7e1d224b9fa1007c73f2417bbb9b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2e4c3de31e7050a1bf26335ca5a53d3d7e1d224b9fa1007c73f2417bbb9b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2e4c3de31e7050a1bf26335ca5a53d3d7e1d224b9fa1007c73f2417bbb9b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2e4c3de31e7050a1bf26335ca5a53d3d7e1d224b9fa1007c73f2417bbb9b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:58 compute-0 podman[400818]: 2025-10-02 13:17:58.601526779 +0000 UTC m=+0.168991654 container init b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:17:58 compute-0 podman[400818]: 2025-10-02 13:17:58.61464361 +0000 UTC m=+0.182108445 container start b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:17:58 compute-0 podman[400818]: 2025-10-02 13:17:58.620643076 +0000 UTC m=+0.188107891 container attach b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:58 compute-0 nova_compute[256940]: 2025-10-02 13:17:58.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:17:59 compute-0 pensive_noether[400835]: {
Oct 02 13:17:59 compute-0 pensive_noether[400835]:     "1": [
Oct 02 13:17:59 compute-0 pensive_noether[400835]:         {
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "devices": [
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "/dev/loop3"
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             ],
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "lv_name": "ceph_lv0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "lv_size": "7511998464",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "name": "ceph_lv0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "tags": {
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.cluster_name": "ceph",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.crush_device_class": "",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.encrypted": "0",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.osd_id": "1",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.type": "block",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:                 "ceph.vdo": "0"
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             },
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "type": "block",
Oct 02 13:17:59 compute-0 pensive_noether[400835]:             "vg_name": "ceph_vg0"
Oct 02 13:17:59 compute-0 pensive_noether[400835]:         }
Oct 02 13:17:59 compute-0 pensive_noether[400835]:     ]
Oct 02 13:17:59 compute-0 pensive_noether[400835]: }
Oct 02 13:17:59 compute-0 systemd[1]: libpod-b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c.scope: Deactivated successfully.
Oct 02 13:17:59 compute-0 podman[400818]: 2025-10-02 13:17:59.35565276 +0000 UTC m=+0.923117555 container died b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf2e4c3de31e7050a1bf26335ca5a53d3d7e1d224b9fa1007c73f2417bbb9b1-merged.mount: Deactivated successfully.
Oct 02 13:17:59 compute-0 podman[400818]: 2025-10-02 13:17:59.411767089 +0000 UTC m=+0.979231904 container remove b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:59 compute-0 systemd[1]: libpod-conmon-b513dd2087b5830c07fcfd07572886e21e5e4883ed039d542255991ba5f4174c.scope: Deactivated successfully.
Oct 02 13:17:59 compute-0 sudo[400711]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:59 compute-0 sudo[400859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:59 compute-0 sudo[400859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:59 compute-0 sudo[400859]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:59 compute-0 sudo[400884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:59 compute-0 sudo[400884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:59 compute-0 sudo[400884]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:17:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:59 compute-0 sudo[400909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:59 compute-0 sudo[400909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:59 compute-0 sudo[400909]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:59 compute-0 sudo[400934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:17:59 compute-0 sudo[400934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:59 compute-0 ceph-mon[73668]: pgmap v3195: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.061823506 +0000 UTC m=+0.038727188 container create 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:18:00 compute-0 systemd[1]: Started libpod-conmon-153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0.scope.
Oct 02 13:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.13738993 +0000 UTC m=+0.114293642 container init 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.0446875 +0000 UTC m=+0.021591202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.144550286 +0000 UTC m=+0.121453968 container start 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:18:00 compute-0 sleepy_albattani[401015]: 167 167
Oct 02 13:18:00 compute-0 systemd[1]: libpod-153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0.scope: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.149590477 +0000 UTC m=+0.126494189 container attach 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:18:00 compute-0 conmon[401015]: conmon 153785d8e8be75aa92f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0.scope/container/memory.events
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.150156252 +0000 UTC m=+0.127059934 container died 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08abd3666721ad6d4389cb42f03c27c1b3222937b64e055bdcb4b148ac642f80-merged.mount: Deactivated successfully.
Oct 02 13:18:00 compute-0 podman[400999]: 2025-10-02 13:18:00.192687507 +0000 UTC m=+0.169591189 container remove 153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:18:00 compute-0 systemd[1]: libpod-conmon-153785d8e8be75aa92f0201a6649d3d1908a7198c7951d48c66e9ec3b316f8d0.scope: Deactivated successfully.
Oct 02 13:18:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:00.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:00 compute-0 podman[401042]: 2025-10-02 13:18:00.397498201 +0000 UTC m=+0.055581406 container create b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:18:00 compute-0 systemd[1]: Started libpod-conmon-b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526.scope.
Oct 02 13:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186062a7a828a3e89a7e90077db3b5ee4f3e89d6c1a5208a14612d8e9918b43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186062a7a828a3e89a7e90077db3b5ee4f3e89d6c1a5208a14612d8e9918b43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186062a7a828a3e89a7e90077db3b5ee4f3e89d6c1a5208a14612d8e9918b43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186062a7a828a3e89a7e90077db3b5ee4f3e89d6c1a5208a14612d8e9918b43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:00 compute-0 podman[401042]: 2025-10-02 13:18:00.473019414 +0000 UTC m=+0.131102649 container init b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:18:00 compute-0 podman[401042]: 2025-10-02 13:18:00.380968541 +0000 UTC m=+0.039051766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:18:00 compute-0 podman[401042]: 2025-10-02 13:18:00.481652298 +0000 UTC m=+0.139735503 container start b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:18:00 compute-0 podman[401042]: 2025-10-02 13:18:00.485145849 +0000 UTC m=+0.143229064 container attach b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 13:18:00 compute-0 podman[401064]: 2025-10-02 13:18:00.520351784 +0000 UTC m=+0.058108551 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:18:00 compute-0 podman[401062]: 2025-10-02 13:18:00.520339564 +0000 UTC m=+0.060274538 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 683 KiB/s wr, 1 op/s
Oct 02 13:18:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Oct 02 13:18:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Oct 02 13:18:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Oct 02 13:18:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:01 compute-0 sharp_borg[401060]: {
Oct 02 13:18:01 compute-0 sharp_borg[401060]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:18:01 compute-0 sharp_borg[401060]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:18:01 compute-0 sharp_borg[401060]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:18:01 compute-0 sharp_borg[401060]:         "osd_id": 1,
Oct 02 13:18:01 compute-0 sharp_borg[401060]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:18:01 compute-0 sharp_borg[401060]:         "type": "bluestore"
Oct 02 13:18:01 compute-0 sharp_borg[401060]:     }
Oct 02 13:18:01 compute-0 sharp_borg[401060]: }
Oct 02 13:18:01 compute-0 systemd[1]: libpod-b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526.scope: Deactivated successfully.
Oct 02 13:18:01 compute-0 conmon[401060]: conmon b8a94b40d5375d7cef80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526.scope/container/memory.events
Oct 02 13:18:01 compute-0 podman[401122]: 2025-10-02 13:18:01.404635578 +0000 UTC m=+0.028585324 container died b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-186062a7a828a3e89a7e90077db3b5ee4f3e89d6c1a5208a14612d8e9918b43d-merged.mount: Deactivated successfully.
Oct 02 13:18:01 compute-0 podman[401122]: 2025-10-02 13:18:01.540482879 +0000 UTC m=+0.164432615 container remove b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:18:01 compute-0 systemd[1]: libpod-conmon-b8a94b40d5375d7cef80700a85900a35c89b01ad83e745748fd7b245d5a4e526.scope: Deactivated successfully.
Oct 02 13:18:01 compute-0 sudo[400934]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:18:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:18:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6446d0f6-08ee-418c-8493-62c566157f8a does not exist
Oct 02 13:18:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f20e188d-05ff-4cf8-bf65-b5912c4a23af does not exist
Oct 02 13:18:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e167d895-3c80-4265-8245-a836ab245bcc does not exist
Oct 02 13:18:01 compute-0 sudo[401137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:01 compute-0 sudo[401137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:01 compute-0 sudo[401137]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:01 compute-0 sudo[401162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:18:01 compute-0 sudo[401162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:01 compute-0 sudo[401162]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:01 compute-0 ceph-mon[73668]: pgmap v3196: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 683 KiB/s wr, 1 op/s
Oct 02 13:18:01 compute-0 ceph-mon[73668]: osdmap e399: 3 total, 3 up, 3 in
Oct 02 13:18:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 14 op/s
Oct 02 13:18:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:03.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:03 compute-0 nova_compute[256940]: 2025-10-02 13:18:03.913 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:03 compute-0 nova_compute[256940]: 2025-10-02 13:18:03.913 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:03 compute-0 nova_compute[256940]: 2025-10-02 13:18:03.935 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:18:03 compute-0 nova_compute[256940]: 2025-10-02 13:18:03.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.072 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.073 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.081 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.082 2 INFO nova.compute.claims [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:18:04 compute-0 ceph-mon[73668]: pgmap v3198: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 14 op/s
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.287 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct 02 13:18:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:04 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810113878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.761 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.767 2 DEBUG nova.compute.provider_tree [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.796 2 DEBUG nova.scheduler.client.report [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.825 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.826 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.880 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.881 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.905 2 INFO nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:18:04 compute-0 nova_compute[256940]: 2025-10-02 13:18:04.935 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.087 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.089 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.090 2 INFO nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Creating image(s)
Oct 02 13:18:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1810113878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.145 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.178 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.207 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.211 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "9db343af560801cab61c541fe1a9a4551054155b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.212 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "9db343af560801cab61c541fe1a9a4551054155b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.555 2 DEBUG nova.virt.libvirt.imagebackend [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/40dbc07f-c919-4d14-85e2-405ded1c3c40/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/40dbc07f-c919-4d14-85e2-405ded1c3c40/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 13:18:05 compute-0 nova_compute[256940]: 2025-10-02 13:18:05.559 2 DEBUG nova.policy [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74f5186fabfb4fea86d32c8ef1f2e354', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:18:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:06 compute-0 ceph-mon[73668]: pgmap v3199: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct 02 13:18:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:18:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:18:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:06.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:06 compute-0 nova_compute[256940]: 2025-10-02 13:18:06.841 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:06 compute-0 nova_compute[256940]: 2025-10-02 13:18:06.919 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:06 compute-0 nova_compute[256940]: 2025-10-02 13:18:06.921 2 DEBUG nova.virt.images [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] 40dbc07f-c919-4d14-85e2-405ded1c3c40 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 13:18:06 compute-0 nova_compute[256940]: 2025-10-02 13:18:06.922 2 DEBUG nova.privsep.utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 13:18:06 compute-0 nova_compute[256940]: 2025-10-02 13:18:06.922 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.part /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:07 compute-0 nova_compute[256940]: 2025-10-02 13:18:07.678 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Successfully created port: 9c092188-b1cb-4109-975b-8e9dd4daf435 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.059 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.part /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.converted" returned: 0 in 1.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.068 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.161 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.163 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "9db343af560801cab61c541fe1a9a4551054155b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.201 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.206 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b 42de3106-827c-4d43-988c-18a6f44f3e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:08 compute-0 ceph-mon[73668]: pgmap v3200: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:08.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.567 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b 42de3106-827c-4d43-988c-18a6f44f3e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.637 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] resizing rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.752 2 DEBUG nova.objects.instance [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.785 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.786 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Ensure instance console log exists: /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.786 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.786 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.787 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 13:18:08 compute-0 nova_compute[256940]: 2025-10-02 13:18:08.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.310 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Successfully updated port: 9c092188-b1cb-4109-975b-8e9dd4daf435 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:18:09 compute-0 ceph-mon[73668]: pgmap v3201: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.334 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.335 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.335 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.460 2 DEBUG nova.compute.manager [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.461 2 DEBUG nova.compute.manager [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.461 2 DEBUG oslo_concurrency.lockutils [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:09 compute-0 nova_compute[256940]: 2025-10-02 13:18:09.510 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:18:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:09.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 56 op/s
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.909 2 DEBUG nova.network.neutron [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.935 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.936 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance network_info: |[{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.936 2 DEBUG oslo_concurrency.lockutils [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.936 2 DEBUG nova.network.neutron [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.939 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start _get_guest_xml network_info=[{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:17:59Z,direct_url=<?>,disk_format='qcow2',id=40dbc07f-c919-4d14-85e2-405ded1c3c40,min_disk=0,min_ram=0,name='tempest-scenario-img--1483751743',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:18:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '40dbc07f-c919-4d14-85e2-405ded1c3c40'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.943 2 WARNING nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.950 2 DEBUG nova.virt.libvirt.host [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.950 2 DEBUG nova.virt.libvirt.host [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.961 2 DEBUG nova.virt.libvirt.host [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.961 2 DEBUG nova.virt.libvirt.host [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.963 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.963 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:17:59Z,direct_url=<?>,disk_format='qcow2',id=40dbc07f-c919-4d14-85e2-405ded1c3c40,min_disk=0,min_ram=0,name='tempest-scenario-img--1483751743',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:18:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.964 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.964 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.964 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.964 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.965 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.965 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.965 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.966 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.966 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.966 2 DEBUG nova.virt.hardware [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:18:10 compute-0 nova_compute[256940]: 2025-10-02 13:18:10.969 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:18:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627526565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.436 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.464 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.468 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:11.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:11 compute-0 ceph-mon[73668]: pgmap v3202: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 56 op/s
Oct 02 13:18:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3627526565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:18:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859342343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.977 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.980 2 DEBUG nova.virt.libvirt.vif [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:18:04Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.980 2 DEBUG nova.network.os_vif_util [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.982 2 DEBUG nova.network.os_vif_util [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:11 compute-0 nova_compute[256940]: 2025-10-02 13:18:11.984 2 DEBUG nova.objects.instance [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.227 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <uuid>42de3106-827c-4d43-988c-18a6f44f3e01</uuid>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <name>instance-000000d5</name>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:name>tempest-TestMinimumBasicScenario-server-29107059</nova:name>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:18:10</nova:creationTime>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:user uuid="74f5186fabfb4fea86d32c8ef1f2e354">tempest-TestMinimumBasicScenario-1527105691-project-member</nova:user>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:project uuid="ced4d30c525c44cca617c3b9838d21b7">tempest-TestMinimumBasicScenario-1527105691</nova:project>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="40dbc07f-c919-4d14-85e2-405ded1c3c40"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <nova:port uuid="9c092188-b1cb-4109-975b-8e9dd4daf435">
Oct 02 13:18:12 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <system>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="serial">42de3106-827c-4d43-988c-18a6f44f3e01</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="uuid">42de3106-827c-4d43-988c-18a6f44f3e01</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </system>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <os>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </os>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <features>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </features>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42de3106-827c-4d43-988c-18a6f44f3e01_disk">
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </source>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42de3106-827c-4d43-988c-18a6f44f3e01_disk.config">
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </source>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:18:12 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:8e:1e:46"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <target dev="tap9c092188-b1"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/console.log" append="off"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <video>
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </video>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:18:12 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:18:12 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:18:12 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:18:12 compute-0 nova_compute[256940]: </domain>
Oct 02 13:18:12 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.229 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Preparing to wait for external event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.230 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.230 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.230 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.231 2 DEBUG nova.virt.libvirt.vif [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:18:04Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.232 2 DEBUG nova.network.os_vif_util [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.232 2 DEBUG nova.network.os_vif_util [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.233 2 DEBUG os_vif [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c092188-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c092188-b1, col_values=(('external_ids', {'iface-id': '9c092188-b1cb-4109-975b-8e9dd4daf435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:1e:46', 'vm-uuid': '42de3106-827c-4d43-988c-18a6f44f3e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:12 compute-0 NetworkManager[44981]: <info>  [1759411092.2421] manager: (tap9c092188-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.253 2 INFO os_vif [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1')
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.300 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.301 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.302 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:8e:1e:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.302 2 INFO nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Using config drive
Oct 02 13:18:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.333 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 49 op/s
Oct 02 13:18:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1859342343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.837 2 INFO nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Creating config drive at /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.843 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgb3pwjhv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.975 2 DEBUG nova.network.neutron [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.976 2 DEBUG nova.network.neutron [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:12 compute-0 nova_compute[256940]: 2025-10-02 13:18:12.984 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgb3pwjhv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.015 2 DEBUG nova.storage.rbd_utils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 42de3106-827c-4d43-988c-18a6f44f3e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.018 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config 42de3106-827c-4d43-988c-18a6f44f3e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.055 2 DEBUG oslo_concurrency.lockutils [req-63666cd0-6432-4d26-a054-07a3893da436 req-6ba76c15-d3ce-4f3d-9e9c-320c5811e577 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.219 2 DEBUG oslo_concurrency.processutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config 42de3106-827c-4d43-988c-18a6f44f3e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.220 2 INFO nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Deleting local config drive /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/disk.config because it was imported into RBD.
Oct 02 13:18:13 compute-0 kernel: tap9c092188-b1: entered promiscuous mode
Oct 02 13:18:13 compute-0 ovn_controller[148123]: 2025-10-02T13:18:13Z|01043|binding|INFO|Claiming lport 9c092188-b1cb-4109-975b-8e9dd4daf435 for this chassis.
Oct 02 13:18:13 compute-0 ovn_controller[148123]: 2025-10-02T13:18:13Z|01044|binding|INFO|9c092188-b1cb-4109-975b-8e9dd4daf435: Claiming fa:16:3e:8e:1e:46 10.100.0.9
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.2847] manager: (tap9c092188-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 systemd-udevd[401526]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.321 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:1e:46 10.100.0.9'], port_security=['fa:16:3e:8e:1e:46 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '42de3106-827c-4d43-988c-18a6f44f3e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9c092188-b1cb-4109-975b-8e9dd4daf435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.322 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9c092188-b1cb-4109-975b-8e9dd4daf435 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 bound to our chassis
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.323 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.3321] device (tap9c092188-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.3332] device (tap9c092188-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.337 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[20d4c4d9-dfcc-4a7f-b69d-09eedbf3b198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.338 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.341 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.341 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a573f23d-153b-47bc-bad0-5aeddc91f69c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.342 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[6d01062a-780c-405c-a7f1-326437a8a228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 systemd-machined[210927]: New machine qemu-107-instance-000000d5.
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.356 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd9abe0-aab7-4eaf-a50e-618584223b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-000000d5.
Oct 02 13:18:13 compute-0 ovn_controller[148123]: 2025-10-02T13:18:13Z|01045|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 ovn-installed in OVS
Oct 02 13:18:13 compute-0 ovn_controller[148123]: 2025-10-02T13:18:13Z|01046|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 up in Southbound
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.370 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5292f104-6a77-4db9-8d60-e6db7ee7c64a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.400 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[b725092d-9baf-48d7-ba34-faf54730e176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 systemd-udevd[401532]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.406 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2f7853-8b77-400a-b721-9d646a7f5c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.4077] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/457)
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.436 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[d37e930e-1b0b-4783-b0e1-d5858218b6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.441 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cc11f12d-3d15-4ce4-8200-0a497e0b9ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.4656] device (tap540159ad-f0): carrier: link connected
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.471 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[bff9dc55-2ab1-4bde-9992-cbed3daa9bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.489 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[31b4f61e-a5c0-4cea-a15e-977c0281c4da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 898894, 'reachable_time': 20003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401583, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.505 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[af863297-8092-431d-b0f6-e4be84964d0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 898894, 'tstamp': 898894}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401592, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 podman[401554]: 2025-10-02 13:18:13.509700699 +0000 UTC m=+0.064293512 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.533 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[724c8617-d035-4374-b08f-7a5f8444bc45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 898894, 'reachable_time': 20003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401600, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 podman[401560]: 2025-10-02 13:18:13.540670744 +0000 UTC m=+0.093442119 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.569 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba797ba-bc64-40e5-9a70-eda7764889b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.648 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2015ee55-f598-47d0-b997-0fe5bc3ac9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.649 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.649 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.649 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:13 compute-0 NetworkManager[44981]: <info>  [1759411093.6516] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Oct 02 13:18:13 compute-0 kernel: tap540159ad-f0: entered promiscuous mode
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.655 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 ovn_controller[148123]: 2025-10-02T13:18:13Z|01047|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:18:13 compute-0 ceph-mon[73668]: pgmap v3203: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 49 op/s
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.668 2 DEBUG nova.compute.manager [req-e324eec3-470d-439f-b07b-e1f1ae172c6e req-5370d079-ab38-44a7-b63e-76dfc8423e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.668 2 DEBUG oslo_concurrency.lockutils [req-e324eec3-470d-439f-b07b-e1f1ae172c6e req-5370d079-ab38-44a7-b63e-76dfc8423e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.669 2 DEBUG oslo_concurrency.lockutils [req-e324eec3-470d-439f-b07b-e1f1ae172c6e req-5370d079-ab38-44a7-b63e-76dfc8423e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.669 2 DEBUG oslo_concurrency.lockutils [req-e324eec3-470d-439f-b07b-e1f1ae172c6e req-5370d079-ab38-44a7-b63e-76dfc8423e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.669 2 DEBUG nova.compute.manager [req-e324eec3-470d-439f-b07b-e1f1ae172c6e req-5370d079-ab38-44a7-b63e-76dfc8423e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Processing event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:18:13 compute-0 nova_compute[256940]: 2025-10-02 13:18:13.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.676 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.677 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a185272f-397c-40af-a305-120ed5161286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.678 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:18:13 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:13.679 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:14 compute-0 podman[401680]: 2025-10-02 13:18:14.09720259 +0000 UTC m=+0.063055880 container create 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:18:14 compute-0 systemd[1]: Started libpod-conmon-98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382.scope.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.147 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.149 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411094.1472108, 42de3106-827c-4d43-988c-18a6f44f3e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.149 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Started (Lifecycle Event)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.151 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:18:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.155 2 INFO nova.virt.libvirt.driver [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance spawned successfully.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.155 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:18:14 compute-0 podman[401680]: 2025-10-02 13:18:14.060690641 +0000 UTC m=+0.026543951 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:18:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23c993bd5b9371d84eb1d4817c69bd9d137a9a4b101646ca0f2a05eaad4d82e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.173 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:14 compute-0 podman[401680]: 2025-10-02 13:18:14.176340827 +0000 UTC m=+0.142194137 container init 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.178 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:14 compute-0 podman[401680]: 2025-10-02 13:18:14.181392349 +0000 UTC m=+0.147245639 container start 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.183 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.183 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.184 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.184 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.185 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.186 2 DEBUG nova.virt.libvirt.driver [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:18:14 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [NOTICE]   (401699) : New worker (401701) forked
Oct 02 13:18:14 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [NOTICE]   (401699) : Loading success.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.218 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.219 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411094.148428, 42de3106-827c-4d43-988c-18a6f44f3e01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.220 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Paused (Lifecycle Event)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.245 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.249 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411094.151534, 42de3106-827c-4d43-988c-18a6f44f3e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.250 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Resumed (Lifecycle Event)
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.273 2 INFO nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Took 9.19 seconds to spawn the instance on the hypervisor.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.273 2 DEBUG nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.275 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.281 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.314 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:18:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:14.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.340 2 INFO nova.compute.manager [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Took 10.32 seconds to build instance.
Oct 02 13:18:14 compute-0 nova_compute[256940]: 2025-10-02 13:18:14.356 2 DEBUG oslo_concurrency.lockutils [None req-2f1b3cef-ae9f-44f0-b69f-fcb1c9f5c992 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 37 op/s
Oct 02 13:18:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:15 compute-0 ceph-mon[73668]: pgmap v3204: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 37 op/s
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.770 2 DEBUG nova.compute.manager [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.771 2 DEBUG oslo_concurrency.lockutils [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.771 2 DEBUG oslo_concurrency.lockutils [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.771 2 DEBUG oslo_concurrency.lockutils [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.772 2 DEBUG nova.compute.manager [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:15 compute-0 nova_compute[256940]: 2025-10-02 13:18:15.772 2 WARNING nova.compute.manager [req-fd8fc9f4-be94-4c3c-9c7c-7aff9d0a4aa3 req-8771ff46-67e8-480d-a3bc-554a2358d7de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state active and task_state None.
Oct 02 13:18:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:16.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 61 op/s
Oct 02 13:18:16 compute-0 sudo[401712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:16 compute-0 sudo[401712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:16 compute-0 sudo[401712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:16 compute-0 sudo[401737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:16 compute-0 sudo[401737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:16 compute-0 sudo[401737]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:17 compute-0 nova_compute[256940]: 2025-10-02 13:18:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:17.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:17 compute-0 ceph-mon[73668]: pgmap v3205: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 61 op/s
Oct 02 13:18:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 13:18:19 compute-0 nova_compute[256940]: 2025-10-02 13:18:19.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:20 compute-0 ceph-mon[73668]: pgmap v3206: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 13:18:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:20.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 13:18:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:21 compute-0 nova_compute[256940]: 2025-10-02 13:18:21.725 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:21 compute-0 nova_compute[256940]: 2025-10-02 13:18:21.725 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:21 compute-0 nova_compute[256940]: 2025-10-02 13:18:21.741 2 DEBUG nova.objects.instance [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:21 compute-0 nova_compute[256940]: 2025-10-02 13:18:21.806 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.101 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.101 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.102 2 INFO nova.compute.manager [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Attaching volume b43f69f8-7c8d-4ef5-af90-c5aa5656977e to /dev/vdb
Oct 02 13:18:22 compute-0 ceph-mon[73668]: pgmap v3207: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.265 2 DEBUG os_brick.utils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.266 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.282 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.283 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[2651556b-30f1-49af-8394-69483ecfac32]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.285 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.298 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.299 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[e02ea5a5-3b22-4266-9416-59dd0b298919]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.301 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.314 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.315 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9bb3a1-b12b-46da-8f39-aa3a12f2c044]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.317 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e229d7-f8df-49f6-817c-69f586f87c16]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.318 2 DEBUG oslo_concurrency.processutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.368 2 DEBUG oslo_concurrency.processutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "nvme version" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.370 2 DEBUG os_brick.initiator.connectors.lightos [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.371 2 DEBUG os_brick.initiator.connectors.lightos [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.371 2 DEBUG os_brick.initiator.connectors.lightos [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.371 2 DEBUG os_brick.utils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] <== get_connector_properties: return (105ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:18:22 compute-0 nova_compute[256940]: 2025-10-02 13:18:22.371 2 DEBUG nova.virt.block_device [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating existing volume attachment record: c846ac90-d7e3-4130-adfe-f07a717fc10a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:18:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 75 op/s
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.068 2 DEBUG nova.objects.instance [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.091 2 DEBUG nova.virt.libvirt.driver [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Attempting to attach volume b43f69f8-7c8d-4ef5-af90-c5aa5656977e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.093 2 DEBUG nova.virt.libvirt.guest [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:18:23 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b43f69f8-7c8d-4ef5-af90-c5aa5656977e">
Oct 02 13:18:23 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   </source>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   <auth username="openstack">
Oct 02 13:18:23 compute-0 nova_compute[256940]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   </auth>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:18:23 compute-0 nova_compute[256940]:   <serial>b43f69f8-7c8d-4ef5-af90-c5aa5656977e</serial>
Oct 02 13:18:23 compute-0 nova_compute[256940]: </disk>
Oct 02 13:18:23 compute-0 nova_compute[256940]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:18:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/67111490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.308 2 DEBUG nova.virt.libvirt.driver [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.309 2 DEBUG nova.virt.libvirt.driver [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.309 2 DEBUG nova.virt.libvirt.driver [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.309 2 DEBUG nova.virt.libvirt.driver [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:8e:1e:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:18:23 compute-0 nova_compute[256940]: 2025-10-02 13:18:23.485 2 DEBUG oslo_concurrency.lockutils [None req-40628a9e-40b0-4afc-85bf-cb15e6b25088 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:23.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:24 compute-0 nova_compute[256940]: 2025-10-02 13:18:24.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:24 compute-0 ceph-mon[73668]: pgmap v3208: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 75 op/s
Oct 02 13:18:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:24.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:18:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Oct 02 13:18:25 compute-0 ceph-mon[73668]: pgmap v3209: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:18:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Oct 02 13:18:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Oct 02 13:18:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:25.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:26.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:26 compute-0 ceph-mon[73668]: osdmap e400: 3 total, 3 up, 3 in
Oct 02 13:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:26.513 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:26.515 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:26.515 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:27.320 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:27 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:27.321 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:18:27 compute-0 nova_compute[256940]: 2025-10-02 13:18:27.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:27 compute-0 nova_compute[256940]: 2025-10-02 13:18:27.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:27 compute-0 ceph-mon[73668]: pgmap v3211: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:28 compute-0 ovn_controller[148123]: 2025-10-02T13:18:28Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:1e:46 10.100.0.9
Oct 02 13:18:28 compute-0 ovn_controller[148123]: 2025-10-02T13:18:28Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:1e:46 10.100.0.9
Oct 02 13:18:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:18:28
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Oct 02 13:18:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:18:29 compute-0 nova_compute[256940]: 2025-10-02 13:18:29.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:29 compute-0 NetworkManager[44981]: <info>  [1759411109.3262] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 02 13:18:29 compute-0 nova_compute[256940]: 2025-10-02 13:18:29.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:29 compute-0 NetworkManager[44981]: <info>  [1759411109.3277] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 02 13:18:29 compute-0 nova_compute[256940]: 2025-10-02 13:18:29.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:29 compute-0 ovn_controller[148123]: 2025-10-02T13:18:29Z|01048|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:18:29 compute-0 nova_compute[256940]: 2025-10-02 13:18:29.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:29.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:29 compute-0 ceph-mon[73668]: pgmap v3212: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:18:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:18:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:30.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 464 KiB/s rd, 2.6 MiB/s wr, 91 op/s
Oct 02 13:18:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1475968903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:31 compute-0 podman[401798]: 2025-10-02 13:18:31.323628229 +0000 UTC m=+0.060262308 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:18:31 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:31.323 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:31 compute-0 podman[401799]: 2025-10-02 13:18:31.339916532 +0000 UTC m=+0.071921120 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:31 compute-0 nova_compute[256940]: 2025-10-02 13:18:31.631 2 DEBUG nova.compute.manager [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:31 compute-0 nova_compute[256940]: 2025-10-02 13:18:31.632 2 DEBUG nova.compute.manager [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:31 compute-0 nova_compute[256940]: 2025-10-02 13:18:31.633 2 DEBUG oslo_concurrency.lockutils [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:31 compute-0 nova_compute[256940]: 2025-10-02 13:18:31.633 2 DEBUG oslo_concurrency.lockutils [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:31 compute-0 nova_compute[256940]: 2025-10-02 13:18:31.633 2 DEBUG nova.network.neutron [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:31.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:31 compute-0 ceph-mon[73668]: pgmap v3213: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 464 KiB/s rd, 2.6 MiB/s wr, 91 op/s
Oct 02 13:18:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2399959325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:32 compute-0 nova_compute[256940]: 2025-10-02 13:18:32.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/838882996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.036 2 DEBUG nova.compute.manager [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.036 2 DEBUG nova.compute.manager [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.037 2 DEBUG oslo_concurrency.lockutils [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.237 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:33.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552845782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.661 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.712 2 DEBUG nova.network.neutron [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.713 2 DEBUG nova.network.neutron [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.757 2 DEBUG oslo_concurrency.lockutils [req-ce8e7815-8db0-4c3d-a3c9-1f2d1f8a8576 req-30e562fe-4be8-4636-889d-5b9c1720a914 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.759 2 DEBUG oslo_concurrency.lockutils [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.759 2 DEBUG nova.network.neutron [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.797 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.797 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.798 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.957 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.960 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3920MB free_disk=20.942890167236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.961 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:33 compute-0 nova_compute[256940]: 2025-10-02 13:18:33.961 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:33 compute-0 ceph-mon[73668]: pgmap v3214: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/604091308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/552845782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.062 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 42de3106-827c-4d43-988c-18a6f44f3e01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.063 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.063 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.092 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:18:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:18:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379871822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.528 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.535 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.571 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:18:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.691 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:18:34 compute-0 nova_compute[256940]: 2025-10-02 13:18:34.692 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2379871822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.162 2 DEBUG nova.compute.manager [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.162 2 DEBUG nova.compute.manager [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.163 2 DEBUG oslo_concurrency.lockutils [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.183 2 DEBUG nova.network.neutron [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.184 2 DEBUG nova.network.neutron [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.209 2 DEBUG oslo_concurrency.lockutils [req-6a80e0b3-156a-4b0a-a527-1bd0db532eab req-725744a4-a61f-4ff7-a286-24465d57e0ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.209 2 DEBUG oslo_concurrency.lockutils [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:35 compute-0 nova_compute[256940]: 2025-10-02 13:18:35.210 2 DEBUG nova.network.neutron [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:36 compute-0 ceph-mon[73668]: pgmap v3215: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.3 MiB/s wr, 81 op/s
Oct 02 13:18:36 compute-0 nova_compute[256940]: 2025-10-02 13:18:36.755 2 DEBUG nova.network.neutron [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:36 compute-0 nova_compute[256940]: 2025-10-02 13:18:36.756 2 DEBUG nova.network.neutron [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:36 compute-0 nova_compute[256940]: 2025-10-02 13:18:36.773 2 DEBUG oslo_concurrency.lockutils [req-273b294f-1ca3-4c74-937b-010dec955112 req-2b643962-95af-4102-930f-c0b4e8762b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:36 compute-0 sudo[401887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:36 compute-0 sudo[401887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:36 compute-0 sudo[401887]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:37 compute-0 sudo[401912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:37 compute-0 sudo[401912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:37 compute-0 sudo[401912]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:37 compute-0 nova_compute[256940]: 2025-10-02 13:18:37.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:37.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:38 compute-0 ceph-mon[73668]: pgmap v3216: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.3 MiB/s wr, 81 op/s
Oct 02 13:18:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:38 compute-0 nova_compute[256940]: 2025-10-02 13:18:38.691 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:38 compute-0 nova_compute[256940]: 2025-10-02 13:18:38.691 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:38 compute-0 nova_compute[256940]: 2025-10-02 13:18:38.692 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:18:39 compute-0 nova_compute[256940]: 2025-10-02 13:18:39.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:39 compute-0 nova_compute[256940]: 2025-10-02 13:18:39.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:39 compute-0 ceph-mon[73668]: pgmap v3217: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:39.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:40.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002170684149916248 of space, bias 1.0, pg target 0.6512052449748743 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:18:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:18:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:41 compute-0 nova_compute[256940]: 2025-10-02 13:18:41.590 2 DEBUG nova.compute.manager [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:41 compute-0 nova_compute[256940]: 2025-10-02 13:18:41.591 2 DEBUG nova.compute.manager [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:18:41 compute-0 nova_compute[256940]: 2025-10-02 13:18:41.591 2 DEBUG oslo_concurrency.lockutils [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:41 compute-0 nova_compute[256940]: 2025-10-02 13:18:41.591 2 DEBUG oslo_concurrency.lockutils [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:41 compute-0 nova_compute[256940]: 2025-10-02 13:18:41.591 2 DEBUG nova.network.neutron [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:18:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:41 compute-0 ceph-mon[73668]: pgmap v3218: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:42 compute-0 nova_compute[256940]: 2025-10-02 13:18:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:42 compute-0 nova_compute[256940]: 2025-10-02 13:18:42.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 02 13:18:42 compute-0 nova_compute[256940]: 2025-10-02 13:18:42.958 2 DEBUG nova.network.neutron [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:18:42 compute-0 nova_compute[256940]: 2025-10-02 13:18:42.959 2 DEBUG nova.network.neutron [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:42 compute-0 nova_compute[256940]: 2025-10-02 13:18:42.973 2 DEBUG oslo_concurrency.lockutils [req-74832085-5b5b-44c7-85c7-7c1d56d771d4 req-b204803d-e7dc-4e04-97cd-a2f034326cf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:43.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Oct 02 13:18:43 compute-0 ceph-mon[73668]: pgmap v3219: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 02 13:18:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Oct 02 13:18:43 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.324 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.325 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.326 2 INFO nova.compute.manager [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Rebooting instance
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.349 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.349 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:44 compute-0 nova_compute[256940]: 2025-10-02 13:18:44.349 2 DEBUG nova.network.neutron [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:18:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:44 compute-0 podman[401940]: 2025-10-02 13:18:44.414362209 +0000 UTC m=+0.073436500 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:18:44 compute-0 podman[401941]: 2025-10-02 13:18:44.487178142 +0000 UTC m=+0.139470836 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 13:18:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:18:44 compute-0 ceph-mon[73668]: osdmap e401: 3 total, 3 up, 3 in
Oct 02 13:18:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:45.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:45 compute-0 ceph-mon[73668]: pgmap v3221: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:18:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:47 compute-0 nova_compute[256940]: 2025-10-02 13:18:47.208 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:47 compute-0 nova_compute[256940]: 2025-10-02 13:18:47.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:47 compute-0 ceph-mon[73668]: pgmap v3222: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.244 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:18:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:48.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.952 2 DEBUG nova.network.neutron [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.993 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.994 2 DEBUG nova.compute.manager [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.994 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.995 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:18:48 compute-0 nova_compute[256940]: 2025-10-02 13:18:48.995 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-0 kernel: tap9c092188-b1 (unregistering): left promiscuous mode
Oct 02 13:18:49 compute-0 NetworkManager[44981]: <info>  [1759411129.3973] device (tap9c092188-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:18:49 compute-0 ovn_controller[148123]: 2025-10-02T13:18:49Z|01049|binding|INFO|Releasing lport 9c092188-b1cb-4109-975b-8e9dd4daf435 from this chassis (sb_readonly=0)
Oct 02 13:18:49 compute-0 ovn_controller[148123]: 2025-10-02T13:18:49Z|01050|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 down in Southbound
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-0 ovn_controller[148123]: 2025-10-02T13:18:49Z|01051|binding|INFO|Removing iface tap9c092188-b1 ovn-installed in OVS
Oct 02 13:18:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:49.414 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:1e:46 10.100.0.9'], port_security=['fa:16:3e:8e:1e:46 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '42de3106-827c-4d43-988c-18a6f44f3e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e 65d073eb-417a-4811-9114-11a73d47b431', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9c092188-b1cb-4109-975b-8e9dd4daf435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:49.415 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9c092188-b1cb-4109-975b-8e9dd4daf435 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis
Oct 02 13:18:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:49.417 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:18:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:49.419 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[922ee003-6d13-4ede-aee6-5cf727165c16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:49.421 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Oct 02 13:18:49 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000d5.scope: Consumed 14.615s CPU time.
Oct 02 13:18:49 compute-0 systemd-machined[210927]: Machine qemu-107-instance-000000d5 terminated.
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.554 2 INFO nova.virt.libvirt.driver [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance destroyed successfully.
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.556 2 DEBUG nova.objects.instance [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'resources' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.581 2 DEBUG nova.virt.libvirt.vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:49Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.582 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.583 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.583 2 DEBUG os_vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c092188-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.593 2 INFO os_vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1')
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.602 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start _get_guest_xml network_info=[{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=40dbc07f-c919-4d14-85e2-405ded1c3c40,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '40dbc07f-c919-4d14-85e2-405ded1c3c40'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b43f69f8-7c8d-4ef5-af90-c5aa5656977e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b43f69f8-7c8d-4ef5-af90-c5aa5656977e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '42de3106-827c-4d43-988c-18a6f44f3e01', 'attached_at': '', 'detached_at': '', 'volume_id': 'b43f69f8-7c8d-4ef5-af90-c5aa5656977e', 'serial': 'b43f69f8-7c8d-4ef5-af90-c5aa5656977e'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': 'c846ac90-d7e3-4130-adfe-f07a717fc10a', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.606 2 WARNING nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:49 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [NOTICE]   (401699) : haproxy version is 2.8.14-c23fe91
Oct 02 13:18:49 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [NOTICE]   (401699) : path to executable is /usr/sbin/haproxy
Oct 02 13:18:49 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [WARNING]  (401699) : Exiting Master process...
Oct 02 13:18:49 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [ALERT]    (401699) : Current worker (401701) exited with code 143 (Terminated)
Oct 02 13:18:49 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[401695]: [WARNING]  (401699) : All workers exited. Exiting... (0)
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.625 2 DEBUG nova.virt.libvirt.host [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.626 2 DEBUG nova.virt.libvirt.host [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:18:49 compute-0 systemd[1]: libpod-98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382.scope: Deactivated successfully.
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.631 2 DEBUG nova.virt.libvirt.host [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.631 2 DEBUG nova.virt.libvirt.host [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.633 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.633 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=40dbc07f-c919-4d14-85e2-405ded1c3c40,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.633 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.634 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:18:49 compute-0 podman[402013]: 2025-10-02 13:18:49.634166676 +0000 UTC m=+0.107305521 container died 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.634 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.634 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.634 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.634 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.635 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.635 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.635 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.635 2 DEBUG nova.virt.hardware [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.636 2 DEBUG nova.objects.instance [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.653 2 DEBUG oslo_concurrency.processutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.726 2 DEBUG nova.compute.manager [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.727 2 DEBUG oslo_concurrency.lockutils [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.727 2 DEBUG oslo_concurrency.lockutils [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.727 2 DEBUG oslo_concurrency.lockutils [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.728 2 DEBUG nova.compute.manager [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:49 compute-0 nova_compute[256940]: 2025-10-02 13:18:49.728 2 WARNING nova.compute.manager [req-24eccff5-653a-4645-8d77-fc5473ae7587 req-4f747a63-a4f3-43b0-b914-f6b53d2a4123 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 13:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382-userdata-shm.mount: Deactivated successfully.
Oct 02 13:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23c993bd5b9371d84eb1d4817c69bd9d137a9a4b101646ca0f2a05eaad4d82e-merged.mount: Deactivated successfully.
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.124 2 DEBUG oslo_concurrency.processutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.204 2 DEBUG oslo_concurrency.processutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:50 compute-0 ceph-mon[73668]: pgmap v3223: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:50.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:50 compute-0 podman[402013]: 2025-10-02 13:18:50.446076369 +0000 UTC m=+0.919215184 container cleanup 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:18:50 compute-0 systemd[1]: libpod-conmon-98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382.scope: Deactivated successfully.
Oct 02 13:18:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 23 KiB/s wr, 17 op/s
Oct 02 13:18:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:18:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217378257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.657 2 DEBUG oslo_concurrency.processutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.683 2 DEBUG nova.virt.libvirt.vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:49Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.683 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.684 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.685 2 DEBUG nova.objects.instance [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.702 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <uuid>42de3106-827c-4d43-988c-18a6f44f3e01</uuid>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <name>instance-000000d5</name>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:name>tempest-TestMinimumBasicScenario-server-29107059</nova:name>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:18:49</nova:creationTime>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:user uuid="74f5186fabfb4fea86d32c8ef1f2e354">tempest-TestMinimumBasicScenario-1527105691-project-member</nova:user>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:project uuid="ced4d30c525c44cca617c3b9838d21b7">tempest-TestMinimumBasicScenario-1527105691</nova:project>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="40dbc07f-c919-4d14-85e2-405ded1c3c40"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <nova:port uuid="9c092188-b1cb-4109-975b-8e9dd4daf435">
Oct 02 13:18:50 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <system>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="serial">42de3106-827c-4d43-988c-18a6f44f3e01</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="uuid">42de3106-827c-4d43-988c-18a6f44f3e01</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </system>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <os>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </os>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <features>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </features>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42de3106-827c-4d43-988c-18a6f44f3e01_disk">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </source>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/42de3106-827c-4d43-988c-18a6f44f3e01_disk.config">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </source>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-b43f69f8-7c8d-4ef5-af90-c5aa5656977e">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </source>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:18:50 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <target dev="vdb" bus="virtio"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <serial>b43f69f8-7c8d-4ef5-af90-c5aa5656977e</serial>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:8e:1e:46"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <target dev="tap9c092188-b1"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01/console.log" append="off"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <video>
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </video>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <input type="keyboard" bus="usb"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:18:50 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:18:50 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:18:50 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:18:50 compute-0 nova_compute[256940]: </domain>
Oct 02 13:18:50 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.705 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.706 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.706 2 DEBUG nova.virt.libvirt.driver [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.707 2 DEBUG nova.virt.libvirt.vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:49Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.708 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.709 2 DEBUG nova.network.os_vif_util [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.709 2 DEBUG os_vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c092188-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c092188-b1, col_values=(('external_ids', {'iface-id': '9c092188-b1cb-4109-975b-8e9dd4daf435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:1e:46', 'vm-uuid': '42de3106-827c-4d43-988c-18a6f44f3e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 NetworkManager[44981]: <info>  [1759411130.7183] manager: (tap9c092188-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.724 2 INFO os_vif [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1')
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.756 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.813 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.814 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:18:50 compute-0 kernel: tap9c092188-b1: entered promiscuous mode
Oct 02 13:18:50 compute-0 NetworkManager[44981]: <info>  [1759411130.8624] manager: (tap9c092188-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 02 13:18:50 compute-0 systemd-udevd[401991]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 ovn_controller[148123]: 2025-10-02T13:18:50Z|01052|binding|INFO|Claiming lport 9c092188-b1cb-4109-975b-8e9dd4daf435 for this chassis.
Oct 02 13:18:50 compute-0 ovn_controller[148123]: 2025-10-02T13:18:50Z|01053|binding|INFO|9c092188-b1cb-4109-975b-8e9dd4daf435: Claiming fa:16:3e:8e:1e:46 10.100.0.9
Oct 02 13:18:50 compute-0 NetworkManager[44981]: <info>  [1759411130.8782] device (tap9c092188-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:18:50 compute-0 NetworkManager[44981]: <info>  [1759411130.8800] device (tap9c092188-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:18:50 compute-0 ovn_controller[148123]: 2025-10-02T13:18:50Z|01054|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 ovn-installed in OVS
Oct 02 13:18:50 compute-0 nova_compute[256940]: 2025-10-02 13:18:50.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:50 compute-0 ovn_controller[148123]: 2025-10-02T13:18:50Z|01055|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 up in Southbound
Oct 02 13:18:50 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:50.895 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:1e:46 10.100.0.9'], port_security=['fa:16:3e:8e:1e:46 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '42de3106-827c-4d43-988c-18a6f44f3e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e 65d073eb-417a-4811-9114-11a73d47b431', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9c092188-b1cb-4109-975b-8e9dd4daf435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:50 compute-0 systemd-machined[210927]: New machine qemu-108-instance-000000d5.
Oct 02 13:18:50 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-000000d5.
Oct 02 13:18:50 compute-0 podman[402116]: 2025-10-02 13:18:50.988972691 +0000 UTC m=+0.513640392 container remove 98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.000 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[edbdb688-0952-45c9-88bc-ba74a1ad6cf8]: (4, ('Thu Oct  2 01:18:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382)\n98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382\nThu Oct  2 01:18:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382)\n98ef1da735456584bd33ea525649ddeaed7d1c0383712838765cf64e5700c382\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.003 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9da47f4e-a158-4ac1-8c28-ae08ccbf0f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.005 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 kernel: tap540159ad-f0: left promiscuous mode
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.028 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9e29364a-5e3f-4c14-b1d8-8eec7956a50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.059 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[edb1c1bc-eba2-482b-b85a-f0efa783e785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.062 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc1e6cf-632d-416d-ac5e-909cab2aeebd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.085 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[30409a08-4621-46e9-bb92-406928201d91]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 898887, 'reachable_time': 24992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402155, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.089 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.089 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[75fb5212-cb65-42a2-8a51-a2f60987b241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.092 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9c092188-b1cb-4109-975b-8e9dd4daf435 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.094 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.108 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[34f9d22b-d8bd-4e35-8170-95fc472c4da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.109 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.111 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.111 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d2de8906-ca67-4194-ad26-242cc81b84c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.112 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[0a163306-33bd-49b4-a8eb-fb14fd33b6f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.124 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[167f7c4c-47cc-4bd8-84f8-be916d6310f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.152 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4591592a-81fa-4f07-b7c0-ec79407127d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.190 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[4af61f2d-fb6b-4388-a8bc-45582e5909eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.198 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9640cc7a-f7e7-4d68-969b-b40a9ff09415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 NetworkManager[44981]: <info>  [1759411131.2008] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/463)
Oct 02 13:18:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.243 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[04d5552d-9840-417a-ba4d-75d12b36f797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.247 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[53c52a20-ff28-4531-9453-5544860ba35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 NetworkManager[44981]: <info>  [1759411131.2693] device (tap540159ad-f0): carrier: link connected
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.275 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2dd5f1-4b4a-4b38-a881-52827238f985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.293 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffa2fc-83d4-4abe-8e9a-7cca88f15380]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902675, 'reachable_time': 33372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402183, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.311 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1d0d28-8724-4670-a213-c3d5d90f5f20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 902675, 'tstamp': 902675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402184, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.329 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[be354931-af49-4231-8f7c-f9e374598806]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902675, 'reachable_time': 33372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402185, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.360 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d433aab-2d46-485e-b152-f334cbe741bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.426 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[52f634e4-c944-414e-8670-6c378d98c9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.428 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.428 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.429 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 kernel: tap540159ad-f0: entered promiscuous mode
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 NetworkManager[44981]: <info>  [1759411131.4324] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.436 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 ovn_controller[148123]: 2025-10-02T13:18:51Z|01056|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.440 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.496 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f30b0ef2-44b0-40a1-8444-1c76a1a2c178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.497 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:18:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:18:51.498 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:18:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Oct 02 13:18:51 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Oct 02 13:18:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3546321456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:51 compute-0 ceph-mon[73668]: pgmap v3224: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 23 KiB/s wr, 17 op/s
Oct 02 13:18:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/217378257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.902 2 DEBUG nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.903 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.904 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.904 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.904 2 DEBUG nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.904 2 WARNING nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.905 2 DEBUG nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.905 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.905 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.906 2 DEBUG oslo_concurrency.lockutils [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.906 2 DEBUG nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:51 compute-0 nova_compute[256940]: 2025-10-02 13:18:51.906 2 WARNING nova.compute.manager [req-ce0ddc12-d6fc-4763-ace7-ecc95d27da6d req-5fde7e04-8754-4a59-87e4-ed1335ced58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 13:18:51 compute-0 podman[402270]: 2025-10-02 13:18:51.846610142 +0000 UTC m=+0.023537093 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:18:52 compute-0 podman[402270]: 2025-10-02 13:18:52.270647314 +0000 UTC m=+0.447574255 container create fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.378 2 DEBUG nova.virt.libvirt.host [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Removed pending event for 42de3106-827c-4d43-988c-18a6f44f3e01 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.379 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411132.3783333, 42de3106-827c-4d43-988c-18a6f44f3e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.380 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Resumed (Lifecycle Event)
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.381 2 DEBUG nova.compute.manager [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.384 2 INFO nova.virt.libvirt.driver [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance rebooted successfully.
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.385 2 DEBUG nova.compute.manager [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:52.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.409 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.412 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.436 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.437 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411132.379436, 42de3106-827c-4d43-988c-18a6f44f3e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.437 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Started (Lifecycle Event)
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.441 2 DEBUG oslo_concurrency.lockutils [None req-d3f78309-3610-4874-9acd-d773f852980f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.458 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:18:52 compute-0 nova_compute[256940]: 2025-10-02 13:18:52.462 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:18:52 compute-0 systemd[1]: Started libpod-conmon-fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b.scope.
Oct 02 13:18:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb26ee396de443e6f8096307a377f0f4b467c77673530caa10bb8c9a080429c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:18:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 20 op/s
Oct 02 13:18:52 compute-0 podman[402270]: 2025-10-02 13:18:52.647791177 +0000 UTC m=+0.824718158 container init fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:18:52 compute-0 podman[402270]: 2025-10-02 13:18:52.656139454 +0000 UTC m=+0.833066395 container start fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:18:52 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [NOTICE]   (402295) : New worker (402297) forked
Oct 02 13:18:52 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [NOTICE]   (402295) : Loading success.
Oct 02 13:18:52 compute-0 ceph-mon[73668]: osdmap e402: 3 total, 3 up, 3 in
Oct 02 13:18:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:53 compute-0 nova_compute[256940]: 2025-10-02 13:18:53.810 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.040 2 DEBUG nova.compute.manager [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.040 2 DEBUG oslo_concurrency.lockutils [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.041 2 DEBUG oslo_concurrency.lockutils [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.041 2 DEBUG oslo_concurrency.lockutils [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.041 2 DEBUG nova.compute.manager [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.041 2 WARNING nova.compute.manager [req-f536fe3f-974b-434f-9e6c-54ef47690997 req-39106b9a-048d-4bca-a329-9cfd36c738e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state active and task_state None.
Oct 02 13:18:54 compute-0 nova_compute[256940]: 2025-10-02 13:18:54.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:54 compute-0 ceph-mon[73668]: pgmap v3226: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 20 op/s
Oct 02 13:18:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 KiB/s wr, 57 op/s
Oct 02 13:18:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:55.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:55 compute-0 nova_compute[256940]: 2025-10-02 13:18:55.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:56 compute-0 ceph-mon[73668]: pgmap v3227: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 KiB/s wr, 57 op/s
Oct 02 13:18:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:56.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:18:57 compute-0 sudo[402309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:57 compute-0 sudo[402309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:57 compute-0 sudo[402309]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:57 compute-0 sudo[402334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:57 compute-0 sudo[402334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:57 compute-0 sudo[402334]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:57 compute-0 ceph-mon[73668]: pgmap v3228: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:18:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:18:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:57.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:18:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:58.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:18:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:18:59 compute-0 nova_compute[256940]: 2025-10-02 13:18:59.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:18:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:59.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:59 compute-0 ceph-mon[73668]: pgmap v3229: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:19:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:00.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 85 op/s
Oct 02 13:19:00 compute-0 nova_compute[256940]: 2025-10-02 13:19:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:01.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:01 compute-0 ceph-mon[73668]: pgmap v3230: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 85 op/s
Oct 02 13:19:02 compute-0 sudo[402361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:02 compute-0 sudo[402361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-0 sudo[402361]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-0 sudo[402388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:02 compute-0 sudo[402388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-0 sudo[402388]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-0 podman[402386]: 2025-10-02 13:19:02.205633879 +0000 UTC m=+0.076455218 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 13:19:02 compute-0 podman[402385]: 2025-10-02 13:19:02.233929455 +0000 UTC m=+0.104773285 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:19:02 compute-0 sudo[402447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:02 compute-0 sudo[402447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-0 sudo[402447]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-0 sudo[402474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:19:02 compute-0 sudo[402474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:02.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 77 op/s
Oct 02 13:19:02 compute-0 sudo[402474]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:03.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:04 compute-0 ceph-mon[73668]: pgmap v3231: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 77 op/s
Oct 02 13:19:04 compute-0 nova_compute[256940]: 2025-10-02 13:19:04.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:04 compute-0 ovn_controller[148123]: 2025-10-02T13:19:04Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:1e:46 10.100.0.9
Oct 02 13:19:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bb7bc230-98e2-44ad-8f47-ae32fec96342 does not exist
Oct 02 13:19:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 514d2196-24f9-4b19-8742-eddc31c83282 does not exist
Oct 02 13:19:05 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8744b691-1d5e-416c-a73f-56be56257ba4 does not exist
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:19:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:19:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:05 compute-0 sudo[402534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:05 compute-0 sudo[402534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:05 compute-0 sudo[402534]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:05 compute-0 sudo[402559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:05 compute-0 sudo[402559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:05 compute-0 sudo[402559]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:05 compute-0 sudo[402584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:05 compute-0 sudo[402584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:05 compute-0 sudo[402584]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:05 compute-0 sudo[402609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:19:05 compute-0 sudo[402609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:05 compute-0 nova_compute[256940]: 2025-10-02 13:19:05.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:05 compute-0 podman[402676]: 2025-10-02 13:19:05.986779851 +0000 UTC m=+0.041379177 container create bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:19:06 compute-0 systemd[1]: Started libpod-conmon-bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad.scope.
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:05.967544701 +0000 UTC m=+0.022144057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:06.116893873 +0000 UTC m=+0.171493229 container init bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:06.123894205 +0000 UTC m=+0.178493571 container start bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:19:06 compute-0 fervent_easley[402693]: 167 167
Oct 02 13:19:06 compute-0 systemd[1]: libpod-bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad.scope: Deactivated successfully.
Oct 02 13:19:06 compute-0 conmon[402693]: conmon bc78a1c2d9f3a5909cf3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad.scope/container/memory.events
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:06.13679781 +0000 UTC m=+0.191397176 container attach bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:06.137975181 +0000 UTC m=+0.192574547 container died bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:19:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb2e27912c37cb9e3d71fd08c54c2125bc5ed07efe309360c977e792ac2d17e4-merged.mount: Deactivated successfully.
Oct 02 13:19:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 02 13:19:06 compute-0 ceph-mon[73668]: pgmap v3232: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:19:06 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-0 podman[402676]: 2025-10-02 13:19:06.74617778 +0000 UTC m=+0.800777116 container remove bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:06 compute-0 systemd[1]: libpod-conmon-bc78a1c2d9f3a5909cf3e9e58a180bc85fdc261b1323c76bfd094306da437cad.scope: Deactivated successfully.
Oct 02 13:19:06 compute-0 podman[402720]: 2025-10-02 13:19:06.942848552 +0000 UTC m=+0.055496734 container create 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:19:06 compute-0 systemd[1]: Started libpod-conmon-2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8.scope.
Oct 02 13:19:07 compute-0 podman[402720]: 2025-10-02 13:19:06.910608924 +0000 UTC m=+0.023256926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:07 compute-0 podman[402720]: 2025-10-02 13:19:07.058315423 +0000 UTC m=+0.170963415 container init 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:19:07 compute-0 podman[402720]: 2025-10-02 13:19:07.067645726 +0000 UTC m=+0.180293698 container start 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:07 compute-0 podman[402720]: 2025-10-02 13:19:07.089903294 +0000 UTC m=+0.202551266 container attach 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:19:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:07.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:07 compute-0 loving_black[402737]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:19:07 compute-0 loving_black[402737]: --> relative data size: 1.0
Oct 02 13:19:07 compute-0 loving_black[402737]: --> All data devices are unavailable
Oct 02 13:19:07 compute-0 systemd[1]: libpod-2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8.scope: Deactivated successfully.
Oct 02 13:19:07 compute-0 podman[402720]: 2025-10-02 13:19:07.929459737 +0000 UTC m=+1.042107719 container died 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:19:08 compute-0 ceph-mon[73668]: pgmap v3233: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 02 13:19:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a10ba091e42154ed182c38b12b2c463be2e9db3e092f655b55845750a1076f-merged.mount: Deactivated successfully.
Oct 02 13:19:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 496 KiB/s rd, 13 KiB/s wr, 43 op/s
Oct 02 13:19:09 compute-0 podman[402720]: 2025-10-02 13:19:09.136944341 +0000 UTC m=+2.249592343 container remove 2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:09 compute-0 nova_compute[256940]: 2025-10-02 13:19:09.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:09 compute-0 sudo[402609]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:09 compute-0 systemd[1]: libpod-conmon-2dd583d6562c5aa3c1fb1d19cd03c1c61f6ee3f8f6e870f5a6c7a821f78958b8.scope: Deactivated successfully.
Oct 02 13:19:09 compute-0 sudo[402764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:09 compute-0 sudo[402764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:09 compute-0 sudo[402764]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:09 compute-0 sudo[402789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:09 compute-0 sudo[402789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:09 compute-0 sudo[402789]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:09 compute-0 sudo[402814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:09 compute-0 sudo[402814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:09 compute-0 sudo[402814]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:09 compute-0 sudo[402839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:19:09 compute-0 sudo[402839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:09 compute-0 ceph-mon[73668]: pgmap v3234: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 496 KiB/s rd, 13 KiB/s wr, 43 op/s
Oct 02 13:19:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:09.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:09 compute-0 podman[402904]: 2025-10-02 13:19:09.788707403 +0000 UTC m=+0.020196806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:09 compute-0 podman[402904]: 2025-10-02 13:19:09.886863604 +0000 UTC m=+0.118352957 container create 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:19:09 compute-0 systemd[1]: Started libpod-conmon-57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b.scope.
Oct 02 13:19:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:10 compute-0 podman[402904]: 2025-10-02 13:19:10.067051368 +0000 UTC m=+0.298540751 container init 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:19:10 compute-0 podman[402904]: 2025-10-02 13:19:10.075188969 +0000 UTC m=+0.306678322 container start 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 13:19:10 compute-0 clever_lichterman[402920]: 167 167
Oct 02 13:19:10 compute-0 systemd[1]: libpod-57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b.scope: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[402904]: 2025-10-02 13:19:10.134446539 +0000 UTC m=+0.365961693 container attach 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:19:10 compute-0 podman[402904]: 2025-10-02 13:19:10.135079306 +0000 UTC m=+0.366568669 container died 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a9f993d79ea624de1e4af4c911aba55c888573a7e8588a6cae9de2c4f0ab0c-merged.mount: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[402904]: 2025-10-02 13:19:10.210893626 +0000 UTC m=+0.442382979 container remove 57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lichterman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:19:10 compute-0 systemd[1]: libpod-conmon-57c86e07f158a232390ed5f224bb2ee45dac2dc7c891170ba355712be909658b.scope: Deactivated successfully.
Oct 02 13:19:10 compute-0 podman[402946]: 2025-10-02 13:19:10.395026803 +0000 UTC m=+0.072735712 container create 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:10 compute-0 podman[402946]: 2025-10-02 13:19:10.34300682 +0000 UTC m=+0.020715779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:10 compute-0 systemd[1]: Started libpod-conmon-15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252.scope.
Oct 02 13:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6402c85019830d36738f343fcf1c05860ba5981ee77287c7e9f449b1a10b334a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6402c85019830d36738f343fcf1c05860ba5981ee77287c7e9f449b1a10b334a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6402c85019830d36738f343fcf1c05860ba5981ee77287c7e9f449b1a10b334a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6402c85019830d36738f343fcf1c05860ba5981ee77287c7e9f449b1a10b334a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:10 compute-0 podman[402946]: 2025-10-02 13:19:10.533767779 +0000 UTC m=+0.211476708 container init 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:19:10 compute-0 podman[402946]: 2025-10-02 13:19:10.545335009 +0000 UTC m=+0.223043918 container start 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:19:10 compute-0 podman[402946]: 2025-10-02 13:19:10.555722009 +0000 UTC m=+0.233430918 container attach 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:19:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 21 KiB/s wr, 45 op/s
Oct 02 13:19:10 compute-0 nova_compute[256940]: 2025-10-02 13:19:10.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:11 compute-0 inspiring_colden[402962]: {
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:     "1": [
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:         {
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "devices": [
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "/dev/loop3"
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             ],
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "lv_name": "ceph_lv0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "lv_size": "7511998464",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "name": "ceph_lv0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "tags": {
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.cluster_name": "ceph",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.crush_device_class": "",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.encrypted": "0",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.osd_id": "1",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.type": "block",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:                 "ceph.vdo": "0"
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             },
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "type": "block",
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:             "vg_name": "ceph_vg0"
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:         }
Oct 02 13:19:11 compute-0 inspiring_colden[402962]:     ]
Oct 02 13:19:11 compute-0 inspiring_colden[402962]: }
Oct 02 13:19:11 compute-0 systemd[1]: libpod-15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252.scope: Deactivated successfully.
Oct 02 13:19:11 compute-0 podman[402946]: 2025-10-02 13:19:11.430486737 +0000 UTC m=+1.108195656 container died 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6402c85019830d36738f343fcf1c05860ba5981ee77287c7e9f449b1a10b334a-merged.mount: Deactivated successfully.
Oct 02 13:19:11 compute-0 podman[402946]: 2025-10-02 13:19:11.495005224 +0000 UTC m=+1.172714133 container remove 15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:19:11 compute-0 systemd[1]: libpod-conmon-15d7d6edaa5cf7af24d0e40e4ace9d957f5197e550e89ece2fd51ae7830d8252.scope: Deactivated successfully.
Oct 02 13:19:11 compute-0 sudo[402839]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 sudo[402984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:11 compute-0 sudo[402984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[402984]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 sudo[403009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:11 compute-0 sudo[403009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[403009]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 ceph-mon[73668]: pgmap v3235: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 21 KiB/s wr, 45 op/s
Oct 02 13:19:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:11 compute-0 sudo[403034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:11 compute-0 sudo[403034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:11 compute-0 sudo[403034]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:11 compute-0 sudo[403059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:19:11 compute-0 sudo[403059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.130643976 +0000 UTC m=+0.043385089 container create 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:12 compute-0 systemd[1]: Started libpod-conmon-274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9.scope.
Oct 02 13:19:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.112637688 +0000 UTC m=+0.025378821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.209049074 +0000 UTC m=+0.121790217 container init 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.215123602 +0000 UTC m=+0.127864715 container start 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:19:12 compute-0 gallant_dijkstra[403142]: 167 167
Oct 02 13:19:12 compute-0 systemd[1]: libpod-274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9.scope: Deactivated successfully.
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.221086417 +0000 UTC m=+0.133827550 container attach 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.221341153 +0000 UTC m=+0.134082276 container died 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4075ba3d04a42a0954aa5e3d9c5bbbdfe87d1ab9d2305424fa92b3ad30a7bd3-merged.mount: Deactivated successfully.
Oct 02 13:19:12 compute-0 podman[403125]: 2025-10-02 13:19:12.255929722 +0000 UTC m=+0.168670835 container remove 274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:19:12 compute-0 systemd[1]: libpod-conmon-274b4970a1bcf7e6e90ef8f0c4c12d73d693f01d65029a98e322cb041badb4d9.scope: Deactivated successfully.
Oct 02 13:19:12 compute-0 podman[403165]: 2025-10-02 13:19:12.417860242 +0000 UTC m=+0.040393611 container create 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:19:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:12 compute-0 systemd[1]: Started libpod-conmon-057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c.scope.
Oct 02 13:19:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d14d0c66db616a42b32ac24bf90e0c447d7175257eb63d5a7bf3cd736cd44c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d14d0c66db616a42b32ac24bf90e0c447d7175257eb63d5a7bf3cd736cd44c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d14d0c66db616a42b32ac24bf90e0c447d7175257eb63d5a7bf3cd736cd44c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d14d0c66db616a42b32ac24bf90e0c447d7175257eb63d5a7bf3cd736cd44c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:12 compute-0 podman[403165]: 2025-10-02 13:19:12.398455107 +0000 UTC m=+0.020988496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:19:12 compute-0 podman[403165]: 2025-10-02 13:19:12.495913259 +0000 UTC m=+0.118446658 container init 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:19:12 compute-0 podman[403165]: 2025-10-02 13:19:12.502586003 +0000 UTC m=+0.125119372 container start 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:19:12 compute-0 podman[403165]: 2025-10-02 13:19:12.505787476 +0000 UTC m=+0.128320865 container attach 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:19:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 46 op/s
Oct 02 13:19:12 compute-0 nova_compute[256940]: 2025-10-02 13:19:12.702 2 DEBUG nova.compute.manager [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:12 compute-0 nova_compute[256940]: 2025-10-02 13:19:12.704 2 DEBUG nova.compute.manager [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:19:12 compute-0 nova_compute[256940]: 2025-10-02 13:19:12.705 2 DEBUG oslo_concurrency.lockutils [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:19:12 compute-0 nova_compute[256940]: 2025-10-02 13:19:12.705 2 DEBUG oslo_concurrency.lockutils [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:19:12 compute-0 nova_compute[256940]: 2025-10-02 13:19:12.705 2 DEBUG nova.network.neutron [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]: {
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:         "osd_id": 1,
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:         "type": "bluestore"
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]:     }
Oct 02 13:19:13 compute-0 nervous_sutherland[403181]: }
Oct 02 13:19:13 compute-0 systemd[1]: libpod-057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c.scope: Deactivated successfully.
Oct 02 13:19:13 compute-0 podman[403165]: 2025-10-02 13:19:13.371064167 +0000 UTC m=+0.993597536 container died 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d14d0c66db616a42b32ac24bf90e0c447d7175257eb63d5a7bf3cd736cd44c9-merged.mount: Deactivated successfully.
Oct 02 13:19:13 compute-0 podman[403165]: 2025-10-02 13:19:13.4319495 +0000 UTC m=+1.054482859 container remove 057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:19:13 compute-0 systemd[1]: libpod-conmon-057b3b8292c6f7330acc1adfa34cfbd9886116059df0f25ecacb455366b1e17c.scope: Deactivated successfully.
Oct 02 13:19:13 compute-0 sudo[403059]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:19:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:19:13 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 41902c9c-64ed-45fd-b9bb-0c67bb84e635 does not exist
Oct 02 13:19:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 24a9a902-e326-4740-8cdb-895738d04c8d does not exist
Oct 02 13:19:13 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev adbdbf2e-df54-48e9-83dc-c4df4a8bcbb4 does not exist
Oct 02 13:19:13 compute-0 sudo[403214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:13 compute-0 sudo[403214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:13 compute-0 sudo[403214]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:13 compute-0 sudo[403239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:19:13 compute-0 sudo[403239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:13 compute-0 sudo[403239]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:13 compute-0 ceph-mon[73668]: pgmap v3236: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 46 op/s
Oct 02 13:19:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:14 compute-0 nova_compute[256940]: 2025-10-02 13:19:14.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:14.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 707 KiB/s rd, 23 KiB/s wr, 50 op/s
Oct 02 13:19:15 compute-0 podman[403265]: 2025-10-02 13:19:15.399856151 +0000 UTC m=+0.063141012 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:19:15 compute-0 podman[403266]: 2025-10-02 13:19:15.479403998 +0000 UTC m=+0.141322664 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:19:15 compute-0 nova_compute[256940]: 2025-10-02 13:19:15.667 2 DEBUG nova.network.neutron [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:19:15 compute-0 nova_compute[256940]: 2025-10-02 13:19:15.667 2 DEBUG nova.network.neutron [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:19:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:15.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:15 compute-0 nova_compute[256940]: 2025-10-02 13:19:15.696 2 DEBUG oslo_concurrency.lockutils [req-e9cb3875-bdad-43e9-9784-66e5c69a4d2a req-9f9b4608-5249-41cf-bc3b-4082e017d4dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:19:15 compute-0 ceph-mon[73668]: pgmap v3237: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 707 KiB/s rd, 23 KiB/s wr, 50 op/s
Oct 02 13:19:15 compute-0 nova_compute[256940]: 2025-10-02 13:19:15.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:16.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 11 KiB/s wr, 36 op/s
Oct 02 13:19:17 compute-0 sudo[403312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:17 compute-0 sudo[403312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:17 compute-0 sudo[403312]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:17 compute-0 sudo[403337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:17 compute-0 sudo[403337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:17 compute-0 sudo[403337]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:17.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:17 compute-0 ceph-mon[73668]: pgmap v3238: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 11 KiB/s wr, 36 op/s
Oct 02 13:19:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:18.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 11 KiB/s wr, 8 op/s
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:19.346 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:19:19 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:19.348 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.515 2 DEBUG nova.compute.manager [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.515 2 DEBUG nova.compute.manager [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing instance network info cache due to event network-changed-9c092188-b1cb-4109-975b-8e9dd4daf435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.516 2 DEBUG oslo_concurrency.lockutils [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.516 2 DEBUG oslo_concurrency.lockutils [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:19:19 compute-0 nova_compute[256940]: 2025-10-02 13:19:19.516 2 DEBUG nova.network.neutron [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Refreshing network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:19:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:19 compute-0 ceph-mon[73668]: pgmap v3239: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 11 KiB/s wr, 8 op/s
Oct 02 13:19:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:20.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:19:20 compute-0 nova_compute[256940]: 2025-10-02 13:19:20.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.602 2 DEBUG oslo_concurrency.lockutils [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.603 2 DEBUG oslo_concurrency.lockutils [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.635 2 INFO nova.compute.manager [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Detaching volume b43f69f8-7c8d-4ef5-af90-c5aa5656977e
Oct 02 13:19:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.941 2 INFO nova.virt.block_device [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Attempting to driver detach volume b43f69f8-7c8d-4ef5-af90-c5aa5656977e from mountpoint /dev/vdb
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.952 2 DEBUG nova.virt.libvirt.driver [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Attempting to detach device vdb from instance 42de3106-827c-4d43-988c-18a6f44f3e01 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.953 2 DEBUG nova.virt.libvirt.guest [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b43f69f8-7c8d-4ef5-af90-c5aa5656977e">
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   </source>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <serial>b43f69f8-7c8d-4ef5-af90-c5aa5656977e</serial>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]: </disk>
Oct 02 13:19:21 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.962 2 INFO nova.virt.libvirt.driver [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 42de3106-827c-4d43-988c-18a6f44f3e01 from the persistent domain config.
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.963 2 DEBUG nova.virt.libvirt.driver [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 42de3106-827c-4d43-988c-18a6f44f3e01 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:19:21 compute-0 nova_compute[256940]: 2025-10-02 13:19:21.963 2 DEBUG nova.virt.libvirt.guest [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <source protocol="rbd" name="volumes/volume-b43f69f8-7c8d-4ef5-af90-c5aa5656977e">
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   </source>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <serial>b43f69f8-7c8d-4ef5-af90-c5aa5656977e</serial>
Oct 02 13:19:21 compute-0 nova_compute[256940]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 02 13:19:21 compute-0 nova_compute[256940]: </disk>
Oct 02 13:19:21 compute-0 nova_compute[256940]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.013 2 DEBUG nova.network.neutron [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updated VIF entry in instance network info cache for port 9c092188-b1cb-4109-975b-8e9dd4daf435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.013 2 DEBUG nova.network.neutron [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [{"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.065 2 DEBUG oslo_concurrency.lockutils [req-faecee57-574d-46e1-81e0-b6a67d96638f req-0e9e0d34-49fc-418d-917d-af8b9d953af0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-42de3106-827c-4d43-988c-18a6f44f3e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.076 2 DEBUG nova.virt.libvirt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Received event <DeviceRemovedEvent: 1759411162.0763543, 42de3106-827c-4d43-988c-18a6f44f3e01 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.079 2 DEBUG nova.virt.libvirt.driver [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 42de3106-827c-4d43-988c-18a6f44f3e01 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.082 2 INFO nova.virt.libvirt.driver [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 42de3106-827c-4d43-988c-18a6f44f3e01 from the live domain config.
Oct 02 13:19:22 compute-0 ceph-mon[73668]: pgmap v3240: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:19:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.4 KiB/s wr, 6 op/s
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.675 2 DEBUG nova.objects.instance [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:19:22 compute-0 nova_compute[256940]: 2025-10-02 13:19:22.740 2 DEBUG oslo_concurrency.lockutils [None req-f26d2e50-ee85-4ef8-8252-b39f808caf0f 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:23 compute-0 ceph-mon[73668]: pgmap v3241: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.4 KiB/s wr, 6 op/s
Oct 02 13:19:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.350 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:24.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 3.4 KiB/s wr, 7 op/s
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.883 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.884 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.884 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.885 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.885 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.887 2 INFO nova.compute.manager [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Terminating instance
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.889 2 DEBUG nova.compute.manager [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:19:24 compute-0 kernel: tap9c092188-b1 (unregistering): left promiscuous mode
Oct 02 13:19:24 compute-0 NetworkManager[44981]: <info>  [1759411164.9493] device (tap9c092188-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:19:24 compute-0 ovn_controller[148123]: 2025-10-02T13:19:24Z|01057|binding|INFO|Releasing lport 9c092188-b1cb-4109-975b-8e9dd4daf435 from this chassis (sb_readonly=0)
Oct 02 13:19:24 compute-0 ovn_controller[148123]: 2025-10-02T13:19:24Z|01058|binding|INFO|Setting lport 9c092188-b1cb-4109-975b-8e9dd4daf435 down in Southbound
Oct 02 13:19:24 compute-0 ovn_controller[148123]: 2025-10-02T13:19:24Z|01059|binding|INFO|Removing iface tap9c092188-b1 ovn-installed in OVS
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.972 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:1e:46 10.100.0.9'], port_security=['fa:16:3e:8e:1e:46 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '42de3106-827c-4d43-988c-18a6f44f3e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=9c092188-b1cb-4109-975b-8e9dd4daf435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.974 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 9c092188-b1cb-4109-975b-8e9dd4daf435 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.975 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.976 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[547f408c-b908-4948-850c-c94400429e80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:24.977 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore
Oct 02 13:19:24 compute-0 nova_compute[256940]: 2025-10-02 13:19:24.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:25 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Oct 02 13:19:25 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000d5.scope: Consumed 14.605s CPU time.
Oct 02 13:19:25 compute-0 systemd-machined[210927]: Machine qemu-108-instance-000000d5 terminated.
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [NOTICE]   (402295) : haproxy version is 2.8.14-c23fe91
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [NOTICE]   (402295) : path to executable is /usr/sbin/haproxy
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [WARNING]  (402295) : Exiting Master process...
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [WARNING]  (402295) : Exiting Master process...
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [ALERT]    (402295) : Current worker (402297) exited with code 143 (Terminated)
Oct 02 13:19:25 compute-0 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[402291]: [WARNING]  (402295) : All workers exited. Exiting... (0)
Oct 02 13:19:25 compute-0 systemd[1]: libpod-fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b.scope: Deactivated successfully.
Oct 02 13:19:25 compute-0 podman[403391]: 2025-10-02 13:19:25.122304321 +0000 UTC m=+0.045566175 container died fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.130 2 INFO nova.virt.libvirt.driver [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Instance destroyed successfully.
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.132 2 DEBUG nova.objects.instance [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'resources' on Instance uuid 42de3106-827c-4d43-988c-18a6f44f3e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.145 2 DEBUG nova.virt.libvirt.vif [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:18:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-29107059',display_name='tempest-TestMinimumBasicScenario-server-29107059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-29107059',id=213,image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmOS46UgYP1IqwRi0ZKAvRep9yFPkjXVjI+Pw/ILwMRb4DdLlWNdBd0VfxSM2PbIpiC2ZiRlAp2KBMDDOOWT3wcngZ1oPAnDSzP/QaWBAIqsGgjy9F6g3hYw/xgLq2aOA==',key_name='tempest-TestMinimumBasicScenario-1692650409',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-j5dd0ovx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40dbc07f-c919-4d14-85e2-405ded1c3c40',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:18:52Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=42de3106-827c-4d43-988c-18a6f44f3e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.146 2 DEBUG nova.network.os_vif_util [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "9c092188-b1cb-4109-975b-8e9dd4daf435", "address": "fa:16:3e:8e:1e:46", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c092188-b1", "ovs_interfaceid": "9c092188-b1cb-4109-975b-8e9dd4daf435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.147 2 DEBUG nova.network.os_vif_util [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.147 2 DEBUG os_vif [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c092188-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.157 2 INFO os_vif [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:1e:46,bridge_name='br-int',has_traffic_filtering=True,id=9c092188-b1cb-4109-975b-8e9dd4daf435,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c092188-b1')
Oct 02 13:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb26ee396de443e6f8096307a377f0f4b467c77673530caa10bb8c9a080429c1-merged.mount: Deactivated successfully.
Oct 02 13:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b-userdata-shm.mount: Deactivated successfully.
Oct 02 13:19:25 compute-0 podman[403391]: 2025-10-02 13:19:25.178682606 +0000 UTC m=+0.101944450 container cleanup fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:19:25 compute-0 systemd[1]: libpod-conmon-fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b.scope: Deactivated successfully.
Oct 02 13:19:25 compute-0 podman[403448]: 2025-10-02 13:19:25.255712239 +0000 UTC m=+0.055029912 container remove fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.261 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[39bc180e-493d-4203-844b-c384929b8a62]: (4, ('Thu Oct  2 01:19:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b)\nfd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b\nThu Oct  2 01:19:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (fd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b)\nfd7c976109bbfb893896471601f8417e091f62b960b617631714a9d8e286041b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.263 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e418997b-f0ff-48c5-b041-738a0ad5f3c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.265 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:25 compute-0 kernel: tap540159ad-f0: left promiscuous mode
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.275 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[25e7dea6-4dbb-4631-b064-bc23cedf30b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.302 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[873456b0-a8d7-49b5-9ca6-437ef2792e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.304 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[47f7422b-7643-41a0-90f2-c0378423ff16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.321 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[61f0143a-22ac-4b13-b7f5-a2d44f324cbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902666, 'reachable_time': 27295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403466, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.323 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:19:25 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:25.324 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[3872f6d3-eb16-418a-9912-4879b7dc90aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct 02 13:19:25 compute-0 ceph-mon[73668]: pgmap v3242: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 3.4 KiB/s wr, 7 op/s
Oct 02 13:19:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:25.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.838 2 INFO nova.virt.libvirt.driver [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Deleting instance files /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01_del
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.839 2 INFO nova.virt.libvirt.driver [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Deletion of /var/lib/nova/instances/42de3106-827c-4d43-988c-18a6f44f3e01_del complete
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.885 2 DEBUG nova.compute.manager [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.886 2 DEBUG oslo_concurrency.lockutils [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.886 2 DEBUG oslo_concurrency.lockutils [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.886 2 DEBUG oslo_concurrency.lockutils [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.887 2 DEBUG nova.compute.manager [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.887 2 DEBUG nova.compute.manager [req-9c3331ca-6cfc-48bb-8ff5-32d7191f5a22 req-368828d0-c6d4-4240-9a1d-463ff840785d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-unplugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.924 2 INFO nova.compute.manager [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Took 1.04 seconds to destroy the instance on the hypervisor.
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.925 2 DEBUG oslo.service.loopingcall [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.925 2 DEBUG nova.compute.manager [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:19:25 compute-0 nova_compute[256940]: 2025-10-02 13:19:25.926 2 DEBUG nova.network.neutron [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:19:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:26.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:26.514 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:26.515 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:19:26.515 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.2 KiB/s wr, 26 op/s
Oct 02 13:19:27 compute-0 ceph-mon[73668]: pgmap v3243: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.2 KiB/s wr, 26 op/s
Oct 02 13:19:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:27.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:27 compute-0 sshd-session[403469]: Invalid user admin from 80.94.95.25 port 18307
Oct 02 13:19:27 compute-0 sshd-session[403469]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:19:27 compute-0 sshd-session[403469]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.25
Oct 02 13:19:27 compute-0 nova_compute[256940]: 2025-10-02 13:19:27.909 2 DEBUG nova.network.neutron [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:19:27 compute-0 nova_compute[256940]: 2025-10-02 13:19:27.963 2 INFO nova.compute.manager [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Took 2.04 seconds to deallocate network for instance.
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.035 2 DEBUG nova.compute.manager [req-03136cb4-b58d-41bc-b700-24a80835ed6e req-47bb9a23-1dfe-4517-ba7a-268287f4399d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-deleted-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.037 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.037 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.086 2 DEBUG nova.compute.manager [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.087 2 DEBUG oslo_concurrency.lockutils [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.087 2 DEBUG oslo_concurrency.lockutils [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.087 2 DEBUG oslo_concurrency.lockutils [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.088 2 DEBUG nova.compute.manager [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] No waiting events found dispatching network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.088 2 WARNING nova.compute.manager [req-ea55b69c-0099-4451-b717-9b3da6534af1 req-d21954eb-b0f1-4aaa-a3f3-1383c2937ea7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Received unexpected event network-vif-plugged-9c092188-b1cb-4109-975b-8e9dd4daf435 for instance with vm_state deleted and task_state None.
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.366 2 DEBUG oslo_concurrency.processutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 4.1 KiB/s wr, 24 op/s
Oct 02 13:19:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014423520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.809 2 DEBUG oslo_concurrency.processutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.817 2 DEBUG nova.compute.provider_tree [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.844 2 DEBUG nova.scheduler.client.report [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:19:28 compute-0 nova_compute[256940]: 2025-10-02 13:19:28.877 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:19:28
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'backups', '.mgr', 'volumes']
Oct 02 13:19:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:19:29 compute-0 nova_compute[256940]: 2025-10-02 13:19:29.028 2 INFO nova.scheduler.client.report [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Deleted allocations for instance 42de3106-827c-4d43-988c-18a6f44f3e01
Oct 02 13:19:29 compute-0 nova_compute[256940]: 2025-10-02 13:19:29.118 2 DEBUG oslo_concurrency.lockutils [None req-fe178ff6-2214-40c8-b8c4-c79168a80875 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "42de3106-827c-4d43-988c-18a6f44f3e01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:29 compute-0 nova_compute[256940]: 2025-10-02 13:19:29.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:19:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:29 compute-0 ceph-mon[73668]: pgmap v3244: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 4.1 KiB/s wr, 24 op/s
Oct 02 13:19:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3014423520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:19:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:19:30 compute-0 nova_compute[256940]: 2025-10-02 13:19:30.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:30 compute-0 sshd-session[403469]: Failed password for invalid user admin from 80.94.95.25 port 18307 ssh2
Oct 02 13:19:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 4.4 KiB/s wr, 41 op/s
Oct 02 13:19:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Oct 02 13:19:31 compute-0 ceph-mon[73668]: pgmap v3245: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 4.4 KiB/s wr, 41 op/s
Oct 02 13:19:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Oct 02 13:19:32 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Oct 02 13:19:32 compute-0 sshd-session[403469]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:19:32 compute-0 podman[403495]: 2025-10-02 13:19:32.400055818 +0000 UTC m=+0.062121186 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 13:19:32 compute-0 podman[403496]: 2025-10-02 13:19:32.427045559 +0000 UTC m=+0.094025035 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:19:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:32.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 02 13:19:33 compute-0 ceph-mon[73668]: osdmap e403: 3 total, 3 up, 3 in
Oct 02 13:19:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3859766783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3958918451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:33.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:34 compute-0 sshd-session[403469]: Failed password for invalid user admin from 80.94.95.25 port 18307 ssh2
Oct 02 13:19:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Oct 02 13:19:34 compute-0 nova_compute[256940]: 2025-10-02 13:19:34.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:34 compute-0 nova_compute[256940]: 2025-10-02 13:19:34.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:34 compute-0 ceph-mon[73668]: pgmap v3247: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 02 13:19:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2050096160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2642578138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:34.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Oct 02 13:19:34 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Oct 02 13:19:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Oct 02 13:19:34 compute-0 sshd-session[403469]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.234 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.235 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.236 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:35 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:35 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76299625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.677 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.848 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.849 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4101MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.849 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:35 compute-0 nova_compute[256940]: 2025-10-02 13:19:35.849 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.001 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.002 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.025 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2359424399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.465 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.470 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.494 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.528 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:19:36 compute-0 nova_compute[256940]: 2025-10-02 13:19:36.528 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Oct 02 13:19:36 compute-0 ceph-mon[73668]: osdmap e404: 3 total, 3 up, 3 in
Oct 02 13:19:36 compute-0 ceph-mon[73668]: pgmap v3249: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Oct 02 13:19:36 compute-0 sshd-session[403469]: Failed password for invalid user admin from 80.94.95.25 port 18307 ssh2
Oct 02 13:19:36 compute-0 sshd-session[403469]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:19:37 compute-0 sudo[403581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:37 compute-0 sudo[403581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:37 compute-0 sudo[403581]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:37 compute-0 sudo[403606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:37 compute-0 sudo[403606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:37 compute-0 sudo[403606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/76299625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2359424399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:37 compute-0 ceph-mon[73668]: pgmap v3250: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Oct 02 13:19:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:38.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 54 op/s
Oct 02 13:19:39 compute-0 sshd-session[403469]: Failed password for invalid user admin from 80.94.95.25 port 18307 ssh2
Oct 02 13:19:39 compute-0 nova_compute[256940]: 2025-10-02 13:19:39.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:39 compute-0 sshd-session[403469]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:19:39 compute-0 nova_compute[256940]: 2025-10-02 13:19:39.529 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:39 compute-0 nova_compute[256940]: 2025-10-02 13:19:39.530 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:39 compute-0 nova_compute[256940]: 2025-10-02 13:19:39.530 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:19:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:39.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:39 compute-0 ceph-mon[73668]: pgmap v3251: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 54 op/s
Oct 02 13:19:40 compute-0 nova_compute[256940]: 2025-10-02 13:19:40.125 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411165.1234877, 42de3106-827c-4d43-988c-18a6f44f3e01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:19:40 compute-0 nova_compute[256940]: 2025-10-02 13:19:40.125 2 INFO nova.compute.manager [-] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] VM Stopped (Lifecycle Event)
Oct 02 13:19:40 compute-0 nova_compute[256940]: 2025-10-02 13:19:40.161 2 DEBUG nova.compute.manager [None req-1648d954-193d-4e4d-b257-6cb19074db06 - - - - - -] [instance: 42de3106-827c-4d43-988c-18a6f44f3e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:40 compute-0 nova_compute[256940]: 2025-10-02 13:19:40.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.4 MiB/s wr, 43 op/s
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:19:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:19:41 compute-0 nova_compute[256940]: 2025-10-02 13:19:41.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Oct 02 13:19:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Oct 02 13:19:41 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Oct 02 13:19:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:41.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:41 compute-0 sshd-session[403469]: Failed password for invalid user admin from 80.94.95.25 port 18307 ssh2
Oct 02 13:19:42 compute-0 ceph-mon[73668]: pgmap v3252: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.4 MiB/s wr, 43 op/s
Oct 02 13:19:42 compute-0 ceph-mon[73668]: osdmap e405: 3 total, 3 up, 3 in
Oct 02 13:19:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1616618197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:42 compute-0 nova_compute[256940]: 2025-10-02 13:19:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.5 MiB/s wr, 38 op/s
Oct 02 13:19:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:43.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:43 compute-0 sshd-session[403469]: Received disconnect from 80.94.95.25 port 18307:11: Bye [preauth]
Oct 02 13:19:43 compute-0 sshd-session[403469]: Disconnected from invalid user admin 80.94.95.25 port 18307 [preauth]
Oct 02 13:19:43 compute-0 sshd-session[403469]: PAM 4 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.25
Oct 02 13:19:43 compute-0 sshd-session[403469]: PAM service(sshd) ignoring max retries; 5 > 3
Oct 02 13:19:44 compute-0 ceph-mon[73668]: pgmap v3254: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.5 MiB/s wr, 38 op/s
Oct 02 13:19:44 compute-0 nova_compute[256940]: 2025-10-02 13:19:44.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 31 op/s
Oct 02 13:19:45 compute-0 nova_compute[256940]: 2025-10-02 13:19:45.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:45 compute-0 ceph-mon[73668]: pgmap v3255: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 31 op/s
Oct 02 13:19:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:45.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:46 compute-0 nova_compute[256940]: 2025-10-02 13:19:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:46 compute-0 podman[403635]: 2025-10-02 13:19:46.387303451 +0000 UTC m=+0.056776537 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:19:46 compute-0 podman[403636]: 2025-10-02 13:19:46.419879888 +0000 UTC m=+0.092253579 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:19:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:47.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:48 compute-0 ceph-mon[73668]: pgmap v3256: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:49 compute-0 nova_compute[256940]: 2025-10-02 13:19:49.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:49 compute-0 nova_compute[256940]: 2025-10-02 13:19:49.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:49.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:50 compute-0 nova_compute[256940]: 2025-10-02 13:19:50.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-0 ceph-mon[73668]: pgmap v3257: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:50 compute-0 nova_compute[256940]: 2025-10-02 13:19:50.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:50 compute-0 nova_compute[256940]: 2025-10-02 13:19:50.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:19:50 compute-0 nova_compute[256940]: 2025-10-02 13:19:50.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:19:50 compute-0 nova_compute[256940]: 2025-10-02 13:19:50.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:19:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 166 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 912 KiB/s wr, 37 op/s
Oct 02 13:19:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1397213192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2262272927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:51.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:52 compute-0 ceph-mon[73668]: pgmap v3258: 305 pgs: 305 active+clean; 166 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 912 KiB/s wr, 37 op/s
Oct 02 13:19:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:52.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 02 13:19:53 compute-0 ceph-mon[73668]: pgmap v3259: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 02 13:19:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:53.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:54 compute-0 nova_compute[256940]: 2025-10-02 13:19:54.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:54.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:19:55 compute-0 nova_compute[256940]: 2025-10-02 13:19:55.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:55.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:56 compute-0 ceph-mon[73668]: pgmap v3260: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:19:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:19:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:56.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:19:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 02 13:19:57 compute-0 sudo[403687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:57 compute-0 sudo[403687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:57 compute-0 sudo[403687]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:57 compute-0 sudo[403712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:57 compute-0 sudo[403712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:57 compute-0 sudo[403712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:57 compute-0 ceph-mon[73668]: pgmap v3261: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 02 13:19:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:19:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:58.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 13:19:59 compute-0 nova_compute[256940]: 2025-10-02 13:19:59.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:19:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:20:00 compute-0 nova_compute[256940]: 2025-10-02 13:20:00.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:00 compute-0 ceph-mon[73668]: pgmap v3262: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 13:20:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:00.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:20:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:01 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 13:20:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:02.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 77 op/s
Oct 02 13:20:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:20:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3036994989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:20:02 compute-0 ceph-mon[73668]: pgmap v3263: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:20:03 compute-0 podman[403741]: 2025-10-02 13:20:03.419310186 +0000 UTC m=+0.076512810 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:20:03 compute-0 podman[403740]: 2025-10-02 13:20:03.426061451 +0000 UTC m=+0.085736669 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:04 compute-0 ceph-mon[73668]: pgmap v3264: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 77 op/s
Oct 02 13:20:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3036994989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:20:04 compute-0 nova_compute[256940]: 2025-10-02 13:20:04.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:04.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 698 KiB/s wr, 78 op/s
Oct 02 13:20:05 compute-0 nova_compute[256940]: 2025-10-02 13:20:05.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:05 compute-0 ceph-mon[73668]: pgmap v3265: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 698 KiB/s wr, 78 op/s
Oct 02 13:20:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 77 op/s
Oct 02 13:20:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:20:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:20:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:07.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:08.022 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:20:08 compute-0 nova_compute[256940]: 2025-10-02 13:20:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:08.024 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:20:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 682 B/s wr, 48 op/s
Oct 02 13:20:08 compute-0 ceph-mon[73668]: pgmap v3266: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 77 op/s
Oct 02 13:20:09 compute-0 nova_compute[256940]: 2025-10-02 13:20:09.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:09 compute-0 ceph-mon[73668]: pgmap v3267: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 682 B/s wr, 48 op/s
Oct 02 13:20:10 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:10.026 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:10 compute-0 nova_compute[256940]: 2025-10-02 13:20:10.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Oct 02 13:20:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:20:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:20:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Oct 02 13:20:12 compute-0 ceph-mon[73668]: pgmap v3268: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Oct 02 13:20:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:13 compute-0 sudo[403788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:14 compute-0 sudo[403788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 sudo[403788]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 sudo[403813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:14 compute-0 ceph-mon[73668]: pgmap v3269: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Oct 02 13:20:14 compute-0 sudo[403813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 sudo[403813]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 sudo[403838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:14 compute-0 sudo[403838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 sudo[403838]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 sudo[403863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:20:14 compute-0 sudo[403863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-0 nova_compute[256940]: 2025-10-02 13:20:14.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:14 compute-0 sudo[403863]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:20:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:20:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:15 compute-0 nova_compute[256940]: 2025-10-02 13:20:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:20:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:20:15 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:15 compute-0 sudo[403910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:15 compute-0 sudo[403910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:15 compute-0 sudo[403910]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-0 sudo[403935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:16 compute-0 sudo[403935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-0 sudo[403935]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-0 sudo[403960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:16 compute-0 sudo[403960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-0 sudo[403960]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-0 sudo[403985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:20:16 compute-0 sudo[403985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:16 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:16 compute-0 sudo[403985]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-0 ceph-mon[73668]: pgmap v3270: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:20:17 compute-0 podman[404041]: 2025-10-02 13:20:17.396915008 +0000 UTC m=+0.059675832 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:20:17 compute-0 podman[404042]: 2025-10-02 13:20:17.476411204 +0000 UTC m=+0.130548484 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3cb36bb1-1d68-4928-a489-c337591886d8 does not exist
Oct 02 13:20:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b65fab79-2ee8-4d67-9bb7-7daba0417a1c does not exist
Oct 02 13:20:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b032a572-3606-44c0-8978-4e9f239fa1ed does not exist
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:20:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:20:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:17 compute-0 sudo[404087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:17 compute-0 sudo[404087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[404087]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[404116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:17 compute-0 sudo[404112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:17 compute-0 sudo[404116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[404112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[404116]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[404112]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:17.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:17 compute-0 sudo[404162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:17 compute-0 sudo[404165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:17 compute-0 sudo[404162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[404165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:17 compute-0 sudo[404162]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[404165]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:17 compute-0 sudo[404212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:20:17 compute-0 sudo[404212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:18 compute-0 ceph-mon[73668]: pgmap v3271: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:20:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:18 compute-0 podman[404278]: 2025-10-02 13:20:18.190579507 +0000 UTC m=+0.021826048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:18.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 13:20:18 compute-0 podman[404278]: 2025-10-02 13:20:18.673810758 +0000 UTC m=+0.505057279 container create 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:20:18 compute-0 systemd[1]: Started libpod-conmon-96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f.scope.
Oct 02 13:20:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:18 compute-0 podman[404278]: 2025-10-02 13:20:18.989382061 +0000 UTC m=+0.820628602 container init 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:20:18 compute-0 podman[404278]: 2025-10-02 13:20:18.996052444 +0000 UTC m=+0.827298945 container start 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:20:19 compute-0 systemd[1]: libpod-96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f.scope: Deactivated successfully.
Oct 02 13:20:19 compute-0 sharp_swartz[404295]: 167 167
Oct 02 13:20:19 compute-0 conmon[404295]: conmon 96a0110b617faf593ee5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f.scope/container/memory.events
Oct 02 13:20:19 compute-0 podman[404278]: 2025-10-02 13:20:19.155648863 +0000 UTC m=+0.986895374 container attach 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:19 compute-0 podman[404278]: 2025-10-02 13:20:19.157453589 +0000 UTC m=+0.988700100 container died 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:20:19 compute-0 nova_compute[256940]: 2025-10-02 13:20:19.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d8f1bb23c32870f55b51cca92f1520471b36fc72336a7fff9961a42d7332418-merged.mount: Deactivated successfully.
Oct 02 13:20:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:19.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:19 compute-0 podman[404278]: 2025-10-02 13:20:19.80536808 +0000 UTC m=+1.636614591 container remove 96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:20:19 compute-0 systemd[1]: libpod-conmon-96a0110b617faf593ee5c86377f7f65ca9881899e5b84c23f8db6d679197fd5f.scope: Deactivated successfully.
Oct 02 13:20:20 compute-0 podman[404319]: 2025-10-02 13:20:20.010714998 +0000 UTC m=+0.028390909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:20 compute-0 podman[404319]: 2025-10-02 13:20:20.1450443 +0000 UTC m=+0.162720121 container create 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:20:20 compute-0 nova_compute[256940]: 2025-10-02 13:20:20.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:20 compute-0 systemd[1]: Started libpod-conmon-0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b.scope.
Oct 02 13:20:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:20 compute-0 ceph-mon[73668]: pgmap v3272: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 13:20:20 compute-0 podman[404319]: 2025-10-02 13:20:20.386053455 +0000 UTC m=+0.403729276 container init 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:20 compute-0 podman[404319]: 2025-10-02 13:20:20.393924409 +0000 UTC m=+0.411600220 container start 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:20:20 compute-0 podman[404319]: 2025-10-02 13:20:20.463693312 +0000 UTC m=+0.481369123 container attach 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:20:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:20.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:20:21 compute-0 serene_sutherland[404338]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:20:21 compute-0 serene_sutherland[404338]: --> relative data size: 1.0
Oct 02 13:20:21 compute-0 serene_sutherland[404338]: --> All data devices are unavailable
Oct 02 13:20:21 compute-0 systemd[1]: libpod-0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b.scope: Deactivated successfully.
Oct 02 13:20:21 compute-0 podman[404319]: 2025-10-02 13:20:21.230669507 +0000 UTC m=+1.248345348 container died 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:20:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:21 compute-0 ceph-mon[73668]: pgmap v3273: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:20:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:21.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9c7ccd08996275633e13e88599b8a12680324df85cfa5b3d510c1aa56bc4e2-merged.mount: Deactivated successfully.
Oct 02 13:20:22 compute-0 podman[404319]: 2025-10-02 13:20:22.298574275 +0000 UTC m=+2.316250096 container remove 0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:20:22 compute-0 systemd[1]: libpod-conmon-0346f06e4590e8bf1a134e5d4f1f6b11c2294c9148189705f511de7ca31c381b.scope: Deactivated successfully.
Oct 02 13:20:22 compute-0 sudo[404212]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:22 compute-0 sudo[404368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:22 compute-0 sudo[404368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:22 compute-0 sudo[404368]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:22 compute-0 sudo[404393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:22 compute-0 sudo[404393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:22 compute-0 sudo[404393]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:22.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:22 compute-0 sudo[404418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:22 compute-0 sudo[404418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:22 compute-0 sudo[404418]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:22 compute-0 sudo[404443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:20:22 compute-0 sudo[404443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 677 KiB/s wr, 45 op/s
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:22.916683321 +0000 UTC m=+0.023083241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:23.017704108 +0000 UTC m=+0.124104038 container create 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:20:23 compute-0 systemd[1]: Started libpod-conmon-1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6.scope.
Oct 02 13:20:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:23.298691731 +0000 UTC m=+0.405091651 container init 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:23.310024735 +0000 UTC m=+0.416424635 container start 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:20:23 compute-0 compassionate_borg[404527]: 167 167
Oct 02 13:20:23 compute-0 systemd[1]: libpod-1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6.scope: Deactivated successfully.
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:23.430518038 +0000 UTC m=+0.536917958 container attach 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:23 compute-0 podman[404511]: 2025-10-02 13:20:23.431585565 +0000 UTC m=+0.537985465 container died 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b12b04e24eb34abb27433901c8a831ddc2823b1488d9b8bc11b5c10d04c28b-merged.mount: Deactivated successfully.
Oct 02 13:20:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:23.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:24 compute-0 podman[404511]: 2025-10-02 13:20:24.149619618 +0000 UTC m=+1.256019528 container remove 1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 13:20:24 compute-0 systemd[1]: libpod-conmon-1dbadda5999f6adfa17383ab8cb53e0ad8a6177b7fdd46e7ee8b716fd546f1e6.scope: Deactivated successfully.
Oct 02 13:20:24 compute-0 nova_compute[256940]: 2025-10-02 13:20:24.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:24 compute-0 podman[404553]: 2025-10-02 13:20:24.334094293 +0000 UTC m=+0.045303029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:24 compute-0 podman[404553]: 2025-10-02 13:20:24.430226462 +0000 UTC m=+0.141435208 container create c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:20:24 compute-0 ceph-mon[73668]: pgmap v3274: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 677 KiB/s wr, 45 op/s
Oct 02 13:20:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:24 compute-0 systemd[1]: Started libpod-conmon-c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5.scope.
Oct 02 13:20:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 166 KiB/s wr, 38 op/s
Oct 02 13:20:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da8d41a26f9ac787ea7515716da2835415491f74654fc67d137522a77ab870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da8d41a26f9ac787ea7515716da2835415491f74654fc67d137522a77ab870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da8d41a26f9ac787ea7515716da2835415491f74654fc67d137522a77ab870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da8d41a26f9ac787ea7515716da2835415491f74654fc67d137522a77ab870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:24 compute-0 podman[404553]: 2025-10-02 13:20:24.70943331 +0000 UTC m=+0.420642096 container init c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:20:24 compute-0 podman[404553]: 2025-10-02 13:20:24.717606522 +0000 UTC m=+0.428815248 container start c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:24 compute-0 podman[404553]: 2025-10-02 13:20:24.895134406 +0000 UTC m=+0.606343142 container attach c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:20:25 compute-0 nova_compute[256940]: 2025-10-02 13:20:25.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]: {
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:     "1": [
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:         {
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "devices": [
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "/dev/loop3"
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             ],
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "lv_name": "ceph_lv0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "lv_size": "7511998464",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "name": "ceph_lv0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "tags": {
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.cluster_name": "ceph",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.crush_device_class": "",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.encrypted": "0",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.osd_id": "1",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.type": "block",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:                 "ceph.vdo": "0"
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             },
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "type": "block",
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:             "vg_name": "ceph_vg0"
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:         }
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]:     ]
Oct 02 13:20:25 compute-0 bold_zhukovsky[404571]: }
Oct 02 13:20:25 compute-0 systemd[1]: libpod-c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5.scope: Deactivated successfully.
Oct 02 13:20:25 compute-0 podman[404581]: 2025-10-02 13:20:25.585425068 +0000 UTC m=+0.044881877 container died c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:20:25 compute-0 ceph-mon[73668]: pgmap v3275: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 166 KiB/s wr, 38 op/s
Oct 02 13:20:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:25.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-00da8d41a26f9ac787ea7515716da2835415491f74654fc67d137522a77ab870-merged.mount: Deactivated successfully.
Oct 02 13:20:26 compute-0 podman[404581]: 2025-10-02 13:20:26.23026812 +0000 UTC m=+0.689724929 container remove c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:20:26 compute-0 systemd[1]: libpod-conmon-c617cf765a8588b36ffd2d66b9ce21968bbb465bde8ca4c21a87efe9c1aef7c5.scope: Deactivated successfully.
Oct 02 13:20:26 compute-0 sudo[404443]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:26 compute-0 sudo[404598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:26 compute-0 sudo[404598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:26 compute-0 sudo[404598]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:26 compute-0 sudo[404623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:26 compute-0 sudo[404623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:26 compute-0 sudo[404623]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:26 compute-0 sudo[404648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:26 compute-0 sudo[404648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:26 compute-0 sudo[404648]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:26 compute-0 sudo[404673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:20:26 compute-0 sudo[404673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:26.515 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:26.516 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:20:26.516 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 57 KiB/s wr, 14 op/s
Oct 02 13:20:26 compute-0 podman[404738]: 2025-10-02 13:20:26.795113202 +0000 UTC m=+0.020512285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:26 compute-0 podman[404738]: 2025-10-02 13:20:26.946083806 +0000 UTC m=+0.171482809 container create f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:27 compute-0 systemd[1]: Started libpod-conmon-f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829.scope.
Oct 02 13:20:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:27 compute-0 podman[404738]: 2025-10-02 13:20:27.164498433 +0000 UTC m=+0.389897436 container init f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:20:27 compute-0 podman[404738]: 2025-10-02 13:20:27.174279927 +0000 UTC m=+0.399678930 container start f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:20:27 compute-0 nifty_swanson[404756]: 167 167
Oct 02 13:20:27 compute-0 systemd[1]: libpod-f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829.scope: Deactivated successfully.
Oct 02 13:20:27 compute-0 podman[404738]: 2025-10-02 13:20:27.412832828 +0000 UTC m=+0.638231881 container attach f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:20:27 compute-0 podman[404738]: 2025-10-02 13:20:27.414159813 +0000 UTC m=+0.639558856 container died f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:27 compute-0 ceph-mon[73668]: pgmap v3276: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 57 KiB/s wr, 14 op/s
Oct 02 13:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf553889ddd5e48ef27bb784cc3629be2f88c7cb72280023169b421a9abdd45-merged.mount: Deactivated successfully.
Oct 02 13:20:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:27.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:27 compute-0 podman[404738]: 2025-10-02 13:20:27.780221516 +0000 UTC m=+1.005620519 container remove f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:20:27 compute-0 systemd[1]: libpod-conmon-f1061fc308c9f0176e005019c4e202fcff24ea3119dfbf59a388f9c69f346829.scope: Deactivated successfully.
Oct 02 13:20:27 compute-0 podman[404780]: 2025-10-02 13:20:27.972416392 +0000 UTC m=+0.048511752 container create 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:20:28 compute-0 systemd[1]: Started libpod-conmon-356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617.scope.
Oct 02 13:20:28 compute-0 podman[404780]: 2025-10-02 13:20:27.947071603 +0000 UTC m=+0.023166993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d998f874391dd744459e0f9eff903c371f12a2f068cb6943c415970fbdb57d84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d998f874391dd744459e0f9eff903c371f12a2f068cb6943c415970fbdb57d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d998f874391dd744459e0f9eff903c371f12a2f068cb6943c415970fbdb57d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d998f874391dd744459e0f9eff903c371f12a2f068cb6943c415970fbdb57d84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:28 compute-0 podman[404780]: 2025-10-02 13:20:28.117956965 +0000 UTC m=+0.194052355 container init 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:28 compute-0 podman[404780]: 2025-10-02 13:20:28.124744212 +0000 UTC m=+0.200839562 container start 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:20:28 compute-0 podman[404780]: 2025-10-02 13:20:28.195494401 +0000 UTC m=+0.271589781 container attach 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 11 op/s
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:20:28
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr']
Oct 02 13:20:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]: {
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:         "osd_id": 1,
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:         "type": "bluestore"
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]:     }
Oct 02 13:20:28 compute-0 friendly_ptolemy[404797]: }
Oct 02 13:20:28 compute-0 systemd[1]: libpod-356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617.scope: Deactivated successfully.
Oct 02 13:20:28 compute-0 podman[404780]: 2025-10-02 13:20:28.936085251 +0000 UTC m=+1.012180611 container died 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d998f874391dd744459e0f9eff903c371f12a2f068cb6943c415970fbdb57d84-merged.mount: Deactivated successfully.
Oct 02 13:20:29 compute-0 nova_compute[256940]: 2025-10-02 13:20:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:29 compute-0 podman[404780]: 2025-10-02 13:20:29.310879212 +0000 UTC m=+1.386974572 container remove 356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:20:29 compute-0 systemd[1]: libpod-conmon-356e6dde8b2d9602b35a273f50854abf723f91e6e05453aaa851afa8996c0617.scope: Deactivated successfully.
Oct 02 13:20:29 compute-0 sudo[404673]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:20:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:20:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 08a3c912-9962-4748-9daa-970589f9aae4 does not exist
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f7172bfb-b883-4910-9481-53da49767e2c does not exist
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b989d865-f379-4e1d-8e3c-9ec83e248ffb does not exist
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:20:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:20:29 compute-0 sudo[404833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:29 compute-0 sudo[404833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:29 compute-0 sudo[404833]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:29.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:29 compute-0 sudo[404858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:20:29 compute-0 sudo[404858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:29 compute-0 sudo[404858]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:30 compute-0 nova_compute[256940]: 2025-10-02 13:20:30.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 26 KiB/s wr, 12 op/s
Oct 02 13:20:30 compute-0 ceph-mon[73668]: pgmap v3277: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 11 op/s
Oct 02 13:20:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:32 compute-0 ceph-mon[73668]: pgmap v3278: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 26 KiB/s wr, 12 op/s
Oct 02 13:20:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/89902622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:33 compute-0 ceph-mon[73668]: pgmap v3279: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/366028002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:33.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:34 compute-0 nova_compute[256940]: 2025-10-02 13:20:34.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:34 compute-0 podman[404886]: 2025-10-02 13:20:34.396872771 +0000 UTC m=+0.063747488 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:20:34 compute-0 podman[404885]: 2025-10-02 13:20:34.397711293 +0000 UTC m=+0.064197040 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 13:20:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:34.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3264101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:35 compute-0 nova_compute[256940]: 2025-10-02 13:20:35.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:35.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:36 compute-0 nova_compute[256940]: 2025-10-02 13:20:36.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.352863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236352945, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2148, "num_deletes": 254, "total_data_size": 3921070, "memory_usage": 3983024, "flush_reason": "Manual Compaction"}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 02 13:20:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:36.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236587451, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3812293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71078, "largest_seqno": 73225, "table_properties": {"data_size": 3802420, "index_size": 6302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20243, "raw_average_key_size": 20, "raw_value_size": 3782831, "raw_average_value_size": 3856, "num_data_blocks": 273, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411037, "oldest_key_time": 1759411037, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 241845 microseconds, and 8162 cpu microseconds.
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.587491) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3812293 bytes OK
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.594749) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.610619) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.610637) EVENT_LOG_v1 {"time_micros": 1759411236610631, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.610655) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3912296, prev total WAL file size 3913564, number of live WAL files 2.
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.612643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3722KB)], [164(9909KB)]
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236612709, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13959349, "oldest_snapshot_seqno": -1}
Oct 02 13:20:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct 02 13:20:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3983555616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:36 compute-0 ceph-mon[73668]: pgmap v3280: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 9980 keys, 12021841 bytes, temperature: kUnknown
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236835992, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 12021841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11958373, "index_size": 37435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263302, "raw_average_key_size": 26, "raw_value_size": 11784729, "raw_average_value_size": 1180, "num_data_blocks": 1420, "num_entries": 9980, "num_filter_entries": 9980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.836244) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12021841 bytes
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.838436) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.5 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10508, records dropped: 528 output_compression: NoCompression
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.838463) EVENT_LOG_v1 {"time_micros": 1759411236838452, "job": 102, "event": "compaction_finished", "compaction_time_micros": 223339, "compaction_time_cpu_micros": 28784, "output_level": 6, "num_output_files": 1, "total_output_size": 12021841, "num_input_records": 10508, "num_output_records": 9980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236839251, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236841418, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.612571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.841514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.841521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.841523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.841525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:20:36.841527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:20:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120276339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.647 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.788 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.789 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4138MB free_disk=20.942710876464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.789 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.789 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:37 compute-0 sudo[404949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:37 compute-0 sudo[404949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:37 compute-0 sudo[404949]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.914 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.914 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:20:37 compute-0 sudo[404974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:37 compute-0 sudo[404974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:37 compute-0 sudo[404974]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:37 compute-0 nova_compute[256940]: 2025-10-02 13:20:37.936 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:38 compute-0 ceph-mon[73668]: pgmap v3281: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct 02 13:20:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4120276339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:38.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 14 KiB/s wr, 2 op/s
Oct 02 13:20:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:20:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506392747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:38 compute-0 nova_compute[256940]: 2025-10-02 13:20:38.716 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.780s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:38 compute-0 nova_compute[256940]: 2025-10-02 13:20:38.722 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:20:38 compute-0 nova_compute[256940]: 2025-10-02 13:20:38.778 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:20:38 compute-0 nova_compute[256940]: 2025-10-02 13:20:38.781 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:20:38 compute-0 nova_compute[256940]: 2025-10-02 13:20:38.782 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/506392747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:39 compute-0 ovn_controller[148123]: 2025-10-02T13:20:39Z|01060|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 02 13:20:39 compute-0 nova_compute[256940]: 2025-10-02 13:20:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:39.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:40 compute-0 nova_compute[256940]: 2025-10-02 13:20:40.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:40 compute-0 ceph-mon[73668]: pgmap v3282: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 14 KiB/s wr, 2 op/s
Oct 02 13:20:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:40.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 02 13:20:40 compute-0 nova_compute[256940]: 2025-10-02 13:20:40.783 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:40 compute-0 nova_compute[256940]: 2025-10-02 13:20:40.783 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:40 compute-0 nova_compute[256940]: 2025-10-02 13:20:40.783 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:20:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:20:41 compute-0 nova_compute[256940]: 2025-10-02 13:20:41.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:41 compute-0 ceph-mon[73668]: pgmap v3283: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 02 13:20:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:41.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:42.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 02 13:20:43 compute-0 nova_compute[256940]: 2025-10-02 13:20:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:43.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:43 compute-0 ceph-mon[73668]: pgmap v3284: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 02 13:20:44 compute-0 nova_compute[256940]: 2025-10-02 13:20:44.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:44.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 14 KiB/s wr, 4 op/s
Oct 02 13:20:45 compute-0 nova_compute[256940]: 2025-10-02 13:20:45.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:45.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:46 compute-0 ceph-mon[73668]: pgmap v3285: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 14 KiB/s wr, 4 op/s
Oct 02 13:20:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:46.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:48 compute-0 nova_compute[256940]: 2025-10-02 13:20:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:48 compute-0 ceph-mon[73668]: pgmap v3286: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:48 compute-0 podman[405026]: 2025-10-02 13:20:48.376839735 +0000 UTC m=+0.049613071 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:20:48 compute-0 podman[405027]: 2025-10-02 13:20:48.401809694 +0000 UTC m=+0.071637043 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 13:20:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:49 compute-0 nova_compute[256940]: 2025-10-02 13:20:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:49 compute-0 ceph-mon[73668]: pgmap v3287: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:49.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:50 compute-0 nova_compute[256940]: 2025-10-02 13:20:50.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:50.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:20:51 compute-0 nova_compute[256940]: 2025-10-02 13:20:51.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:51 compute-0 nova_compute[256940]: 2025-10-02 13:20:51.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:51 compute-0 nova_compute[256940]: 2025-10-02 13:20:51.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:20:51 compute-0 nova_compute[256940]: 2025-10-02 13:20:51.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:20:51 compute-0 nova_compute[256940]: 2025-10-02 13:20:51.233 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:20:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:51 compute-0 ceph-mon[73668]: pgmap v3288: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:20:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:51.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:52.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 13:20:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:54 compute-0 ceph-mon[73668]: pgmap v3289: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 13:20:54 compute-0 nova_compute[256940]: 2025-10-02 13:20:54.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:54.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 71 op/s
Oct 02 13:20:55 compute-0 nova_compute[256940]: 2025-10-02 13:20:55.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:55.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:56 compute-0 nova_compute[256940]: 2025-10-02 13:20:56.230 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:56 compute-0 ceph-mon[73668]: pgmap v3290: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 71 op/s
Oct 02 13:20:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:56.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 9.8 KiB/s wr, 78 op/s
Oct 02 13:20:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:58 compute-0 sudo[405077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:58 compute-0 sudo[405077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:58 compute-0 sudo[405077]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:58 compute-0 sudo[405102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:58 compute-0 sudo[405102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:58 compute-0 sudo[405102]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:58 compute-0 ceph-mon[73668]: pgmap v3291: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 9.8 KiB/s wr, 78 op/s
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:20:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:20:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:20:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.8 KiB/s wr, 51 op/s
Oct 02 13:20:59 compute-0 nova_compute[256940]: 2025-10-02 13:20:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:59 compute-0 ceph-mon[73668]: pgmap v3292: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.8 KiB/s wr, 51 op/s
Oct 02 13:20:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:20:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:00 compute-0 nova_compute[256940]: 2025-10-02 13:21:00.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 02 13:21:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:01 compute-0 ceph-mon[73668]: pgmap v3293: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 02 13:21:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:01.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 02 13:21:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:03 compute-0 ceph-mon[73668]: pgmap v3294: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 02 13:21:04 compute-0 nova_compute[256940]: 2025-10-02 13:21:04.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:21:05 compute-0 nova_compute[256940]: 2025-10-02 13:21:05.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:05 compute-0 podman[405131]: 2025-10-02 13:21:05.424158678 +0000 UTC m=+0.078549773 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 13:21:05 compute-0 podman[405132]: 2025-10-02 13:21:05.432845694 +0000 UTC m=+0.082548197 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:21:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:05.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:06 compute-0 ceph-mon[73668]: pgmap v3295: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:21:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:06.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 24 KiB/s wr, 45 op/s
Oct 02 13:21:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:07.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:08 compute-0 ceph-mon[73668]: pgmap v3296: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 24 KiB/s wr, 45 op/s
Oct 02 13:21:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 14 KiB/s wr, 35 op/s
Oct 02 13:21:09 compute-0 nova_compute[256940]: 2025-10-02 13:21:09.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:09.378 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:21:09 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:09.379 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:21:09 compute-0 nova_compute[256940]: 2025-10-02 13:21:09.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:10 compute-0 ceph-mon[73668]: pgmap v3297: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 14 KiB/s wr, 35 op/s
Oct 02 13:21:10 compute-0 nova_compute[256940]: 2025-10-02 13:21:10.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 15 KiB/s wr, 35 op/s
Oct 02 13:21:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:12 compute-0 ceph-mon[73668]: pgmap v3298: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 15 KiB/s wr, 35 op/s
Oct 02 13:21:12 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:12.381 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct 02 13:21:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:13.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:14 compute-0 ceph-mon[73668]: pgmap v3299: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct 02 13:21:14 compute-0 nova_compute[256940]: 2025-10-02 13:21:14.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:14.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 182 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 20 KiB/s wr, 15 op/s
Oct 02 13:21:15 compute-0 nova_compute[256940]: 2025-10-02 13:21:15.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:15 compute-0 ceph-mon[73668]: pgmap v3300: 305 pgs: 305 active+clean; 182 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 20 KiB/s wr, 15 op/s
Oct 02 13:21:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:16.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 39 op/s
Oct 02 13:21:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:17 compute-0 ceph-mon[73668]: pgmap v3301: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 39 op/s
Oct 02 13:21:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2487810029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:17.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:18 compute-0 sudo[405176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:18 compute-0 sudo[405176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:18 compute-0 sudo[405176]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:18 compute-0 sudo[405201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:18 compute-0 sudo[405201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:18 compute-0 sudo[405201]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 38 op/s
Oct 02 13:21:19 compute-0 nova_compute[256940]: 2025-10-02 13:21:19.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:19 compute-0 podman[405227]: 2025-10-02 13:21:19.395772266 +0000 UTC m=+0.059383425 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:21:19 compute-0 podman[405228]: 2025-10-02 13:21:19.45825455 +0000 UTC m=+0.121339705 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:21:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Oct 02 13:21:19 compute-0 ceph-mon[73668]: pgmap v3302: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 38 op/s
Oct 02 13:21:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Oct 02 13:21:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Oct 02 13:21:20 compute-0 nova_compute[256940]: 2025-10-02 13:21:20.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:20.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 66 op/s
Oct 02 13:21:21 compute-0 ceph-mon[73668]: osdmap e406: 3 total, 3 up, 3 in
Oct 02 13:21:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:21.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:22 compute-0 ceph-mon[73668]: pgmap v3304: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 66 op/s
Oct 02 13:21:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 7.6 KiB/s wr, 66 op/s
Oct 02 13:21:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:23.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:23 compute-0 nova_compute[256940]: 2025-10-02 13:21:23.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:23 compute-0 nova_compute[256940]: 2025-10-02 13:21:23.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:24 compute-0 ceph-mon[73668]: pgmap v3305: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 7.6 KiB/s wr, 66 op/s
Oct 02 13:21:24 compute-0 nova_compute[256940]: 2025-10-02 13:21:24.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Oct 02 13:21:25 compute-0 nova_compute[256940]: 2025-10-02 13:21:25.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:26 compute-0 ceph-mon[73668]: pgmap v3306: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Oct 02 13:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:26.516 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:26.516 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:26.516 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Oct 02 13:21:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 29 op/s
Oct 02 13:21:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Oct 02 13:21:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Oct 02 13:21:27 compute-0 ceph-mon[73668]: pgmap v3307: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 29 op/s
Oct 02 13:21:27 compute-0 ceph-mon[73668]: osdmap e407: 3 total, 3 up, 3 in
Oct 02 13:21:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:28.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:21:28
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Oct 02 13:21:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:21:29 compute-0 nova_compute[256940]: 2025-10-02 13:21:29.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:21:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:21:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:29.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:29 compute-0 ceph-mon[73668]: pgmap v3309: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 02 13:21:30 compute-0 sudo[405280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:30 compute-0 sudo[405280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-0 sudo[405280]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-0 sudo[405305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:30 compute-0 sudo[405305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-0 sudo[405305]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-0 nova_compute[256940]: 2025-10-02 13:21:30.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:30 compute-0 sudo[405330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:30 compute-0 sudo[405330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-0 sudo[405330]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-0 sudo[405355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:21:30 compute-0 sudo[405355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:21:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:21:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 716 B/s wr, 9 op/s
Oct 02 13:21:30 compute-0 sudo[405355]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:21:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:21:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 75916612-bf59-4f32-a1ab-0393a8860ef3 does not exist
Oct 02 13:21:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a384a442-bfc0-4fb3-82ee-d94a98b82180 does not exist
Oct 02 13:21:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f447c099-23d8-4747-950b-ad0b2b07c8f5 does not exist
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:21:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-0 sudo[405410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:31 compute-0 sudo[405410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:31 compute-0 sudo[405410]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:31 compute-0 sudo[405435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-0 ceph-mon[73668]: pgmap v3310: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 716 B/s wr, 9 op/s
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:21:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-0 sudo[405435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:31 compute-0 sudo[405435]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:31 compute-0 sudo[405460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:31 compute-0 sudo[405460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:31 compute-0 sudo[405460]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:31 compute-0 sudo[405485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:21:31 compute-0 sudo[405485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:31.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.09347874 +0000 UTC m=+0.025002651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.328225272 +0000 UTC m=+0.259749153 container create d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:32 compute-0 systemd[1]: Started libpod-conmon-d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a.scope.
Oct 02 13:21:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.506021303 +0000 UTC m=+0.437545194 container init d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.516345621 +0000 UTC m=+0.447869512 container start d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:32 compute-0 romantic_ramanujan[405567]: 167 167
Oct 02 13:21:32 compute-0 systemd[1]: libpod-d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a.scope: Deactivated successfully.
Oct 02 13:21:32 compute-0 conmon[405567]: conmon d1810f1e4da6f6df412b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a.scope/container/memory.events
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.573432435 +0000 UTC m=+0.504956406 container attach d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:21:32 compute-0 podman[405550]: 2025-10-02 13:21:32.574068112 +0000 UTC m=+0.505592033 container died d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 716 B/s wr, 8 op/s
Oct 02 13:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bde8e528552299dabb77a1ad0d340761d897000c0f1f1866dce03aea5e829d5-merged.mount: Deactivated successfully.
Oct 02 13:21:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1089819506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:33 compute-0 podman[405550]: 2025-10-02 13:21:33.121687766 +0000 UTC m=+1.053211687 container remove d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:21:33 compute-0 systemd[1]: libpod-conmon-d1810f1e4da6f6df412b32a1bb4755f56dc8bfab085ca7b55f26b76fcd8ac33a.scope: Deactivated successfully.
Oct 02 13:21:33 compute-0 podman[405594]: 2025-10-02 13:21:33.262002693 +0000 UTC m=+0.028348438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:33 compute-0 podman[405594]: 2025-10-02 13:21:33.387257519 +0000 UTC m=+0.153603255 container create 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:21:33 compute-0 systemd[1]: Started libpod-conmon-54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002.scope.
Oct 02 13:21:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:33 compute-0 podman[405594]: 2025-10-02 13:21:33.51580723 +0000 UTC m=+0.282152965 container init 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:21:33 compute-0 podman[405594]: 2025-10-02 13:21:33.528891961 +0000 UTC m=+0.295237696 container start 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:21:33 compute-0 podman[405594]: 2025-10-02 13:21:33.546296943 +0000 UTC m=+0.312642688 container attach 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:21:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:33.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:34 compute-0 ceph-mon[73668]: pgmap v3311: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 716 B/s wr, 8 op/s
Oct 02 13:21:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2728240605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1768222646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:34 compute-0 priceless_turing[405610]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:21:34 compute-0 priceless_turing[405610]: --> relative data size: 1.0
Oct 02 13:21:34 compute-0 priceless_turing[405610]: --> All data devices are unavailable
Oct 02 13:21:34 compute-0 nova_compute[256940]: 2025-10-02 13:21:34.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:34 compute-0 systemd[1]: libpod-54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002.scope: Deactivated successfully.
Oct 02 13:21:34 compute-0 podman[405594]: 2025-10-02 13:21:34.338564016 +0000 UTC m=+1.104909751 container died 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84e0a8de1bcf62cd5c9613b233acb5429765e098a1205da07abe8bea54eb32b-merged.mount: Deactivated successfully.
Oct 02 13:21:34 compute-0 podman[405594]: 2025-10-02 13:21:34.396847651 +0000 UTC m=+1.163193386 container remove 54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:21:34 compute-0 systemd[1]: libpod-conmon-54db2f9c92718bafcb6f883c78b0109ce903f0a5a3bbab63ba5b8498c04c2002.scope: Deactivated successfully.
Oct 02 13:21:34 compute-0 sudo[405485]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:34 compute-0 sudo[405639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:34 compute-0 sudo[405639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:34 compute-0 sudo[405639]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:34 compute-0 sudo[405664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:34 compute-0 sudo[405664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:34 compute-0 sudo[405664]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:34 compute-0 sudo[405689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:34 compute-0 sudo[405689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:34 compute-0 sudo[405689]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:34.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct 02 13:21:34 compute-0 sudo[405714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:21:34 compute-0 sudo[405714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:34 compute-0 podman[405780]: 2025-10-02 13:21:34.984572848 +0000 UTC m=+0.043840621 container create 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:21:35 compute-0 systemd[1]: Started libpod-conmon-136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604.scope.
Oct 02 13:21:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:34.967492134 +0000 UTC m=+0.026759947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:35.06660237 +0000 UTC m=+0.125870173 container init 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:35.07391163 +0000 UTC m=+0.133179393 container start 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:35.078043917 +0000 UTC m=+0.137311730 container attach 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:21:35 compute-0 gifted_pascal[405796]: 167 167
Oct 02 13:21:35 compute-0 systemd[1]: libpod-136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604.scope: Deactivated successfully.
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:35.081369964 +0000 UTC m=+0.140637767 container died 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-517cec12c1ba596438a78f1ceb23cafc6ea7e93fbec2d600911f21c80a28cff7-merged.mount: Deactivated successfully.
Oct 02 13:21:35 compute-0 podman[405780]: 2025-10-02 13:21:35.130610044 +0000 UTC m=+0.189877797 container remove 136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:35 compute-0 systemd[1]: libpod-conmon-136db4d09687711b331f678616b583dca45b7941a74af47c0b0cb29b80d5c604.scope: Deactivated successfully.
Oct 02 13:21:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/479049458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:35 compute-0 nova_compute[256940]: 2025-10-02 13:21:35.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:35 compute-0 podman[405822]: 2025-10-02 13:21:35.294730049 +0000 UTC m=+0.051163270 container create 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:35 compute-0 systemd[1]: Started libpod-conmon-9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f.scope.
Oct 02 13:21:35 compute-0 podman[405822]: 2025-10-02 13:21:35.267632355 +0000 UTC m=+0.024065666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac34656febbbc2bd3263fc30b59b45cd4ece562296abba014893e0a9833aa86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac34656febbbc2bd3263fc30b59b45cd4ece562296abba014893e0a9833aa86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac34656febbbc2bd3263fc30b59b45cd4ece562296abba014893e0a9833aa86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac34656febbbc2bd3263fc30b59b45cd4ece562296abba014893e0a9833aa86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:35 compute-0 podman[405822]: 2025-10-02 13:21:35.387502271 +0000 UTC m=+0.143935532 container init 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 13:21:35 compute-0 podman[405822]: 2025-10-02 13:21:35.3967082 +0000 UTC m=+0.153141431 container start 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:21:35 compute-0 podman[405822]: 2025-10-02 13:21:35.399984515 +0000 UTC m=+0.156417746 container attach 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:35.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:36 compute-0 elegant_edison[405839]: {
Oct 02 13:21:36 compute-0 elegant_edison[405839]:     "1": [
Oct 02 13:21:36 compute-0 elegant_edison[405839]:         {
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "devices": [
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "/dev/loop3"
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             ],
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "lv_name": "ceph_lv0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "lv_size": "7511998464",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "name": "ceph_lv0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "tags": {
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.cluster_name": "ceph",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.crush_device_class": "",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.encrypted": "0",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.osd_id": "1",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.type": "block",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:                 "ceph.vdo": "0"
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             },
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "type": "block",
Oct 02 13:21:36 compute-0 elegant_edison[405839]:             "vg_name": "ceph_vg0"
Oct 02 13:21:36 compute-0 elegant_edison[405839]:         }
Oct 02 13:21:36 compute-0 elegant_edison[405839]:     ]
Oct 02 13:21:36 compute-0 elegant_edison[405839]: }
Oct 02 13:21:36 compute-0 systemd[1]: libpod-9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f.scope: Deactivated successfully.
Oct 02 13:21:36 compute-0 podman[405849]: 2025-10-02 13:21:36.186076537 +0000 UTC m=+0.027502556 container died 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 13:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac34656febbbc2bd3263fc30b59b45cd4ece562296abba014893e0a9833aa86-merged.mount: Deactivated successfully.
Oct 02 13:21:36 compute-0 podman[405855]: 2025-10-02 13:21:36.238070939 +0000 UTC m=+0.060103724 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:21:36 compute-0 ceph-mon[73668]: pgmap v3312: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct 02 13:21:36 compute-0 podman[405849]: 2025-10-02 13:21:36.246505268 +0000 UTC m=+0.087931287 container remove 9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:21:36 compute-0 podman[405848]: 2025-10-02 13:21:36.251671732 +0000 UTC m=+0.073123782 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:21:36 compute-0 systemd[1]: libpod-conmon-9f0587f6d7a7a06f5ee4081b6b42f0762a54145440c4c5aa0358410ca724548f.scope: Deactivated successfully.
Oct 02 13:21:36 compute-0 sudo[405714]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:36 compute-0 sudo[405900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:36 compute-0 sudo[405900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:36 compute-0 sudo[405900]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:36 compute-0 sudo[405925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:36 compute-0 sudo[405925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:36 compute-0 sudo[405925]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:36 compute-0 sudo[405950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:36 compute-0 sudo[405950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:36 compute-0 sudo[405950]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:36 compute-0 sudo[405975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:21:36 compute-0 sudo[405975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:36 compute-0 podman[406041]: 2025-10-02 13:21:36.959520361 +0000 UTC m=+0.049667552 container create a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:37 compute-0 systemd[1]: Started libpod-conmon-a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23.scope.
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:36.940983279 +0000 UTC m=+0.031130500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:37.068125684 +0000 UTC m=+0.158272885 container init a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:37.075330471 +0000 UTC m=+0.165477672 container start a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:21:37 compute-0 confident_gagarin[406058]: 167 167
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:37.079975652 +0000 UTC m=+0.170122833 container attach a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:21:37 compute-0 systemd[1]: libpod-a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23.scope: Deactivated successfully.
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:37.081259515 +0000 UTC m=+0.171406696 container died a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-152fadbfe66a13c6af5bf563639648fa40c89f4d80c3e77495a9d3ab8e8924ab-merged.mount: Deactivated successfully.
Oct 02 13:21:37 compute-0 podman[406041]: 2025-10-02 13:21:37.130881875 +0000 UTC m=+0.221029056 container remove a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:21:37 compute-0 systemd[1]: libpod-conmon-a4c4a2439c5ba9173f373bab0a1c4650cd379860bba4d09550132787d0385f23.scope: Deactivated successfully.
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.235 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:37 compute-0 podman[406081]: 2025-10-02 13:21:37.290690779 +0000 UTC m=+0.040356400 container create e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:21:37 compute-0 systemd[1]: Started libpod-conmon-e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31.scope.
Oct 02 13:21:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88fbc269f463a7ac8b3f9217eedc9151d16a1cdfbab530f04ea79679050c725/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88fbc269f463a7ac8b3f9217eedc9151d16a1cdfbab530f04ea79679050c725/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88fbc269f463a7ac8b3f9217eedc9151d16a1cdfbab530f04ea79679050c725/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88fbc269f463a7ac8b3f9217eedc9151d16a1cdfbab530f04ea79679050c725/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:21:37 compute-0 podman[406081]: 2025-10-02 13:21:37.274919349 +0000 UTC m=+0.024584890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:21:37 compute-0 podman[406081]: 2025-10-02 13:21:37.371843648 +0000 UTC m=+0.121509199 container init e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:37 compute-0 podman[406081]: 2025-10-02 13:21:37.381098459 +0000 UTC m=+0.130763990 container start e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:21:37 compute-0 podman[406081]: 2025-10-02 13:21:37.385397421 +0000 UTC m=+0.135062972 container attach e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131645090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.686 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:37.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.859 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.860 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.861 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.861 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.916 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.917 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.931 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.951 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.951 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.962 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:21:37 compute-0 nova_compute[256940]: 2025-10-02 13:21:37.985 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.007 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:38 compute-0 happy_jackson[406098]: {
Oct 02 13:21:38 compute-0 happy_jackson[406098]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:21:38 compute-0 happy_jackson[406098]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:21:38 compute-0 happy_jackson[406098]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:21:38 compute-0 happy_jackson[406098]:         "osd_id": 1,
Oct 02 13:21:38 compute-0 happy_jackson[406098]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:21:38 compute-0 happy_jackson[406098]:         "type": "bluestore"
Oct 02 13:21:38 compute-0 happy_jackson[406098]:     }
Oct 02 13:21:38 compute-0 happy_jackson[406098]: }
Oct 02 13:21:38 compute-0 ceph-mon[73668]: pgmap v3313: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4131645090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:38 compute-0 systemd[1]: libpod-e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31.scope: Deactivated successfully.
Oct 02 13:21:38 compute-0 podman[406081]: 2025-10-02 13:21:38.270832846 +0000 UTC m=+1.020498407 container died e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88fbc269f463a7ac8b3f9217eedc9151d16a1cdfbab530f04ea79679050c725-merged.mount: Deactivated successfully.
Oct 02 13:21:38 compute-0 podman[406081]: 2025-10-02 13:21:38.334834709 +0000 UTC m=+1.084500240 container remove e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jackson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:21:38 compute-0 systemd[1]: libpod-conmon-e1ca819c030a53b8e1cdb29f853675a7d96b3b4891fe46bcc8819400e9a48b31.scope: Deactivated successfully.
Oct 02 13:21:38 compute-0 sudo[405975]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:21:38 compute-0 sudo[406171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:38 compute-0 sudo[406171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-0 sudo[406171]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:21:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7082ed1f-34d3-44c8-97eb-e2177c71dea2 does not exist
Oct 02 13:21:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ee0b7cc7-23af-4a07-a78d-1b8e74ceb641 does not exist
Oct 02 13:21:38 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2adba914-1c38-4454-9de8-2de78e31bc4a does not exist
Oct 02 13:21:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314671866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.469 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.474 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:21:38 compute-0 sudo[406198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:38 compute-0 sudo[406197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:38 compute-0 sudo[406198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-0 sudo[406197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-0 sudo[406197]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-0 sudo[406198]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.500 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.502 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:21:38 compute-0 nova_compute[256940]: 2025-10-02 13:21:38.503 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:38 compute-0 sudo[406249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:21:38 compute-0 sudo[406249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-0 sudo[406249]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:39 compute-0 nova_compute[256940]: 2025-10-02 13:21:39.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2314671866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:39 compute-0 ceph-mon[73668]: pgmap v3314: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:39 compute-0 nova_compute[256940]: 2025-10-02 13:21:39.504 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:39.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:40 compute-0 nova_compute[256940]: 2025-10-02 13:21:40.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:40 compute-0 nova_compute[256940]: 2025-10-02 13:21:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:40 compute-0 nova_compute[256940]: 2025-10-02 13:21:40.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:21:40 compute-0 nova_compute[256940]: 2025-10-02 13:21:40.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:21:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:21:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:41 compute-0 ceph-mon[73668]: pgmap v3315: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:42 compute-0 nova_compute[256940]: 2025-10-02 13:21:42.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:43 compute-0 ceph-mon[73668]: pgmap v3316: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:43.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:44 compute-0 nova_compute[256940]: 2025-10-02 13:21:44.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:44.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:45 compute-0 nova_compute[256940]: 2025-10-02 13:21:45.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:45 compute-0 nova_compute[256940]: 2025-10-02 13:21:45.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:45 compute-0 ceph-mon[73668]: pgmap v3317: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:46.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:47.672 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:21:47 compute-0 nova_compute[256940]: 2025-10-02 13:21:47.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:47 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:47.674 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:21:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:47 compute-0 ceph-mon[73668]: pgmap v3318: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:48 compute-0 nova_compute[256940]: 2025-10-02 13:21:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:49 compute-0 nova_compute[256940]: 2025-10-02 13:21:49.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:49.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:50 compute-0 ceph-mon[73668]: pgmap v3319: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:50 compute-0 nova_compute[256940]: 2025-10-02 13:21:50.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:50 compute-0 podman[406281]: 2025-10-02 13:21:50.415984668 +0000 UTC m=+0.088860461 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:21:50 compute-0 podman[406282]: 2025-10-02 13:21:50.436702196 +0000 UTC m=+0.109057305 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:21:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:50.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:51 compute-0 nova_compute[256940]: 2025-10-02 13:21:51.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:51 compute-0 nova_compute[256940]: 2025-10-02 13:21:51.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:21:51 compute-0 nova_compute[256940]: 2025-10-02 13:21:51.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:21:51 compute-0 nova_compute[256940]: 2025-10-02 13:21:51.228 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:21:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:51.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:52 compute-0 ceph-mon[73668]: pgmap v3320: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:52 compute-0 nova_compute[256940]: 2025-10-02 13:21:52.223 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:54 compute-0 ceph-mon[73668]: pgmap v3321: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:54 compute-0 nova_compute[256940]: 2025-10-02 13:21:54.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:54.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:55 compute-0 nova_compute[256940]: 2025-10-02 13:21:55.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:55.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:56 compute-0 ceph-mon[73668]: pgmap v3322: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:56 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:21:56.676 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:58 compute-0 ceph-mon[73668]: pgmap v3323: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:21:58 compute-0 sudo[406330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:58 compute-0 sudo[406330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:58 compute-0 sudo[406330]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:58 compute-0 sudo[406355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:58 compute-0 sudo[406355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:58 compute-0 sudo[406355]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:21:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:58.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:21:59 compute-0 nova_compute[256940]: 2025-10-02 13:21:59.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:21:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:59.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:00 compute-0 nova_compute[256940]: 2025-10-02 13:22:00.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:00 compute-0 ceph-mon[73668]: pgmap v3324: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:01.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:02 compute-0 ceph-mon[73668]: pgmap v3325: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:03 compute-0 ceph-mon[73668]: pgmap v3326: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:03.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:04 compute-0 nova_compute[256940]: 2025-10-02 13:22:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:04 compute-0 nova_compute[256940]: 2025-10-02 13:22:04.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:22:04 compute-0 nova_compute[256940]: 2025-10-02 13:22:04.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:04.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:05 compute-0 nova_compute[256940]: 2025-10-02 13:22:05.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:05 compute-0 ceph-mon[73668]: pgmap v3327: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:22:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:22:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:06 compute-0 podman[406384]: 2025-10-02 13:22:06.393084961 +0000 UTC m=+0.060045362 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:22:06 compute-0 podman[406385]: 2025-10-02 13:22:06.403925842 +0000 UTC m=+0.067309500 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:22:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:07 compute-0 nova_compute[256940]: 2025-10-02 13:22:07.232 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:07 compute-0 nova_compute[256940]: 2025-10-02 13:22:07.232 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:22:07 compute-0 nova_compute[256940]: 2025-10-02 13:22:07.248 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:22:07 compute-0 ceph-mon[73668]: pgmap v3328: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.549 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.549 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.574 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.671 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.671 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.679 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.679 2 INFO nova.compute.claims [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:22:09 compute-0 nova_compute[256940]: 2025-10-02 13:22:09.794 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:09 compute-0 ceph-mon[73668]: pgmap v3329: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722731459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.228 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.233 2 DEBUG nova.compute.provider_tree [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.256 2 DEBUG nova.scheduler.client.report [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.296 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.297 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.362 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.363 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.384 2 INFO nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.402 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.509 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.510 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.510 2 INFO nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Creating image(s)
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.542 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.574 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.600 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.604 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:10.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.702 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.703 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.703 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.703 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.725 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.729 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 98e67486-6e8a-4a86-953d-16116ef2f446_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:10 compute-0 nova_compute[256940]: 2025-10-02 13:22:10.764 2 DEBUG nova.policy [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ccfaeecf44e04594afffe1129fec1b0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8b72785e0faa4a5eae22ec67bc75f68e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:22:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1722731459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:11 compute-0 nova_compute[256940]: 2025-10-02 13:22:11.481 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Successfully created port: 621d1801-37d7-4f03-b509-695c4c7a886d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:22:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:11.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:11 compute-0 ceph-mon[73668]: pgmap v3330: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.028 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 98e67486-6e8a-4a86-953d-16116ef2f446_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.102 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] resizing rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.209 2 DEBUG nova.objects.instance [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lazy-loading 'migration_context' on Instance uuid 98e67486-6e8a-4a86-953d-16116ef2f446 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.215 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Successfully updated port: 621d1801-37d7-4f03-b509-695c4c7a886d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.227 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.228 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Ensure instance console log exists: /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.228 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.228 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.229 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.230 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.230 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquired lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.230 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.316 2 DEBUG nova.compute.manager [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-changed-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.317 2 DEBUG nova.compute.manager [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Refreshing instance network info cache due to event network-changed-621d1801-37d7-4f03-b509-695c4c7a886d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.317 2 DEBUG oslo_concurrency.lockutils [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:22:12 compute-0 nova_compute[256940]: 2025-10-02 13:22:12.386 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:22:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Oct 02 13:22:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:12.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:13.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:14 compute-0 ceph-mon[73668]: pgmap v3331: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Oct 02 13:22:14 compute-0 nova_compute[256940]: 2025-10-02 13:22:14.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 140 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 612 KiB/s wr, 24 op/s
Oct 02 13:22:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:14 compute-0 ovn_controller[148123]: 2025-10-02T13:22:14Z|01061|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.704 2 DEBUG nova.network.neutron [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updating instance_info_cache with network_info: [{"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.742 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Releasing lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.742 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Instance network_info: |[{"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.742 2 DEBUG oslo_concurrency.lockutils [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.743 2 DEBUG nova.network.neutron [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Refreshing network info cache for port 621d1801-37d7-4f03-b509-695c4c7a886d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.745 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Start _get_guest_xml network_info=[{"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.751 2 WARNING nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.760 2 DEBUG nova.virt.libvirt.host [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.761 2 DEBUG nova.virt.libvirt.host [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.768 2 DEBUG nova.virt.libvirt.host [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.769 2 DEBUG nova.virt.libvirt.host [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.771 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.771 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.772 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.773 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.774 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.774 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.774 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.775 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.776 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.776 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.776 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.777 2 DEBUG nova.virt.hardware [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:22:15 compute-0 nova_compute[256940]: 2025-10-02 13:22:15.782 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:15.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:16 compute-0 ceph-mon[73668]: pgmap v3332: 305 pgs: 305 active+clean; 140 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 612 KiB/s wr, 24 op/s
Oct 02 13:22:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:22:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179133531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.243 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.277 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.281 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:22:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535284313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:16.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.709 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.710 2 DEBUG nova.virt.libvirt.vif [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1934594404',display_name='tempest-TestServerBasicOps-server-1934594404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1934594404',id=215,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD0T0cxVy/+CsBluQz3FpvOn86+syyACkCgBDVjULHd4O3+sJKnq2ksU+7zrRHyRa9uJjIcI/Ncm5KHiL4GoYTe3rqHT0YBpyyOWOg8yvOS6UTmkIu6947QkN2BF5g+3Q==',key_name='tempest-TestServerBasicOps-2097069966',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b72785e0faa4a5eae22ec67bc75f68e',ramdisk_id='',reservation_id='r-zf5wtvh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-700599071',owner_user_name='tempest-TestServerBasicOps-700599071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:22:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ccfaeecf44e04594afffe1129fec1b0b',uuid=98e67486-6e8a-4a86-953d-16116ef2f446,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.711 2 DEBUG nova.network.os_vif_util [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converting VIF {"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.712 2 DEBUG nova.network.os_vif_util [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.713 2 DEBUG nova.objects.instance [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 98e67486-6e8a-4a86-953d-16116ef2f446 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.728 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <uuid>98e67486-6e8a-4a86-953d-16116ef2f446</uuid>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <name>instance-000000d7</name>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:name>tempest-TestServerBasicOps-server-1934594404</nova:name>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:22:15</nova:creationTime>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:user uuid="ccfaeecf44e04594afffe1129fec1b0b">tempest-TestServerBasicOps-700599071-project-member</nova:user>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:project uuid="8b72785e0faa4a5eae22ec67bc75f68e">tempest-TestServerBasicOps-700599071</nova:project>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <nova:port uuid="621d1801-37d7-4f03-b509-695c4c7a886d">
Oct 02 13:22:16 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <system>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="serial">98e67486-6e8a-4a86-953d-16116ef2f446</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="uuid">98e67486-6e8a-4a86-953d-16116ef2f446</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </system>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <os>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </os>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <features>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </features>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/98e67486-6e8a-4a86-953d-16116ef2f446_disk">
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </source>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/98e67486-6e8a-4a86-953d-16116ef2f446_disk.config">
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </source>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:22:16 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:8a:9d:72"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <target dev="tap621d1801-37"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/console.log" append="off"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <video>
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </video>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:22:16 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:22:16 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:22:16 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:22:16 compute-0 nova_compute[256940]: </domain>
Oct 02 13:22:16 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.729 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Preparing to wait for external event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.730 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.730 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.731 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.732 2 DEBUG nova.virt.libvirt.vif [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1934594404',display_name='tempest-TestServerBasicOps-server-1934594404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1934594404',id=215,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD0T0cxVy/+CsBluQz3FpvOn86+syyACkCgBDVjULHd4O3+sJKnq2ksU+7zrRHyRa9uJjIcI/Ncm5KHiL4GoYTe3rqHT0YBpyyOWOg8yvOS6UTmkIu6947QkN2BF5g+3Q==',key_name='tempest-TestServerBasicOps-2097069966',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b72785e0faa4a5eae22ec67bc75f68e',ramdisk_id='',reservation_id='r-zf5wtvh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-700599071',owner_user_name='tempest-TestServerBasicOps-700599071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:22:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ccfaeecf44e04594afffe1129fec1b0b',uuid=98e67486-6e8a-4a86-953d-16116ef2f446,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.732 2 DEBUG nova.network.os_vif_util [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converting VIF {"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.733 2 DEBUG nova.network.os_vif_util [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.733 2 DEBUG os_vif [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621d1801-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap621d1801-37, col_values=(('external_ids', {'iface-id': '621d1801-37d7-4f03-b509-695c4c7a886d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:9d:72', 'vm-uuid': '98e67486-6e8a-4a86-953d-16116ef2f446'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:16 compute-0 NetworkManager[44981]: <info>  [1759411336.7430] manager: (tap621d1801-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.749 2 INFO os_vif [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37')
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.821 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.822 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.823 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] No VIF found with MAC fa:16:3e:8a:9d:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.824 2 INFO nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Using config drive
Oct 02 13:22:16 compute-0 nova_compute[256940]: 2025-10-02 13:22:16.864 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4179133531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1535284313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:17.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:17 compute-0 nova_compute[256940]: 2025-10-02 13:22:17.911 2 INFO nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Creating config drive at /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config
Oct 02 13:22:17 compute-0 nova_compute[256940]: 2025-10-02 13:22:17.918 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbnalcm_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.053 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbnalcm_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.090 2 DEBUG nova.storage.rbd_utils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] rbd image 98e67486-6e8a-4a86-953d-16116ef2f446_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.094 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config 98e67486-6e8a-4a86-953d-16116ef2f446_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:18 compute-0 ceph-mon[73668]: pgmap v3333: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.269 2 DEBUG oslo_concurrency.processutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config 98e67486-6e8a-4a86-953d-16116ef2f446_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.270 2 INFO nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Deleting local config drive /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446/disk.config because it was imported into RBD.
Oct 02 13:22:18 compute-0 kernel: tap621d1801-37: entered promiscuous mode
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.3196] manager: (tap621d1801-37): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Oct 02 13:22:18 compute-0 ovn_controller[148123]: 2025-10-02T13:22:18Z|01062|binding|INFO|Claiming lport 621d1801-37d7-4f03-b509-695c4c7a886d for this chassis.
Oct 02 13:22:18 compute-0 ovn_controller[148123]: 2025-10-02T13:22:18Z|01063|binding|INFO|621d1801-37d7-4f03-b509-695c4c7a886d: Claiming fa:16:3e:8a:9d:72 10.100.0.9
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.334 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:9d:72 10.100.0.9'], port_security=['fa:16:3e:8a:9d:72 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98e67486-6e8a-4a86-953d-16116ef2f446', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f10919f-48f8-4d93-90be-f6d40210fadd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b72785e0faa4a5eae22ec67bc75f68e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b232f35-94d7-4833-95df-bc2498df7616 789ac8f8-0b3f-4e8f-bfd8-58c28465e093', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59fdd5c5-2edd-4d32-81c5-fe1df2adc6bd, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=621d1801-37d7-4f03-b509-695c4c7a886d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.335 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 621d1801-37d7-4f03-b509-695c4c7a886d in datapath 7f10919f-48f8-4d93-90be-f6d40210fadd bound to our chassis
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.336 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f10919f-48f8-4d93-90be-f6d40210fadd
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.350 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[104b8d7c-ab01-4566-82b5-3c35a4d82182]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.351 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f10919f-41 in ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.354 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f10919f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:22:18 compute-0 systemd-udevd[406753]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.354 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8041c80e-adbe-44f5-9a33-73b1b80ac1a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.355 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e2774f3c-cdbd-46a7-ad52-da88ff1b27ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 systemd-machined[210927]: New machine qemu-109-instance-000000d7.
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.366 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[ec57b4b6-641a-485e-aa3a-0d9c79fb0b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.3711] device (tap621d1801-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.3725] device (tap621d1801-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:22:18 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-000000d7.
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.392 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0d765c-30c5-409b-ab6a-e6c4d87c7504]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 ovn_controller[148123]: 2025-10-02T13:22:18Z|01064|binding|INFO|Setting lport 621d1801-37d7-4f03-b509-695c4c7a886d ovn-installed in OVS
Oct 02 13:22:18 compute-0 ovn_controller[148123]: 2025-10-02T13:22:18Z|01065|binding|INFO|Setting lport 621d1801-37d7-4f03-b509-695c4c7a886d up in Southbound
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.423 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[14f03aaf-a0b1-4fe7-9666-42cda9723007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 systemd-udevd[406757]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.428 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4f241989-38ad-40f3-886e-c9d29a3e150d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.4300] manager: (tap7f10919f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/467)
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.460 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[82d273b4-8dd2-462a-b0a1-712737ac10c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.463 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[dadbd7af-bdff-49bc-8784-040f7b62b436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.4870] device (tap7f10919f-40): carrier: link connected
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.493 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccc3f18-7a8b-4538-a36f-8c4591f621a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.511 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe5e230-07c5-4860-bc16-2b621f936270]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f10919f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:1e:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923397, 'reachable_time': 40175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406785, 'error': None, 'target': 'ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.530 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8ba46e-cc70-460a-b1af-cd06b0d19a05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:1edb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 923397, 'tstamp': 923397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406786, 'error': None, 'target': 'ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.549 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1997d45e-ea09-4927-8c20-398a6035a43c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f10919f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:1e:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923397, 'reachable_time': 40175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 406787, 'error': None, 'target': 'ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.578 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[144924d4-907d-413e-934f-c56b6289a381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.642 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d44600ca-c1bc-45fb-bebb-eda7fcbaeaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f10919f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.644 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.645 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f10919f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 kernel: tap7f10919f-40: entered promiscuous mode
Oct 02 13:22:18 compute-0 NetworkManager[44981]: <info>  [1759411338.6472] manager: (tap7f10919f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.649 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f10919f-40, col_values=(('external_ids', {'iface-id': '1ddfd2e7-e422-4a07-8f3b-992303097483'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:18 compute-0 ovn_controller[148123]: 2025-10-02T13:22:18Z|01066|binding|INFO|Releasing lport 1ddfd2e7-e422-4a07-8f3b-992303097483 from this chassis (sb_readonly=0)
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.664 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f10919f-48f8-4d93-90be-f6d40210fadd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f10919f-48f8-4d93-90be-f6d40210fadd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.665 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[86cddc09-bdd6-416e-8b61-b33cdf40165d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.666 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-7f10919f-48f8-4d93-90be-f6d40210fadd
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/7f10919f-48f8-4d93-90be-f6d40210fadd.pid.haproxy
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 7f10919f-48f8-4d93-90be-f6d40210fadd
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:22:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:18.668 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd', 'env', 'PROCESS_TAG=haproxy-7f10919f-48f8-4d93-90be-f6d40210fadd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f10919f-48f8-4d93-90be-f6d40210fadd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:22:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:18 compute-0 sudo[406794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:18 compute-0 sudo[406794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:18 compute-0 sudo[406794]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:18.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:18 compute-0 sudo[406836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:18 compute-0 sudo[406836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:18 compute-0 sudo[406836]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.953 2 DEBUG nova.network.neutron [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updated VIF entry in instance network info cache for port 621d1801-37d7-4f03-b509-695c4c7a886d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.954 2 DEBUG nova.network.neutron [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updating instance_info_cache with network_info: [{"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:22:18 compute-0 nova_compute[256940]: 2025-10-02 13:22:18.976 2 DEBUG oslo_concurrency.lockutils [req-ab923240-397a-4ffe-ad9f-4f5265673508 req-64268217-b38d-4213-85d1-daabd3a0154b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.009 2 DEBUG nova.compute.manager [req-92295559-588b-4e69-b2bf-b769a981a2ee req-0c4a3f20-1c86-4d2b-b90a-229e26881db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.009 2 DEBUG oslo_concurrency.lockutils [req-92295559-588b-4e69-b2bf-b769a981a2ee req-0c4a3f20-1c86-4d2b-b90a-229e26881db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.009 2 DEBUG oslo_concurrency.lockutils [req-92295559-588b-4e69-b2bf-b769a981a2ee req-0c4a3f20-1c86-4d2b-b90a-229e26881db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.009 2 DEBUG oslo_concurrency.lockutils [req-92295559-588b-4e69-b2bf-b769a981a2ee req-0c4a3f20-1c86-4d2b-b90a-229e26881db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.010 2 DEBUG nova.compute.manager [req-92295559-588b-4e69-b2bf-b769a981a2ee req-0c4a3f20-1c86-4d2b-b90a-229e26881db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Processing event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:22:19 compute-0 podman[406912]: 2025-10-02 13:22:19.056077033 +0000 UTC m=+0.049400115 container create e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 13:22:19 compute-0 systemd[1]: Started libpod-conmon-e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292.scope.
Oct 02 13:22:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66913df2f92b45e4e4444c8c66e760c305f7d09196cafe76b3299e0e16d016cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:19 compute-0 podman[406912]: 2025-10-02 13:22:19.031615118 +0000 UTC m=+0.024938200 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:22:19 compute-0 podman[406912]: 2025-10-02 13:22:19.134787639 +0000 UTC m=+0.128110781 container init e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 13:22:19 compute-0 podman[406912]: 2025-10-02 13:22:19.140901558 +0000 UTC m=+0.134224640 container start e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:19 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [NOTICE]   (406930) : New worker (406932) forked
Oct 02 13:22:19 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [NOTICE]   (406930) : Loading success.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.351 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.352 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411339.3520555, 98e67486-6e8a-4a86-953d-16116ef2f446 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.352 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] VM Started (Lifecycle Event)
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.357 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.362 2 INFO nova.virt.libvirt.driver [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Instance spawned successfully.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.362 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.371 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.374 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.381 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.382 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.382 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.383 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.383 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.384 2 DEBUG nova.virt.libvirt.driver [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.392 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.392 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411339.3544, 98e67486-6e8a-4a86-953d-16116ef2f446 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.393 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] VM Paused (Lifecycle Event)
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.411 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.414 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411339.3565307, 98e67486-6e8a-4a86-953d-16116ef2f446 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.414 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] VM Resumed (Lifecycle Event)
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.435 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.438 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.445 2 INFO nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Took 8.94 seconds to spawn the instance on the hypervisor.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.445 2 DEBUG nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.484 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.544 2 INFO nova.compute.manager [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Took 9.90 seconds to build instance.
Oct 02 13:22:19 compute-0 nova_compute[256940]: 2025-10-02 13:22:19.561 2 DEBUG oslo_concurrency.lockutils [None req-c71ba341-1f2c-43b1-a7b3-2e192d998ac8 ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:20 compute-0 ceph-mon[73668]: pgmap v3334: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 13:22:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:20.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.107 2 DEBUG nova.compute.manager [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.108 2 DEBUG oslo_concurrency.lockutils [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.108 2 DEBUG oslo_concurrency.lockutils [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.108 2 DEBUG oslo_concurrency.lockutils [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.108 2 DEBUG nova.compute.manager [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] No waiting events found dispatching network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.109 2 WARNING nova.compute.manager [req-4c94f4c6-fa34-49eb-8311-8682372badcf req-f01719b6-02cc-4ada-917b-b40af7a7676b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received unexpected event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d for instance with vm_state active and task_state None.
Oct 02 13:22:21 compute-0 podman[406942]: 2025-10-02 13:22:21.406891528 +0000 UTC m=+0.078823990 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:22:21 compute-0 podman[406943]: 2025-10-02 13:22:21.418758896 +0000 UTC m=+0.085862753 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:22:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:21 compute-0 nova_compute[256940]: 2025-10-02 13:22:21.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:22 compute-0 NetworkManager[44981]: <info>  [1759411342.0238] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Oct 02 13:22:22 compute-0 NetworkManager[44981]: <info>  [1759411342.0246] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:22 compute-0 ovn_controller[148123]: 2025-10-02T13:22:22Z|01067|binding|INFO|Releasing lport 1ddfd2e7-e422-4a07-8f3b-992303097483 from this chassis (sb_readonly=0)
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.269 2 DEBUG nova.compute.manager [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-changed-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.269 2 DEBUG nova.compute.manager [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Refreshing instance network info cache due to event network-changed-621d1801-37d7-4f03-b509-695c4c7a886d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.270 2 DEBUG oslo_concurrency.lockutils [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.271 2 DEBUG oslo_concurrency.lockutils [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:22:22 compute-0 nova_compute[256940]: 2025-10-02 13:22:22.271 2 DEBUG nova.network.neutron [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Refreshing network info cache for port 621d1801-37d7-4f03-b509-695c4c7a886d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:22:22 compute-0 ceph-mon[73668]: pgmap v3335: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 13:22:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:22:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:22.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:23 compute-0 nova_compute[256940]: 2025-10-02 13:22:23.258 2 DEBUG nova.network.neutron [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updated VIF entry in instance network info cache for port 621d1801-37d7-4f03-b509-695c4c7a886d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:22:23 compute-0 nova_compute[256940]: 2025-10-02 13:22:23.259 2 DEBUG nova.network.neutron [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updating instance_info_cache with network_info: [{"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:22:23 compute-0 nova_compute[256940]: 2025-10-02 13:22:23.277 2 DEBUG oslo_concurrency.lockutils [req-3ddcf5a9-9fac-4baf-9098-fd00f968156b req-e386e435-ef02-4062-ab23-7d415670925c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-98e67486-6e8a-4a86-953d-16116ef2f446" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:22:23 compute-0 ceph-mon[73668]: pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:22:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:24 compute-0 nova_compute[256940]: 2025-10-02 13:22:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 02 13:22:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:25 compute-0 ceph-mon[73668]: pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 02 13:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:26.518 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:26.518 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:26.519 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct 02 13:22:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:26 compute-0 nova_compute[256940]: 2025-10-02 13:22:26.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:27.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:27 compute-0 ceph-mon[73668]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:22:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:28.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:22:28
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', '.mgr', 'vms', '.rgw.root']
Oct 02 13:22:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:22:29 compute-0 nova_compute[256940]: 2025-10-02 13:22:29.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:22:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:22:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:29.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:29 compute-0 ceph-mon[73668]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:22:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 74 op/s
Oct 02 13:22:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:30.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:31 compute-0 nova_compute[256940]: 2025-10-02 13:22:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:32 compute-0 ovn_controller[148123]: 2025-10-02T13:22:32Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:9d:72 10.100.0.9
Oct 02 13:22:32 compute-0 ovn_controller[148123]: 2025-10-02T13:22:32Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:9d:72 10.100.0.9
Oct 02 13:22:32 compute-0 ceph-mon[73668]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 74 op/s
Oct 02 13:22:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 176 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 835 KiB/s wr, 77 op/s
Oct 02 13:22:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2038774104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/364948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:33.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:34 compute-0 ceph-mon[73668]: pgmap v3341: 305 pgs: 305 active+clean; 176 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 835 KiB/s wr, 77 op/s
Oct 02 13:22:34 compute-0 nova_compute[256940]: 2025-10-02 13:22:34.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 192 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 13:22:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/520594306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:36 compute-0 ceph-mon[73668]: pgmap v3342: 305 pgs: 305 active+clean; 192 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 13:22:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1752492837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:22:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:36 compute-0 nova_compute[256940]: 2025-10-02 13:22:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:37 compute-0 podman[406995]: 2025-10-02 13:22:37.416159188 +0000 UTC m=+0.088588934 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 13:22:37 compute-0 podman[406996]: 2025-10-02 13:22:37.416727543 +0000 UTC m=+0.075975426 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:22:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.226 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.326 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.326 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.326 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.326 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.327 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:38 compute-0 ceph-mon[73668]: pgmap v3343: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:22:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673423010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.765 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:38 compute-0 sudo[407058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:38 compute-0 sudo[407058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-0 sudo[407058]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-0 sudo[407073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:38 compute-0 sudo[407073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-0 sudo[407073]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.872 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:22:38 compute-0 nova_compute[256940]: 2025-10-02 13:22:38.873 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:22:38 compute-0 sudo[407108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:38 compute-0 sudo[407108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-0 sudo[407108]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-0 sudo[407128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:38 compute-0 sudo[407128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-0 sudo[407128]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:39 compute-0 sudo[407160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:39 compute-0 sudo[407160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:39 compute-0 sudo[407160]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.033 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.034 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3936MB free_disk=20.942855834960938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.034 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.035 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:39 compute-0 sudo[407185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:22:39 compute-0 sudo[407185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.116 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 98e67486-6e8a-4a86-953d-16116ef2f446 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.117 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.117 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.168 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:39 compute-0 sudo[407185]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291989988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:22:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:22:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:22:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.653 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:39 compute-0 nova_compute[256940]: 2025-10-02 13:22:39.659 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:22:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:40 compute-0 ceph-mon[73668]: pgmap v3344: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2673423010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 71211858-b011-4ade-9d8b-7fbc4521f308 does not exist
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 603d319b-51dd-41fa-b66f-c05c5a85db30 does not exist
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 29092e1e-587e-4e93-a5dc-fbb05d6f9067 does not exist
Oct 02 13:22:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:22:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:22:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:22:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:22:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:22:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:40 compute-0 sudo[407261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:40 compute-0 sudo[407261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:40 compute-0 sudo[407261]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:40.216 158296 DEBUG eventlet.wsgi.server [-] (158296) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:40.216 158296 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: Accept: */*
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: Connection: close
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: Content-Type: text/plain
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: Host: 169.254.169.254
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: User-Agent: curl/7.84.0
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: X-Forwarded-For: 10.100.0.9
Oct 02 13:22:40 compute-0 ovn_metadata_agent[158078]: X-Ovn-Network-Id: 7f10919f-48f8-4d93-90be-f6d40210fadd __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 13:22:40 compute-0 sudo[407286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:40 compute-0 sudo[407286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:40 compute-0 sudo[407286]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:40 compute-0 sudo[407311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:40 compute-0 sudo[407311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:40 compute-0 sudo[407311]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:40 compute-0 sudo[407336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:22:40 compute-0 sudo[407336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:40 compute-0 nova_compute[256940]: 2025-10-02 13:22:40.359 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:22:40 compute-0 nova_compute[256940]: 2025-10-02 13:22:40.396 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:22:40 compute-0 nova_compute[256940]: 2025-10-02 13:22:40.397 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.663839243 +0000 UTC m=+0.037937017 container create 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:40 compute-0 systemd[1]: Started libpod-conmon-905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307.scope.
Oct 02 13:22:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.741445721 +0000 UTC m=+0.115543525 container init 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.646114913 +0000 UTC m=+0.020212707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.751597264 +0000 UTC m=+0.125695038 container start 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.755394363 +0000 UTC m=+0.129492177 container attach 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:22:40 compute-0 epic_sammet[407416]: 167 167
Oct 02 13:22:40 compute-0 systemd[1]: libpod-905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307.scope: Deactivated successfully.
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.760793943 +0000 UTC m=+0.134891717 container died 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08be65080ac9d8416b2aa064f68399bc4febac8a8a4a1c6dfc679aaa4e3b8cc-merged.mount: Deactivated successfully.
Oct 02 13:22:40 compute-0 podman[407399]: 2025-10-02 13:22:40.799763606 +0000 UTC m=+0.173861380 container remove 905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:22:40 compute-0 systemd[1]: libpod-conmon-905f423944a4f8cd11efc2a60db87b3e5889394cd8ab9d033319c89c49b4a307.scope: Deactivated successfully.
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021705023962404614 of space, bias 1.0, pg target 0.6511507188721384 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:22:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:22:40 compute-0 podman[407440]: 2025-10-02 13:22:40.955947586 +0000 UTC m=+0.041867149 container create 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:40 compute-0 systemd[1]: Started libpod-conmon-50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9.scope.
Oct 02 13:22:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:40.937877736 +0000 UTC m=+0.023797339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:41.044288482 +0000 UTC m=+0.130208085 container init 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:41.052430294 +0000 UTC m=+0.138349867 container start 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:41.056088959 +0000 UTC m=+0.142008582 container attach 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3291989988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:22:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:41 compute-0 nova_compute[256940]: 2025-10-02 13:22:41.382 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:41 compute-0 nova_compute[256940]: 2025-10-02 13:22:41.383 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:41 compute-0 nova_compute[256940]: 2025-10-02 13:22:41.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:41 compute-0 silly_robinson[407457]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:22:41 compute-0 silly_robinson[407457]: --> relative data size: 1.0
Oct 02 13:22:41 compute-0 silly_robinson[407457]: --> All data devices are unavailable
Oct 02 13:22:41 compute-0 systemd[1]: libpod-50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9.scope: Deactivated successfully.
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:41.864693087 +0000 UTC m=+0.950612660 container died 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:22:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:41.913 158296 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 13:22:41 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:41.915 158296 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.6983824
Oct 02 13:22:41 compute-0 haproxy-metadata-proxy-7f10919f-48f8-4d93-90be-f6d40210fadd[406932]: 10.100.0.9:58710 [02/Oct/2025:13:22:40.214] listener listener/metadata 0/0/0/1700/1700 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 02 13:22:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfcbf6367585e69b02ac50bfd2866da237d608d7b035d34e612d5dc5ee87d19a-merged.mount: Deactivated successfully.
Oct 02 13:22:41 compute-0 podman[407440]: 2025-10-02 13:22:41.946332469 +0000 UTC m=+1.032252042 container remove 50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:41 compute-0 systemd[1]: libpod-conmon-50044b3411202d1b18eb995e300ae53c6af468fad50769857f746f2031a94ca9.scope: Deactivated successfully.
Oct 02 13:22:41 compute-0 sudo[407336]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:42.004 158296 DEBUG eventlet.wsgi.server [-] (158296) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:42.004 158296 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: Accept: */*
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: Connection: close
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: Content-Length: 100
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: Content-Type: application/x-www-form-urlencoded
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: Host: 169.254.169.254
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: User-Agent: curl/7.84.0
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: X-Forwarded-For: 10.100.0.9
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: X-Ovn-Network-Id: 7f10919f-48f8-4d93-90be-f6d40210fadd
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 13:22:42 compute-0 sudo[407483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:42 compute-0 sudo[407483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:42 compute-0 sudo[407483]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:42.136 158296 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 13:22:42 compute-0 haproxy-metadata-proxy-7f10919f-48f8-4d93-90be-f6d40210fadd[406932]: 10.100.0.9:58722 [02/Oct/2025:13:22:42.002] listener listener/metadata 0/0/0/134/134 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 02 13:22:42 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:42.137 158296 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.1330795
Oct 02 13:22:42 compute-0 sudo[407508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:42 compute-0 sudo[407508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:42 compute-0 sudo[407508]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:42 compute-0 sudo[407533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:42 compute-0 sudo[407533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:42 compute-0 sudo[407533]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:42 compute-0 nova_compute[256940]: 2025-10-02 13:22:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:42 compute-0 nova_compute[256940]: 2025-10-02 13:22:42.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:22:42 compute-0 ceph-mon[73668]: pgmap v3345: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:42 compute-0 sudo[407558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:22:42 compute-0 sudo[407558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.68003527 +0000 UTC m=+0.053008879 container create fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:22:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:42 compute-0 systemd[1]: Started libpod-conmon-fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a.scope.
Oct 02 13:22:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.663165531 +0000 UTC m=+0.036139120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.765717497 +0000 UTC m=+0.138691166 container init fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.776851216 +0000 UTC m=+0.149824815 container start fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.780727487 +0000 UTC m=+0.153701136 container attach fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:22:42 compute-0 quizzical_hellman[407637]: 167 167
Oct 02 13:22:42 compute-0 systemd[1]: libpod-fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a.scope: Deactivated successfully.
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.786093387 +0000 UTC m=+0.159066966 container died fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-27182904257a34022a76ca8329818222fd9bfa998bd2d519f68c9295b2c7bc85-merged.mount: Deactivated successfully.
Oct 02 13:22:42 compute-0 podman[407621]: 2025-10-02 13:22:42.834266849 +0000 UTC m=+0.207240428 container remove fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:42 compute-0 systemd[1]: libpod-conmon-fcfe0bdfd8703be45c94a3b7de5a6d9e6fbb7e1f6a38aa2dbe5f3c2013f8223a.scope: Deactivated successfully.
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.016325401 +0000 UTC m=+0.041705035 container create 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:43 compute-0 systemd[1]: Started libpod-conmon-3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011.scope.
Oct 02 13:22:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd0f4e93a0a051075320133853f69d8e4a9b9bc952c3aa8b0c12708db6d1a4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd0f4e93a0a051075320133853f69d8e4a9b9bc952c3aa8b0c12708db6d1a4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd0f4e93a0a051075320133853f69d8e4a9b9bc952c3aa8b0c12708db6d1a4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd0f4e93a0a051075320133853f69d8e4a9b9bc952c3aa8b0c12708db6d1a4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.001919207 +0000 UTC m=+0.027298851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.099850072 +0000 UTC m=+0.125229806 container init 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.109603676 +0000 UTC m=+0.134983340 container start 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.113662671 +0000 UTC m=+0.139042305 container attach 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]: {
Oct 02 13:22:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:     "1": [
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:         {
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "devices": [
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "/dev/loop3"
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             ],
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "lv_name": "ceph_lv0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "lv_size": "7511998464",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "name": "ceph_lv0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "tags": {
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.cluster_name": "ceph",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.crush_device_class": "",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.encrypted": "0",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.osd_id": "1",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.type": "block",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:                 "ceph.vdo": "0"
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             },
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "type": "block",
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:             "vg_name": "ceph_vg0"
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:         }
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]:     ]
Oct 02 13:22:43 compute-0 vigilant_leavitt[407680]: }
Oct 02 13:22:43 compute-0 systemd[1]: libpod-3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011.scope: Deactivated successfully.
Oct 02 13:22:43 compute-0 podman[407663]: 2025-10-02 13:22:43.942501214 +0000 UTC m=+0.967880848 container died 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd0f4e93a0a051075320133853f69d8e4a9b9bc952c3aa8b0c12708db6d1a4a-merged.mount: Deactivated successfully.
Oct 02 13:22:44 compute-0 podman[407663]: 2025-10-02 13:22:44.003823768 +0000 UTC m=+1.029203392 container remove 3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:22:44 compute-0 systemd[1]: libpod-conmon-3022e20d2effcd51bf852c2460276ebacd2b5e32126603d8bd5be8c792c7a011.scope: Deactivated successfully.
Oct 02 13:22:44 compute-0 sudo[407558]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:44 compute-0 sudo[407704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:44 compute-0 sudo[407704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:44 compute-0 sudo[407704]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:44 compute-0 sudo[407729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:44 compute-0 sudo[407729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:44 compute-0 sudo[407729]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:44 compute-0 nova_compute[256940]: 2025-10-02 13:22:44.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:44 compute-0 nova_compute[256940]: 2025-10-02 13:22:44.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:44 compute-0 sudo[407754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:44 compute-0 sudo[407754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:44 compute-0 sudo[407754]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:44 compute-0 ceph-mon[73668]: pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:44 compute-0 sudo[407779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:22:44 compute-0 sudo[407779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:44 compute-0 nova_compute[256940]: 2025-10-02 13:22:44.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Oct 02 13:22:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:44.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.768420042 +0000 UTC m=+0.042738852 container create bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:22:44 compute-0 systemd[1]: Started libpod-conmon-bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89.scope.
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.752498758 +0000 UTC m=+0.026817568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.882424855 +0000 UTC m=+0.156743665 container init bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.894073628 +0000 UTC m=+0.168392418 container start bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.898176885 +0000 UTC m=+0.172495695 container attach bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:44 compute-0 recursing_yonath[407862]: 167 167
Oct 02 13:22:44 compute-0 systemd[1]: libpod-bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89.scope: Deactivated successfully.
Oct 02 13:22:44 compute-0 conmon[407862]: conmon bc029edf6f368fe01ea9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89.scope/container/memory.events
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.904640893 +0000 UTC m=+0.178959713 container died bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f02ec41017039db4e082a455b82c056d811c730476d9a599c6368c7fdee4cd4-merged.mount: Deactivated successfully.
Oct 02 13:22:44 compute-0 podman[407845]: 2025-10-02 13:22:44.950502915 +0000 UTC m=+0.224821705 container remove bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:22:44 compute-0 systemd[1]: libpod-conmon-bc029edf6f368fe01ea9f631f5f5cf2552acce675eb10b9a5b2d61840d576e89.scope: Deactivated successfully.
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.114 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.115 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.115 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.115 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.116 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.117 2 INFO nova.compute.manager [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Terminating instance
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.118 2 DEBUG nova.compute.manager [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:22:45 compute-0 podman[407884]: 2025-10-02 13:22:45.151741356 +0000 UTC m=+0.051807988 container create 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:22:45 compute-0 systemd[1]: Started libpod-conmon-091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9.scope.
Oct 02 13:22:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1028ea35b43d66078a63df4def5f7cc57f94bcf2a2edd3f845d458c6a8ef3ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1028ea35b43d66078a63df4def5f7cc57f94bcf2a2edd3f845d458c6a8ef3ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1028ea35b43d66078a63df4def5f7cc57f94bcf2a2edd3f845d458c6a8ef3ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1028ea35b43d66078a63df4def5f7cc57f94bcf2a2edd3f845d458c6a8ef3ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:22:45 compute-0 podman[407884]: 2025-10-02 13:22:45.130541574 +0000 UTC m=+0.030608226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:22:45 compute-0 podman[407884]: 2025-10-02 13:22:45.234026564 +0000 UTC m=+0.134093196 container init 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:22:45 compute-0 podman[407884]: 2025-10-02 13:22:45.24386835 +0000 UTC m=+0.143934982 container start 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:45 compute-0 podman[407884]: 2025-10-02 13:22:45.247153905 +0000 UTC m=+0.147220617 container attach 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:22:45 compute-0 ceph-mon[73668]: pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Oct 02 13:22:45 compute-0 kernel: tap621d1801-37 (unregistering): left promiscuous mode
Oct 02 13:22:45 compute-0 NetworkManager[44981]: <info>  [1759411365.6523] device (tap621d1801-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 ovn_controller[148123]: 2025-10-02T13:22:45Z|01068|binding|INFO|Releasing lport 621d1801-37d7-4f03-b509-695c4c7a886d from this chassis (sb_readonly=0)
Oct 02 13:22:45 compute-0 ovn_controller[148123]: 2025-10-02T13:22:45Z|01069|binding|INFO|Setting lport 621d1801-37d7-4f03-b509-695c4c7a886d down in Southbound
Oct 02 13:22:45 compute-0 ovn_controller[148123]: 2025-10-02T13:22:45Z|01070|binding|INFO|Removing iface tap621d1801-37 ovn-installed in OVS
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.678 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:9d:72 10.100.0.9'], port_security=['fa:16:3e:8a:9d:72 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98e67486-6e8a-4a86-953d-16116ef2f446', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f10919f-48f8-4d93-90be-f6d40210fadd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b72785e0faa4a5eae22ec67bc75f68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b232f35-94d7-4833-95df-bc2498df7616 789ac8f8-0b3f-4e8f-bfd8-58c28465e093', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59fdd5c5-2edd-4d32-81c5-fe1df2adc6bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=621d1801-37d7-4f03-b509-695c4c7a886d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.679 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 621d1801-37d7-4f03-b509-695c4c7a886d in datapath 7f10919f-48f8-4d93-90be-f6d40210fadd unbound from our chassis
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.680 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f10919f-48f8-4d93-90be-f6d40210fadd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.682 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a145d779-f050-42a2-be9b-3af1787cceae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.683 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd namespace which is not needed anymore
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Oct 02 13:22:45 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000d7.scope: Consumed 14.276s CPU time.
Oct 02 13:22:45 compute-0 systemd-machined[210927]: Machine qemu-109-instance-000000d7 terminated.
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [NOTICE]   (406930) : haproxy version is 2.8.14-c23fe91
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [NOTICE]   (406930) : path to executable is /usr/sbin/haproxy
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [WARNING]  (406930) : Exiting Master process...
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [WARNING]  (406930) : Exiting Master process...
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [ALERT]    (406930) : Current worker (406932) exited with code 143 (Terminated)
Oct 02 13:22:45 compute-0 neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd[406926]: [WARNING]  (406930) : All workers exited. Exiting... (0)
Oct 02 13:22:45 compute-0 systemd[1]: libpod-e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292.scope: Deactivated successfully.
Oct 02 13:22:45 compute-0 podman[407930]: 2025-10-02 13:22:45.822071099 +0000 UTC m=+0.042480325 container died e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-66913df2f92b45e4e4444c8c66e760c305f7d09196cafe76b3299e0e16d016cd-merged.mount: Deactivated successfully.
Oct 02 13:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292-userdata-shm.mount: Deactivated successfully.
Oct 02 13:22:45 compute-0 podman[407930]: 2025-10-02 13:22:45.885985431 +0000 UTC m=+0.106394657 container cleanup e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:22:45 compute-0 systemd[1]: libpod-conmon-e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292.scope: Deactivated successfully.
Oct 02 13:22:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:45.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.955 2 DEBUG nova.compute.manager [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-unplugged-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.956 2 DEBUG oslo_concurrency.lockutils [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.956 2 DEBUG oslo_concurrency.lockutils [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.956 2 DEBUG oslo_concurrency.lockutils [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.956 2 DEBUG nova.compute.manager [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] No waiting events found dispatching network-vif-unplugged-621d1801-37d7-4f03-b509-695c4c7a886d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.956 2 DEBUG nova.compute.manager [req-a0a5a270-eac5-4b2e-bf62-b073cb844dab req-1530e965-4f1b-4038-9592-0026318aa54e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-unplugged-621d1801-37d7-4f03-b509-695c4c7a886d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:22:45 compute-0 podman[407962]: 2025-10-02 13:22:45.960619051 +0000 UTC m=+0.049729324 container remove e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.961 2 INFO nova.virt.libvirt.driver [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Instance destroyed successfully.
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.962 2 DEBUG nova.objects.instance [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lazy-loading 'resources' on Instance uuid 98e67486-6e8a-4a86-953d-16116ef2f446 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.969 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[c02b0260-b9f6-47b1-9c1f-5746d4afd79e]: (4, ('Thu Oct  2 01:22:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd (e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292)\ne62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292\nThu Oct  2 01:22:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd (e62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292)\ne62d61b2f8fccfb05a7f5bde4c6b20e5e47810e5315ddbb1a1d691b535621292\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.975 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1cc551-d915-4202-9bab-b7eef0d333c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.977 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f10919f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 kernel: tap7f10919f-40: left promiscuous mode
Oct 02 13:22:45 compute-0 nova_compute[256940]: 2025-10-02 13:22:45.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:45 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:45.997 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[837a38a0-ebf0-4f8b-8141-71fd78bffed3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.010 2 DEBUG nova.virt.libvirt.vif [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1934594404',display_name='tempest-TestServerBasicOps-server-1934594404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1934594404',id=215,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD0T0cxVy/+CsBluQz3FpvOn86+syyACkCgBDVjULHd4O3+sJKnq2ksU+7zrRHyRa9uJjIcI/Ncm5KHiL4GoYTe3rqHT0YBpyyOWOg8yvOS6UTmkIu6947QkN2BF5g+3Q==',key_name='tempest-TestServerBasicOps-2097069966',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8b72785e0faa4a5eae22ec67bc75f68e',ramdisk_id='',reservation_id='r-zf5wtvh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-700599071',owner_user_name='tempest-TestServerBasicOps-700599071-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:22:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ccfaeecf44e04594afffe1129fec1b0b',uuid=98e67486-6e8a-4a86-953d-16116ef2f446,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": 
"fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.011 2 DEBUG nova.network.os_vif_util [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converting VIF {"id": "621d1801-37d7-4f03-b509-695c4c7a886d", "address": "fa:16:3e:8a:9d:72", "network": {"id": "7f10919f-48f8-4d93-90be-f6d40210fadd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-151359714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b72785e0faa4a5eae22ec67bc75f68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap621d1801-37", "ovs_interfaceid": "621d1801-37d7-4f03-b509-695c4c7a886d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.012 2 DEBUG nova.network.os_vif_util [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.012 2 DEBUG os_vif [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621d1801-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.020 2 INFO os_vif [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9d:72,bridge_name='br-int',has_traffic_filtering=True,id=621d1801-37d7-4f03-b509-695c4c7a886d,network=Network(7f10919f-48f8-4d93-90be-f6d40210fadd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap621d1801-37')
Oct 02 13:22:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:46.025 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f23e0f9b-1899-463b-9bcb-97bf6aa19d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:46.026 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e57cd5ab-586b-4079-9567-9715c00a326c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:46.047 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ff626f14-23ea-40d4-a4b2-298cf15ff17a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923390, 'reachable_time': 40479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408004, 'error': None, 'target': 'ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d7f10919f\x2d48f8\x2d4d93\x2d90be\x2df6d40210fadd.mount: Deactivated successfully.
Oct 02 13:22:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:46.050 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f10919f-48f8-4d93-90be-f6d40210fadd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:22:46 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:46.050 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[5d057116-f85a-49d5-98da-ac428bc2dc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.093 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:46 compute-0 goofy_poincare[407901]: {
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:         "osd_id": 1,
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:         "type": "bluestore"
Oct 02 13:22:46 compute-0 goofy_poincare[407901]:     }
Oct 02 13:22:46 compute-0 goofy_poincare[407901]: }
Oct 02 13:22:46 compute-0 systemd[1]: libpod-091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9.scope: Deactivated successfully.
Oct 02 13:22:46 compute-0 conmon[407901]: conmon 091494005bdea916b430 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9.scope/container/memory.events
Oct 02 13:22:46 compute-0 podman[407884]: 2025-10-02 13:22:46.129569412 +0000 UTC m=+1.029636044 container died 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1028ea35b43d66078a63df4def5f7cc57f94bcf2a2edd3f845d458c6a8ef3ce-merged.mount: Deactivated successfully.
Oct 02 13:22:46 compute-0 podman[407884]: 2025-10-02 13:22:46.221074591 +0000 UTC m=+1.121141223 container remove 091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:22:46 compute-0 systemd[1]: libpod-conmon-091494005bdea916b430db06424a4a3f75cf1757e931e8da31e3de58d2d44fd9.scope: Deactivated successfully.
Oct 02 13:22:46 compute-0 sudo[407779]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:22:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:22:46 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4324f711-2ac6-4ed8-ba59-26b967ef0cb9 does not exist
Oct 02 13:22:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7fa3f6f8-f646-4b1c-8ea1-ac55298a814a does not exist
Oct 02 13:22:46 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7076c132-96a7-4fc4-8de1-e6a9455a7adc does not exist
Oct 02 13:22:46 compute-0 sudo[408036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:46 compute-0 sudo[408036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:46 compute-0 sudo[408036]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:46 compute-0 sudo[408062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:22:46 compute-0 sudo[408062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:46 compute-0 sudo[408062]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 97 KiB/s rd, 229 KiB/s wr, 19 op/s
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.740 2 INFO nova.virt.libvirt.driver [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Deleting instance files /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446_del
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.741 2 INFO nova.virt.libvirt.driver [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Deletion of /var/lib/nova/instances/98e67486-6e8a-4a86-953d-16116ef2f446_del complete
Oct 02 13:22:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.810 2 INFO nova.compute.manager [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Took 1.69 seconds to destroy the instance on the hypervisor.
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.811 2 DEBUG oslo.service.loopingcall [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.811 2 DEBUG nova.compute.manager [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:22:46 compute-0 nova_compute[256940]: 2025-10-02 13:22:46.811 2 DEBUG nova.network.neutron [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:22:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:47 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:47 compute-0 nova_compute[256940]: 2025-10-02 13:22:47.792 2 DEBUG nova.network.neutron [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:22:47 compute-0 nova_compute[256940]: 2025-10-02 13:22:47.868 2 INFO nova.compute.manager [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Took 1.06 seconds to deallocate network for instance.
Oct 02 13:22:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:47.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:47 compute-0 nova_compute[256940]: 2025-10-02 13:22:47.934 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:47 compute-0 nova_compute[256940]: 2025-10-02 13:22:47.935 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:47 compute-0 nova_compute[256940]: 2025-10-02 13:22:47.949 2 DEBUG nova.compute.manager [req-7bbfc11b-84fa-430e-8c2f-8627e93941e4 req-d2887c69-7de4-4c17-8c6b-95c56cc07cc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-deleted-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:48 compute-0 nova_compute[256940]: 2025-10-02 13:22:48.010 2 DEBUG oslo_concurrency.processutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:48 compute-0 nova_compute[256940]: 2025-10-02 13:22:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:48 compute-0 ceph-mon[73668]: pgmap v3348: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 97 KiB/s rd, 229 KiB/s wr, 19 op/s
Oct 02 13:22:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778986615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:48 compute-0 nova_compute[256940]: 2025-10-02 13:22:48.433 2 DEBUG oslo_concurrency.processutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:48 compute-0 nova_compute[256940]: 2025-10-02 13:22:48.443 2 DEBUG nova.compute.provider_tree [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:22:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 02 13:22:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:48.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1778986615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.494 2 DEBUG nova.compute.manager [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.495 2 DEBUG oslo_concurrency.lockutils [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.495 2 DEBUG oslo_concurrency.lockutils [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.495 2 DEBUG oslo_concurrency.lockutils [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.495 2 DEBUG nova.compute.manager [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] No waiting events found dispatching network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.496 2 WARNING nova.compute.manager [req-0ddf5ad6-26c9-40f0-aef9-b11660c9d611 req-08d79c1d-b1ba-41d2-a010-c12f51d9f5ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Received unexpected event network-vif-plugged-621d1801-37d7-4f03-b509-695c4c7a886d for instance with vm_state deleted and task_state None.
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.507 2 DEBUG nova.scheduler.client.report [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.540 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.581 2 INFO nova.scheduler.client.report [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Deleted allocations for instance 98e67486-6e8a-4a86-953d-16116ef2f446
Oct 02 13:22:49 compute-0 nova_compute[256940]: 2025-10-02 13:22:49.646 2 DEBUG oslo_concurrency.lockutils [None req-78d2fd2f-02f2-4de0-a82c-d80200a0b3bd ccfaeecf44e04594afffe1129fec1b0b 8b72785e0faa4a5eae22ec67bc75f68e - - default default] Lock "98e67486-6e8a-4a86-953d-16116ef2f446" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:49.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:50 compute-0 ceph-mon[73668]: pgmap v3349: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 02 13:22:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 13:22:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:50.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:51 compute-0 nova_compute[256940]: 2025-10-02 13:22:51.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:51 compute-0 ceph-mon[73668]: pgmap v3350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 13:22:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:51.906 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:22:51 compute-0 nova_compute[256940]: 2025-10-02 13:22:51.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:22:51.908 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:22:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:51.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:52 compute-0 podman[408112]: 2025-10-02 13:22:52.414041462 +0000 UTC m=+0.083007978 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:22:52 compute-0 podman[408113]: 2025-10-02 13:22:52.44859416 +0000 UTC m=+0.117167766 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:22:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.2 KiB/s wr, 30 op/s
Oct 02 13:22:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:52.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:53 compute-0 nova_compute[256940]: 2025-10-02 13:22:53.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:53 compute-0 nova_compute[256940]: 2025-10-02 13:22:53.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:53 compute-0 nova_compute[256940]: 2025-10-02 13:22:53.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:22:53 compute-0 nova_compute[256940]: 2025-10-02 13:22:53.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:22:53 compute-0 nova_compute[256940]: 2025-10-02 13:22:53.535 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:22:53 compute-0 ceph-mon[73668]: pgmap v3351: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.2 KiB/s wr, 30 op/s
Oct 02 13:22:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:53.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:54 compute-0 nova_compute[256940]: 2025-10-02 13:22:54.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 02 13:22:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:55 compute-0 ceph-mon[73668]: pgmap v3352: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 02 13:22:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:55.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:56 compute-0 nova_compute[256940]: 2025-10-02 13:22:56.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 02 13:22:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:22:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:22:57 compute-0 ceph-mon[73668]: pgmap v3353: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 02 13:22:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:57.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:22:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:22:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:58.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:59 compute-0 sudo[408162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:59 compute-0 sudo[408162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:59 compute-0 sudo[408162]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:59 compute-0 sudo[408187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:59 compute-0 sudo[408187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:59 compute-0 sudo[408187]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:59 compute-0 nova_compute[256940]: 2025-10-02 13:22:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:59 compute-0 nova_compute[256940]: 2025-10-02 13:22:59.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:59 compute-0 nova_compute[256940]: 2025-10-02 13:22:59.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:59 compute-0 ceph-mon[73668]: pgmap v3354: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:22:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:22:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:59.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:23:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:00.911 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:00 compute-0 nova_compute[256940]: 2025-10-02 13:23:00.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411365.958184, 98e67486-6e8a-4a86-953d-16116ef2f446 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:23:00 compute-0 nova_compute[256940]: 2025-10-02 13:23:00.960 2 INFO nova.compute.manager [-] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] VM Stopped (Lifecycle Event)
Oct 02 13:23:01 compute-0 nova_compute[256940]: 2025-10-02 13:23:01.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:01 compute-0 nova_compute[256940]: 2025-10-02 13:23:01.378 2 DEBUG nova.compute.manager [None req-5acf7e1d-9bca-44f9-908d-c5a81ea2103c - - - - - -] [instance: 98e67486-6e8a-4a86-953d-16116ef2f446] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:23:01 compute-0 nova_compute[256940]: 2025-10-02 13:23:01.532 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:01 compute-0 ceph-mon[73668]: pgmap v3355: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:23:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:01.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 255 B/s wr, 6 op/s
Oct 02 13:23:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:02.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:03.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:04 compute-0 ceph-mon[73668]: pgmap v3356: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 255 B/s wr, 6 op/s
Oct 02 13:23:04 compute-0 nova_compute[256940]: 2025-10-02 13:23:04.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:05.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:06 compute-0 nova_compute[256940]: 2025-10-02 13:23:06.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:06 compute-0 ceph-mon[73668]: pgmap v3357: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:23:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:23:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:06.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000027s ======
Oct 02 13:23:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 02 13:23:08 compute-0 ceph-mon[73668]: pgmap v3358: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:08 compute-0 podman[408218]: 2025-10-02 13:23:08.407043431 +0000 UTC m=+0.071372516 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:23:08 compute-0 podman[408217]: 2025-10-02 13:23:08.440037718 +0000 UTC m=+0.102247408 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:23:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:08.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:09 compute-0 nova_compute[256940]: 2025-10-02 13:23:09.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:10 compute-0 ceph-mon[73668]: pgmap v3359: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:11 compute-0 nova_compute[256940]: 2025-10-02 13:23:11.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:11 compute-0 ceph-mon[73668]: pgmap v3360: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:14 compute-0 ceph-mon[73668]: pgmap v3361: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:14 compute-0 nova_compute[256940]: 2025-10-02 13:23:14.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:14.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:15.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:16 compute-0 nova_compute[256940]: 2025-10-02 13:23:16.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:16 compute-0 ceph-mon[73668]: pgmap v3362: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:16.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:17.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:18 compute-0 ceph-mon[73668]: pgmap v3363: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:19 compute-0 sudo[408264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:19 compute-0 sudo[408264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:19 compute-0 sudo[408264]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:19 compute-0 sudo[408289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:19 compute-0 sudo[408289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:19 compute-0 sudo[408289]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:19 compute-0 nova_compute[256940]: 2025-10-02 13:23:19.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:20 compute-0 ceph-mon[73668]: pgmap v3364: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:21 compute-0 nova_compute[256940]: 2025-10-02 13:23:21.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:21 compute-0 ceph-mon[73668]: pgmap v3365: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.716708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401716763, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1665, "num_deletes": 258, "total_data_size": 2915179, "memory_usage": 2957800, "flush_reason": "Manual Compaction"}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401739589, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2870936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73226, "largest_seqno": 74890, "table_properties": {"data_size": 2863262, "index_size": 4552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15894, "raw_average_key_size": 19, "raw_value_size": 2847854, "raw_average_value_size": 3582, "num_data_blocks": 200, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411236, "oldest_key_time": 1759411236, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 22949 microseconds, and 7559 cpu microseconds.
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.739655) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2870936 bytes OK
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.739687) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.741809) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.741836) EVENT_LOG_v1 {"time_micros": 1759411401741828, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.741867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2908222, prev total WAL file size 2908222, number of live WAL files 2.
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.743230) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303138' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2803KB)], [167(11MB)]
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401743290, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 14892777, "oldest_snapshot_seqno": -1}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10240 keys, 14749949 bytes, temperature: kUnknown
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401814747, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 14749949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14681857, "index_size": 41419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 269673, "raw_average_key_size": 26, "raw_value_size": 14500872, "raw_average_value_size": 1416, "num_data_blocks": 1589, "num_entries": 10240, "num_filter_entries": 10240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.815028) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14749949 bytes
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.816420) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.1 rd, 206.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.5 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 10775, records dropped: 535 output_compression: NoCompression
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.816436) EVENT_LOG_v1 {"time_micros": 1759411401816428, "job": 104, "event": "compaction_finished", "compaction_time_micros": 71578, "compaction_time_cpu_micros": 32154, "output_level": 6, "num_output_files": 1, "total_output_size": 14749949, "num_input_records": 10775, "num_output_records": 10240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401816933, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401819352, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.743074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.819492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.819496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.819498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.819499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:23:21.819501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:21.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:23 compute-0 podman[408316]: 2025-10-02 13:23:23.376789669 +0000 UTC m=+0.049957979 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 02 13:23:23 compute-0 podman[408317]: 2025-10-02 13:23:23.398879374 +0000 UTC m=+0.073015199 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:23:23 compute-0 ceph-mon[73668]: pgmap v3366: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:23.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:24 compute-0 nova_compute[256940]: 2025-10-02 13:23:24.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:23:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:25.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:23:26 compute-0 nova_compute[256940]: 2025-10-02 13:23:26.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:26 compute-0 ceph-mon[73668]: pgmap v3367: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:26.519 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:26.520 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:26.520 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:28 compute-0 ceph-mon[73668]: pgmap v3368: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:23:28
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Oct 02 13:23:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:23:29 compute-0 nova_compute[256940]: 2025-10-02 13:23:29.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:23:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:23:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:29.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:30 compute-0 ceph-mon[73668]: pgmap v3369: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:31 compute-0 nova_compute[256940]: 2025-10-02 13:23:31.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:31.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:32 compute-0 ceph-mon[73668]: pgmap v3370: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:33 compute-0 ceph-mon[73668]: pgmap v3371: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1469962813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:34 compute-0 nova_compute[256940]: 2025-10-02 13:23:34.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2695112454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:35.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:36 compute-0 nova_compute[256940]: 2025-10-02 13:23:36.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:36 compute-0 ceph-mon[73668]: pgmap v3372: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3013848652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:36.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3858995251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:37.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:38 compute-0 ceph-mon[73668]: pgmap v3373: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.251 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.252 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.252 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.252 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:39 compute-0 sudo[408369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:23:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 74K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1595 writes, 7061 keys, 1595 commit groups, 1.0 writes per commit group, ingest: 10.76 MB, 0.02 MB/s
                                           Interval WAL: 1595 writes, 1595 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     36.1      2.82              0.30        52    0.054       0      0       0.0       0.0
                                             L6      1/0   14.07 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     74.2     63.6      8.27              1.51        51    0.162    379K    27K       0.0       0.0
                                            Sum      1/0   14.07 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     55.3     56.6     11.10              1.81       103    0.108    379K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4     47.6     50.0      1.75              0.23        12    0.146     62K   3161       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     74.2     63.6      8.27              1.51        51    0.162    379K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     36.1      2.82              0.30        51    0.055       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.100, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.61 GB write, 0.10 MB/s write, 0.60 GB read, 0.10 MB/s read, 11.1 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 65.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000369 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3818,63.03 MB,20.7337%) FilterBlock(104,1.02 MB,0.336893%) IndexBlock(104,1.73 MB,0.56831%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:23:39 compute-0 sudo[408369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:39 compute-0 sudo[408369]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:39 compute-0 sudo[408411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:39 compute-0 sudo[408411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:39 compute-0 sudo[408411]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:39 compute-0 podman[408395]: 2025-10-02 13:23:39.365485105 +0000 UTC m=+0.058881502 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:39 compute-0 podman[408394]: 2025-10-02 13:23:39.385033453 +0000 UTC m=+0.087457964 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:39 compute-0 ceph-mon[73668]: pgmap v3374: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676436955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.712 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.854 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.857 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4170MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.857 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.857 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.915 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.916 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:23:39 compute-0 nova_compute[256940]: 2025-10-02 13:23:39.935 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:39.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33145839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:40 compute-0 nova_compute[256940]: 2025-10-02 13:23:40.364 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:40 compute-0 nova_compute[256940]: 2025-10-02 13:23:40.369 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:23:40 compute-0 nova_compute[256940]: 2025-10-02 13:23:40.384 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:23:40 compute-0 nova_compute[256940]: 2025-10-02 13:23:40.408 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:23:40 compute-0 nova_compute[256940]: 2025-10-02 13:23:40.409 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3676436955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/33145839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:23:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:23:41 compute-0 nova_compute[256940]: 2025-10-02 13:23:41.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:41 compute-0 nova_compute[256940]: 2025-10-02 13:23:41.409 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:42 compute-0 nova_compute[256940]: 2025-10-02 13:23:42.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:42 compute-0 nova_compute[256940]: 2025-10-02 13:23:42.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:23:42 compute-0 ceph-mon[73668]: pgmap v3375: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:42.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.498 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.499 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.511 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.573 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.574 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.581 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.581 2 INFO nova.compute.claims [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:23:43 compute-0 nova_compute[256940]: 2025-10-02 13:23:43.682 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675875322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.096 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.103 2 DEBUG nova.compute.provider_tree [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.117 2 DEBUG nova.scheduler.client.report [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.137 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.138 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.201 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.201 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.223 2 INFO nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.240 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.329 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.330 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.331 2 INFO nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Creating image(s)
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.358 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.389 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.422 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:44 compute-0 ceph-mon[73668]: pgmap v3376: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1675875322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.427 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.511 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.512 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.513 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.513 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.542 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.546 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5bc23980-d45b-4acb-9b37-858b821d2252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:44 compute-0 nova_compute[256940]: 2025-10-02 13:23:44.767 2 DEBUG nova.policy [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7eba544d42c8426295f2a88f0e85d446', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ecfaf38d20784d06a43ced1560cede11', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:23:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.406 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Successfully created port: f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:23:45 compute-0 ceph-mon[73668]: pgmap v3377: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.516 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5bc23980-d45b-4acb-9b37-858b821d2252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.970s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.592 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] resizing rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.867 2 DEBUG nova.objects.instance [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bc23980-d45b-4acb-9b37-858b821d2252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.887 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.888 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Ensure instance console log exists: /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.889 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.889 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:45 compute-0 nova_compute[256940]: 2025-10-02 13:23:45.889 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:45.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.052 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Successfully updated port: f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.073 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.073 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.073 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.157 2 DEBUG nova.compute.manager [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.157 2 DEBUG nova.compute.manager [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing instance network info cache due to event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.158 2 DEBUG oslo_concurrency.lockutils [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:46 compute-0 nova_compute[256940]: 2025-10-02 13:23:46.222 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:23:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:46 compute-0 sudo[408693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:46 compute-0 sudo[408693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:46 compute-0 sudo[408693]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:46 compute-0 sudo[408718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:46 compute-0 sudo[408718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:46 compute-0 sudo[408718]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:47 compute-0 sudo[408743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:47 compute-0 sudo[408743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:47 compute-0 sudo[408743]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:47 compute-0 sudo[408768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:23:47 compute-0 sudo[408768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.123 2 DEBUG nova.network.neutron [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.138 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.138 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Instance network_info: |[{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.139 2 DEBUG oslo_concurrency.lockutils [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.139 2 DEBUG nova.network.neutron [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.141 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Start _get_guest_xml network_info=[{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'guest_format': None, 'size': 0, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.146 2 WARNING nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.155 2 DEBUG nova.virt.libvirt.host [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.155 2 DEBUG nova.virt.libvirt.host [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.158 2 DEBUG nova.virt.libvirt.host [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.158 2 DEBUG nova.virt.libvirt.host [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.160 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.160 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.161 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.161 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.161 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.161 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.161 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.162 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.162 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.162 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.162 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.162 2 DEBUG nova.virt.hardware [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.165 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:23:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100086224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.641 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.678 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:47 compute-0 nova_compute[256940]: 2025-10-02 13:23:47.684 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:47 compute-0 podman[408882]: 2025-10-02 13:23:47.83003743 +0000 UTC m=+0.320803059 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:23:47 compute-0 podman[408882]: 2025-10-02 13:23:47.918921581 +0000 UTC m=+0.409687230 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:47 compute-0 ceph-mon[73668]: pgmap v3378: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/100086224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:23:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:23:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:23:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947714417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.172 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:48 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.175 2 DEBUG nova.virt.libvirt.vif [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-567427000',display_name='tempest-TestSnapshotPattern-server-567427000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-567427000',id=216,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-a6c7o0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:23:44Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uuid=5bc23980-d45b-4acb-9b37-858b821d2252,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.175 2 DEBUG nova.network.os_vif_util [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.176 2 DEBUG nova.network.os_vif_util [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.177 2 DEBUG nova.objects.instance [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bc23980-d45b-4acb-9b37-858b821d2252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.190 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <uuid>5bc23980-d45b-4acb-9b37-858b821d2252</uuid>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <name>instance-000000d8</name>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:name>tempest-TestSnapshotPattern-server-567427000</nova:name>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:23:47</nova:creationTime>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:user uuid="7eba544d42c8426295f2a88f0e85d446">tempest-TestSnapshotPattern-1671292510-project-member</nova:user>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:project uuid="ecfaf38d20784d06a43ced1560cede11">tempest-TestSnapshotPattern-1671292510</nova:project>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <nova:port uuid="f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0">
Oct 02 13:23:48 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <system>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="serial">5bc23980-d45b-4acb-9b37-858b821d2252</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="uuid">5bc23980-d45b-4acb-9b37-858b821d2252</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </system>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <os>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </os>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <features>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </features>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/5bc23980-d45b-4acb-9b37-858b821d2252_disk">
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </source>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/5bc23980-d45b-4acb-9b37-858b821d2252_disk.config">
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </source>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:23:48 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:c1:90:7f"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <target dev="tapf6b746a3-d5"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/console.log" append="off"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <video>
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </video>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:23:48 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:23:48 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:23:48 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:23:48 compute-0 nova_compute[256940]: </domain>
Oct 02 13:23:48 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.192 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Preparing to wait for external event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.192 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.192 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.192 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.193 2 DEBUG nova.virt.libvirt.vif [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-567427000',display_name='tempest-TestSnapshotPattern-server-567427000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-567427000',id=216,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-a6c7o0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:23:44Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uuid=5bc23980-d45b-4acb-9b37-858b821d2252,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.193 2 DEBUG nova.network.os_vif_util [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.194 2 DEBUG nova.network.os_vif_util [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.194 2 DEBUG os_vif [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6b746a3-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6b746a3-d5, col_values=(('external_ids', {'iface-id': 'f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:90:7f', 'vm-uuid': '5bc23980-d45b-4acb-9b37-858b821d2252'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:48 compute-0 NetworkManager[44981]: <info>  [1759411428.2030] manager: (tapf6b746a3-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.209 2 INFO os_vif [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5')
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.253 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.253 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.253 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No VIF found with MAC fa:16:3e:c1:90:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.254 2 INFO nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Using config drive
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.286 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.294 2 DEBUG nova.network.neutron [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updated VIF entry in instance network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.294 2 DEBUG nova.network.neutron [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.320 2 DEBUG oslo_concurrency.lockutils [req-985e460d-3e00-4d97-bf0f-820592b2a256 req-09691507-753d-440d-8c8a-608a9555f595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:23:48 compute-0 podman[409077]: 2025-10-02 13:23:48.580595469 +0000 UTC m=+0.174411525 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:23:48 compute-0 podman[409099]: 2025-10-02 13:23:48.648279138 +0000 UTC m=+0.051371716 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:23:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.739 2 INFO nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Creating config drive at /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.743 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxz2urr5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:48 compute-0 podman[409077]: 2025-10-02 13:23:48.841138181 +0000 UTC m=+0.434954257 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:23:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:48.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.893 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxz2urr5" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.933 2 DEBUG nova.storage.rbd_utils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 5bc23980-d45b-4acb-9b37-858b821d2252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:23:48 compute-0 nova_compute[256940]: 2025-10-02 13:23:48.938 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config 5bc23980-d45b-4acb-9b37-858b821d2252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3947714417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:49 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-0 podman[409178]: 2025-10-02 13:23:49.269415183 +0000 UTC m=+0.224983349 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, name=keepalived, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20)
Oct 02 13:23:49 compute-0 podman[409178]: 2025-10-02 13:23:49.304177526 +0000 UTC m=+0.259745652 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 02 13:23:49 compute-0 sudo[408768]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.535 2 DEBUG oslo_concurrency.processutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config 5bc23980-d45b-4acb-9b37-858b821d2252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.536 2 INFO nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Deleting local config drive /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252/disk.config because it was imported into RBD.
Oct 02 13:23:49 compute-0 kernel: tapf6b746a3-d5: entered promiscuous mode
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.5855] manager: (tapf6b746a3-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/472)
Oct 02 13:23:49 compute-0 ovn_controller[148123]: 2025-10-02T13:23:49Z|01071|binding|INFO|Claiming lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for this chassis.
Oct 02 13:23:49 compute-0 ovn_controller[148123]: 2025-10-02T13:23:49Z|01072|binding|INFO|f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0: Claiming fa:16:3e:c1:90:7f 10.100.0.8
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.600 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:90:7f 10.100.0.8'], port_security=['fa:16:3e:c1:90:7f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5bc23980-d45b-4acb-9b37-858b821d2252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.602 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 bound to our chassis
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.604 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8
Oct 02 13:23:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.617 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[599855a0-7b02-4443-b5ef-0c8a916d68f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.618 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap769e2243-b1 in ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.620 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap769e2243-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.621 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b84092c4-1568-4a3f-8dba-247681c70f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 systemd-udevd[409247]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:23:49 compute-0 systemd-machined[210927]: New machine qemu-110-instance-000000d8.
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.622 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[f37b0c28-6e00-4716-8014-7e8fd0937ab0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.6352] device (tapf6b746a3-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.633 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[60f0293f-ec12-4bb0-99cf-2114a9804dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.6362] device (tapf6b746a3-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-000000d8.
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.660 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[96f3d9fd-bcec-4ce0-8820-5df1ec42e54c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_controller[148123]: 2025-10-02T13:23:49Z|01073|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 ovn-installed in OVS
Oct 02 13:23:49 compute-0 ovn_controller[148123]: 2025-10-02T13:23:49Z|01074|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 up in Southbound
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 sudo[409246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:49 compute-0 sudo[409246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:49 compute-0 sudo[409246]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.690 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[14a7d8d5-5415-4d5e-a346-624221808082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.696 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[2d57c86a-7252-49b7-9f30-bab537097896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.6977] manager: (tap769e2243-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/473)
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.729 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[01d8f995-97cf-4de8-809e-a4c9f8445540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.733 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[516e3dfa-f9c9-4701-b6a8-35e2667196d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:23:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:23:49 compute-0 sudo[409279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:49 compute-0 sudo[409279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.7593] device (tap769e2243-b0): carrier: link connected
Oct 02 13:23:49 compute-0 sudo[409279]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.768 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a891bb-3e27-48db-acca-95be0111ba06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.782 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[decc7edb-72bc-4ab6-8dd4-32a0cf817b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769e2243-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:fb:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 932524, 'reachable_time': 15115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409333, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.798 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6dc6fe-d037-401f-accf-efae8959efb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:fbe0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 932524, 'tstamp': 932524}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409347, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 sudo[409329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:49 compute-0 sudo[409329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:49 compute-0 sudo[409329]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.815 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfc6a54-3bb2-453d-b673-5532151d0adf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769e2243-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:fb:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 932524, 'reachable_time': 15115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409354, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.849 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[73dea22c-4121-48ca-bb23-8c0e0d73b3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 sudo[409357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:23:49 compute-0 sudo[409357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.901 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcf0833-06f3-470e-b8f6-e67567a10aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.903 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769e2243-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.903 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.903 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap769e2243-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.904 2 DEBUG nova.compute.manager [req-af3efd4e-a560-409c-ba32-ce44b55d4bf5 req-6ea83e65-cc83-4f7b-8fd2-40ce9adc6fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.905 2 DEBUG oslo_concurrency.lockutils [req-af3efd4e-a560-409c-ba32-ce44b55d4bf5 req-6ea83e65-cc83-4f7b-8fd2-40ce9adc6fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.905 2 DEBUG oslo_concurrency.lockutils [req-af3efd4e-a560-409c-ba32-ce44b55d4bf5 req-6ea83e65-cc83-4f7b-8fd2-40ce9adc6fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.905 2 DEBUG oslo_concurrency.lockutils [req-af3efd4e-a560-409c-ba32-ce44b55d4bf5 req-6ea83e65-cc83-4f7b-8fd2-40ce9adc6fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.906 2 DEBUG nova.compute.manager [req-af3efd4e-a560-409c-ba32-ce44b55d4bf5 req-6ea83e65-cc83-4f7b-8fd2-40ce9adc6fea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Processing event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 NetworkManager[44981]: <info>  [1759411429.9559] manager: (tap769e2243-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Oct 02 13:23:49 compute-0 kernel: tap769e2243-b0: entered promiscuous mode
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.961 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap769e2243-b0, col_values=(('external_ids', {'iface-id': '866389a0-d931-4b6e-8a65-5bb4c8a2b7c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 ovn_controller[148123]: 2025-10-02T13:23:49Z|01075|binding|INFO|Releasing lport 866389a0-d931-4b6e-8a65-5bb4c8a2b7c2 from this chassis (sb_readonly=0)
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.965 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.966 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[11f62af3-450e-4df1-8950-41965d8e3cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.967 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-769e2243-bbee-45a3-8fea-c44f0ea9a1e8
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 769e2243-bbee-45a3-8fea-c44f0ea9a1e8
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:23:49 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:23:49.967 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'env', 'PROCESS_TAG=haproxy-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:23:49 compute-0 nova_compute[256940]: 2025-10-02 13:23:49.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:50 compute-0 ceph-mon[73668]: pgmap v3379: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:50 compute-0 sudo[409357]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:23:50 compute-0 podman[409439]: 2025-10-02 13:23:50.397004392 +0000 UTC m=+0.023723998 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c53d85eb-2d88-488a-a679-6c4757996765 does not exist
Oct 02 13:23:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6424f803-e5b1-4048-9207-7cd8f2b6ce99 does not exist
Oct 02 13:23:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 73402013-9f74-436a-ab8e-941b12408f9f does not exist
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:23:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:23:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:50 compute-0 sudo[409489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:50 compute-0 sudo[409489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:50 compute-0 sudo[409489]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:50 compute-0 sudo[409519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:50 compute-0 sudo[409519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:50 compute-0 sudo[409519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:23:50 compute-0 podman[409439]: 2025-10-02 13:23:50.716089886 +0000 UTC m=+0.342809472 container create 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:23:50 compute-0 sudo[409544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:50 compute-0 sudo[409544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:50 compute-0 sudo[409544]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:50 compute-0 systemd[1]: Started libpod-conmon-22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d.scope.
Oct 02 13:23:50 compute-0 sudo[409569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:23:50 compute-0 sudo[409569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62856a11b7fcae9581b84d14e44e4988073c6e3286f89d1b54884157362cee39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:50.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:50 compute-0 podman[409439]: 2025-10-02 13:23:50.913628461 +0000 UTC m=+0.540348077 container init 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:23:50 compute-0 podman[409439]: 2025-10-02 13:23:50.920231182 +0000 UTC m=+0.546950768 container start 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:23:50 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [NOTICE]   (409601) : New worker (409605) forked
Oct 02 13:23:50 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [NOTICE]   (409601) : Loading success.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.124 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.124 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411431.1235487, 5bc23980-d45b-4acb-9b37-858b821d2252 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.125 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] VM Started (Lifecycle Event)
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.126 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.129 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Instance spawned successfully.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.129 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.143 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.148 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.151 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.152 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.152 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.152 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.153 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.153 2 DEBUG nova.virt.libvirt.driver [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.175 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.176 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411431.1237223, 5bc23980-d45b-4acb-9b37-858b821d2252 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.176 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] VM Paused (Lifecycle Event)
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.198 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.207 2 INFO nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Took 6.88 seconds to spawn the instance on the hypervisor.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.208 2 DEBUG nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.241 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411431.1263692, 5bc23980-d45b-4acb-9b37-858b821d2252 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.241 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] VM Resumed (Lifecycle Event)
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.164804559 +0000 UTC m=+0.020112893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.289 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.292 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.323 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.333 2 INFO nova.compute.manager [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Took 7.78 seconds to build instance.
Oct 02 13:23:51 compute-0 nova_compute[256940]: 2025-10-02 13:23:51.348 2 DEBUG oslo_concurrency.lockutils [None req-ae7cb64f-f479-4bb1-b634-c1267cf797d6 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.363938215 +0000 UTC m=+0.219246569 container create 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 13:23:51 compute-0 systemd[1]: Started libpod-conmon-3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455.scope.
Oct 02 13:23:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:23:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.461572373 +0000 UTC m=+0.316880737 container init 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.469401297 +0000 UTC m=+0.324709631 container start 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.47374771 +0000 UTC m=+0.329056044 container attach 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:23:51 compute-0 inspiring_northcutt[409666]: 167 167
Oct 02 13:23:51 compute-0 systemd[1]: libpod-3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455.scope: Deactivated successfully.
Oct 02 13:23:51 compute-0 conmon[409666]: conmon 3683fcb595dd98bebe82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455.scope/container/memory.events
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.478684098 +0000 UTC m=+0.333992412 container died 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-768f0e9e1a4617d74bf64616058e60788e2fdedc4ca1e7bb4835a5fbeb2804d2-merged.mount: Deactivated successfully.
Oct 02 13:23:51 compute-0 podman[409650]: 2025-10-02 13:23:51.58109951 +0000 UTC m=+0.436407814 container remove 3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:23:51 compute-0 systemd[1]: libpod-conmon-3683fcb595dd98bebe824a3c41df07ec3778a4fd39fa53d16ed9b2f61db6e455.scope: Deactivated successfully.
Oct 02 13:23:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:51 compute-0 podman[409692]: 2025-10-02 13:23:51.762261378 +0000 UTC m=+0.030671228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:51 compute-0 podman[409692]: 2025-10-02 13:23:51.90238052 +0000 UTC m=+0.170790340 container create 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:23:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.026 2 DEBUG nova.compute.manager [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.026 2 DEBUG oslo_concurrency.lockutils [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.027 2 DEBUG oslo_concurrency.lockutils [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.027 2 DEBUG oslo_concurrency.lockutils [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.027 2 DEBUG nova.compute.manager [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:23:52 compute-0 nova_compute[256940]: 2025-10-02 13:23:52.027 2 WARNING nova.compute.manager [req-831ad97c-784c-41f2-b02d-87bfd3f00183 req-3654a9f1-00c7-4110-a9db-95f9f244f43b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received unexpected event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with vm_state active and task_state None.
Oct 02 13:23:52 compute-0 systemd[1]: Started libpod-conmon-476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f.scope.
Oct 02 13:23:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:52 compute-0 podman[409692]: 2025-10-02 13:23:52.195451888 +0000 UTC m=+0.463861738 container init 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:23:52 compute-0 podman[409692]: 2025-10-02 13:23:52.207800249 +0000 UTC m=+0.476210079 container start 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:23:52 compute-0 podman[409692]: 2025-10-02 13:23:52.400793255 +0000 UTC m=+0.669203095 container attach 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:23:52 compute-0 ceph-mon[73668]: pgmap v3380: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:23:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:23:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:53 compute-0 suspicious_haslett[409708]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:23:53 compute-0 suspicious_haslett[409708]: --> relative data size: 1.0
Oct 02 13:23:53 compute-0 suspicious_haslett[409708]: --> All data devices are unavailable
Oct 02 13:23:53 compute-0 systemd[1]: libpod-476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f.scope: Deactivated successfully.
Oct 02 13:23:53 compute-0 podman[409692]: 2025-10-02 13:23:53.03884508 +0000 UTC m=+1.307254900 container died 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:23:53 compute-0 nova_compute[256940]: 2025-10-02 13:23:53.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:53 compute-0 nova_compute[256940]: 2025-10-02 13:23:53.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e5a3d71f852e6d7d35dacebc132c41e0be0c2a49b09cf96d7fdeef07bccb1d2-merged.mount: Deactivated successfully.
Oct 02 13:23:53 compute-0 podman[409692]: 2025-10-02 13:23:53.660607631 +0000 UTC m=+1.929017451 container remove 476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:23:53 compute-0 systemd[1]: libpod-conmon-476969d7106b1bd5f9623e6224a2e0b99274bba4e1afd66e04a0313b5132ee3f.scope: Deactivated successfully.
Oct 02 13:23:53 compute-0 sudo[409569]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 sudo[409738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:53 compute-0 sudo[409738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 sudo[409738]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 podman[409736]: 2025-10-02 13:23:53.776937385 +0000 UTC m=+0.057386433 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:23:53 compute-0 ceph-mon[73668]: pgmap v3381: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:23:53 compute-0 podman[409737]: 2025-10-02 13:23:53.805331383 +0000 UTC m=+0.084024095 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:23:53 compute-0 sudo[409797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:53 compute-0 sudo[409797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 sudo[409797]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 sudo[409827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:53 compute-0 sudo[409827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 sudo[409827]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:53 compute-0 sudo[409852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:23:53 compute-0 sudo[409852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.325458263 +0000 UTC m=+0.030432413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.473621414 +0000 UTC m=+0.178595554 container create f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:23:54 compute-0 systemd[1]: Started libpod-conmon-f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7.scope.
Oct 02 13:23:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.648390647 +0000 UTC m=+0.353364797 container init f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.656002844 +0000 UTC m=+0.360976974 container start f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:23:54 compute-0 suspicious_brown[409929]: 167 167
Oct 02 13:23:54 compute-0 systemd[1]: libpod-f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7.scope: Deactivated successfully.
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.710537412 +0000 UTC m=+0.415511562 container attach f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:23:54 compute-0 podman[409913]: 2025-10-02 13:23:54.711001404 +0000 UTC m=+0.415975554 container died f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:23:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.729 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.730 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.731 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:23:54 compute-0 nova_compute[256940]: 2025-10-02 13:23:54.731 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5bc23980-d45b-4acb-9b37-858b821d2252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:23:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-55e1583df1fbb0962f9a680152d47ec2892048c0ec5d04e89e2c1dd0ddfb2afc-merged.mount: Deactivated successfully.
Oct 02 13:23:55 compute-0 podman[409913]: 2025-10-02 13:23:55.045218271 +0000 UTC m=+0.750192411 container remove f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:23:55 compute-0 systemd[1]: libpod-conmon-f1bc3668ad08f26e1e95bcc67ebe441310f5eed684adf8ef930acfe48cb0d7c7.scope: Deactivated successfully.
Oct 02 13:23:55 compute-0 podman[409955]: 2025-10-02 13:23:55.286473421 +0000 UTC m=+0.090202245 container create bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:55 compute-0 podman[409955]: 2025-10-02 13:23:55.219370468 +0000 UTC m=+0.023099312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:55 compute-0 systemd[1]: Started libpod-conmon-bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd.scope.
Oct 02 13:23:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a75379fa3200e251b45c074b9b6e3c14b629ad9c201e33a55f0bc5c0a5c61ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a75379fa3200e251b45c074b9b6e3c14b629ad9c201e33a55f0bc5c0a5c61ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a75379fa3200e251b45c074b9b6e3c14b629ad9c201e33a55f0bc5c0a5c61ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a75379fa3200e251b45c074b9b6e3c14b629ad9c201e33a55f0bc5c0a5c61ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:55 compute-0 podman[409955]: 2025-10-02 13:23:55.430823033 +0000 UTC m=+0.234551847 container init bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:23:55 compute-0 podman[409955]: 2025-10-02 13:23:55.43877471 +0000 UTC m=+0.242503524 container start bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:23:55 compute-0 podman[409955]: 2025-10-02 13:23:55.53958008 +0000 UTC m=+0.343308904 container attach bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:23:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:56 compute-0 NetworkManager[44981]: <info>  [1759411436.0828] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Oct 02 13:23:56 compute-0 NetworkManager[44981]: <info>  [1759411436.0836] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Oct 02 13:23:56 compute-0 ceph-mon[73668]: pgmap v3382: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:56 compute-0 ovn_controller[148123]: 2025-10-02T13:23:56Z|01076|binding|INFO|Releasing lport 866389a0-d931-4b6e-8a65-5bb4c8a2b7c2 from this chassis (sb_readonly=0)
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:56 compute-0 intelligent_easley[409971]: {
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:     "1": [
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:         {
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "devices": [
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "/dev/loop3"
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             ],
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "lv_name": "ceph_lv0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "lv_size": "7511998464",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "name": "ceph_lv0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "tags": {
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.cluster_name": "ceph",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.crush_device_class": "",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.encrypted": "0",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.osd_id": "1",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.type": "block",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:                 "ceph.vdo": "0"
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             },
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "type": "block",
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:             "vg_name": "ceph_vg0"
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:         }
Oct 02 13:23:56 compute-0 intelligent_easley[409971]:     ]
Oct 02 13:23:56 compute-0 intelligent_easley[409971]: }
Oct 02 13:23:56 compute-0 systemd[1]: libpod-bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd.scope: Deactivated successfully.
Oct 02 13:23:56 compute-0 podman[409955]: 2025-10-02 13:23:56.304485512 +0000 UTC m=+1.108214326 container died bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.336 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.363 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.363 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.416 2 DEBUG nova.compute.manager [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.416 2 DEBUG nova.compute.manager [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing instance network info cache due to event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.416 2 DEBUG oslo_concurrency.lockutils [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.417 2 DEBUG oslo_concurrency.lockutils [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:23:56 compute-0 nova_compute[256940]: 2025-10-02 13:23:56.417 2 DEBUG nova.network.neutron [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a75379fa3200e251b45c074b9b6e3c14b629ad9c201e33a55f0bc5c0a5c61ae-merged.mount: Deactivated successfully.
Oct 02 13:23:56 compute-0 podman[409955]: 2025-10-02 13:23:56.624788217 +0000 UTC m=+1.428517031 container remove bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:23:56 compute-0 systemd[1]: libpod-conmon-bf7d1e2125bb6d43445e0d663cab24c4699aa22259dea02e11d1a209df7136bd.scope: Deactivated successfully.
Oct 02 13:23:56 compute-0 sudo[409852]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:56 compute-0 sudo[409996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:23:56 compute-0 sudo[409996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:56 compute-0 sudo[409996]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:56 compute-0 sudo[410021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:56 compute-0 sudo[410021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:56 compute-0 sudo[410021]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:56 compute-0 sudo[410046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:56 compute-0 sudo[410046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:56 compute-0 sudo[410046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:56.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:56 compute-0 sudo[410072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:23:56 compute-0 sudo[410072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.215247355 +0000 UTC m=+0.019849807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.341576479 +0000 UTC m=+0.146178921 container create 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:23:57 compute-0 systemd[1]: Started libpod-conmon-29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936.scope.
Oct 02 13:23:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.556616008 +0000 UTC m=+0.361218460 container init 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.564642907 +0000 UTC m=+0.369245339 container start 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:57 compute-0 affectionate_franklin[410152]: 167 167
Oct 02 13:23:57 compute-0 systemd[1]: libpod-29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936.scope: Deactivated successfully.
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.64094815 +0000 UTC m=+0.445550632 container attach 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:23:57 compute-0 podman[410136]: 2025-10-02 13:23:57.642383868 +0000 UTC m=+0.446986320 container died 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6872afe0b4fa6f6e7b284fec45fcb2ab0d2f3680f71af89936a42973b3a7d3-merged.mount: Deactivated successfully.
Oct 02 13:23:57 compute-0 nova_compute[256940]: 2025-10-02 13:23:57.942 2 DEBUG nova.network.neutron [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updated VIF entry in instance network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:23:57 compute-0 nova_compute[256940]: 2025-10-02 13:23:57.943 2 DEBUG nova.network.neutron [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:23:57 compute-0 nova_compute[256940]: 2025-10-02 13:23:57.958 2 DEBUG oslo_concurrency.lockutils [req-b4e5160e-8c0d-4e2e-a33e-2dd4a9f35cc1 req-e1f3241f-fa81-4f5b-b2b7-ef4667e758ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:23:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:23:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:58.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:23:58 compute-0 podman[410136]: 2025-10-02 13:23:58.079658384 +0000 UTC m=+0.884260856 container remove 29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:23:58 compute-0 systemd[1]: libpod-conmon-29d7a812f7c9e3506624c10fae244d8e6a2802a6388d8bdd59dbe4d7bba6e936.scope: Deactivated successfully.
Oct 02 13:23:58 compute-0 nova_compute[256940]: 2025-10-02 13:23:58.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:58 compute-0 podman[410175]: 2025-10-02 13:23:58.294892068 +0000 UTC m=+0.071031867 container create 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 13:23:58 compute-0 podman[410175]: 2025-10-02 13:23:58.245240817 +0000 UTC m=+0.021380636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:23:58 compute-0 systemd[1]: Started libpod-conmon-61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d.scope.
Oct 02 13:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da51a6779bcc3b158220960a2cc3dea91eb7af47952dbfb5d1592f69ef858ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da51a6779bcc3b158220960a2cc3dea91eb7af47952dbfb5d1592f69ef858ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da51a6779bcc3b158220960a2cc3dea91eb7af47952dbfb5d1592f69ef858ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da51a6779bcc3b158220960a2cc3dea91eb7af47952dbfb5d1592f69ef858ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:23:58 compute-0 ceph-mon[73668]: pgmap v3383: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:23:58 compute-0 podman[410175]: 2025-10-02 13:23:58.544989399 +0000 UTC m=+0.321129208 container init 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:23:58 compute-0 podman[410175]: 2025-10-02 13:23:58.555267476 +0000 UTC m=+0.331407295 container start 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:23:58 compute-0 podman[410175]: 2025-10-02 13:23:58.686666191 +0000 UTC m=+0.462806000 container attach 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:23:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:23:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:23:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:59 compute-0 beautiful_wing[410192]: {
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:         "osd_id": 1,
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:         "type": "bluestore"
Oct 02 13:23:59 compute-0 beautiful_wing[410192]:     }
Oct 02 13:23:59 compute-0 beautiful_wing[410192]: }
Oct 02 13:23:59 compute-0 systemd[1]: libpod-61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d.scope: Deactivated successfully.
Oct 02 13:23:59 compute-0 podman[410175]: 2025-10-02 13:23:59.402781854 +0000 UTC m=+1.178921653 container died 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:23:59 compute-0 sudo[410212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:59 compute-0 sudo[410212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:59 compute-0 sudo[410212]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:59 compute-0 nova_compute[256940]: 2025-10-02 13:23:59.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:59 compute-0 sudo[410249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:59 compute-0 sudo[410249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:59 compute-0 sudo[410249]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da51a6779bcc3b158220960a2cc3dea91eb7af47952dbfb5d1592f69ef858ce-merged.mount: Deactivated successfully.
Oct 02 13:23:59 compute-0 ceph-mon[73668]: pgmap v3384: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:23:59 compute-0 podman[410175]: 2025-10-02 13:23:59.906061706 +0000 UTC m=+1.682201545 container remove 61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:23:59 compute-0 sudo[410072]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:23:59 compute-0 systemd[1]: libpod-conmon-61203920796a98210319ca7a4e5f9905292af214a31e56796ed0f9526121f29d.scope: Deactivated successfully.
Oct 02 13:24:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:24:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1a70fd29-5f17-4a9c-b9e9-12f71d26a8d6 does not exist
Oct 02 13:24:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8e28a08b-a9a5-4913-85c4-4105dfac5488 does not exist
Oct 02 13:24:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7dc68714-379e-40cc-a954-dbf1385bf9b4 does not exist
Oct 02 13:24:00 compute-0 sudo[410275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:00 compute-0 sudo[410275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:00 compute-0 sudo[410275]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:00 compute-0 sudo[410300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:24:00 compute-0 sudo[410300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:00 compute-0 sudo[410300]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:24:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:02 compute-0 ceph-mon[73668]: pgmap v3385: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:24:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:24:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:02.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:03 compute-0 nova_compute[256940]: 2025-10-02 13:24:03.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:04 compute-0 ceph-mon[73668]: pgmap v3386: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:24:04 compute-0 nova_compute[256940]: 2025-10-02 13:24:04.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 169 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 395 KiB/s wr, 87 op/s
Oct 02 13:24:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:05 compute-0 ceph-mon[73668]: pgmap v3387: 305 pgs: 305 active+clean; 169 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 395 KiB/s wr, 87 op/s
Oct 02 13:24:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:24:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:24:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:06.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:06 compute-0 ovn_controller[148123]: 2025-10-02T13:24:06Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:90:7f 10.100.0.8
Oct 02 13:24:06 compute-0 ovn_controller[148123]: 2025-10-02T13:24:06Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:90:7f 10.100.0.8
Oct 02 13:24:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 71 op/s
Oct 02 13:24:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:08 compute-0 ceph-mon[73668]: pgmap v3388: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 71 op/s
Oct 02 13:24:08 compute-0 nova_compute[256940]: 2025-10-02 13:24:08.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Oct 02 13:24:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:09 compute-0 nova_compute[256940]: 2025-10-02 13:24:09.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:10 compute-0 ceph-mon[73668]: pgmap v3389: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.056701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450056737, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 712, "num_deletes": 251, "total_data_size": 941478, "memory_usage": 955480, "flush_reason": "Manual Compaction"}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450065451, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 920236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74892, "largest_seqno": 75602, "table_properties": {"data_size": 916494, "index_size": 1521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8595, "raw_average_key_size": 19, "raw_value_size": 909023, "raw_average_value_size": 2080, "num_data_blocks": 66, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411402, "oldest_key_time": 1759411402, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8783 microseconds, and 2940 cpu microseconds.
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.065486) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 920236 bytes OK
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.065502) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.066745) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.066756) EVENT_LOG_v1 {"time_micros": 1759411450066752, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.066772) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 937851, prev total WAL file size 937851, number of live WAL files 2.
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.067310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(898KB)], [170(14MB)]
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450067375, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 15670185, "oldest_snapshot_seqno": -1}
Oct 02 13:24:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10160 keys, 13668657 bytes, temperature: kUnknown
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450185945, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13668657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13602182, "index_size": 39994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 268698, "raw_average_key_size": 26, "raw_value_size": 13423562, "raw_average_value_size": 1321, "num_data_blocks": 1523, "num_entries": 10160, "num_filter_entries": 10160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.186267) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13668657 bytes
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.187924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.1 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(31.9) write-amplify(14.9) OK, records in: 10677, records dropped: 517 output_compression: NoCompression
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.187946) EVENT_LOG_v1 {"time_micros": 1759411450187936, "job": 106, "event": "compaction_finished", "compaction_time_micros": 118639, "compaction_time_cpu_micros": 57361, "output_level": 6, "num_output_files": 1, "total_output_size": 13668657, "num_input_records": 10677, "num_output_records": 10160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450188296, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450191771, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.067199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.191851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.191860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.191863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.191866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:24:10.191869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-0 podman[410331]: 2025-10-02 13:24:10.421054297 +0000 UTC m=+0.080734079 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:24:10 compute-0 podman[410330]: 2025-10-02 13:24:10.434024424 +0000 UTC m=+0.087975438 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:24:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 198 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:24:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:12.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:12 compute-0 ceph-mon[73668]: pgmap v3390: 305 pgs: 305 active+clean; 198 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:24:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:24:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:13 compute-0 nova_compute[256940]: 2025-10-02 13:24:13.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:13 compute-0 nova_compute[256940]: 2025-10-02 13:24:13.620 2 DEBUG nova.compute.manager [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:24:13 compute-0 nova_compute[256940]: 2025-10-02 13:24:13.684 2 INFO nova.compute.manager [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] instance snapshotting
Oct 02 13:24:13 compute-0 nova_compute[256940]: 2025-10-02 13:24:13.993 2 INFO nova.virt.libvirt.driver [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Beginning live snapshot process
Oct 02 13:24:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:14.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:14 compute-0 nova_compute[256940]: 2025-10-02 13:24:14.149 2 DEBUG nova.virt.libvirt.imagebackend [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 13:24:14 compute-0 nova_compute[256940]: 2025-10-02 13:24:14.374 2 DEBUG nova.storage.rbd_utils [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] creating snapshot(9a39e637f2984b97b25e21429786d391) on rbd image(5bc23980-d45b-4acb-9b37-858b821d2252_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:24:14 compute-0 ceph-mon[73668]: pgmap v3391: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:24:14 compute-0 nova_compute[256940]: 2025-10-02 13:24:14.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:24:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:14.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Oct 02 13:24:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Oct 02 13:24:15 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Oct 02 13:24:15 compute-0 ceph-mon[73668]: pgmap v3392: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:24:15 compute-0 nova_compute[256940]: 2025-10-02 13:24:15.765 2 DEBUG nova.storage.rbd_utils [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] cloning vms/5bc23980-d45b-4acb-9b37-858b821d2252_disk@9a39e637f2984b97b25e21429786d391 to images/4dbf986e-53ef-4e53-875d-c9d73e683338 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:24:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:16 compute-0 nova_compute[256940]: 2025-10-02 13:24:16.236 2 DEBUG nova.storage.rbd_utils [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] flattening images/4dbf986e-53ef-4e53-875d-c9d73e683338 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:24:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:16 compute-0 ceph-mon[73668]: osdmap e408: 3 total, 3 up, 3 in
Oct 02 13:24:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:16.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:17 compute-0 ceph-mon[73668]: pgmap v3394: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:18.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:18 compute-0 nova_compute[256940]: 2025-10-02 13:24:18.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:18 compute-0 nova_compute[256940]: 2025-10-02 13:24:18.471 2 DEBUG nova.storage.rbd_utils [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] removing snapshot(9a39e637f2984b97b25e21429786d391) on rbd image(5bc23980-d45b-4acb-9b37-858b821d2252_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:24:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:18.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Oct 02 13:24:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Oct 02 13:24:19 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Oct 02 13:24:19 compute-0 nova_compute[256940]: 2025-10-02 13:24:19.191 2 DEBUG nova.storage.rbd_utils [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] creating snapshot(snap) on rbd image(4dbf986e-53ef-4e53-875d-c9d73e683338) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:24:19 compute-0 nova_compute[256940]: 2025-10-02 13:24:19.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:19 compute-0 sudo[410515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:19 compute-0 sudo[410515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:19 compute-0 sudo[410515]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:19 compute-0 sudo[410540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:19 compute-0 sudo[410540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:19 compute-0 sudo[410540]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Oct 02 13:24:20 compute-0 ceph-mon[73668]: pgmap v3395: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:20 compute-0 ceph-mon[73668]: osdmap e409: 3 total, 3 up, 3 in
Oct 02 13:24:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Oct 02 13:24:20 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Oct 02 13:24:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 257 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.5 MiB/s wr, 128 op/s
Oct 02 13:24:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:20.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:21 compute-0 ceph-mon[73668]: osdmap e410: 3 total, 3 up, 3 in
Oct 02 13:24:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:21 compute-0 nova_compute[256940]: 2025-10-02 13:24:21.837 2 INFO nova.virt.libvirt.driver [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Snapshot image upload complete
Oct 02 13:24:21 compute-0 nova_compute[256940]: 2025-10-02 13:24:21.838 2 INFO nova.compute.manager [None req-cc958f15-4480-4dab-a2e3-05a093516d67 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Took 8.15 seconds to snapshot the instance on the hypervisor.
Oct 02 13:24:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:22 compute-0 ceph-mon[73668]: pgmap v3398: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 257 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.5 MiB/s wr, 128 op/s
Oct 02 13:24:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.5 MiB/s wr, 133 op/s
Oct 02 13:24:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:23 compute-0 nova_compute[256940]: 2025-10-02 13:24:23.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:23 compute-0 ceph-mon[73668]: pgmap v3399: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.5 MiB/s wr, 133 op/s
Oct 02 13:24:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:24 compute-0 podman[410568]: 2025-10-02 13:24:24.421479223 +0000 UTC m=+0.087256239 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:24:24 compute-0 podman[410569]: 2025-10-02 13:24:24.428421134 +0000 UTC m=+0.096455578 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:24:24 compute-0 nova_compute[256940]: 2025-10-02 13:24:24.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 98 op/s
Oct 02 13:24:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:25 compute-0 ceph-mon[73668]: pgmap v3400: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 98 op/s
Oct 02 13:24:26 compute-0 ovn_controller[148123]: 2025-10-02T13:24:26Z|01077|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 02 13:24:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:26.520 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:26.521 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:26.522 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Oct 02 13:24:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Oct 02 13:24:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Oct 02 13:24:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Oct 02 13:24:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:27 compute-0 ceph-mon[73668]: pgmap v3401: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Oct 02 13:24:27 compute-0 ceph-mon[73668]: osdmap e411: 3 total, 3 up, 3 in
Oct 02 13:24:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/348882763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:28 compute-0 nova_compute[256940]: 2025-10-02 13:24:28.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 95 op/s
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:24:28
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 02 13:24:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:24:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:28.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:29.070 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:24:29 compute-0 nova_compute[256940]: 2025-10-02 13:24:29.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:29.072 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:24:29 compute-0 nova_compute[256940]: 2025-10-02 13:24:29.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:24:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:24:29 compute-0 ceph-mon[73668]: pgmap v3403: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 95 op/s
Oct 02 13:24:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Oct 02 13:24:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:31 compute-0 ceph-mon[73668]: pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Oct 02 13:24:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.1 KiB/s wr, 44 op/s
Oct 02 13:24:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1390501743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:24:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:32.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:33 compute-0 nova_compute[256940]: 2025-10-02 13:24:33.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:34 compute-0 ceph-mon[73668]: pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.1 KiB/s wr, 44 op/s
Oct 02 13:24:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/277055431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:24:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:34 compute-0 nova_compute[256940]: 2025-10-02 13:24:34.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.3 KiB/s wr, 47 op/s
Oct 02 13:24:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:34.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/634568430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3827424755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:36 compute-0 ceph-mon[73668]: pgmap v3406: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.3 KiB/s wr, 47 op/s
Oct 02 13:24:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 16 KiB/s wr, 61 op/s
Oct 02 13:24:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:38 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:24:38.075 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:24:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:38 compute-0 ceph-mon[73668]: pgmap v3407: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 16 KiB/s wr, 61 op/s
Oct 02 13:24:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3425334544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:38 compute-0 nova_compute[256940]: 2025-10-02 13:24:38.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 513 KiB/s rd, 13 KiB/s wr, 51 op/s
Oct 02 13:24:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.236 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.236 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.237 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/25163837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606460489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.711 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:39 compute-0 sudo[410642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:39 compute-0 sudo[410642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:39 compute-0 sudo[410642]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.789 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.790 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:24:39 compute-0 sudo[410670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:39 compute-0 sudo[410670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:39 compute-0 sudo[410670]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.942 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.944 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3932MB free_disk=20.94255828857422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.945 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:39 compute-0 nova_compute[256940]: 2025-10-02 13:24:39.945 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.056 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 5bc23980-d45b-4acb-9b37-858b821d2252 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.057 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.057 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:24:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:40.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.161 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:40 compute-0 ceph-mon[73668]: pgmap v3408: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 513 KiB/s rd, 13 KiB/s wr, 51 op/s
Oct 02 13:24:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1606460489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2282584196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.632 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.638 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.656 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.684 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:24:40 compute-0 nova_compute[256940]: 2025-10-02 13:24:40.685 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021784995579750607 of space, bias 1.0, pg target 0.6535498673925182 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.2200215487158013 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:24:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:24:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:24:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:40.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:24:41 compute-0 podman[410719]: 2025-10-02 13:24:41.386937299 +0000 UTC m=+0.057364532 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd)
Oct 02 13:24:41 compute-0 podman[410718]: 2025-10-02 13:24:41.422032181 +0000 UTC m=+0.085632737 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:24:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2282584196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:41 compute-0 ceph-mon[73668]: pgmap v3409: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct 02 13:24:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 02 13:24:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:42.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:43 compute-0 nova_compute[256940]: 2025-10-02 13:24:43.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:43 compute-0 nova_compute[256940]: 2025-10-02 13:24:43.685 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:43 compute-0 nova_compute[256940]: 2025-10-02 13:24:43.686 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:43 compute-0 nova_compute[256940]: 2025-10-02 13:24:43.686 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:24:43 compute-0 ceph-mon[73668]: pgmap v3410: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 02 13:24:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:44 compute-0 nova_compute[256940]: 2025-10-02 13:24:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:44 compute-0 nova_compute[256940]: 2025-10-02 13:24:44.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:24:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:45 compute-0 ceph-mon[73668]: pgmap v3411: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:24:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:46 compute-0 nova_compute[256940]: 2025-10-02 13:24:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:24:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:48 compute-0 ceph-mon[73668]: pgmap v3412: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:24:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:48 compute-0 nova_compute[256940]: 2025-10-02 13:24:48.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:48 compute-0 nova_compute[256940]: 2025-10-02 13:24:48.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1023 B/s wr, 54 op/s
Oct 02 13:24:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:48.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:49 compute-0 nova_compute[256940]: 2025-10-02 13:24:49.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:50 compute-0 ceph-mon[73668]: pgmap v3413: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1023 B/s wr, 54 op/s
Oct 02 13:24:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:50 compute-0 nova_compute[256940]: 2025-10-02 13:24:50.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 292 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 470 KiB/s wr, 82 op/s
Oct 02 13:24:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:52 compute-0 ceph-mon[73668]: pgmap v3414: 305 pgs: 305 active+clean; 292 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 470 KiB/s wr, 82 op/s
Oct 02 13:24:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 495 KiB/s wr, 58 op/s
Oct 02 13:24:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:52.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:53 compute-0 nova_compute[256940]: 2025-10-02 13:24:53.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:54.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:54 compute-0 ceph-mon[73668]: pgmap v3415: 305 pgs: 305 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 495 KiB/s wr, 58 op/s
Oct 02 13:24:54 compute-0 nova_compute[256940]: 2025-10-02 13:24:54.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 531 KiB/s wr, 54 op/s
Oct 02 13:24:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:24:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:54.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.210 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:24:55 compute-0 podman[410767]: 2025-10-02 13:24:55.411142424 +0000 UTC m=+0.081374827 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:24:55 compute-0 podman[410768]: 2025-10-02 13:24:55.42599496 +0000 UTC m=+0.091180241 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.810 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.811 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.811 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:24:55 compute-0 nova_compute[256940]: 2025-10-02 13:24:55.811 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5bc23980-d45b-4acb-9b37-858b821d2252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:24:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:56 compute-0 ceph-mon[73668]: pgmap v3416: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 531 KiB/s wr, 54 op/s
Oct 02 13:24:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 56 op/s
Oct 02 13:24:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:57 compute-0 nova_compute[256940]: 2025-10-02 13:24:57.119 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:24:57 compute-0 nova_compute[256940]: 2025-10-02 13:24:57.134 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:24:57 compute-0 nova_compute[256940]: 2025-10-02 13:24:57.134 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:24:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:24:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:58.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:24:58 compute-0 nova_compute[256940]: 2025-10-02 13:24:58.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:58 compute-0 ceph-mon[73668]: pgmap v3417: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 56 op/s
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:24:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:24:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:24:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:59 compute-0 nova_compute[256940]: 2025-10-02 13:24:59.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:59 compute-0 sudo[410814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:59 compute-0 sudo[410814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:59 compute-0 sudo[410814]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:59 compute-0 sudo[410839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:59 compute-0 sudo[410839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:59 compute-0 sudo[410839]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:00.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:00 compute-0 ceph-mon[73668]: pgmap v3418: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:25:00 compute-0 sudo[410864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:00 compute-0 sudo[410864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-0 sudo[410864]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-0 sudo[410889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:00 compute-0 sudo[410889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-0 sudo[410889]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 543 KiB/s wr, 54 op/s
Oct 02 13:25:00 compute-0 sudo[410914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:00 compute-0 sudo[410914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-0 sudo[410914]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-0 sudo[410939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:25:00 compute-0 sudo[410939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:01 compute-0 sudo[410939]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 133f5faa-5fdb-4983-a969-95b43efa24be does not exist
Oct 02 13:25:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e3501146-4874-4722-89c6-bae73229d26f does not exist
Oct 02 13:25:01 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bfdd70dd-98a6-4f4c-9222-37ca4a1e6ef2 does not exist
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:25:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:01 compute-0 sudo[410996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:01 compute-0 sudo[410996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:01 compute-0 sudo[410996]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:01 compute-0 sudo[411021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:01 compute-0 sudo[411021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:01 compute-0 sudo[411021]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:01 compute-0 sudo[411046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:01 compute-0 sudo[411046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:01 compute-0 sudo[411046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:01 compute-0 sudo[411071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:25:01 compute-0 sudo[411071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:01.935705733 +0000 UTC m=+0.025317249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.08177088 +0000 UTC m=+0.171382376 container create 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:25:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:02 compute-0 systemd[1]: Started libpod-conmon-98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c.scope.
Oct 02 13:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.342345773 +0000 UTC m=+0.431957299 container init 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.354229472 +0000 UTC m=+0.443840968 container start 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:25:02 compute-0 tender_blackwell[411154]: 167 167
Oct 02 13:25:02 compute-0 systemd[1]: libpod-98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c.scope: Deactivated successfully.
Oct 02 13:25:02 compute-0 conmon[411154]: conmon 98e512d95da7a950ca19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c.scope/container/memory.events
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.365901345 +0000 UTC m=+0.455512861 container attach 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.367166768 +0000 UTC m=+0.456778274 container died 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:25:02 compute-0 ceph-mon[73668]: pgmap v3419: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 543 KiB/s wr, 54 op/s
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:25:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-43c3bd9b39c60d5495798e29937ac8ef53c452761cb62adaa7ccdced85c5bb07-merged.mount: Deactivated successfully.
Oct 02 13:25:02 compute-0 podman[411138]: 2025-10-02 13:25:02.620893423 +0000 UTC m=+0.710504919 container remove 98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:25:02 compute-0 systemd[1]: libpod-conmon-98e512d95da7a950ca1968b1ace6c1845c0fee2a564b842f96e3c706e652bd7c.scope: Deactivated successfully.
Oct 02 13:25:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 75 KiB/s wr, 26 op/s
Oct 02 13:25:02 compute-0 podman[411180]: 2025-10-02 13:25:02.804400903 +0000 UTC m=+0.049781045 container create 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:25:02 compute-0 systemd[1]: Started libpod-conmon-417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c.scope.
Oct 02 13:25:02 compute-0 podman[411180]: 2025-10-02 13:25:02.778670474 +0000 UTC m=+0.024050706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:02 compute-0 podman[411180]: 2025-10-02 13:25:02.905622814 +0000 UTC m=+0.151002976 container init 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:25:02 compute-0 podman[411180]: 2025-10-02 13:25:02.913008326 +0000 UTC m=+0.158388508 container start 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:25:02 compute-0 podman[411180]: 2025-10-02 13:25:02.918639972 +0000 UTC m=+0.164020134 container attach 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:25:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:03 compute-0 nova_compute[256940]: 2025-10-02 13:25:03.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:03 compute-0 amazing_turing[411198]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:25:03 compute-0 amazing_turing[411198]: --> relative data size: 1.0
Oct 02 13:25:03 compute-0 amazing_turing[411198]: --> All data devices are unavailable
Oct 02 13:25:03 compute-0 systemd[1]: libpod-417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c.scope: Deactivated successfully.
Oct 02 13:25:03 compute-0 podman[411180]: 2025-10-02 13:25:03.775003151 +0000 UTC m=+1.020383293 container died 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a03967ab5648ea28d6883916ba69ceaa6f6790a14046d18d7c250b820548c76-merged.mount: Deactivated successfully.
Oct 02 13:25:03 compute-0 podman[411180]: 2025-10-02 13:25:03.856894029 +0000 UTC m=+1.102274171 container remove 417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_turing, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:25:03 compute-0 systemd[1]: libpod-conmon-417a50a70a5c2ed092ae3593f380cba033c9323c7506a0c9d126a720fad8f75c.scope: Deactivated successfully.
Oct 02 13:25:03 compute-0 sudo[411071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:03 compute-0 sudo[411228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:03 compute-0 sudo[411228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:03 compute-0 sudo[411228]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:04 compute-0 sudo[411253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:04 compute-0 sudo[411253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:04 compute-0 sudo[411253]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:04.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:04 compute-0 nova_compute[256940]: 2025-10-02 13:25:04.130 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:04 compute-0 sudo[411278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:04 compute-0 sudo[411278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:04 compute-0 sudo[411278]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:04 compute-0 sudo[411303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:25:04 compute-0 sudo[411303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:04 compute-0 ceph-mon[73668]: pgmap v3420: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 75 KiB/s wr, 26 op/s
Oct 02 13:25:04 compute-0 nova_compute[256940]: 2025-10-02 13:25:04.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.662215272 +0000 UTC m=+0.045372661 container create 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:25:04 compute-0 systemd[1]: Started libpod-conmon-3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed.scope.
Oct 02 13:25:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.643966997 +0000 UTC m=+0.027124426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 240 KiB/s wr, 5 op/s
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.777383365 +0000 UTC m=+0.160540844 container init 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.787269492 +0000 UTC m=+0.170426881 container start 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:25:04 compute-0 dreamy_gates[411386]: 167 167
Oct 02 13:25:04 compute-0 systemd[1]: libpod-3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed.scope: Deactivated successfully.
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.812050386 +0000 UTC m=+0.195207805 container attach 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.812994501 +0000 UTC m=+0.196151890 container died 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ac16a195a6277bb0a86d8effe9422282c164b3b7a295a079fdc838e9833377-merged.mount: Deactivated successfully.
Oct 02 13:25:04 compute-0 podman[411369]: 2025-10-02 13:25:04.915376682 +0000 UTC m=+0.298534071 container remove 3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 13:25:04 compute-0 systemd[1]: libpod-conmon-3ce801e20a5adf7907ac2d0da5947d6c707008d36c43036e7d616429de3f24ed.scope: Deactivated successfully.
Oct 02 13:25:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:04.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:05 compute-0 podman[411413]: 2025-10-02 13:25:05.109174729 +0000 UTC m=+0.048976164 container create 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:25:05 compute-0 systemd[1]: Started libpod-conmon-03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1.scope.
Oct 02 13:25:05 compute-0 podman[411413]: 2025-10-02 13:25:05.086725626 +0000 UTC m=+0.026527081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698f9572d8296e539051bbbf45b36cdad25ffa7cb796f0afa0d6780e2408877e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698f9572d8296e539051bbbf45b36cdad25ffa7cb796f0afa0d6780e2408877e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698f9572d8296e539051bbbf45b36cdad25ffa7cb796f0afa0d6780e2408877e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698f9572d8296e539051bbbf45b36cdad25ffa7cb796f0afa0d6780e2408877e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:05 compute-0 podman[411413]: 2025-10-02 13:25:05.29733993 +0000 UTC m=+0.237141385 container init 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:25:05 compute-0 podman[411413]: 2025-10-02 13:25:05.304668221 +0000 UTC m=+0.244469676 container start 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:25:05 compute-0 podman[411413]: 2025-10-02 13:25:05.367075933 +0000 UTC m=+0.306877398 container attach 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:25:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:25:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]: {
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:     "1": [
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:         {
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "devices": [
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "/dev/loop3"
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             ],
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "lv_name": "ceph_lv0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "lv_size": "7511998464",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "name": "ceph_lv0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "tags": {
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.cluster_name": "ceph",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.crush_device_class": "",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.encrypted": "0",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.osd_id": "1",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.type": "block",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:                 "ceph.vdo": "0"
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             },
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "type": "block",
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:             "vg_name": "ceph_vg0"
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:         }
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]:     ]
Oct 02 13:25:06 compute-0 vigilant_mirzakhani[411429]: }
Oct 02 13:25:06 compute-0 systemd[1]: libpod-03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1.scope: Deactivated successfully.
Oct 02 13:25:06 compute-0 podman[411413]: 2025-10-02 13:25:06.093909885 +0000 UTC m=+1.033711330 container died 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-698f9572d8296e539051bbbf45b36cdad25ffa7cb796f0afa0d6780e2408877e-merged.mount: Deactivated successfully.
Oct 02 13:25:06 compute-0 podman[411413]: 2025-10-02 13:25:06.543358978 +0000 UTC m=+1.483160413 container remove 03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:06 compute-0 systemd[1]: libpod-conmon-03fb7395e82105a198c77c97fef93be6734e0717e8c464df3514523f2dc697c1.scope: Deactivated successfully.
Oct 02 13:25:06 compute-0 sudo[411303]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:06 compute-0 sudo[411454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:06 compute-0 sudo[411454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:06 compute-0 sudo[411454]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 209 KiB/s wr, 7 op/s
Oct 02 13:25:06 compute-0 sudo[411479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:06 compute-0 sudo[411479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:06 compute-0 sudo[411479]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:06 compute-0 sudo[411504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:06 compute-0 sudo[411504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:06 compute-0 sudo[411504]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:06 compute-0 sudo[411530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:25:06 compute-0 sudo[411530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:06 compute-0 ceph-mon[73668]: pgmap v3421: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 240 KiB/s wr, 5 op/s
Oct 02 13:25:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.267789027 +0000 UTC m=+0.092738772 container create 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.196862263 +0000 UTC m=+0.021812008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:07 compute-0 systemd[1]: Started libpod-conmon-62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff.scope.
Oct 02 13:25:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.55685679 +0000 UTC m=+0.381806545 container init 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.565524436 +0000 UTC m=+0.390474171 container start 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:25:07 compute-0 crazy_williamson[411613]: 167 167
Oct 02 13:25:07 compute-0 systemd[1]: libpod-62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff.scope: Deactivated successfully.
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.723609825 +0000 UTC m=+0.548559650 container attach 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 13:25:07 compute-0 podman[411597]: 2025-10-02 13:25:07.724390305 +0000 UTC m=+0.549340070 container died 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 13:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e54e15ab0ffb8f80f0de7d2d98fcd8102283a0e537fd1e258d477c3f6c19fd55-merged.mount: Deactivated successfully.
Oct 02 13:25:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:08 compute-0 ceph-mon[73668]: pgmap v3422: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 209 KiB/s wr, 7 op/s
Oct 02 13:25:08 compute-0 nova_compute[256940]: 2025-10-02 13:25:08.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:08 compute-0 podman[411597]: 2025-10-02 13:25:08.297405679 +0000 UTC m=+1.122355404 container remove 62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:25:08 compute-0 systemd[1]: libpod-conmon-62d1445d50cebe5e06f144264db742fbdd7a5669d2e8b221cc6ff8c37db7c8ff.scope: Deactivated successfully.
Oct 02 13:25:08 compute-0 podman[411638]: 2025-10-02 13:25:08.487212383 +0000 UTC m=+0.023694747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:25:08 compute-0 podman[411638]: 2025-10-02 13:25:08.671662057 +0000 UTC m=+0.208144441 container create 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:25:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:08 compute-0 systemd[1]: Started libpod-conmon-0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404.scope.
Oct 02 13:25:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d362e2b17f4bc8ec58ddb719445831254a670b81aede47451574da712b6b1a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d362e2b17f4bc8ec58ddb719445831254a670b81aede47451574da712b6b1a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d362e2b17f4bc8ec58ddb719445831254a670b81aede47451574da712b6b1a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d362e2b17f4bc8ec58ddb719445831254a670b81aede47451574da712b6b1a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:25:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:09 compute-0 podman[411638]: 2025-10-02 13:25:09.096985753 +0000 UTC m=+0.633468147 container init 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:25:09 compute-0 podman[411638]: 2025-10-02 13:25:09.107448865 +0000 UTC m=+0.643931249 container start 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:25:09 compute-0 podman[411638]: 2025-10-02 13:25:09.115632597 +0000 UTC m=+0.652114961 container attach 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:25:09 compute-0 ovn_controller[148123]: 2025-10-02T13:25:09Z|01078|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Oct 02 13:25:09 compute-0 nova_compute[256940]: 2025-10-02 13:25:09.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]: {
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:         "osd_id": 1,
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:         "type": "bluestore"
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]:     }
Oct 02 13:25:09 compute-0 mystifying_meitner[411655]: }
Oct 02 13:25:09 compute-0 systemd[1]: libpod-0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404.scope: Deactivated successfully.
Oct 02 13:25:09 compute-0 podman[411638]: 2025-10-02 13:25:09.952703695 +0000 UTC m=+1.489186049 container died 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d362e2b17f4bc8ec58ddb719445831254a670b81aede47451574da712b6b1a1-merged.mount: Deactivated successfully.
Oct 02 13:25:10 compute-0 podman[411638]: 2025-10-02 13:25:10.016354689 +0000 UTC m=+1.552837053 container remove 0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:25:10 compute-0 systemd[1]: libpod-conmon-0bee3d2f6ddb862408e22a68e79eca0c2fe254872c879098b4bf8c6413c21404.scope: Deactivated successfully.
Oct 02 13:25:10 compute-0 sudo[411530]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:25:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:25:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9463bdbd-71b8-4ced-b19d-2e524411af01 does not exist
Oct 02 13:25:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c2e18680-1d37-4f4c-978c-70c6437cad49 does not exist
Oct 02 13:25:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e2ab3195-48d2-47bd-adf4-93cfd6773c31 does not exist
Oct 02 13:25:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:10 compute-0 sudo[411690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:10 compute-0 sudo[411690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:10 compute-0 sudo[411690]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:10 compute-0 sudo[411715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:25:10 compute-0 sudo[411715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:10 compute-0 sudo[411715]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:10 compute-0 ceph-mon[73668]: pgmap v3423: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:12.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:12 compute-0 ceph-mon[73668]: pgmap v3424: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:12 compute-0 podman[411741]: 2025-10-02 13:25:12.406832094 +0000 UTC m=+0.075597896 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:25:12 compute-0 podman[411742]: 2025-10-02 13:25:12.409563825 +0000 UTC m=+0.072724302 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:25:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 196 KiB/s wr, 5 op/s
Oct 02 13:25:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:13.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:13 compute-0 nova_compute[256940]: 2025-10-02 13:25:13.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Oct 02 13:25:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Oct 02 13:25:13 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:14 compute-0 ceph-mon[73668]: pgmap v3425: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 196 KiB/s wr, 5 op/s
Oct 02 13:25:14 compute-0 ceph-mon[73668]: osdmap e412: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Oct 02 13:25:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-0 nova_compute[256940]: 2025-10-02 13:25:14.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 330 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 46 op/s
Oct 02 13:25:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Oct 02 13:25:15 compute-0 ceph-mon[73668]: osdmap e413: 3 total, 3 up, 3 in
Oct 02 13:25:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Oct 02 13:25:15 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Oct 02 13:25:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:16 compute-0 ceph-mon[73668]: pgmap v3428: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 330 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 46 op/s
Oct 02 13:25:16 compute-0 ceph-mon[73668]: osdmap e414: 3 total, 3 up, 3 in
Oct 02 13:25:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 198 op/s
Oct 02 13:25:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:17.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:18 compute-0 nova_compute[256940]: 2025-10-02 13:25:18.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Oct 02 13:25:18 compute-0 ceph-mon[73668]: pgmap v3430: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 198 op/s
Oct 02 13:25:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Oct 02 13:25:18 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Oct 02 13:25:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 16 MiB/s wr, 216 op/s
Oct 02 13:25:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:19.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:19 compute-0 ceph-mon[73668]: osdmap e415: 3 total, 3 up, 3 in
Oct 02 13:25:19 compute-0 ceph-mon[73668]: pgmap v3432: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 16 MiB/s wr, 216 op/s
Oct 02 13:25:19 compute-0 nova_compute[256940]: 2025-10-02 13:25:19.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:20 compute-0 sudo[411782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:20 compute-0 sudo[411782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:20 compute-0 sudo[411782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:20.078 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:20 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:20.079 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:25:20 compute-0 nova_compute[256940]: 2025-10-02 13:25:20.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:20 compute-0 sudo[411807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:20 compute-0 sudo[411807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:20 compute-0 sudo[411807]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:20.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 336 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 14 MiB/s wr, 194 op/s
Oct 02 13:25:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Oct 02 13:25:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Oct 02 13:25:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Oct 02 13:25:22 compute-0 ceph-mon[73668]: pgmap v3433: 305 pgs: 305 active+clean; 336 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 14 MiB/s wr, 194 op/s
Oct 02 13:25:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:22.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.0 MiB/s wr, 156 op/s
Oct 02 13:25:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:23.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:23.082 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:25:23 compute-0 ceph-mon[73668]: osdmap e416: 3 total, 3 up, 3 in
Oct 02 13:25:23 compute-0 nova_compute[256940]: 2025-10-02 13:25:23.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:24 compute-0 ceph-mon[73668]: pgmap v3435: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.0 MiB/s wr, 156 op/s
Oct 02 13:25:24 compute-0 nova_compute[256940]: 2025-10-02 13:25:24.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 288 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 4.1 KiB/s wr, 68 op/s
Oct 02 13:25:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4070755414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:26.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Oct 02 13:25:26 compute-0 podman[411836]: 2025-10-02 13:25:26.418360429 +0000 UTC m=+0.077333551 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:25:26 compute-0 podman[411837]: 2025-10-02 13:25:26.498949554 +0000 UTC m=+0.160358319 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:25:26 compute-0 ceph-mon[73668]: pgmap v3436: 305 pgs: 305 active+clean; 288 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 4.1 KiB/s wr, 68 op/s
Oct 02 13:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:26.522 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:26.522 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:26.523 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Oct 02 13:25:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Oct 02 13:25:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 6.4 KiB/s wr, 89 op/s
Oct 02 13:25:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Oct 02 13:25:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Oct 02 13:25:26 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Oct 02 13:25:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:27.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:27 compute-0 ceph-mon[73668]: osdmap e417: 3 total, 3 up, 3 in
Oct 02 13:25:27 compute-0 ceph-mon[73668]: pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 6.4 KiB/s wr, 89 op/s
Oct 02 13:25:27 compute-0 ceph-mon[73668]: osdmap e418: 3 total, 3 up, 3 in
Oct 02 13:25:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:28.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.244 2 DEBUG nova.compute.manager [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.245 2 DEBUG nova.compute.manager [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing instance network info cache due to event network-changed-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.245 2 DEBUG oslo_concurrency.lockutils [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.246 2 DEBUG oslo_concurrency.lockutils [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.246 2 DEBUG nova.network.neutron [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Refreshing network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.331 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.332 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.332 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.333 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.333 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.334 2 INFO nova.compute.manager [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Terminating instance
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.336 2 DEBUG nova.compute.manager [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:28 compute-0 kernel: tapf6b746a3-d5 (unregistering): left promiscuous mode
Oct 02 13:25:28 compute-0 NetworkManager[44981]: <info>  [1759411528.5919] device (tapf6b746a3-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01079|binding|INFO|Releasing lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 from this chassis (sb_readonly=0)
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01080|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 down in Southbound
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01081|binding|INFO|Removing iface tapf6b746a3-d5 ovn-installed in OVS
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.669 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:90:7f 10.100.0.8'], port_security=['fa:16:3e:c1:90:7f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5bc23980-d45b-4acb-9b37-858b821d2252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.671 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 unbound from our chassis
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.673 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.675 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[ca86ecee-9fbe-4dee-9769-1f5d3f5d3aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.676 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 namespace which is not needed anymore
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Oct 02 13:25:28 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000d8.scope: Consumed 17.563s CPU time.
Oct 02 13:25:28 compute-0 systemd-machined[210927]: Machine qemu-110-instance-000000d8 terminated.
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Oct 02 13:25:28 compute-0 kernel: tapf6b746a3-d5: entered promiscuous mode
Oct 02 13:25:28 compute-0 NetworkManager[44981]: <info>  [1759411528.7626] manager: (tapf6b746a3-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Oct 02 13:25:28 compute-0 systemd-udevd[411888]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01082|binding|INFO|Claiming lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for this chassis.
Oct 02 13:25:28 compute-0 kernel: tapf6b746a3-d5 (unregistering): left promiscuous mode
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01083|binding|INFO|f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0: Claiming fa:16:3e:c1:90:7f 10.100.0.8
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.771 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:90:7f 10.100.0.8'], port_security=['fa:16:3e:c1:90:7f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5bc23980-d45b-4acb-9b37-858b821d2252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.804 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Instance destroyed successfully.
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01084|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 ovn-installed in OVS
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01085|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 up in Southbound
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01086|binding|INFO|Releasing lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 from this chassis (sb_readonly=1)
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01087|if_status|INFO|Dropped 1 log messages in last 1193 seconds (most recently, 1193 seconds ago) due to excessive rate
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01088|if_status|INFO|Not setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 down as sb is readonly
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.807 2 DEBUG nova.objects.instance [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'resources' on Instance uuid 5bc23980-d45b-4acb-9b37-858b821d2252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01089|binding|INFO|Removing iface tapf6b746a3-d5 ovn-installed in OVS
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01090|binding|INFO|Releasing lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 from this chassis (sb_readonly=0)
Oct 02 13:25:28 compute-0 ovn_controller[148123]: 2025-10-02T13:25:28Z|01091|binding|INFO|Setting lport f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 down in Southbound
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.821 2 DEBUG nova.virt.libvirt.vif [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-567427000',display_name='tempest-TestSnapshotPattern-server-567427000',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-567427000',id=216,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:23:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-a6c7o0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:24:21Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uuid=5bc23980-d45b-4acb-9b37-858b821d2252,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.822 2 DEBUG nova.network.os_vif_util [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.823 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:90:7f 10.100.0.8'], port_security=['fa:16:3e:c1:90:7f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5bc23980-d45b-4acb-9b37-858b821d2252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.823 2 DEBUG nova.network.os_vif_util [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.825 2 DEBUG os_vif [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6b746a3-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [NOTICE]   (409601) : haproxy version is 2.8.14-c23fe91
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [NOTICE]   (409601) : path to executable is /usr/sbin/haproxy
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [WARNING]  (409601) : Exiting Master process...
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [WARNING]  (409601) : Exiting Master process...
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [ALERT]    (409601) : Current worker (409605) exited with code 143 (Terminated)
Oct 02 13:25:28 compute-0 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[409595]: [WARNING]  (409601) : All workers exited. Exiting... (0)
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 systemd[1]: libpod-22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d.scope: Deactivated successfully.
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.836 2 INFO os_vif [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:90:7f,bridge_name='br-int',has_traffic_filtering=True,id=f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6b746a3-d5')
Oct 02 13:25:28 compute-0 podman[411911]: 2025-10-02 13:25:28.841801561 +0000 UTC m=+0.052357102 container died 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d-userdata-shm.mount: Deactivated successfully.
Oct 02 13:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-62856a11b7fcae9581b84d14e44e4988073c6e3286f89d1b54884157362cee39-merged.mount: Deactivated successfully.
Oct 02 13:25:28 compute-0 podman[411911]: 2025-10-02 13:25:28.889218603 +0000 UTC m=+0.099774154 container cleanup 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:25:28 compute-0 systemd[1]: libpod-conmon-22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d.scope: Deactivated successfully.
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:25:28
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 02 13:25:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:25:28 compute-0 podman[411961]: 2025-10-02 13:25:28.942861648 +0000 UTC m=+0.034912799 container remove 22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.948 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4834db-b594-4d19-bc4e-6c6254029dc9]: (4, ('Thu Oct  2 01:25:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 (22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d)\n22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d\nThu Oct  2 01:25:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 (22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d)\n22c2fdbf48a671ae2b8a3c6c0032e8f448a18fd9c23650421af19f3e5062b41d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.950 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[a27f0279-184b-4460-ad04-1347c1e2c9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.950 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769e2243-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:25:28 compute-0 kernel: tap769e2243-b0: left promiscuous mode
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.956 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[80b46620-d473-4f80-a448-18e5a71d19b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:28 compute-0 nova_compute[256940]: 2025-10-02 13:25:28.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:28.999 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[935a96f5-29f7-4059-aa91-6236e9eb60c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.000 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e650d2fb-7d54-4ea4-ab54-f2decec93048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.015 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e9dd3888-f079-4523-8b3a-ff29ead3cc88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 932516, 'reachable_time': 17589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411977, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.017 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.017 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[950ca3df-3f4d-4704-943d-918e4d763a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.018 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 unbound from our chassis
Oct 02 13:25:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d769e2243\x2dbbee\x2d45a3\x2d8fea\x2dc44f0ea9a1e8.mount: Deactivated successfully.
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.019 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.020 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bb5e95-726e-4b71-a3fc-d91b74cfd4b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.021 158104 INFO neutron.agent.ovn.metadata.agent [-] Port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 unbound from our chassis
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.022 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:25:29 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:25:29.023 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[775e0724-8cb0-484d-a626-387d93b94eab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:25:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:29 compute-0 nova_compute[256940]: 2025-10-02 13:25:29.422 2 DEBUG nova.network.neutron [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updated VIF entry in instance network info cache for port f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:25:29 compute-0 nova_compute[256940]: 2025-10-02 13:25:29.423 2 DEBUG nova.network.neutron [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [{"id": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "address": "fa:16:3e:c1:90:7f", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6b746a3-d5", "ovs_interfaceid": "f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:25:29 compute-0 nova_compute[256940]: 2025-10-02 13:25:29.444 2 DEBUG oslo_concurrency.lockutils [req-a2f6ca84-33b4-43ba-a324-baac65e43367 req-824bec84-e079-4e5b-a0d6-b6f7625897b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5bc23980-d45b-4acb-9b37-858b821d2252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:25:29 compute-0 nova_compute[256940]: 2025-10-02 13:25:29.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:25:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:25:29 compute-0 ceph-mon[73668]: pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Oct 02 13:25:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:30.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.313 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.314 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.314 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.314 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.315 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.315 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.315 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.315 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.316 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.316 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.316 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.316 2 WARNING nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received unexpected event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with vm_state active and task_state deleting.
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.317 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.317 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.317 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.317 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.318 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.319 2 WARNING nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received unexpected event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with vm_state active and task_state deleting.
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.319 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.320 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.320 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.320 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.320 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.321 2 WARNING nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received unexpected event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with vm_state active and task_state deleting.
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.321 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.321 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.321 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.321 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.322 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.322 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-unplugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.322 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.322 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.323 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.323 2 DEBUG oslo_concurrency.lockutils [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.323 2 DEBUG nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] No waiting events found dispatching network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.323 2 WARNING nova.compute.manager [req-12d4b823-6df6-4252-8e17-464e09b0ab23 req-afee7bdf-52ae-4c3d-9742-586e3f9695ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received unexpected event network-vif-plugged-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 for instance with vm_state active and task_state deleting.
Oct 02 13:25:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.4 KiB/s wr, 72 op/s
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.875 2 INFO nova.virt.libvirt.driver [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Deleting instance files /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252_del
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.876 2 INFO nova.virt.libvirt.driver [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Deletion of /var/lib/nova/instances/5bc23980-d45b-4acb-9b37-858b821d2252_del complete
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.918 2 INFO nova.compute.manager [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Took 2.58 seconds to destroy the instance on the hypervisor.
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.919 2 DEBUG oslo.service.loopingcall [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.919 2 DEBUG nova.compute.manager [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:25:30 compute-0 nova_compute[256940]: 2025-10-02 13:25:30.919 2 DEBUG nova.network.neutron [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:25:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:32 compute-0 ceph-mon[73668]: pgmap v3441: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.4 KiB/s wr, 72 op/s
Oct 02 13:25:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:32.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.410 2 DEBUG nova.network.neutron [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.428 2 INFO nova.compute.manager [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Took 1.51 seconds to deallocate network for instance.
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.466 2 DEBUG nova.compute.manager [req-fa42017c-6ba9-466f-bbcd-5bf3f7405c97 req-2a72dd0e-8b8f-4880-8956-f9c7a0c6f664 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Received event network-vif-deleted-f6b746a3-d5c1-4b05-b69b-dd6a089bb0c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.478 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.478 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:32 compute-0 nova_compute[256940]: 2025-10-02 13:25:32.571 2 DEBUG oslo_concurrency.processutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 159 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 4.6 KiB/s wr, 81 op/s
Oct 02 13:25:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833919162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:33.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.050 2 DEBUG oslo_concurrency.processutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.057 2 DEBUG nova.compute.provider_tree [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.073 2 DEBUG nova.scheduler.client.report [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:25:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2833919162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.092 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.113 2 INFO nova.scheduler.client.report [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Deleted allocations for instance 5bc23980-d45b-4acb-9b37-858b821d2252
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.166 2 DEBUG oslo_concurrency.lockutils [None req-19187cb6-b00b-45f8-b8a5-a2fa295d6133 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "5bc23980-d45b-4acb-9b37-858b821d2252" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:33 compute-0 nova_compute[256940]: 2025-10-02 13:25:33.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:34 compute-0 ceph-mon[73668]: pgmap v3442: 305 pgs: 305 active+clean; 159 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 4.6 KiB/s wr, 81 op/s
Oct 02 13:25:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:34.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:34 compute-0 nova_compute[256940]: 2025-10-02 13:25:34.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Oct 02 13:25:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1206261349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3046713103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:36 compute-0 ceph-mon[73668]: pgmap v3443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Oct 02 13:25:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Oct 02 13:25:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Oct 02 13:25:36 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.931672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536931707, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1129, "num_deletes": 254, "total_data_size": 1672386, "memory_usage": 1708112, "flush_reason": "Manual Compaction"}
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536946924, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1085387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75603, "largest_seqno": 76731, "table_properties": {"data_size": 1080869, "index_size": 2041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11767, "raw_average_key_size": 21, "raw_value_size": 1071177, "raw_average_value_size": 1933, "num_data_blocks": 90, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411451, "oldest_key_time": 1759411451, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15341 microseconds, and 4405 cpu microseconds.
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.947008) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1085387 bytes OK
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.947033) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.949392) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.949419) EVENT_LOG_v1 {"time_micros": 1759411536949410, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.949446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1667265, prev total WAL file size 1667265, number of live WAL files 2.
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.950606) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1059KB)], [173(13MB)]
Oct 02 13:25:36 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536950679, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14754044, "oldest_snapshot_seqno": -1}
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10228 keys, 11465407 bytes, temperature: kUnknown
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537038446, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 11465407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11401764, "index_size": 36970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270387, "raw_average_key_size": 26, "raw_value_size": 11225317, "raw_average_value_size": 1097, "num_data_blocks": 1398, "num_entries": 10228, "num_filter_entries": 10228, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:25:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.038902) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11465407 bytes
Oct 02 13:25:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:37.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.041277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.0 rd, 130.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(24.2) write-amplify(10.6) OK, records in: 10714, records dropped: 486 output_compression: NoCompression
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.041322) EVENT_LOG_v1 {"time_micros": 1759411537041295, "job": 108, "event": "compaction_finished", "compaction_time_micros": 87839, "compaction_time_cpu_micros": 38104, "output_level": 6, "num_output_files": 1, "total_output_size": 11465407, "num_input_records": 10714, "num_output_records": 10228, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537041835, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537046360, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:36.950476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.046425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.046431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.046434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.046437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:25:37.046440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-0 nova_compute[256940]: 2025-10-02 13:25:37.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:37 compute-0 nova_compute[256940]: 2025-10-02 13:25:37.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:37 compute-0 ceph-mon[73668]: pgmap v3444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:37 compute-0 ceph-mon[73668]: osdmap e419: 3 total, 3 up, 3 in
Oct 02 13:25:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:38 compute-0 nova_compute[256940]: 2025-10-02 13:25:38.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:39.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:25:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 57K writes, 226K keys, 57K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s
                                           Cumulative WAL: 57K writes, 20K syncs, 2.85 writes per sync, written: 0.21 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2950 writes, 10K keys, 2950 commit groups, 1.0 writes per commit group, ingest: 9.51 MB, 0.02 MB/s
                                           Interval WAL: 2950 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:25:39 compute-0 nova_compute[256940]: 2025-10-02 13:25:39.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:39 compute-0 ceph-mon[73668]: pgmap v3446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/881232608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:40.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.231 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.231 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.231 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.231 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.232 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:40 compute-0 sudo[412007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:40 compute-0 sudo[412007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:40 compute-0 sudo[412007]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:40 compute-0 sudo[412033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:40 compute-0 sudo[412033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:40 compute-0 sudo[412033]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860199988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.670 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.829 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.830 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4114MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.830 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.830 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.889 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.889 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:25:40 compute-0 nova_compute[256940]: 2025-10-02 13:25:40.907 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:25:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:25:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2823375801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3860199988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:41.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610177502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:41 compute-0 nova_compute[256940]: 2025-10-02 13:25:41.382 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:41 compute-0 nova_compute[256940]: 2025-10-02 13:25:41.389 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:25:41 compute-0 nova_compute[256940]: 2025-10-02 13:25:41.421 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:25:41 compute-0 nova_compute[256940]: 2025-10-02 13:25:41.444 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:25:41 compute-0 nova_compute[256940]: 2025-10-02 13:25:41.445 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:42 compute-0 ceph-mon[73668]: pgmap v3447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 02 13:25:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1610177502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 409 B/s wr, 4 op/s
Oct 02 13:25:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:43 compute-0 podman[412105]: 2025-10-02 13:25:43.41829439 +0000 UTC m=+0.081527550 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 02 13:25:43 compute-0 podman[412104]: 2025-10-02 13:25:43.424358688 +0000 UTC m=+0.083768758 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:25:43 compute-0 nova_compute[256940]: 2025-10-02 13:25:43.447 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:43 compute-0 nova_compute[256940]: 2025-10-02 13:25:43.800 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411528.797933, 5bc23980-d45b-4acb-9b37-858b821d2252 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:25:43 compute-0 nova_compute[256940]: 2025-10-02 13:25:43.801 2 INFO nova.compute.manager [-] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] VM Stopped (Lifecycle Event)
Oct 02 13:25:43 compute-0 nova_compute[256940]: 2025-10-02 13:25:43.824 2 DEBUG nova.compute.manager [None req-4c53b428-4b76-4a70-b344-f3be758475dc - - - - - -] [instance: 5bc23980-d45b-4acb-9b37-858b821d2252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:25:43 compute-0 nova_compute[256940]: 2025-10-02 13:25:43.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:44 compute-0 ceph-mon[73668]: pgmap v3448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 409 B/s wr, 4 op/s
Oct 02 13:25:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:44 compute-0 nova_compute[256940]: 2025-10-02 13:25:44.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:44 compute-0 nova_compute[256940]: 2025-10-02 13:25:44.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:44 compute-0 nova_compute[256940]: 2025-10-02 13:25:44.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:25:44 compute-0 nova_compute[256940]: 2025-10-02 13:25:44.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:45.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:46 compute-0 ceph-mon[73668]: pgmap v3449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:47.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 13:25:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:48.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:48 compute-0 ceph-mon[73668]: pgmap v3450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:48 compute-0 nova_compute[256940]: 2025-10-02 13:25:48.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:48 compute-0 nova_compute[256940]: 2025-10-02 13:25:48.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:49.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:49 compute-0 nova_compute[256940]: 2025-10-02 13:25:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:49 compute-0 nova_compute[256940]: 2025-10-02 13:25:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:50.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:50 compute-0 ceph-mon[73668]: pgmap v3451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:51 compute-0 nova_compute[256940]: 2025-10-02 13:25:51.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:52.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:52 compute-0 ceph-mon[73668]: pgmap v3452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:53 compute-0 ceph-mon[73668]: pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:53 compute-0 nova_compute[256940]: 2025-10-02 13:25:53.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:54.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:54 compute-0 nova_compute[256940]: 2025-10-02 13:25:54.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:55 compute-0 ceph-mon[73668]: pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:25:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:25:56 compute-0 nova_compute[256940]: 2025-10-02 13:25:56.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:56 compute-0 nova_compute[256940]: 2025-10-02 13:25:56.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:25:56 compute-0 nova_compute[256940]: 2025-10-02 13:25:56.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:25:56 compute-0 nova_compute[256940]: 2025-10-02 13:25:56.229 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:25:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:57 compute-0 nova_compute[256940]: 2025-10-02 13:25:57.223 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:57 compute-0 podman[412151]: 2025-10-02 13:25:57.410876491 +0000 UTC m=+0.079825095 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:25:57 compute-0 podman[412152]: 2025-10-02 13:25:57.41966531 +0000 UTC m=+0.085699309 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:25:58 compute-0 ceph-mon[73668]: pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:58.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:25:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:58 compute-0 nova_compute[256940]: 2025-10-02 13:25:58.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:25:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:59 compute-0 nova_compute[256940]: 2025-10-02 13:25:59.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:00 compute-0 ceph-mon[73668]: pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:00.370 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:26:00 compute-0 nova_compute[256940]: 2025-10-02 13:26:00.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:00.371 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:26:00 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:00.372 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:26:00 compute-0 sudo[412198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:00 compute-0 sudo[412198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:00 compute-0 sudo[412198]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:00 compute-0 sudo[412223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:00 compute-0 sudo[412223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:00 compute-0 sudo[412223]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:01.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:01 compute-0 ceph-mon[73668]: pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:02.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:03 compute-0 ceph-mon[73668]: pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:03 compute-0 nova_compute[256940]: 2025-10-02 13:26:03.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:04 compute-0 nova_compute[256940]: 2025-10-02 13:26:04.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:05 compute-0 ceph-mon[73668]: pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:08 compute-0 ceph-mon[73668]: pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:26:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:08.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:26:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:08 compute-0 nova_compute[256940]: 2025-10-02 13:26:08.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:09 compute-0 nova_compute[256940]: 2025-10-02 13:26:09.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:10 compute-0 ceph-mon[73668]: pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:10.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:10 compute-0 sudo[412253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:10 compute-0 sudo[412253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-0 sudo[412253]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-0 sudo[412278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:26:10 compute-0 sudo[412278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-0 sudo[412278]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-0 sudo[412303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:10 compute-0 sudo[412303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-0 sudo[412303]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:10 compute-0 sudo[412328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:26:10 compute-0 sudo[412328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:11.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:11 compute-0 sudo[412328]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1365a107-1e19-4dd1-abe2-f6aa3e9270ab does not exist
Oct 02 13:26:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8ba096e5-68bb-4054-9705-8d36602cd7cd does not exist
Oct 02 13:26:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 683ddc0f-cb17-434f-a5af-0edf771aa5df does not exist
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:26:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:11 compute-0 sudo[412384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:11 compute-0 sudo[412384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-0 sudo[412384]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:11 compute-0 sudo[412409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:26:11 compute-0 sudo[412409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-0 sudo[412409]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:11 compute-0 sudo[412434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:11 compute-0 sudo[412434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-0 sudo[412434]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:11 compute-0 sudo[412459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:26:11 compute-0 sudo[412459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.01925825 +0000 UTC m=+0.043275586 container create b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:26:12 compute-0 systemd[1]: Started libpod-conmon-b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def.scope.
Oct 02 13:26:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.001974571 +0000 UTC m=+0.025991937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.098461999 +0000 UTC m=+0.122479335 container init b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.107194716 +0000 UTC m=+0.131212052 container start b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.111178549 +0000 UTC m=+0.135195875 container attach b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:26:12 compute-0 pensive_easley[412541]: 167 167
Oct 02 13:26:12 compute-0 systemd[1]: libpod-b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def.scope: Deactivated successfully.
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.115226215 +0000 UTC m=+0.139243551 container died b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:26:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cca6b9b904674c75f231a78fbec09a0c7a71825bcffc5f6e6c5442967640508-merged.mount: Deactivated successfully.
Oct 02 13:26:12 compute-0 podman[412525]: 2025-10-02 13:26:12.152968976 +0000 UTC m=+0.176986312 container remove b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:26:12 compute-0 systemd[1]: libpod-conmon-b7f396ad6dd821a87b0cb1ba57d4b894afe31af9e687ffdfe48b6cf1a6253def.scope: Deactivated successfully.
Oct 02 13:26:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:12.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:12 compute-0 ceph-mon[73668]: pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:26:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:12 compute-0 podman[412565]: 2025-10-02 13:26:12.349748711 +0000 UTC m=+0.052710662 container create f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:26:12 compute-0 systemd[1]: Started libpod-conmon-f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7.scope.
Oct 02 13:26:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:12 compute-0 podman[412565]: 2025-10-02 13:26:12.328497418 +0000 UTC m=+0.031459419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:12 compute-0 podman[412565]: 2025-10-02 13:26:12.427538293 +0000 UTC m=+0.130500264 container init f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:26:12 compute-0 podman[412565]: 2025-10-02 13:26:12.4378126 +0000 UTC m=+0.140774541 container start f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:26:12 compute-0 podman[412565]: 2025-10-02 13:26:12.4412991 +0000 UTC m=+0.144261061 container attach f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:13 compute-0 cool_merkle[412580]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:26:13 compute-0 cool_merkle[412580]: --> relative data size: 1.0
Oct 02 13:26:13 compute-0 cool_merkle[412580]: --> All data devices are unavailable
Oct 02 13:26:13 compute-0 systemd[1]: libpod-f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7.scope: Deactivated successfully.
Oct 02 13:26:13 compute-0 podman[412565]: 2025-10-02 13:26:13.292718411 +0000 UTC m=+0.995680392 container died f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f03779c744654fae2669d6a0dcc7039b378389430cacfab26d31266d542ec22-merged.mount: Deactivated successfully.
Oct 02 13:26:13 compute-0 podman[412565]: 2025-10-02 13:26:13.37270563 +0000 UTC m=+1.075667581 container remove f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:26:13 compute-0 systemd[1]: libpod-conmon-f15311becba15f21a5bff9e16ce6bcb327a209f0da6a80f8cda122b52215c3b7.scope: Deactivated successfully.
Oct 02 13:26:13 compute-0 sudo[412459]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:13 compute-0 sudo[412609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:13 compute-0 sudo[412609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:13 compute-0 sudo[412609]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:13 compute-0 podman[412633]: 2025-10-02 13:26:13.597629617 +0000 UTC m=+0.075863683 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 02 13:26:13 compute-0 podman[412634]: 2025-10-02 13:26:13.598249393 +0000 UTC m=+0.076087739 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:13 compute-0 sudo[412650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:26:13 compute-0 sudo[412650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:13 compute-0 sudo[412650]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:13 compute-0 sudo[412699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:13 compute-0 sudo[412699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:13 compute-0 sudo[412699]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:13 compute-0 sudo[412724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:26:13 compute-0 sudo[412724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:13 compute-0 nova_compute[256940]: 2025-10-02 13:26:13.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.118213938 +0000 UTC m=+0.047589938 container create b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:26:14 compute-0 systemd[1]: Started libpod-conmon-b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da.scope.
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.100255121 +0000 UTC m=+0.029631141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.218871354 +0000 UTC m=+0.148247374 container init b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.228013902 +0000 UTC m=+0.157389902 container start b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.231578545 +0000 UTC m=+0.160954615 container attach b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:26:14 compute-0 funny_galois[412806]: 167 167
Oct 02 13:26:14 compute-0 systemd[1]: libpod-b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da.scope: Deactivated successfully.
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.23641909 +0000 UTC m=+0.165795090 container died b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-905bdb2cc4c19fc11f61d0af0f5a8f4e19fae8692a7e766efb4d1d3f6c491df1-merged.mount: Deactivated successfully.
Oct 02 13:26:14 compute-0 podman[412790]: 2025-10-02 13:26:14.277021686 +0000 UTC m=+0.206397686 container remove b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:14 compute-0 systemd[1]: libpod-conmon-b978e7a5c1012d8412e53c67a11527db8935181412ecfbe3cdc389e9c21a95da.scope: Deactivated successfully.
Oct 02 13:26:14 compute-0 ceph-mon[73668]: pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:14 compute-0 podman[412829]: 2025-10-02 13:26:14.469060997 +0000 UTC m=+0.053076780 container create cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 13:26:14 compute-0 systemd[1]: Started libpod-conmon-cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5.scope.
Oct 02 13:26:14 compute-0 podman[412829]: 2025-10-02 13:26:14.445649949 +0000 UTC m=+0.029665732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:14 compute-0 nova_compute[256940]: 2025-10-02 13:26:14.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b467fd675bf56d63a153c15be00a52eb01b848d600c0373f6be15321f1f8494/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b467fd675bf56d63a153c15be00a52eb01b848d600c0373f6be15321f1f8494/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b467fd675bf56d63a153c15be00a52eb01b848d600c0373f6be15321f1f8494/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b467fd675bf56d63a153c15be00a52eb01b848d600c0373f6be15321f1f8494/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:14 compute-0 podman[412829]: 2025-10-02 13:26:14.580450263 +0000 UTC m=+0.164466036 container init cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:26:14 compute-0 podman[412829]: 2025-10-02 13:26:14.588721498 +0000 UTC m=+0.172737261 container start cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:26:14 compute-0 podman[412829]: 2025-10-02 13:26:14.592212198 +0000 UTC m=+0.176227961 container attach cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:26:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Oct 02 13:26:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:15.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:15 compute-0 brave_northcutt[412845]: {
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:     "1": [
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:         {
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "devices": [
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "/dev/loop3"
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             ],
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "lv_name": "ceph_lv0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "lv_size": "7511998464",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "name": "ceph_lv0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "tags": {
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.cluster_name": "ceph",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.crush_device_class": "",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.encrypted": "0",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.osd_id": "1",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.type": "block",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:                 "ceph.vdo": "0"
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             },
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "type": "block",
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:             "vg_name": "ceph_vg0"
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:         }
Oct 02 13:26:15 compute-0 brave_northcutt[412845]:     ]
Oct 02 13:26:15 compute-0 brave_northcutt[412845]: }
Oct 02 13:26:15 compute-0 systemd[1]: libpod-cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5.scope: Deactivated successfully.
Oct 02 13:26:15 compute-0 podman[412855]: 2025-10-02 13:26:15.399734637 +0000 UTC m=+0.029656602 container died cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b467fd675bf56d63a153c15be00a52eb01b848d600c0373f6be15321f1f8494-merged.mount: Deactivated successfully.
Oct 02 13:26:15 compute-0 podman[412855]: 2025-10-02 13:26:15.464819809 +0000 UTC m=+0.094741754 container remove cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:26:15 compute-0 systemd[1]: libpod-conmon-cd988ef11992eb1381a899ff2ebf203245f471fc21315626328d05ed411089f5.scope: Deactivated successfully.
Oct 02 13:26:15 compute-0 sudo[412724]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:15 compute-0 sudo[412870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:15 compute-0 sudo[412870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:15 compute-0 sudo[412870]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:15 compute-0 sudo[412895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:26:15 compute-0 sudo[412895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:15 compute-0 sudo[412895]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:15 compute-0 sudo[412920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:15 compute-0 sudo[412920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:15 compute-0 sudo[412920]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:15 compute-0 sudo[412945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:26:15 compute-0 sudo[412945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.115156713 +0000 UTC m=+0.038427700 container create a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:26:16 compute-0 systemd[1]: Started libpod-conmon-a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e.scope.
Oct 02 13:26:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.184246379 +0000 UTC m=+0.107517386 container init a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.189923106 +0000 UTC m=+0.113194093 container start a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.097806402 +0000 UTC m=+0.021077419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.19315777 +0000 UTC m=+0.116428757 container attach a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:26:16 compute-0 sleepy_maxwell[413024]: 167 167
Oct 02 13:26:16 compute-0 systemd[1]: libpod-a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e.scope: Deactivated successfully.
Oct 02 13:26:16 compute-0 conmon[413024]: conmon a81bb49b2268c16e54e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e.scope/container/memory.events
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.19545949 +0000 UTC m=+0.118730477 container died a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:26:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:16.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d780976b87a135006b057943a1056a37dcd59e5b5dc4f2cac5f1249da6f8914f-merged.mount: Deactivated successfully.
Oct 02 13:26:16 compute-0 podman[413008]: 2025-10-02 13:26:16.229774962 +0000 UTC m=+0.153045949 container remove a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_maxwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:26:16 compute-0 systemd[1]: libpod-conmon-a81bb49b2268c16e54e3d494792ea6a7875ad52c19d6bea9ab7e3250a4aa609e.scope: Deactivated successfully.
Oct 02 13:26:16 compute-0 podman[413048]: 2025-10-02 13:26:16.400073119 +0000 UTC m=+0.049255222 container create fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:26:16 compute-0 systemd[1]: Started libpod-conmon-fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd.scope.
Oct 02 13:26:16 compute-0 ceph-mon[73668]: pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Oct 02 13:26:16 compute-0 podman[413048]: 2025-10-02 13:26:16.378665212 +0000 UTC m=+0.027847325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:26:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cedec12ac1bdcc98c4bd428269d63fbe9a5758cb8783caa5cf34e92eb927369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cedec12ac1bdcc98c4bd428269d63fbe9a5758cb8783caa5cf34e92eb927369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cedec12ac1bdcc98c4bd428269d63fbe9a5758cb8783caa5cf34e92eb927369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cedec12ac1bdcc98c4bd428269d63fbe9a5758cb8783caa5cf34e92eb927369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:26:16 compute-0 podman[413048]: 2025-10-02 13:26:16.513021135 +0000 UTC m=+0.162203248 container init fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:26:16 compute-0 podman[413048]: 2025-10-02 13:26:16.521375262 +0000 UTC m=+0.170557365 container start fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:26:16 compute-0 podman[413048]: 2025-10-02 13:26:16.524752489 +0000 UTC m=+0.173934592 container attach fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:26:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]: {
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:         "osd_id": 1,
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:         "type": "bluestore"
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]:     }
Oct 02 13:26:17 compute-0 beautiful_satoshi[413064]: }
Oct 02 13:26:17 compute-0 systemd[1]: libpod-fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd.scope: Deactivated successfully.
Oct 02 13:26:17 compute-0 podman[413048]: 2025-10-02 13:26:17.399880566 +0000 UTC m=+1.049062659 container died fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cedec12ac1bdcc98c4bd428269d63fbe9a5758cb8783caa5cf34e92eb927369-merged.mount: Deactivated successfully.
Oct 02 13:26:17 compute-0 podman[413048]: 2025-10-02 13:26:17.456062217 +0000 UTC m=+1.105244320 container remove fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:26:17 compute-0 systemd[1]: libpod-conmon-fe7e8043c6d270199865c92958e89dbe747aa1a7aba2a6a7d924512529ebf9bd.scope: Deactivated successfully.
Oct 02 13:26:17 compute-0 sudo[412945]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:26:17 compute-0 ceph-mon[73668]: pgmap v3465: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:26:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 45a1a63d-87fd-4409-8ebb-503840084040 does not exist
Oct 02 13:26:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 29f6d6e0-83a1-4579-9c3b-f3815fa96f9e does not exist
Oct 02 13:26:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9ba399af-99bc-4220-9a0d-cc832bbe2593 does not exist
Oct 02 13:26:17 compute-0 sudo[413096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:17 compute-0 sudo[413096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:17 compute-0 sudo[413096]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:17 compute-0 sudo[413121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:26:17 compute-0 sudo[413121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:17 compute-0 sudo[413121]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:18.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:18 compute-0 nova_compute[256940]: 2025-10-02 13:26:18.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:19 compute-0 nova_compute[256940]: 2025-10-02 13:26:19.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:19 compute-0 ceph-mon[73668]: pgmap v3466: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3344741174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:20.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:20 compute-0 sudo[413147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:20 compute-0 sudo[413147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:20 compute-0 sudo[413147]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:20 compute-0 sudo[413172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:20 compute-0 sudo[413172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:20 compute-0 sudo[413172]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/119302741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:26:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:21.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:21 compute-0 ceph-mon[73668]: pgmap v3467: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:22.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:23 compute-0 nova_compute[256940]: 2025-10-02 13:26:23.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:23 compute-0 ovn_controller[148123]: 2025-10-02T13:26:23Z|01092|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 13:26:23 compute-0 ceph-mon[73668]: pgmap v3468: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/977123340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:26:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:24.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:24 compute-0 nova_compute[256940]: 2025-10-02 13:26:24.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:26 compute-0 ceph-mon[73668]: pgmap v3469: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:26.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:26.523 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:26.524 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:26.524 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 22 KiB/s wr, 10 op/s
Oct 02 13:26:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:28 compute-0 ceph-mon[73668]: pgmap v3470: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 22 KiB/s wr, 10 op/s
Oct 02 13:26:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:28 compute-0 podman[413202]: 2025-10-02 13:26:28.382663007 +0000 UTC m=+0.055094653 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:26:28 compute-0 podman[413203]: 2025-10-02 13:26:28.418269002 +0000 UTC m=+0.090682528 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 6 op/s
Oct 02 13:26:28 compute-0 nova_compute[256940]: 2025-10-02 13:26:28.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:26:28
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control']
Oct 02 13:26:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:26:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:29 compute-0 nova_compute[256940]: 2025-10-02 13:26:29.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:26:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:26:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:30 compute-0 ceph-mon[73668]: pgmap v3471: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 6 op/s
Oct 02 13:26:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 02 13:26:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:32 compute-0 ceph-mon[73668]: pgmap v3472: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 02 13:26:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2893176451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:26:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:33.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:33 compute-0 nova_compute[256940]: 2025-10-02 13:26:33.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:34.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:34 compute-0 ceph-mon[73668]: pgmap v3473: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:26:34 compute-0 nova_compute[256940]: 2025-10-02 13:26:34.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 20 op/s
Oct 02 13:26:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:36 compute-0 ceph-mon[73668]: pgmap v3474: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 20 op/s
Oct 02 13:26:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 13 KiB/s wr, 32 op/s
Oct 02 13:26:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:37.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/859102279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3214726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:26:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:26:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:38 compute-0 ceph-mon[73668]: pgmap v3475: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 13 KiB/s wr, 32 op/s
Oct 02 13:26:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 13 KiB/s wr, 26 op/s
Oct 02 13:26:38 compute-0 nova_compute[256940]: 2025-10-02 13:26:38.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:39 compute-0 nova_compute[256940]: 2025-10-02 13:26:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:39 compute-0 ceph-mon[73668]: pgmap v3476: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 13 KiB/s wr, 26 op/s
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.237 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:40 compute-0 sudo[413272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:40 compute-0 sudo[413272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:40 compute-0 sudo[413272]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812219334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.704 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:40 compute-0 sudo[413297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:40 compute-0 sudo[413297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:40 compute-0 sudo[413297]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 730 KiB/s rd, 13 KiB/s wr, 34 op/s
Oct 02 13:26:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3812219334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.852 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.853 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4147MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.854 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.854 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.908 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.908 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.923 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6485334659408152 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:26:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.940 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.941 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:26:40 compute-0 nova_compute[256940]: 2025-10-02 13:26:40.955 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.013 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.027 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386844792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.520 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.527 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.544 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.546 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:26:41 compute-0 nova_compute[256940]: 2025-10-02 13:26:41.546 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:42 compute-0 ceph-mon[73668]: pgmap v3477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 730 KiB/s rd, 13 KiB/s wr, 34 op/s
Oct 02 13:26:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1386844792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3229219732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:42.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 34 op/s
Oct 02 13:26:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/298458676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:43 compute-0 nova_compute[256940]: 2025-10-02 13:26:43.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:44 compute-0 ceph-mon[73668]: pgmap v3478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 34 op/s
Oct 02 13:26:44 compute-0 podman[413348]: 2025-10-02 13:26:44.389274878 +0000 UTC m=+0.059714543 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:26:44 compute-0 podman[413349]: 2025-10-02 13:26:44.389385401 +0000 UTC m=+0.058972314 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:26:44 compute-0 nova_compute[256940]: 2025-10-02 13:26:44.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:45 compute-0 nova_compute[256940]: 2025-10-02 13:26:45.547 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:45 compute-0 nova_compute[256940]: 2025-10-02 13:26:45.548 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:45 compute-0 nova_compute[256940]: 2025-10-02 13:26:45.548 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:26:46 compute-0 nova_compute[256940]: 2025-10-02 13:26:46.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:46 compute-0 ceph-mon[73668]: pgmap v3479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Oct 02 13:26:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:47.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:47 compute-0 ceph-mon[73668]: pgmap v3480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Oct 02 13:26:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:48 compute-0 nova_compute[256940]: 2025-10-02 13:26:48.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:49.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:49 compute-0 nova_compute[256940]: 2025-10-02 13:26:49.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Oct 02 13:26:50 compute-0 nova_compute[256940]: 2025-10-02 13:26:50.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:50 compute-0 nova_compute[256940]: 2025-10-02 13:26:50.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:50 compute-0 ceph-mon[73668]: pgmap v3481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Oct 02 13:26:50 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Oct 02 13:26:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 02 13:26:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:51.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:51 compute-0 ceph-mon[73668]: osdmap e420: 3 total, 3 up, 3 in
Oct 02 13:26:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:52 compute-0 nova_compute[256940]: 2025-10-02 13:26:52.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:52 compute-0 ceph-mon[73668]: pgmap v3483: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 02 13:26:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 02 13:26:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:53 compute-0 ceph-mon[73668]: pgmap v3484: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 02 13:26:53 compute-0 nova_compute[256940]: 2025-10-02 13:26:53.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:54 compute-0 nova_compute[256940]: 2025-10-02 13:26:54.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 02 13:26:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:56 compute-0 ceph-mon[73668]: pgmap v3485: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 02 13:26:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:56.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:26:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:57 compute-0 nova_compute[256940]: 2025-10-02 13:26:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:57 compute-0 nova_compute[256940]: 2025-10-02 13:26:57.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:26:57 compute-0 nova_compute[256940]: 2025-10-02 13:26:57.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:26:57 compute-0 nova_compute[256940]: 2025-10-02 13:26:57.256 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:26:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2514806046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:58 compute-0 nova_compute[256940]: 2025-10-02 13:26:58.250 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:26:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:26:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:58.879 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:26:58 compute-0 nova_compute[256940]: 2025-10-02 13:26:58.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:58 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:26:58.880 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:26:58 compute-0 nova_compute[256940]: 2025-10-02 13:26:58.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:58 compute-0 ceph-mon[73668]: pgmap v3486: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:26:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:26:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:26:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:26:59 compute-0 podman[413393]: 2025-10-02 13:26:59.433228178 +0000 UTC m=+0.091828398 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:26:59 compute-0 podman[413394]: 2025-10-02 13:26:59.457918869 +0000 UTC m=+0.112985287 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:26:59 compute-0 nova_compute[256940]: 2025-10-02 13:26:59.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:00 compute-0 ceph-mon[73668]: pgmap v3487: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:27:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:00 compute-0 sudo[413436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:00 compute-0 sudo[413436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:00 compute-0 sudo[413436]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 MiB/s wr, 34 op/s
Oct 02 13:27:00 compute-0 sudo[413461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:00 compute-0 sudo[413461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:00 compute-0 sudo[413461]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:02 compute-0 ceph-mon[73668]: pgmap v3488: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 MiB/s wr, 34 op/s
Oct 02 13:27:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 852 KiB/s wr, 18 op/s
Oct 02 13:27:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3595277946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:03 compute-0 nova_compute[256940]: 2025-10-02 13:27:03.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:04 compute-0 ceph-mon[73668]: pgmap v3489: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 852 KiB/s wr, 18 op/s
Oct 02 13:27:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3139082809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:04 compute-0 nova_compute[256940]: 2025-10-02 13:27:04.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Oct 02 13:27:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:06.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:06 compute-0 ceph-mon[73668]: pgmap v3490: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Oct 02 13:27:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:06 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:06.884 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:27:06 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:07 compute-0 nova_compute[256940]: 2025-10-02 13:27:07.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:07 compute-0 ceph-mon[73668]: pgmap v3491: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:08 compute-0 nova_compute[256940]: 2025-10-02 13:27:08.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:09 compute-0 nova_compute[256940]: 2025-10-02 13:27:09.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:09 compute-0 ceph-mon[73668]: pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:27:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:27:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/288643418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Oct 02 13:27:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Oct 02 13:27:11 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Oct 02 13:27:11 compute-0 ceph-mon[73668]: pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 111 op/s
Oct 02 13:27:12 compute-0 ceph-mon[73668]: osdmap e421: 3 total, 3 up, 3 in
Oct 02 13:27:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:13 compute-0 nova_compute[256940]: 2025-10-02 13:27:13.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.239 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.239 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:27:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:14.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:14 compute-0 ceph-mon[73668]: pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 111 op/s
Oct 02 13:27:14 compute-0 nova_compute[256940]: 2025-10-02 13:27:14.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:27:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:27:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 150 op/s
Oct 02 13:27:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:15 compute-0 podman[413494]: 2025-10-02 13:27:15.393179387 +0000 UTC m=+0.063456620 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:27:15 compute-0 podman[413495]: 2025-10-02 13:27:15.398440954 +0000 UTC m=+0.064336333 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:16.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:16 compute-0 ceph-mon[73668]: pgmap v3496: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 150 op/s
Oct 02 13:27:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.261 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.263 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.264 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.264 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.264 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.265 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.353 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.353 2 WARNING nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.353 2 WARNING nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.353 2 WARNING nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.353 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Removable base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.354 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.354 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.354 2 INFO nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9db343af560801cab61c541fe1a9a4551054155b
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.354 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.354 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 13:27:17 compute-0 nova_compute[256940]: 2025-10-02 13:27:17.355 2 DEBUG nova.virt.libvirt.imagecache [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 13:27:17 compute-0 ceph-mon[73668]: pgmap v3497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:18 compute-0 sudo[413535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:18 compute-0 sudo[413535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-0 sudo[413535]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-0 sudo[413560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:18 compute-0 sudo[413560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-0 sudo[413560]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-0 sudo[413585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:18 compute-0 sudo[413585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-0 sudo[413585]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-0 sudo[413610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:27:18 compute-0 sudo[413610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:18 compute-0 nova_compute[256940]: 2025-10-02 13:27:18.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:19 compute-0 sudo[413610]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:19 compute-0 nova_compute[256940]: 2025-10-02 13:27:19.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:19 compute-0 ceph-mon[73668]: pgmap v3498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:27:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:27:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 200 op/s
Oct 02 13:27:20 compute-0 sudo[413668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:20 compute-0 sudo[413668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:20 compute-0 sudo[413668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:21 compute-0 sudo[413693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:21 compute-0 sudo[413693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:21 compute-0 sudo[413693]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 345eacca-c26d-4f09-a0f6-019616590d89 does not exist
Oct 02 13:27:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 66ae7677-7c50-4e13-b20d-dff5ee3a87a3 does not exist
Oct 02 13:27:21 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a3d7a9a2-28bb-4290-b784-c062a491cc5d does not exist
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-0 sudo[413718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:21 compute-0 sudo[413718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:21 compute-0 sudo[413718]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:21 compute-0 sudo[413743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:21 compute-0 sudo[413743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:21 compute-0 sudo[413743]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:21 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-0 sudo[413768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:21 compute-0 sudo[413768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:21 compute-0 sudo[413768]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:21 compute-0 sudo[413793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:27:21 compute-0 sudo[413793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:21 compute-0 podman[413858]: 2025-10-02 13:27:21.661340062 +0000 UTC m=+0.025625187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:21 compute-0 podman[413858]: 2025-10-02 13:27:21.903988819 +0000 UTC m=+0.268273924 container create fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Oct 02 13:27:21 compute-0 systemd[1]: Started libpod-conmon-fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0.scope.
Oct 02 13:27:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Oct 02 13:27:22 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Oct 02 13:27:22 compute-0 podman[413858]: 2025-10-02 13:27:22.115839846 +0000 UTC m=+0.480124971 container init fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:22 compute-0 podman[413858]: 2025-10-02 13:27:22.125815215 +0000 UTC m=+0.490100350 container start fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:27:22 compute-0 podman[413858]: 2025-10-02 13:27:22.132841258 +0000 UTC m=+0.497126373 container attach fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:27:22 compute-0 systemd[1]: libpod-fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0.scope: Deactivated successfully.
Oct 02 13:27:22 compute-0 quizzical_goldwasser[413874]: 167 167
Oct 02 13:27:22 compute-0 conmon[413874]: conmon fedfba1653ca77924d98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0.scope/container/memory.events
Oct 02 13:27:22 compute-0 podman[413858]: 2025-10-02 13:27:22.136748539 +0000 UTC m=+0.501033654 container died fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fdc573b42fbf5278b1db329e0870a9faac73aab63f3262f81ec4401dc409472-merged.mount: Deactivated successfully.
Oct 02 13:27:22 compute-0 podman[413858]: 2025-10-02 13:27:22.193804532 +0000 UTC m=+0.558089627 container remove fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:27:22 compute-0 systemd[1]: libpod-conmon-fedfba1653ca77924d985dbe7ac4727e513fcf8ac42e1af9e42237cf783096a0.scope: Deactivated successfully.
Oct 02 13:27:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:22.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:22 compute-0 ceph-mon[73668]: pgmap v3499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 200 op/s
Oct 02 13:27:22 compute-0 ceph-mon[73668]: osdmap e422: 3 total, 3 up, 3 in
Oct 02 13:27:22 compute-0 podman[413899]: 2025-10-02 13:27:22.359692584 +0000 UTC m=+0.044545319 container create b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:22 compute-0 systemd[1]: Started libpod-conmon-b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667.scope.
Oct 02 13:27:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:22 compute-0 podman[413899]: 2025-10-02 13:27:22.440045753 +0000 UTC m=+0.124898488 container init b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:27:22 compute-0 podman[413899]: 2025-10-02 13:27:22.342196709 +0000 UTC m=+0.027049464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:22 compute-0 podman[413899]: 2025-10-02 13:27:22.446880741 +0000 UTC m=+0.131733476 container start b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:27:22 compute-0 podman[413899]: 2025-10-02 13:27:22.453289137 +0000 UTC m=+0.138141902 container attach b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 KiB/s wr, 217 op/s
Oct 02 13:27:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:23 compute-0 upbeat_proskuriakova[413916]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:27:23 compute-0 upbeat_proskuriakova[413916]: --> relative data size: 1.0
Oct 02 13:27:23 compute-0 upbeat_proskuriakova[413916]: --> All data devices are unavailable
Oct 02 13:27:23 compute-0 systemd[1]: libpod-b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667.scope: Deactivated successfully.
Oct 02 13:27:23 compute-0 podman[413899]: 2025-10-02 13:27:23.254908862 +0000 UTC m=+0.939761597 container died b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 13:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f709e7b6395714c5b113b5ed19e3c620ab594039310c6c6c91d2a6e15778872c-merged.mount: Deactivated successfully.
Oct 02 13:27:23 compute-0 podman[413899]: 2025-10-02 13:27:23.327845168 +0000 UTC m=+1.012697903 container remove b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:23 compute-0 systemd[1]: libpod-conmon-b17037697b8d48826fa7a7661a1a8f9af71d418f5de0b3a6adf63f1e06965667.scope: Deactivated successfully.
Oct 02 13:27:23 compute-0 sudo[413793]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:23 compute-0 sudo[413945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:23 compute-0 sudo[413945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:23 compute-0 sudo[413945]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:23 compute-0 sudo[413970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:23 compute-0 sudo[413970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:23 compute-0 sudo[413970]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:23 compute-0 sudo[413995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:23 compute-0 sudo[413995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:23 compute-0 sudo[413995]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:23 compute-0 sudo[414020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:27:23 compute-0 sudo[414020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:23 compute-0 podman[414085]: 2025-10-02 13:27:23.949299722 +0000 UTC m=+0.101853729 container create bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:27:23 compute-0 podman[414085]: 2025-10-02 13:27:23.869899708 +0000 UTC m=+0.022453755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:23 compute-0 nova_compute[256940]: 2025-10-02 13:27:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:24 compute-0 systemd[1]: Started libpod-conmon-bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85.scope.
Oct 02 13:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:24 compute-0 podman[414085]: 2025-10-02 13:27:24.25548487 +0000 UTC m=+0.408038877 container init bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:27:24 compute-0 podman[414085]: 2025-10-02 13:27:24.263209421 +0000 UTC m=+0.415763428 container start bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:24 compute-0 amazing_elgamal[414101]: 167 167
Oct 02 13:27:24 compute-0 systemd[1]: libpod-bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85.scope: Deactivated successfully.
Oct 02 13:27:24 compute-0 podman[414085]: 2025-10-02 13:27:24.285581872 +0000 UTC m=+0.438135919 container attach bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:24 compute-0 podman[414085]: 2025-10-02 13:27:24.28702965 +0000 UTC m=+0.439583697 container died bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:24 compute-0 ceph-mon[73668]: pgmap v3501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 KiB/s wr, 217 op/s
Oct 02 13:27:24 compute-0 nova_compute[256940]: 2025-10-02 13:27:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9352be40f2fb505c118d4c80aa86fbde495d3f166c1c5e37ceba2d43cda39d63-merged.mount: Deactivated successfully.
Oct 02 13:27:24 compute-0 podman[414085]: 2025-10-02 13:27:24.776987365 +0000 UTC m=+0.929541392 container remove bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:27:24 compute-0 systemd[1]: libpod-conmon-bb6e1334e1b0ac6547fd5831bd545ccaa96d9a1002a242bf657ff6f8128c1e85.scope: Deactivated successfully.
Oct 02 13:27:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 818 B/s wr, 203 op/s
Oct 02 13:27:25 compute-0 podman[414127]: 2025-10-02 13:27:24.923907724 +0000 UTC m=+0.027811904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:25 compute-0 podman[414127]: 2025-10-02 13:27:25.150219677 +0000 UTC m=+0.254123827 container create f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:27:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:25 compute-0 systemd[1]: Started libpod-conmon-f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f.scope.
Oct 02 13:27:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d95287df6293b7a1af1abaa3bc3d50345230179198bf30005e399894f031ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d95287df6293b7a1af1abaa3bc3d50345230179198bf30005e399894f031ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d95287df6293b7a1af1abaa3bc3d50345230179198bf30005e399894f031ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d95287df6293b7a1af1abaa3bc3d50345230179198bf30005e399894f031ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:25 compute-0 podman[414127]: 2025-10-02 13:27:25.405563524 +0000 UTC m=+0.509467674 container init f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 02 13:27:25 compute-0 podman[414127]: 2025-10-02 13:27:25.41388941 +0000 UTC m=+0.517793570 container start f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:27:25 compute-0 podman[414127]: 2025-10-02 13:27:25.534931547 +0000 UTC m=+0.638835737 container attach f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:27:25 compute-0 ceph-mon[73668]: pgmap v3502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 818 B/s wr, 203 op/s
Oct 02 13:27:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:26.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]: {
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:     "1": [
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:         {
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "devices": [
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "/dev/loop3"
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             ],
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "lv_name": "ceph_lv0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "lv_size": "7511998464",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "name": "ceph_lv0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "tags": {
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.cluster_name": "ceph",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.crush_device_class": "",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.encrypted": "0",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.osd_id": "1",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.type": "block",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:                 "ceph.vdo": "0"
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             },
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "type": "block",
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:             "vg_name": "ceph_vg0"
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:         }
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]:     ]
Oct 02 13:27:26 compute-0 recursing_elgamal[414143]: }
Oct 02 13:27:26 compute-0 systemd[1]: libpod-f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f.scope: Deactivated successfully.
Oct 02 13:27:26 compute-0 podman[414127]: 2025-10-02 13:27:26.429170159 +0000 UTC m=+1.533074329 container died f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:26.524 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:26.525 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d95287df6293b7a1af1abaa3bc3d50345230179198bf30005e399894f031ca-merged.mount: Deactivated successfully.
Oct 02 13:27:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:27 compute-0 podman[414127]: 2025-10-02 13:27:27.4390902 +0000 UTC m=+2.542994360 container remove f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elgamal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:27:27 compute-0 sudo[414020]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:27 compute-0 systemd[1]: libpod-conmon-f716ac384dac1a21a8e5bf0da8eed32c4f9e98732ca63518a741647a14bb889f.scope: Deactivated successfully.
Oct 02 13:27:27 compute-0 sudo[414167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:27 compute-0 sudo[414167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:27 compute-0 sudo[414167]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:27 compute-0 ceph-mon[73668]: pgmap v3503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:27 compute-0 sudo[414192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:27 compute-0 sudo[414192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:27 compute-0 sudo[414192]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:27 compute-0 sudo[414217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:27 compute-0 sudo[414217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:27 compute-0 sudo[414217]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:27 compute-0 sudo[414242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:27:27 compute-0 sudo[414242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.318424546 +0000 UTC m=+0.107228968 container create 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.243048067 +0000 UTC m=+0.031852529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:28 compute-0 systemd[1]: Started libpod-conmon-05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23.scope.
Oct 02 13:27:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.697413627 +0000 UTC m=+0.486218029 container init 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.704413379 +0000 UTC m=+0.493217801 container start 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:27:28 compute-0 nifty_almeida[414323]: 167 167
Oct 02 13:27:28 compute-0 systemd[1]: libpod-05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23.scope: Deactivated successfully.
Oct 02 13:27:28 compute-0 conmon[414323]: conmon 05f1fc9120691e228d75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23.scope/container/memory.events
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.746582195 +0000 UTC m=+0.535386637 container attach 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.747081748 +0000 UTC m=+0.535886140 container died 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4272fe67673ace4e1fa59d30d2ec849ca6ef0ff333f7c25592112a90867d34f6-merged.mount: Deactivated successfully.
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:28 compute-0 podman[414307]: 2025-10-02 13:27:28.824150541 +0000 UTC m=+0.612954923 container remove 05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:27:28 compute-0 systemd[1]: libpod-conmon-05f1fc9120691e228d75678bc4867432062113e45d1111dddeb0ab50509fbb23.scope: Deactivated successfully.
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:27:28
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups']
Oct 02 13:27:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:27:28 compute-0 nova_compute[256940]: 2025-10-02 13:27:28.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:29 compute-0 podman[414350]: 2025-10-02 13:27:28.956527542 +0000 UTC m=+0.024213300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:29 compute-0 podman[414350]: 2025-10-02 13:27:29.095464474 +0000 UTC m=+0.163150242 container create e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:29 compute-0 systemd[1]: Started libpod-conmon-e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f.scope.
Oct 02 13:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d359240f651c053d07c0e60b4cdecd0d3121a67a1f9d8833dc15bc8f4e6f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d359240f651c053d07c0e60b4cdecd0d3121a67a1f9d8833dc15bc8f4e6f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d359240f651c053d07c0e60b4cdecd0d3121a67a1f9d8833dc15bc8f4e6f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d359240f651c053d07c0e60b4cdecd0d3121a67a1f9d8833dc15bc8f4e6f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:29 compute-0 podman[414350]: 2025-10-02 13:27:29.524735081 +0000 UTC m=+0.592420839 container init e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:29 compute-0 podman[414350]: 2025-10-02 13:27:29.53583659 +0000 UTC m=+0.603522328 container start e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:27:29 compute-0 nova_compute[256940]: 2025-10-02 13:27:29.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:29 compute-0 podman[414350]: 2025-10-02 13:27:29.633837687 +0000 UTC m=+0.701523425 container attach e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:27:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:27:30 compute-0 ceph-mon[73668]: pgmap v3504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:30 compute-0 podman[414376]: 2025-10-02 13:27:30.412960388 +0000 UTC m=+0.077876475 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:30 compute-0 podman[414377]: 2025-10-02 13:27:30.435942765 +0000 UTC m=+0.105457362 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]: {
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:         "osd_id": 1,
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:         "type": "bluestore"
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]:     }
Oct 02 13:27:30 compute-0 romantic_engelbart[414366]: }
Oct 02 13:27:30 compute-0 systemd[1]: libpod-e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f.scope: Deactivated successfully.
Oct 02 13:27:30 compute-0 podman[414432]: 2025-10-02 13:27:30.574957329 +0000 UTC m=+0.029742845 container died e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 131 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 674 KiB/s wr, 123 op/s
Oct 02 13:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe5d359240f651c053d07c0e60b4cdecd0d3121a67a1f9d8833dc15bc8f4e6f4-merged.mount: Deactivated successfully.
Oct 02 13:27:31 compute-0 podman[414432]: 2025-10-02 13:27:31.087528162 +0000 UTC m=+0.542313688 container remove e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:27:31 compute-0 systemd[1]: libpod-conmon-e07bb0158ad7a01adade4aec72c87f8a1fc5b9f3cc76bde4f371abb8905ea27f.scope: Deactivated successfully.
Oct 02 13:27:31 compute-0 sudo[414242]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:27:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:27:31 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 83124218-19d4-4caa-9c43-aab0696d0df5 does not exist
Oct 02 13:27:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d436691c-7733-426d-8f7e-1415168fe16d does not exist
Oct 02 13:27:31 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 57d89899-79e8-4945-856c-7af1f033296d does not exist
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.179383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651179438, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 252, "total_data_size": 2090854, "memory_usage": 2127072, "flush_reason": "Manual Compaction"}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651195047, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 2055343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76732, "largest_seqno": 78028, "table_properties": {"data_size": 2049288, "index_size": 3321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13355, "raw_average_key_size": 20, "raw_value_size": 2036904, "raw_average_value_size": 3086, "num_data_blocks": 146, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411537, "oldest_key_time": 1759411537, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 15761 microseconds, and 5391 cpu microseconds.
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.195147) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 2055343 bytes OK
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.195175) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.200942) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.200967) EVENT_LOG_v1 {"time_micros": 1759411651200959, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.200990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2085171, prev total WAL file size 2116051, number of live WAL files 2.
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.201861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(2007KB)], [176(10MB)]
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651201940, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 13520750, "oldest_snapshot_seqno": -1}
Oct 02 13:27:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:31 compute-0 sudo[414448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:31 compute-0 sudo[414448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:31 compute-0 sudo[414448]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10365 keys, 11507402 bytes, temperature: kUnknown
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651293041, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 11507402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11442872, "index_size": 37537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 273970, "raw_average_key_size": 26, "raw_value_size": 11263977, "raw_average_value_size": 1086, "num_data_blocks": 1416, "num_entries": 10365, "num_filter_entries": 10365, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.293292) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11507402 bytes
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.297220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 126.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.6) OK, records in: 10888, records dropped: 523 output_compression: NoCompression
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.297264) EVENT_LOG_v1 {"time_micros": 1759411651297246, "job": 110, "event": "compaction_finished", "compaction_time_micros": 91181, "compaction_time_cpu_micros": 30571, "output_level": 6, "num_output_files": 1, "total_output_size": 11507402, "num_input_records": 10888, "num_output_records": 10365, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651297982, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651301027, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.201744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.301090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.301097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.301121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.301124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:27:31.301127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-0 sudo[414473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:27:31 compute-0 sudo[414473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:31 compute-0 sudo[414473]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:32 compute-0 ceph-mon[73668]: pgmap v3505: 305 pgs: 305 active+clean; 131 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 674 KiB/s wr, 123 op/s
Oct 02 13:27:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:32 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 140 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 965 KiB/s wr, 82 op/s
Oct 02 13:27:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fefac5d5-fdd7-40a8-857d-39d41f53c597 does not exist
Oct 02 13:27:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e6dcc3a1-04a2-41d1-9712-4ab077c0e254 does not exist
Oct 02 13:27:33 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 82566777-93d2-43b5-958a-30f33b0ce43e does not exist
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:27:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:33 compute-0 sudo[414499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:33 compute-0 sudo[414499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:33 compute-0 sudo[414499]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:33 compute-0 sudo[414524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:33 compute-0 sudo[414524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:33 compute-0 sudo[414524]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:33 compute-0 sudo[414549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:33 compute-0 sudo[414549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:33 compute-0 sudo[414549]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:33 compute-0 sudo[414574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:27:33 compute-0 sudo[414574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:33 compute-0 nova_compute[256940]: 2025-10-02 13:27:33.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.137659692 +0000 UTC m=+0.063912962 container create 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:27:34 compute-0 systemd[1]: Started libpod-conmon-867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe.scope.
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.098808623 +0000 UTC m=+0.025061903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.255168017 +0000 UTC m=+0.181421317 container init 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.262315663 +0000 UTC m=+0.188568933 container start 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:34 compute-0 nifty_kare[414656]: 167 167
Oct 02 13:27:34 compute-0 systemd[1]: libpod-867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe.scope: Deactivated successfully.
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.267904328 +0000 UTC m=+0.194157598 container attach 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.268446622 +0000 UTC m=+0.194699892 container died 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b196038fdae2fafefbfe84b50e222c37cd6936df0392fc2ffc041826fe9bca-merged.mount: Deactivated successfully.
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:34 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:34 compute-0 podman[414640]: 2025-10-02 13:27:34.443219115 +0000 UTC m=+0.369472395 container remove 867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:34 compute-0 systemd[1]: libpod-conmon-867807c3c527510241eafd787b0546899a9d0800ae04e85d3083cc9b256400fe.scope: Deactivated successfully.
Oct 02 13:27:34 compute-0 nova_compute[256940]: 2025-10-02 13:27:34.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:34 compute-0 podman[414681]: 2025-10-02 13:27:34.683397308 +0000 UTC m=+0.060634187 container create 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:34 compute-0 systemd[1]: Started libpod-conmon-1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448.scope.
Oct 02 13:27:34 compute-0 podman[414681]: 2025-10-02 13:27:34.656297973 +0000 UTC m=+0.033534942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:34 compute-0 podman[414681]: 2025-10-02 13:27:34.79430684 +0000 UTC m=+0.171543759 container init 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:34 compute-0 podman[414681]: 2025-10-02 13:27:34.801711273 +0000 UTC m=+0.178948152 container start 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:34 compute-0 podman[414681]: 2025-10-02 13:27:34.805233864 +0000 UTC m=+0.182470753 container attach 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:27:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:27:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:35 compute-0 recursing_margulis[414698]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:27:35 compute-0 recursing_margulis[414698]: --> relative data size: 1.0
Oct 02 13:27:35 compute-0 recursing_margulis[414698]: --> All data devices are unavailable
Oct 02 13:27:35 compute-0 systemd[1]: libpod-1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448.scope: Deactivated successfully.
Oct 02 13:27:35 compute-0 podman[414714]: 2025-10-02 13:27:35.699600302 +0000 UTC m=+0.030671789 container died 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:27:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca402f492777872106a31c699484f9e92029c6f214a54670d5a7f8069c995256-merged.mount: Deactivated successfully.
Oct 02 13:27:35 compute-0 podman[414714]: 2025-10-02 13:27:35.771398328 +0000 UTC m=+0.102469785 container remove 1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:27:35 compute-0 systemd[1]: libpod-conmon-1bb624cff52c878d285df902213cfdae3052b613bc82a258f6e7dfbe97a73448.scope: Deactivated successfully.
Oct 02 13:27:35 compute-0 sudo[414574]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:35 compute-0 sudo[414729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:35 compute-0 sudo[414729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:35 compute-0 sudo[414729]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:35 compute-0 sudo[414754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:35 compute-0 sudo[414754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:35 compute-0 sudo[414754]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:36 compute-0 sudo[414779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:36 compute-0 sudo[414779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:36 compute-0 sudo[414779]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:36 compute-0 sudo[414804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:27:36 compute-0 sudo[414804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:36 compute-0 podman[414868]: 2025-10-02 13:27:36.478327923 +0000 UTC m=+0.108997854 container create 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:36 compute-0 podman[414868]: 2025-10-02 13:27:36.390186692 +0000 UTC m=+0.020856633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 13:27:36 compute-0 systemd[1]: Started libpod-conmon-2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b.scope.
Oct 02 13:27:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:37 compute-0 podman[414868]: 2025-10-02 13:27:37.054567741 +0000 UTC m=+0.685237662 container init 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:27:37 compute-0 podman[414868]: 2025-10-02 13:27:37.062531788 +0000 UTC m=+0.693201709 container start 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:27:37 compute-0 peaceful_lichterman[414885]: 167 167
Oct 02 13:27:37 compute-0 systemd[1]: libpod-2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b.scope: Deactivated successfully.
Oct 02 13:27:37 compute-0 podman[414868]: 2025-10-02 13:27:37.077284471 +0000 UTC m=+0.707954392 container attach 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:37 compute-0 podman[414868]: 2025-10-02 13:27:37.0776092 +0000 UTC m=+0.708279121 container died 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-19df2bc4d81bbb0117cbbc3a0f860f7daf5b758620d6c38a5f1409eeb98bf66c-merged.mount: Deactivated successfully.
Oct 02 13:27:37 compute-0 podman[414868]: 2025-10-02 13:27:37.149734903 +0000 UTC m=+0.780404824 container remove 2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lichterman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:37 compute-0 systemd[1]: libpod-conmon-2a5d0dfd57e26344f47749a5ac6c67e48a2e9d346918f16aa77c33988b35e12b.scope: Deactivated successfully.
Oct 02 13:27:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:37.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:37 compute-0 podman[414911]: 2025-10-02 13:27:37.31963197 +0000 UTC m=+0.054954080 container create 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:27:37 compute-0 systemd[1]: Started libpod-conmon-278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb.scope.
Oct 02 13:27:37 compute-0 podman[414911]: 2025-10-02 13:27:37.291328834 +0000 UTC m=+0.026651024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04cb7a0f0d5771c80c03aba7ec00cadab5da652b28f4b6fc9e0e352bf3db1f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04cb7a0f0d5771c80c03aba7ec00cadab5da652b28f4b6fc9e0e352bf3db1f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04cb7a0f0d5771c80c03aba7ec00cadab5da652b28f4b6fc9e0e352bf3db1f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04cb7a0f0d5771c80c03aba7ec00cadab5da652b28f4b6fc9e0e352bf3db1f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:37 compute-0 podman[414911]: 2025-10-02 13:27:37.440748938 +0000 UTC m=+0.176071048 container init 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:27:37 compute-0 podman[414911]: 2025-10-02 13:27:37.447958795 +0000 UTC m=+0.183280905 container start 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:27:37 compute-0 podman[414911]: 2025-10-02 13:27:37.461590409 +0000 UTC m=+0.196912619 container attach 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:38 compute-0 ceph-mon[73668]: pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:27:38 compute-0 ceph-mon[73668]: pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 13:27:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3531806881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2860605944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:38 compute-0 jolly_burnell[414928]: {
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:     "1": [
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:         {
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "devices": [
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "/dev/loop3"
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             ],
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "lv_name": "ceph_lv0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "lv_size": "7511998464",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "name": "ceph_lv0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "tags": {
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.cluster_name": "ceph",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.crush_device_class": "",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.encrypted": "0",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.osd_id": "1",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.type": "block",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:                 "ceph.vdo": "0"
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             },
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "type": "block",
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:             "vg_name": "ceph_vg0"
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:         }
Oct 02 13:27:38 compute-0 jolly_burnell[414928]:     ]
Oct 02 13:27:38 compute-0 jolly_burnell[414928]: }
Oct 02 13:27:38 compute-0 systemd[1]: libpod-278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb.scope: Deactivated successfully.
Oct 02 13:27:38 compute-0 podman[414911]: 2025-10-02 13:27:38.233957825 +0000 UTC m=+0.969279935 container died 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04cb7a0f0d5771c80c03aba7ec00cadab5da652b28f4b6fc9e0e352bf3db1f9-merged.mount: Deactivated successfully.
Oct 02 13:27:38 compute-0 podman[414911]: 2025-10-02 13:27:38.298616036 +0000 UTC m=+1.033938146 container remove 278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:38 compute-0 systemd[1]: libpod-conmon-278412e81ed0ceddb814fec15e7acedfc5f9fb9eaa6e9ae2532f31ded3d734fb.scope: Deactivated successfully.
Oct 02 13:27:38 compute-0 sudo[414804]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:38 compute-0 sudo[414949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:38 compute-0 sudo[414949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:38 compute-0 sudo[414949]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:38 compute-0 sudo[414974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:38 compute-0 sudo[414974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:38 compute-0 sudo[414974]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:38 compute-0 sudo[414999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:38 compute-0 sudo[414999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:38 compute-0 sudo[414999]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:38 compute-0 sudo[415024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:27:38 compute-0 sudo[415024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.908599081 +0000 UTC m=+0.050502203 container create 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:27:38 compute-0 systemd[1]: Started libpod-conmon-16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5.scope.
Oct 02 13:27:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.884744551 +0000 UTC m=+0.026647683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.980001807 +0000 UTC m=+0.121904919 container init 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.988995131 +0000 UTC m=+0.130898213 container start 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:27:38 compute-0 gallant_yalow[415107]: 167 167
Oct 02 13:27:38 compute-0 systemd[1]: libpod-16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5.scope: Deactivated successfully.
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.995715266 +0000 UTC m=+0.137618348 container attach 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:38 compute-0 podman[415090]: 2025-10-02 13:27:38.996778003 +0000 UTC m=+0.138681085 container died 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:38 compute-0 nova_compute[256940]: 2025-10-02 13:27:38.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-38f3f0a32855309c609898053d949ce6eb3431c16c2157eb40e532bf1eea2a47-merged.mount: Deactivated successfully.
Oct 02 13:27:39 compute-0 podman[415090]: 2025-10-02 13:27:39.044328499 +0000 UTC m=+0.186231571 container remove 16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:39 compute-0 systemd[1]: libpod-conmon-16224a609b5432ed2bef7afb4f6eff3cd122bbdb7066f4579e38f42bd52a28a5.scope: Deactivated successfully.
Oct 02 13:27:39 compute-0 ceph-mon[73668]: pgmap v3509: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:39 compute-0 podman[415132]: 2025-10-02 13:27:39.217931702 +0000 UTC m=+0.058433630 container create 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:27:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:39 compute-0 podman[415132]: 2025-10-02 13:27:39.180029586 +0000 UTC m=+0.020531504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:27:39 compute-0 systemd[1]: Started libpod-conmon-4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72.scope.
Oct 02 13:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1342c89232e6a5ef61c03f6576a31a01a630b6e5a65c008a8e322de20d308ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1342c89232e6a5ef61c03f6576a31a01a630b6e5a65c008a8e322de20d308ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1342c89232e6a5ef61c03f6576a31a01a630b6e5a65c008a8e322de20d308ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1342c89232e6a5ef61c03f6576a31a01a630b6e5a65c008a8e322de20d308ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:27:39 compute-0 podman[415132]: 2025-10-02 13:27:39.354658236 +0000 UTC m=+0.195160194 container init 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:39 compute-0 podman[415132]: 2025-10-02 13:27:39.367434608 +0000 UTC m=+0.207936566 container start 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:27:39 compute-0 podman[415132]: 2025-10-02 13:27:39.373508185 +0000 UTC m=+0.214010193 container attach 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:27:39 compute-0 nova_compute[256940]: 2025-10-02 13:27:39.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:40 compute-0 brave_fermi[415148]: {
Oct 02 13:27:40 compute-0 brave_fermi[415148]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:27:40 compute-0 brave_fermi[415148]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:27:40 compute-0 brave_fermi[415148]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:27:40 compute-0 brave_fermi[415148]:         "osd_id": 1,
Oct 02 13:27:40 compute-0 brave_fermi[415148]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:27:40 compute-0 brave_fermi[415148]:         "type": "bluestore"
Oct 02 13:27:40 compute-0 brave_fermi[415148]:     }
Oct 02 13:27:40 compute-0 brave_fermi[415148]: }
Oct 02 13:27:40 compute-0 systemd[1]: libpod-4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72.scope: Deactivated successfully.
Oct 02 13:27:40 compute-0 podman[415132]: 2025-10-02 13:27:40.242090402 +0000 UTC m=+1.082592320 container died 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:27:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:40.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1342c89232e6a5ef61c03f6576a31a01a630b6e5a65c008a8e322de20d308ec3-merged.mount: Deactivated successfully.
Oct 02 13:27:40 compute-0 podman[415132]: 2025-10-02 13:27:40.503769834 +0000 UTC m=+1.344271742 container remove 4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:27:40 compute-0 sudo[415024]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:27:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:27:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d282fb6e-4013-4554-af99-3eab22125d59 does not exist
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b849dc6f-ec07-4e63-a7f0-14a36bfabfc3 does not exist
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev da03addd-83fc-4945-849a-69eb096b0a70 does not exist
Oct 02 13:27:40 compute-0 systemd[1]: libpod-conmon-4ad0a206d47b585460704910cd68b2877d6c623faf66a5d8189560875c6fab72.scope: Deactivated successfully.
Oct 02 13:27:40 compute-0 sudo[415181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:40 compute-0 sudo[415181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:40 compute-0 sudo[415181]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:40 compute-0 sudo[415206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:27:40 compute-0 sudo[415206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:40 compute-0 sudo[415206]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031503364624046156 of space, bias 1.0, pg target 0.9451009387213847 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:27:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:27:41 compute-0 sudo[415232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:41 compute-0 sudo[415232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:41 compute-0 sudo[415232]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:41.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:41 compute-0 sudo[415257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:41 compute-0 sudo[415257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:41 compute-0 sudo[415257]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:41 compute-0 ceph-mon[73668]: pgmap v3510: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3155975616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.304 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.332 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.332 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.332 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.332 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.333 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3220820876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3497635110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 19 op/s
Oct 02 13:27:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147395669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.847 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.987 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.988 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4065MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.988 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:42 compute-0 nova_compute[256940]: 2025-10-02 13:27:42.988 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.071 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.072 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.095 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:43.324 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:27:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:43.325 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:27:43 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:27:43.326 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829377715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.536 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.543 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.564 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.566 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:27:43 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.567 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:43 compute-0 ceph-mon[73668]: pgmap v3511: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 19 op/s
Oct 02 13:27:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/147395669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/213874345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/829377715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:44 compute-0 nova_compute[256940]: 2025-10-02 13:27:43.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:44.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:44 compute-0 nova_compute[256940]: 2025-10-02 13:27:44.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 955 KiB/s wr, 4 op/s
Oct 02 13:27:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:45 compute-0 nova_compute[256940]: 2025-10-02 13:27:45.475 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:45 compute-0 ceph-mon[73668]: pgmap v3512: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 955 KiB/s wr, 4 op/s
Oct 02 13:27:46 compute-0 nova_compute[256940]: 2025-10-02 13:27:46.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:46 compute-0 nova_compute[256940]: 2025-10-02 13:27:46.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:46 compute-0 nova_compute[256940]: 2025-10-02 13:27:46.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:27:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:46 compute-0 podman[415328]: 2025-10-02 13:27:46.4030407 +0000 UTC m=+0.064361644 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:27:46 compute-0 podman[415329]: 2025-10-02 13:27:46.431669144 +0000 UTC m=+0.092235608 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:27:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 13:27:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:47 compute-0 ceph-mon[73668]: pgmap v3513: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 13:27:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4220012604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:49 compute-0 nova_compute[256940]: 2025-10-02 13:27:49.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:49.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:49 compute-0 nova_compute[256940]: 2025-10-02 13:27:49.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:49 compute-0 ceph-mon[73668]: pgmap v3514: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:50 compute-0 nova_compute[256940]: 2025-10-02 13:27:50.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
Oct 02 13:27:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:51.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:51 compute-0 ceph-mon[73668]: pgmap v3515: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
Oct 02 13:27:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:52 compute-0 nova_compute[256940]: 2025-10-02 13:27:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:52 compute-0 nova_compute[256940]: 2025-10-02 13:27:52.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:27:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:27:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 341 B/s wr, 13 op/s
Oct 02 13:27:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:54 compute-0 nova_compute[256940]: 2025-10-02 13:27:54.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:54 compute-0 ceph-mon[73668]: pgmap v3516: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 341 B/s wr, 13 op/s
Oct 02 13:27:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:54 compute-0 nova_compute[256940]: 2025-10-02 13:27:54.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 49 op/s
Oct 02 13:27:55 compute-0 nova_compute[256940]: 2025-10-02 13:27:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:55.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:55 compute-0 ceph-mon[73668]: pgmap v3517: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 49 op/s
Oct 02 13:27:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:56.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:57.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:58 compute-0 ceph-mon[73668]: pgmap v3518: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:58.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:27:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:59 compute-0 ceph-mon[73668]: pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.236 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.237 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:27:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:27:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.260 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:27:59 compute-0 nova_compute[256940]: 2025-10-02 13:27:59.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:28:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:01.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:01 compute-0 sudo[415376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:01 compute-0 sudo[415376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:01 compute-0 sudo[415376]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:01 compute-0 sudo[415414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:01 compute-0 sudo[415414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:01 compute-0 sudo[415414]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:01 compute-0 podman[415399]: 2025-10-02 13:28:01.412802999 +0000 UTC m=+0.083011128 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:28:01 compute-0 podman[415401]: 2025-10-02 13:28:01.41751273 +0000 UTC m=+0.084045865 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:28:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:02 compute-0 ceph-mon[73668]: pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:28:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:02.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 13:28:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:03 compute-0 ceph-mon[73668]: pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 13:28:04 compute-0 nova_compute[256940]: 2025-10-02 13:28:04.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:04 compute-0 nova_compute[256940]: 2025-10-02 13:28:04.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 168 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 610 KiB/s wr, 76 op/s
Oct 02 13:28:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:06 compute-0 ceph-mon[73668]: pgmap v3522: 305 pgs: 305 active+clean; 168 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 610 KiB/s wr, 76 op/s
Oct 02 13:28:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:28:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:28:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 13:28:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:07.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:07 compute-0 ceph-mon[73668]: pgmap v3523: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 13:28:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:28:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:28:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 13:28:09 compute-0 nova_compute[256940]: 2025-10-02 13:28:09.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:09.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-0 nova_compute[256940]: 2025-10-02 13:28:09.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:09 compute-0 ceph-mon[73668]: pgmap v3524: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 13:28:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:28:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:10.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:28:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:28:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:11.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:11 compute-0 ceph-mon[73668]: pgmap v3525: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:28:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 424 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 02 13:28:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:13.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:13 compute-0 ceph-mon[73668]: pgmap v3526: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 424 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 02 13:28:14 compute-0 nova_compute[256940]: 2025-10-02 13:28:14.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:14 compute-0 nova_compute[256940]: 2025-10-02 13:28:14.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Oct 02 13:28:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Oct 02 13:28:14 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Oct 02 13:28:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Oct 02 13:28:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:15.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:15 compute-0 nova_compute[256940]: 2025-10-02 13:28:15.358 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:15 compute-0 ceph-mon[73668]: osdmap e423: 3 total, 3 up, 3 in
Oct 02 13:28:15 compute-0 ceph-mon[73668]: pgmap v3528: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Oct 02 13:28:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:17 compute-0 ceph-mon[73668]: pgmap v3529: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:17 compute-0 podman[415477]: 2025-10-02 13:28:17.398450538 +0000 UTC m=+0.057635848 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:28:17 compute-0 podman[415476]: 2025-10-02 13:28:17.429185626 +0000 UTC m=+0.092571333 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:28:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:19 compute-0 nova_compute[256940]: 2025-10-02 13:28:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:19 compute-0 ceph-mon[73668]: pgmap v3530: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:19 compute-0 nova_compute[256940]: 2025-10-02 13:28:19.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:20.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 19 op/s
Oct 02 13:28:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:21 compute-0 sudo[415521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:21 compute-0 sudo[415521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:21 compute-0 sudo[415521]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:21 compute-0 sudo[415546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:21 compute-0 sudo[415546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:21 compute-0 sudo[415546]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:22 compute-0 ceph-mon[73668]: pgmap v3531: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 19 op/s
Oct 02 13:28:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:28:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:28:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 20 op/s
Oct 02 13:28:23 compute-0 ceph-mon[73668]: pgmap v3532: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 20 op/s
Oct 02 13:28:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:23.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:24 compute-0 nova_compute[256940]: 2025-10-02 13:28:24.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1513027977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:24 compute-0 nova_compute[256940]: 2025-10-02 13:28:24.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 5.8 KiB/s wr, 17 op/s
Oct 02 13:28:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:25.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:25 compute-0 ceph-mon[73668]: pgmap v3533: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 5.8 KiB/s wr, 17 op/s
Oct 02 13:28:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:28:26.525 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:28:26.525 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:28:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 5.3 KiB/s wr, 21 op/s
Oct 02 13:28:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:28 compute-0 ceph-mon[73668]: pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 5.3 KiB/s wr, 21 op/s
Oct 02 13:28:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.6 KiB/s wr, 13 op/s
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:28:28
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data']
Oct 02 13:28:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:28:29 compute-0 nova_compute[256940]: 2025-10-02 13:28:29.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-0 ceph-mon[73668]: pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.6 KiB/s wr, 13 op/s
Oct 02 13:28:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:29 compute-0 nova_compute[256940]: 2025-10-02 13:28:29.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:28:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:28:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2295047483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/521275966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Oct 02 13:28:31 compute-0 ceph-mon[73668]: pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Oct 02 13:28:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:31.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:32 compute-0 podman[415576]: 2025-10-02 13:28:32.380473815 +0000 UTC m=+0.055047982 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:28:32 compute-0 podman[415577]: 2025-10-02 13:28:32.440189565 +0000 UTC m=+0.110567834 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:28:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 13 op/s
Oct 02 13:28:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:33.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:34 compute-0 ceph-mon[73668]: pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 13 op/s
Oct 02 13:28:34 compute-0 nova_compute[256940]: 2025-10-02 13:28:34.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:34 compute-0 nova_compute[256940]: 2025-10-02 13:28:34.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 53 op/s
Oct 02 13:28:35 compute-0 ceph-mon[73668]: pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 53 op/s
Oct 02 13:28:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Oct 02 13:28:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:37.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:37 compute-0 ceph-mon[73668]: pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Oct 02 13:28:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1506722830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 80 op/s
Oct 02 13:28:39 compute-0 nova_compute[256940]: 2025-10-02 13:28:39.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1612691171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:39.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:39 compute-0 nova_compute[256940]: 2025-10-02 13:28:39.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:40 compute-0 ceph-mon[73668]: pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 80 op/s
Oct 02 13:28:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:40.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 82 op/s
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.617607714498418e-05 of space, bias 1.0, pg target 0.004852823143495254 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327373266796948 of space, bias 1.0, pg target 1.2982119800390843 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:28:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:28:41 compute-0 sudo[415622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-0 sudo[415622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 sudo[415622]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 sudo[415647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:41 compute-0 sudo[415647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 sudo[415647]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 sudo[415672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-0 sudo[415672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 sudo[415672]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 ceph-mon[73668]: pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 82 op/s
Oct 02 13:28:41 compute-0 sudo[415697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:28:41 compute-0 sudo[415697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:41.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:41 compute-0 sudo[415741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-0 sudo[415741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 sudo[415741]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 sudo[415770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-0 sudo[415770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-0 sudo[415770]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 sudo[415697]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:28:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:28:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:28:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:28:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a232d4e5-297b-4ee5-8d15-d85002ee1c1e does not exist
Oct 02 13:28:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cf544100-69cf-4265-83c1-1acfe3f34c49 does not exist
Oct 02 13:28:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b5a2ddfe-084d-4cf1-a8aa-be3ec8f20edb does not exist
Oct 02 13:28:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:28:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:28:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:28:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:42 compute-0 sudo[415803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:42 compute-0 sudo[415803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:42 compute-0 sudo[415803]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:42 compute-0 sudo[415828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:42 compute-0 sudo[415828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:42 compute-0 sudo[415828]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:42 compute-0 sudo[415853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:42 compute-0 sudo[415853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:42 compute-0 sudo[415853]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:42 compute-0 sudo[415878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:28:42 compute-0 sudo[415878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:28:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:42 compute-0 podman[415944]: 2025-10-02 13:28:42.650068339 +0000 UTC m=+0.024744555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:42 compute-0 podman[415944]: 2025-10-02 13:28:42.809757812 +0000 UTC m=+0.184434008 container create 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:28:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct 02 13:28:42 compute-0 systemd[1]: Started libpod-conmon-28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284.scope.
Oct 02 13:28:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:43 compute-0 podman[415944]: 2025-10-02 13:28:43.048508582 +0000 UTC m=+0.423184828 container init 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:43 compute-0 podman[415944]: 2025-10-02 13:28:43.060601341 +0000 UTC m=+0.435277537 container start 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:28:43 compute-0 systemd[1]: libpod-28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284.scope: Deactivated successfully.
Oct 02 13:28:43 compute-0 nice_noyce[415961]: 167 167
Oct 02 13:28:43 compute-0 conmon[415961]: conmon 28994066dd6846ab2a15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284.scope/container/memory.events
Oct 02 13:28:43 compute-0 podman[415944]: 2025-10-02 13:28:43.205046964 +0000 UTC m=+0.579723190 container attach 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:28:43 compute-0 podman[415944]: 2025-10-02 13:28:43.205539986 +0000 UTC m=+0.580216192 container died 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:28:43 compute-0 nova_compute[256940]: 2025-10-02 13:28:43.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e02b92dbaf16b1d0cca24b268e00cfb868b4318c167956d4996e73a95fd8f9f9-merged.mount: Deactivated successfully.
Oct 02 13:28:43 compute-0 ceph-mon[73668]: pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct 02 13:28:43 compute-0 podman[415944]: 2025-10-02 13:28:43.768888085 +0000 UTC m=+1.143564281 container remove 28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:43 compute-0 systemd[1]: libpod-conmon-28994066dd6846ab2a15bcda18df7773c13908983a77cc781ac0de7c9e6ac284.scope: Deactivated successfully.
Oct 02 13:28:44 compute-0 podman[415985]: 2025-10-02 13:28:43.928057385 +0000 UTC m=+0.021394159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:44 compute-0 podman[415985]: 2025-10-02 13:28:44.027755641 +0000 UTC m=+0.121092395 container create 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:44 compute-0 systemd[1]: Started libpod-conmon-286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0.scope.
Oct 02 13:28:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:44 compute-0 podman[415985]: 2025-10-02 13:28:44.192741699 +0000 UTC m=+0.286078473 container init 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:28:44 compute-0 podman[415985]: 2025-10-02 13:28:44.199617645 +0000 UTC m=+0.292954399 container start 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:44 compute-0 podman[415985]: 2025-10-02 13:28:44.250604442 +0000 UTC m=+0.343941226 container attach 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:28:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 77 op/s
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.888 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.889 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.889 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.889 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:28:44 compute-0 nova_compute[256940]: 2025-10-02 13:28:44.889 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:45 compute-0 elastic_blackwell[416002]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:28:45 compute-0 elastic_blackwell[416002]: --> relative data size: 1.0
Oct 02 13:28:45 compute-0 elastic_blackwell[416002]: --> All data devices are unavailable
Oct 02 13:28:45 compute-0 systemd[1]: libpod-286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0.scope: Deactivated successfully.
Oct 02 13:28:45 compute-0 podman[415985]: 2025-10-02 13:28:45.256611666 +0000 UTC m=+1.349948440 container died 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:28:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243518253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.481 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.658 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.660 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4051MB free_disk=20.98794174194336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.660 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.661 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:45 compute-0 ceph-mon[73668]: pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 77 op/s
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.744 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.745 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:28:45 compute-0 nova_compute[256940]: 2025-10-02 13:28:45.760 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a6f97b3637b7a4096eed21c91c91a6264b56077dfb16fe84b99dceec77317b-merged.mount: Deactivated successfully.
Oct 02 13:28:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957602223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-0 nova_compute[256940]: 2025-10-02 13:28:46.208 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:46 compute-0 nova_compute[256940]: 2025-10-02 13:28:46.215 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:28:46 compute-0 nova_compute[256940]: 2025-10-02 13:28:46.246 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:28:46 compute-0 nova_compute[256940]: 2025-10-02 13:28:46.249 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:28:46 compute-0 nova_compute[256940]: 2025-10-02 13:28:46.249 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:46 compute-0 podman[415985]: 2025-10-02 13:28:46.403358898 +0000 UTC m=+2.496695652 container remove 286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:28:46 compute-0 systemd[1]: libpod-conmon-286a5922aa05ab3006c69c5017c46b03cc01516f992fff5e21eea5da6de958b0.scope: Deactivated successfully.
Oct 02 13:28:46 compute-0 sudo[415878]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:46 compute-0 sudo[416075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:46 compute-0 sudo[416075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:46 compute-0 sudo[416075]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:46 compute-0 sudo[416100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:46 compute-0 sudo[416100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:46 compute-0 sudo[416100]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:46 compute-0 sudo[416125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:46 compute-0 sudo[416125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:46 compute-0 sudo[416125]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:46 compute-0 sudo[416150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:28:46 compute-0 sudo[416150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3495008382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1243518253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3216221222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/957602223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 236 KiB/s wr, 61 op/s
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.046239715 +0000 UTC m=+0.026551161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.143367015 +0000 UTC m=+0.123678411 container create a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:28:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:47 compute-0 systemd[1]: Started libpod-conmon-a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7.scope.
Oct 02 13:28:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.546940499 +0000 UTC m=+0.527251925 container init a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.554238976 +0000 UTC m=+0.534550382 container start a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:28:47 compute-0 musing_williamson[416232]: 167 167
Oct 02 13:28:47 compute-0 systemd[1]: libpod-a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7.scope: Deactivated successfully.
Oct 02 13:28:47 compute-0 conmon[416232]: conmon a63fcc826591bb9f8cab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7.scope/container/memory.events
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.601522878 +0000 UTC m=+0.581834274 container attach a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:28:47 compute-0 podman[416216]: 2025-10-02 13:28:47.602952964 +0000 UTC m=+0.583264380 container died a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-83232c2ad27198c5eb5025be6e31f56f2ead005d649a15df681534526c24ef64-merged.mount: Deactivated successfully.
Oct 02 13:28:48 compute-0 podman[416216]: 2025-10-02 13:28:48.013455556 +0000 UTC m=+0.993766992 container remove a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:48 compute-0 systemd[1]: libpod-conmon-a63fcc826591bb9f8cab15b18b43bd7b924f91b2619c6da3baf2570fb17caff7.scope: Deactivated successfully.
Oct 02 13:28:48 compute-0 podman[416244]: 2025-10-02 13:28:48.061320362 +0000 UTC m=+0.462174296 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:28:48 compute-0 podman[416238]: 2025-10-02 13:28:48.061923798 +0000 UTC m=+0.463903891 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:28:48 compute-0 ceph-mon[73668]: pgmap v3544: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 236 KiB/s wr, 61 op/s
Oct 02 13:28:48 compute-0 podman[416296]: 2025-10-02 13:28:48.215292549 +0000 UTC m=+0.052140327 container create 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:28:48 compute-0 nova_compute[256940]: 2025-10-02 13:28:48.250 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:48 compute-0 nova_compute[256940]: 2025-10-02 13:28:48.251 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:48 compute-0 nova_compute[256940]: 2025-10-02 13:28:48.251 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:48 compute-0 nova_compute[256940]: 2025-10-02 13:28:48.251 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:28:48 compute-0 systemd[1]: Started libpod-conmon-261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb.scope.
Oct 02 13:28:48 compute-0 podman[416296]: 2025-10-02 13:28:48.190386221 +0000 UTC m=+0.027233989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf20096d89d1e9f0deab3b47a18f69e7ce9e2dbc5e26e5eb8e0dec1a430d48d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf20096d89d1e9f0deab3b47a18f69e7ce9e2dbc5e26e5eb8e0dec1a430d48d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf20096d89d1e9f0deab3b47a18f69e7ce9e2dbc5e26e5eb8e0dec1a430d48d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf20096d89d1e9f0deab3b47a18f69e7ce9e2dbc5e26e5eb8e0dec1a430d48d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:48 compute-0 podman[416296]: 2025-10-02 13:28:48.333324304 +0000 UTC m=+0.170172072 container init 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:28:48 compute-0 podman[416296]: 2025-10-02 13:28:48.341883524 +0000 UTC m=+0.178731272 container start 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:28:48 compute-0 podman[416296]: 2025-10-02 13:28:48.345835865 +0000 UTC m=+0.182683613 container attach 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 13:28:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:48.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 234 KiB/s wr, 26 op/s
Oct 02 13:28:49 compute-0 nova_compute[256940]: 2025-10-02 13:28:49.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]: {
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:     "1": [
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:         {
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "devices": [
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "/dev/loop3"
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             ],
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "lv_name": "ceph_lv0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "lv_size": "7511998464",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "name": "ceph_lv0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "tags": {
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.cluster_name": "ceph",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.crush_device_class": "",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.encrypted": "0",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.osd_id": "1",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.type": "block",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:                 "ceph.vdo": "0"
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             },
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "type": "block",
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:             "vg_name": "ceph_vg0"
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:         }
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]:     ]
Oct 02 13:28:49 compute-0 awesome_elbakyan[416313]: }
Oct 02 13:28:49 compute-0 systemd[1]: libpod-261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb.scope: Deactivated successfully.
Oct 02 13:28:49 compute-0 podman[416296]: 2025-10-02 13:28:49.16385114 +0000 UTC m=+1.000698888 container died 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:28:49 compute-0 ceph-mon[73668]: pgmap v3545: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 234 KiB/s wr, 26 op/s
Oct 02 13:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf20096d89d1e9f0deab3b47a18f69e7ce9e2dbc5e26e5eb8e0dec1a430d48d6-merged.mount: Deactivated successfully.
Oct 02 13:28:49 compute-0 podman[416296]: 2025-10-02 13:28:49.225623143 +0000 UTC m=+1.062470891 container remove 261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:28:49 compute-0 systemd[1]: libpod-conmon-261f211645d39fa7b7374f7fa5fb44c90221a4c8031395108e21bba5c88398eb.scope: Deactivated successfully.
Oct 02 13:28:49 compute-0 sudo[416150]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:49 compute-0 sudo[416334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:49 compute-0 sudo[416334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:49 compute-0 sudo[416334]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:49.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:49 compute-0 sudo[416359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:49 compute-0 sudo[416359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:49 compute-0 sudo[416359]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:49 compute-0 sudo[416384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:49 compute-0 sudo[416384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:49 compute-0 sudo[416384]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:49 compute-0 sudo[416409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:28:49 compute-0 sudo[416409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:49 compute-0 nova_compute[256940]: 2025-10-02 13:28:49.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.85521304 +0000 UTC m=+0.041379731 container create b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:49 compute-0 systemd[1]: Started libpod-conmon-b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea.scope.
Oct 02 13:28:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.837895596 +0000 UTC m=+0.024062307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.949157618 +0000 UTC m=+0.135324329 container init b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.961209937 +0000 UTC m=+0.147376628 container start b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:28:49 compute-0 agitated_buck[416492]: 167 167
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.967889268 +0000 UTC m=+0.154055979 container attach b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 13:28:49 compute-0 systemd[1]: libpod-b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea.scope: Deactivated successfully.
Oct 02 13:28:49 compute-0 conmon[416492]: conmon b5734dc9737e15838c70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea.scope/container/memory.events
Oct 02 13:28:49 compute-0 podman[416476]: 2025-10-02 13:28:49.971154042 +0000 UTC m=+0.157320753 container died b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e58c9ddacd54720911c7977794530869b54c7c1aa54e67d436319d7a8f67df-merged.mount: Deactivated successfully.
Oct 02 13:28:50 compute-0 podman[416476]: 2025-10-02 13:28:50.01477831 +0000 UTC m=+0.200945001 container remove b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:28:50 compute-0 systemd[1]: libpod-conmon-b5734dc9737e15838c70aa2342bbe5c2810f1e5a7d1e522fde40aa1541b624ea.scope: Deactivated successfully.
Oct 02 13:28:50 compute-0 podman[416517]: 2025-10-02 13:28:50.194164198 +0000 UTC m=+0.046068622 container create 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:28:50 compute-0 systemd[1]: Started libpod-conmon-498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39.scope.
Oct 02 13:28:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec0e12b8044fd8d8e4a78eed9b58dc077cd83d392ec63a79ab8c9f5d3a6ecf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec0e12b8044fd8d8e4a78eed9b58dc077cd83d392ec63a79ab8c9f5d3a6ecf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec0e12b8044fd8d8e4a78eed9b58dc077cd83d392ec63a79ab8c9f5d3a6ecf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec0e12b8044fd8d8e4a78eed9b58dc077cd83d392ec63a79ab8c9f5d3a6ecf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:50 compute-0 podman[416517]: 2025-10-02 13:28:50.17355223 +0000 UTC m=+0.025456684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:28:50 compute-0 podman[416517]: 2025-10-02 13:28:50.281689671 +0000 UTC m=+0.133594115 container init 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:28:50 compute-0 podman[416517]: 2025-10-02 13:28:50.289618695 +0000 UTC m=+0.141523119 container start 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:28:50 compute-0 podman[416517]: 2025-10-02 13:28:50.293857533 +0000 UTC m=+0.145761987 container attach 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:28:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 495 KiB/s wr, 48 op/s
Oct 02 13:28:51 compute-0 compassionate_panini[416533]: {
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:         "osd_id": 1,
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:         "type": "bluestore"
Oct 02 13:28:51 compute-0 compassionate_panini[416533]:     }
Oct 02 13:28:51 compute-0 compassionate_panini[416533]: }
Oct 02 13:28:51 compute-0 systemd[1]: libpod-498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39.scope: Deactivated successfully.
Oct 02 13:28:51 compute-0 podman[416517]: 2025-10-02 13:28:51.182844438 +0000 UTC m=+1.034748862 container died 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:28:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:51.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ec0e12b8044fd8d8e4a78eed9b58dc077cd83d392ec63a79ab8c9f5d3a6ecf3-merged.mount: Deactivated successfully.
Oct 02 13:28:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:52 compute-0 ceph-mon[73668]: pgmap v3546: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 495 KiB/s wr, 48 op/s
Oct 02 13:28:52 compute-0 nova_compute[256940]: 2025-10-02 13:28:52.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:52 compute-0 podman[416517]: 2025-10-02 13:28:52.413226133 +0000 UTC m=+2.265130547 container remove 498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 13:28:52 compute-0 sudo[416409]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:52 compute-0 systemd[1]: libpod-conmon-498f82a4ecedc07b07af210a485e0d056a095e35e55b3678654fd2d26d1a7e39.scope: Deactivated successfully.
Oct 02 13:28:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:28:52 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:28:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a8b87f1f-bc0b-402b-abc3-33b94a288729 does not exist
Oct 02 13:28:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev dbf7c777-296f-464a-9b5c-561ecc7a325e does not exist
Oct 02 13:28:53 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2413aefb-7bed-456c-be2e-be258d5d938e does not exist
Oct 02 13:28:53 compute-0 sudo[416569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:53 compute-0 sudo[416569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:53 compute-0 sudo[416569]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:53 compute-0 sudo[416594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:28:53 compute-0 sudo[416594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:53 compute-0 sudo[416594]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:53 compute-0 nova_compute[256940]: 2025-10-02 13:28:53.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:53 compute-0 nova_compute[256940]: 2025-10-02 13:28:53.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:53 compute-0 ceph-mon[73668]: pgmap v3547: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:53 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:54 compute-0 nova_compute[256940]: 2025-10-02 13:28:54.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:54 compute-0 nova_compute[256940]: 2025-10-02 13:28:54.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:55 compute-0 ceph-mon[73668]: pgmap v3548: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:28:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:28:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 56 op/s
Oct 02 13:28:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:58 compute-0 ceph-mon[73668]: pgmap v3549: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 56 op/s
Oct 02 13:28:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:58.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:28:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 617 KiB/s rd, 316 KiB/s wr, 32 op/s
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:28:59 compute-0 ceph-mon[73668]: pgmap v3550: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 617 KiB/s rd, 316 KiB/s wr, 32 op/s
Oct 02 13:28:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:28:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:59 compute-0 nova_compute[256940]: 2025-10-02 13:28:59.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:00.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 317 KiB/s wr, 32 op/s
Oct 02 13:29:01 compute-0 nova_compute[256940]: 2025-10-02 13:29:01.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:01.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:01 compute-0 ceph-mon[73668]: pgmap v3551: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 317 KiB/s wr, 32 op/s
Oct 02 13:29:01 compute-0 sudo[416623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:01 compute-0 sudo[416623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:01 compute-0 sudo[416623]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:01 compute-0 sudo[416648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:01 compute-0 sudo[416648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:01 compute-0 sudo[416648]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:02.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 278 KiB/s rd, 66 KiB/s wr, 12 op/s
Oct 02 13:29:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:03.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:03 compute-0 podman[416674]: 2025-10-02 13:29:03.382876281 +0000 UTC m=+0.053405430 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:29:03 compute-0 podman[416675]: 2025-10-02 13:29:03.411197536 +0000 UTC m=+0.081718195 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 13:29:03 compute-0 ceph-mon[73668]: pgmap v3552: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 278 KiB/s rd, 66 KiB/s wr, 12 op/s
Oct 02 13:29:04 compute-0 nova_compute[256940]: 2025-10-02 13:29:04.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:04.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:04 compute-0 nova_compute[256940]: 2025-10-02 13:29:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 19 KiB/s wr, 5 op/s
Oct 02 13:29:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:05.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:29:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:29:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:05 compute-0 ceph-mon[73668]: pgmap v3553: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 19 KiB/s wr, 5 op/s
Oct 02 13:29:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:06.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 208 KiB/s wr, 7 op/s
Oct 02 13:29:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:08 compute-0 ceph-mon[73668]: pgmap v3554: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 208 KiB/s wr, 7 op/s
Oct 02 13:29:08 compute-0 nova_compute[256940]: 2025-10-02 13:29:08.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:08.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:09 compute-0 nova_compute[256940]: 2025-10-02 13:29:09.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:09.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:09 compute-0 ceph-mon[73668]: pgmap v3555: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:09 compute-0 nova_compute[256940]: 2025-10-02 13:29:09.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:10.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:11 compute-0 ceph-mon[73668]: pgmap v3556: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:12.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 199 KiB/s wr, 9 op/s
Oct 02 13:29:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:13 compute-0 ceph-mon[73668]: pgmap v3557: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 199 KiB/s wr, 9 op/s
Oct 02 13:29:14 compute-0 nova_compute[256940]: 2025-10-02 13:29:14.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:14 compute-0 nova_compute[256940]: 2025-10-02 13:29:14.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 16 op/s
Oct 02 13:29:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:15.153 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:29:15 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:15.154 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:29:15 compute-0 nova_compute[256940]: 2025-10-02 13:29:15.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:29:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:15.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:29:15 compute-0 ceph-mon[73668]: pgmap v3558: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 16 op/s
Oct 02 13:29:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1352680470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 17 op/s
Oct 02 13:29:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:17.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:18 compute-0 ceph-mon[73668]: pgmap v3559: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 17 op/s
Oct 02 13:29:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:18.156 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:29:18 compute-0 podman[416725]: 2025-10-02 13:29:18.376908994 +0000 UTC m=+0.052639630 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:18 compute-0 podman[416726]: 2025-10-02 13:29:18.406985785 +0000 UTC m=+0.080237157 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:29:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 14 op/s
Oct 02 13:29:19 compute-0 nova_compute[256940]: 2025-10-02 13:29:19.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:19.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:19 compute-0 nova_compute[256940]: 2025-10-02 13:29:19.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:20 compute-0 ceph-mon[73668]: pgmap v3560: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 14 op/s
Oct 02 13:29:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:29:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:20.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:29:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 02 13:29:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:21.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Oct 02 13:29:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Oct 02 13:29:21 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Oct 02 13:29:21 compute-0 sudo[416767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:21 compute-0 sudo[416767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:21 compute-0 sudo[416767]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 sudo[416792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:22 compute-0 sudo[416792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:22 compute-0 sudo[416792]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:22 compute-0 ceph-mon[73668]: pgmap v3561: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 02 13:29:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:22.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Oct 02 13:29:23 compute-0 ceph-mon[73668]: osdmap e424: 3 total, 3 up, 3 in
Oct 02 13:29:23 compute-0 ceph-mon[73668]: pgmap v3563: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Oct 02 13:29:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:23.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:24 compute-0 nova_compute[256940]: 2025-10-02 13:29:24.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:24 compute-0 nova_compute[256940]: 2025-10-02 13:29:24.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Oct 02 13:29:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:25 compute-0 ceph-mon[73668]: pgmap v3564: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Oct 02 13:29:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 13:29:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:26.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 13:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:26.525 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:29:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Oct 02 13:29:27 compute-0 ceph-mon[73668]: pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Oct 02 13:29:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:27.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Oct 02 13:29:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Oct 02 13:29:27 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Oct 02 13:29:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:28.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 639 B/s wr, 19 op/s
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:29:28
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images']
Oct 02 13:29:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:29:29 compute-0 ceph-mon[73668]: osdmap e425: 3 total, 3 up, 3 in
Oct 02 13:29:29 compute-0 nova_compute[256940]: 2025-10-02 13:29:29.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 13:29:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:29 compute-0 nova_compute[256940]: 2025-10-02 13:29:29.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:29:29 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:29:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:29:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:29:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:29:30 compute-0 ceph-mon[73668]: pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 639 B/s wr, 19 op/s
Oct 02 13:29:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:30.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 911 B/s wr, 23 op/s
Oct 02 13:29:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1239937293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:31 compute-0 ceph-mon[73668]: pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 911 B/s wr, 23 op/s
Oct 02 13:29:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 21 op/s
Oct 02 13:29:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:33 compute-0 ceph-mon[73668]: pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 21 op/s
Oct 02 13:29:34 compute-0 nova_compute[256940]: 2025-10-02 13:29:34.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:34 compute-0 podman[416823]: 2025-10-02 13:29:34.407674451 +0000 UTC m=+0.066445244 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:29:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:34.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:34 compute-0 podman[416824]: 2025-10-02 13:29:34.436070209 +0000 UTC m=+0.099872011 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:34 compute-0 nova_compute[256940]: 2025-10-02 13:29:34.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 166 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 22 op/s
Oct 02 13:29:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:29:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:29:35 compute-0 ceph-mon[73668]: pgmap v3570: 305 pgs: 305 active+clean; 166 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 22 op/s
Oct 02 13:29:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 02 13:29:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1020 B/s wr, 21 op/s
Oct 02 13:29:39 compute-0 nova_compute[256940]: 2025-10-02 13:29:39.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:39 compute-0 ceph-mon[73668]: pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 02 13:29:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1826604766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/112464615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:39 compute-0 nova_compute[256940]: 2025-10-02 13:29:39.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Oct 02 13:29:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Oct 02 13:29:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:40.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:40 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Oct 02 13:29:40 compute-0 ceph-mon[73668]: pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1020 B/s wr, 21 op/s
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5711064000558348 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:29:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:29:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:42 compute-0 sudo[416869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:42 compute-0 sudo[416869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:42 compute-0 sudo[416869]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:42 compute-0 sudo[416894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:42 compute-0 sudo[416894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:42 compute-0 sudo[416894]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:42 compute-0 ceph-mon[73668]: osdmap e426: 3 total, 3 up, 3 in
Oct 02 13:29:42 compute-0 ceph-mon[73668]: pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 02 13:29:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:29:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.240 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:43.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026244130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:43 compute-0 ceph-mon[73668]: pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.770 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.981 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.982 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4148MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.983 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:43 compute-0 nova_compute[256940]: 2025-10-02 13:29:43.983 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:44 compute-0 nova_compute[256940]: 2025-10-02 13:29:44.173 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:29:44 compute-0 nova_compute[256940]: 2025-10-02 13:29:44.174 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:29:44 compute-0 nova_compute[256940]: 2025-10-02 13:29:44.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:44 compute-0 nova_compute[256940]: 2025-10-02 13:29:44.244 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:44 compute-0 nova_compute[256940]: 2025-10-02 13:29:44.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 36 op/s
Oct 02 13:29:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2026244130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997606669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:45 compute-0 nova_compute[256940]: 2025-10-02 13:29:45.196 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.952s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:45 compute-0 nova_compute[256940]: 2025-10-02 13:29:45.202 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:29:45 compute-0 nova_compute[256940]: 2025-10-02 13:29:45.254 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:29:45 compute-0 nova_compute[256940]: 2025-10-02 13:29:45.256 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:29:45 compute-0 nova_compute[256940]: 2025-10-02 13:29:45.256 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:45.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:46 compute-0 ceph-mon[73668]: pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 36 op/s
Oct 02 13:29:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/997606669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4132422905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:29:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:29:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 23 op/s
Oct 02 13:29:47 compute-0 nova_compute[256940]: 2025-10-02 13:29:47.257 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:47.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2373861693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:47 compute-0 ceph-mon[73668]: pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 23 op/s
Oct 02 13:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Oct 02 13:29:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Oct 02 13:29:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:48 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Oct 02 13:29:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 485 B/s wr, 18 op/s
Oct 02 13:29:49 compute-0 nova_compute[256940]: 2025-10-02 13:29:49.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:49 compute-0 nova_compute[256940]: 2025-10-02 13:29:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:49 compute-0 nova_compute[256940]: 2025-10-02 13:29:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:49 compute-0 nova_compute[256940]: 2025-10-02 13:29:49.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:29:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:49 compute-0 podman[416967]: 2025-10-02 13:29:49.419004888 +0000 UTC m=+0.085459321 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 13:29:49 compute-0 podman[416968]: 2025-10-02 13:29:49.421044401 +0000 UTC m=+0.082436184 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:29:49 compute-0 ceph-mon[73668]: osdmap e427: 3 total, 3 up, 3 in
Oct 02 13:29:49 compute-0 ceph-mon[73668]: pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 485 B/s wr, 18 op/s
Oct 02 13:29:49 compute-0 nova_compute[256940]: 2025-10-02 13:29:49.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:50.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 409 B/s wr, 15 op/s
Oct 02 13:29:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:51.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:51 compute-0 ceph-mon[73668]: pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 409 B/s wr, 15 op/s
Oct 02 13:29:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:29:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:29:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 409 B/s wr, 12 op/s
Oct 02 13:29:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:53 compute-0 nova_compute[256940]: 2025-10-02 13:29:53.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:53 compute-0 ceph-mon[73668]: pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 409 B/s wr, 12 op/s
Oct 02 13:29:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:53.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:53 compute-0 sudo[417009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:53 compute-0 sudo[417009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-0 sudo[417009]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-0 sudo[417034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:53 compute-0 sudo[417034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-0 sudo[417034]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-0 sudo[417059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:53 compute-0 sudo[417059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-0 sudo[417059]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-0 sudo[417084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:29:53 compute-0 sudo[417084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:54 compute-0 sudo[417084]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:54 compute-0 nova_compute[256940]: 2025-10-02 13:29:54.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:54 compute-0 nova_compute[256940]: 2025-10-02 13:29:54.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:29:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2bb6ce8e-92a0-4389-b22e-b1f6dc5a452a does not exist
Oct 02 13:29:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f8f759a5-4a67-41fd-9777-871bb49aa8c2 does not exist
Oct 02 13:29:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4f002a78-185d-4649-a9cc-391b2e1d0b49 does not exist
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:29:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:29:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-0 sudo[417140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:54 compute-0 sudo[417140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:54 compute-0 sudo[417140]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:54 compute-0 sudo[417165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:54 compute-0 sudo[417165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:54 compute-0 sudo[417165]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:54.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:54 compute-0 sudo[417190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:54 compute-0 sudo[417190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:54 compute-0 sudo[417190]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:54 compute-0 sudo[417215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:29:54 compute-0 sudo[417215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:54 compute-0 nova_compute[256940]: 2025-10-02 13:29:54.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 307 B/s wr, 2 op/s
Oct 02 13:29:54 compute-0 podman[417281]: 2025-10-02 13:29:54.915543167 +0000 UTC m=+0.058946502 container create 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:29:54 compute-0 systemd[1]: Started libpod-conmon-54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039.scope.
Oct 02 13:29:54 compute-0 podman[417281]: 2025-10-02 13:29:54.876554947 +0000 UTC m=+0.019958302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:55 compute-0 podman[417281]: 2025-10-02 13:29:55.015381756 +0000 UTC m=+0.158785181 container init 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:29:55 compute-0 podman[417281]: 2025-10-02 13:29:55.025559536 +0000 UTC m=+0.168962871 container start 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:29:55 compute-0 podman[417281]: 2025-10-02 13:29:55.028935603 +0000 UTC m=+0.172338978 container attach 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:55 compute-0 boring_mahavira[417297]: 167 167
Oct 02 13:29:55 compute-0 systemd[1]: libpod-54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039.scope: Deactivated successfully.
Oct 02 13:29:55 compute-0 conmon[417297]: conmon 54acb5a47b5ff8c49e6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039.scope/container/memory.events
Oct 02 13:29:55 compute-0 podman[417281]: 2025-10-02 13:29:55.035975943 +0000 UTC m=+0.179379278 container died 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-77204d4cb9e57aeafd5098df92dc240c33bcedeeb41c3f7c1ac72e03cc998146-merged.mount: Deactivated successfully.
Oct 02 13:29:55 compute-0 podman[417281]: 2025-10-02 13:29:55.086227991 +0000 UTC m=+0.229631326 container remove 54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 13:29:55 compute-0 systemd[1]: libpod-conmon-54acb5a47b5ff8c49e6cd2a14d4c6be5ac85d4918c988f7b54fc530d9484c039.scope: Deactivated successfully.
Oct 02 13:29:55 compute-0 nova_compute[256940]: 2025-10-02 13:29:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:55 compute-0 podman[417321]: 2025-10-02 13:29:55.236542034 +0000 UTC m=+0.038374874 container create 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:29:55 compute-0 systemd[1]: Started libpod-conmon-93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc.scope.
Oct 02 13:29:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:55 compute-0 podman[417321]: 2025-10-02 13:29:55.221426247 +0000 UTC m=+0.023259117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:55 compute-0 podman[417321]: 2025-10-02 13:29:55.3686205 +0000 UTC m=+0.170453340 container init 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:29:55 compute-0 ceph-mon[73668]: pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 307 B/s wr, 2 op/s
Oct 02 13:29:55 compute-0 podman[417321]: 2025-10-02 13:29:55.375076715 +0000 UTC m=+0.176909555 container start 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:29:55 compute-0 podman[417321]: 2025-10-02 13:29:55.393969289 +0000 UTC m=+0.195802149 container attach 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:29:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:29:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:29:56 compute-0 adoring_williamson[417338]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:29:56 compute-0 adoring_williamson[417338]: --> relative data size: 1.0
Oct 02 13:29:56 compute-0 adoring_williamson[417338]: --> All data devices are unavailable
Oct 02 13:29:56 compute-0 systemd[1]: libpod-93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc.scope: Deactivated successfully.
Oct 02 13:29:56 compute-0 podman[417353]: 2025-10-02 13:29:56.261061753 +0000 UTC m=+0.025077213 container died 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-418782dc2a60b0dbd2ee85899f3079baf110d82e7080928fa4d2e4437adb1867-merged.mount: Deactivated successfully.
Oct 02 13:29:56 compute-0 podman[417353]: 2025-10-02 13:29:56.32413766 +0000 UTC m=+0.088153090 container remove 93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:29:56 compute-0 systemd[1]: libpod-conmon-93effcc1354db680ef8f07ccc4cc194646f3396f7b116cfd2085635f1baf82fc.scope: Deactivated successfully.
Oct 02 13:29:56 compute-0 sudo[417215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 sudo[417369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:56 compute-0 sudo[417369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:56 compute-0 sudo[417369]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:56.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:56 compute-0 sudo[417394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:56 compute-0 sudo[417394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:56 compute-0 sudo[417394]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 sudo[417419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:56 compute-0 sudo[417419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:56 compute-0 sudo[417419]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:56 compute-0 sudo[417444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:29:56 compute-0 sudo[417444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:29:56 compute-0 podman[417507]: 2025-10-02 13:29:56.913272289 +0000 UTC m=+0.039264498 container create a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:29:56 compute-0 systemd[1]: Started libpod-conmon-a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18.scope.
Oct 02 13:29:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:56 compute-0 podman[417507]: 2025-10-02 13:29:56.894687673 +0000 UTC m=+0.020679912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:56 compute-0 podman[417507]: 2025-10-02 13:29:56.998837212 +0000 UTC m=+0.124829421 container init a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:57 compute-0 podman[417507]: 2025-10-02 13:29:57.006483418 +0000 UTC m=+0.132475627 container start a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 13:29:57 compute-0 podman[417507]: 2025-10-02 13:29:57.010334487 +0000 UTC m=+0.136326716 container attach a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:29:57 compute-0 determined_raman[417523]: 167 167
Oct 02 13:29:57 compute-0 systemd[1]: libpod-a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18.scope: Deactivated successfully.
Oct 02 13:29:57 compute-0 podman[417507]: 2025-10-02 13:29:57.01202863 +0000 UTC m=+0.138020849 container died a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67f1786973f30f1adb96ec9dea74a275149aaa23a7965fb0cc61267e4566a5e-merged.mount: Deactivated successfully.
Oct 02 13:29:57 compute-0 podman[417507]: 2025-10-02 13:29:57.047489159 +0000 UTC m=+0.173481368 container remove a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:57 compute-0 systemd[1]: libpod-conmon-a6aee7b1fedcdc62bffc01ae16098d8f889b2e5c7feb4297412c1942da960a18.scope: Deactivated successfully.
Oct 02 13:29:57 compute-0 podman[417547]: 2025-10-02 13:29:57.198823578 +0000 UTC m=+0.036668961 container create 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:29:57 compute-0 systemd[1]: Started libpod-conmon-796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940.scope.
Oct 02 13:29:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19fca2f85d3236893c62ae91fd8d4f5788d913fb7d86f1204658a486568bbfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19fca2f85d3236893c62ae91fd8d4f5788d913fb7d86f1204658a486568bbfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19fca2f85d3236893c62ae91fd8d4f5788d913fb7d86f1204658a486568bbfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19fca2f85d3236893c62ae91fd8d4f5788d913fb7d86f1204658a486568bbfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:57 compute-0 podman[417547]: 2025-10-02 13:29:57.182825048 +0000 UTC m=+0.020670451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:57 compute-0 podman[417547]: 2025-10-02 13:29:57.315900879 +0000 UTC m=+0.153746282 container init 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:29:57 compute-0 podman[417547]: 2025-10-02 13:29:57.323439572 +0000 UTC m=+0.161284955 container start 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:29:57 compute-0 podman[417547]: 2025-10-02 13:29:57.337209355 +0000 UTC m=+0.175054768 container attach 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:29:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:57.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:57 compute-0 ceph-mon[73668]: pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:29:58 compute-0 competent_mendel[417563]: {
Oct 02 13:29:58 compute-0 competent_mendel[417563]:     "1": [
Oct 02 13:29:58 compute-0 competent_mendel[417563]:         {
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "devices": [
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "/dev/loop3"
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             ],
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "lv_name": "ceph_lv0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "lv_size": "7511998464",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "name": "ceph_lv0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "tags": {
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.cluster_name": "ceph",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.crush_device_class": "",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.encrypted": "0",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.osd_id": "1",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.type": "block",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:                 "ceph.vdo": "0"
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             },
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "type": "block",
Oct 02 13:29:58 compute-0 competent_mendel[417563]:             "vg_name": "ceph_vg0"
Oct 02 13:29:58 compute-0 competent_mendel[417563]:         }
Oct 02 13:29:58 compute-0 competent_mendel[417563]:     ]
Oct 02 13:29:58 compute-0 competent_mendel[417563]: }
Oct 02 13:29:58 compute-0 systemd[1]: libpod-796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940.scope: Deactivated successfully.
Oct 02 13:29:58 compute-0 podman[417547]: 2025-10-02 13:29:58.145663036 +0000 UTC m=+0.983508419 container died 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d19fca2f85d3236893c62ae91fd8d4f5788d913fb7d86f1204658a486568bbfe-merged.mount: Deactivated successfully.
Oct 02 13:29:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:58 compute-0 podman[417547]: 2025-10-02 13:29:58.21292736 +0000 UTC m=+1.050772743 container remove 796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:29:58 compute-0 systemd[1]: libpod-conmon-796e95dc6e6997e711d0d463b5bed8e89317d9582e580f6e2c1d9f62ed130940.scope: Deactivated successfully.
Oct 02 13:29:58 compute-0 sudo[417444]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:58 compute-0 sudo[417586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:58 compute-0 sudo[417586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:58 compute-0 sudo[417586]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:58 compute-0 sudo[417611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:58 compute-0 sudo[417611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:58 compute-0 sudo[417611]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:58 compute-0 sudo[417636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:58 compute-0 sudo[417636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:58 compute-0 sudo[417636]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:29:58 compute-0 sudo[417661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:29:58 compute-0 sudo[417661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.859442751 +0000 UTC m=+0.039392051 container create 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:29:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 98 B/s wr, 7 op/s
Oct 02 13:29:58 compute-0 systemd[1]: Started libpod-conmon-4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c.scope.
Oct 02 13:29:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.937020139 +0000 UTC m=+0.116969449 container init 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.842362493 +0000 UTC m=+0.022311813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.94407868 +0000 UTC m=+0.124027980 container start 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:29:58 compute-0 crazy_hodgkin[417744]: 167 167
Oct 02 13:29:58 compute-0 systemd[1]: libpod-4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c.scope: Deactivated successfully.
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.953510192 +0000 UTC m=+0.133459482 container attach 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.954666381 +0000 UTC m=+0.134615701 container died 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ee7b36795abdea557895bb1c627bb9540578925cb5bd2032d56bb392ba4014f-merged.mount: Deactivated successfully.
Oct 02 13:29:58 compute-0 podman[417727]: 2025-10-02 13:29:58.996698039 +0000 UTC m=+0.176647339 container remove 4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:29:59 compute-0 systemd[1]: libpod-conmon-4e15b3edeff1b751f09913fb0b118b87d87d9241c9174a71b0d81b640b72db3c.scope: Deactivated successfully.
Oct 02 13:29:59 compute-0 podman[417768]: 2025-10-02 13:29:59.183940918 +0000 UTC m=+0.047454797 container create cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:29:59 compute-0 systemd[1]: Started libpod-conmon-cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed.scope.
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:29:59 compute-0 podman[417768]: 2025-10-02 13:29:59.16414581 +0000 UTC m=+0.027659699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:29:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523a596299d454861ff4918bee52e84d72365641c2fd1fe2fbc6ffd5ea485537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523a596299d454861ff4918bee52e84d72365641c2fd1fe2fbc6ffd5ea485537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523a596299d454861ff4918bee52e84d72365641c2fd1fe2fbc6ffd5ea485537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523a596299d454861ff4918bee52e84d72365641c2fd1fe2fbc6ffd5ea485537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:29:59 compute-0 podman[417768]: 2025-10-02 13:29:59.346587787 +0000 UTC m=+0.210101666 container init cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 02 13:29:59 compute-0 podman[417768]: 2025-10-02 13:29:59.352770665 +0000 UTC m=+0.216284544 container start cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:29:59 compute-0 podman[417768]: 2025-10-02 13:29:59.417970196 +0000 UTC m=+0.281484075 container attach cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:29:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:29:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:59.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:59 compute-0 nova_compute[256940]: 2025-10-02 13:29:59.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:30:00 compute-0 ceph-mon[73668]: pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 98 B/s wr, 7 op/s
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]: {
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:         "osd_id": 1,
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:         "type": "bluestore"
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]:     }
Oct 02 13:30:00 compute-0 competent_brahmagupta[417785]: }
Oct 02 13:30:00 compute-0 systemd[1]: libpod-cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed.scope: Deactivated successfully.
Oct 02 13:30:00 compute-0 podman[417768]: 2025-10-02 13:30:00.215391415 +0000 UTC m=+1.078905304 container died cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-523a596299d454861ff4918bee52e84d72365641c2fd1fe2fbc6ffd5ea485537-merged.mount: Deactivated successfully.
Oct 02 13:30:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:00.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:00 compute-0 podman[417768]: 2025-10-02 13:30:00.476357032 +0000 UTC m=+1.339870911 container remove cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:30:00 compute-0 systemd[1]: libpod-conmon-cf36cafb7d22bb2745a1693080607edab0e992109a31ba1d6bdfec2d8a594fed.scope: Deactivated successfully.
Oct 02 13:30:00 compute-0 sudo[417661]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:30:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:30:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 10c7b9d2-5349-479c-b4a0-31763648357d does not exist
Oct 02 13:30:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bc249915-90d6-41bc-9c51-bb5a57bc4604 does not exist
Oct 02 13:30:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c37318cd-d214-456a-82bf-e826a0b04207 does not exist
Oct 02 13:30:00 compute-0 sudo[417818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:00 compute-0 sudo[417818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:00 compute-0 sudo[417818]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:00 compute-0 sudo[417843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:30:00 compute-0 sudo[417843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:00 compute-0 sudo[417843]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 16 op/s
Oct 02 13:30:01 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 13:30:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:02 compute-0 ceph-mon[73668]: pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 16 op/s
Oct 02 13:30:02 compute-0 nova_compute[256940]: 2025-10-02 13:30:02.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:02 compute-0 sudo[417869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:02 compute-0 sudo[417869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:02 compute-0 sudo[417869]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:02 compute-0 sudo[417894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:02 compute-0 sudo[417894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:02 compute-0 sudo[417894]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:02.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 458 KiB/s wr, 35 op/s
Oct 02 13:30:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:03 compute-0 ceph-mon[73668]: pgmap v3586: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 458 KiB/s wr, 35 op/s
Oct 02 13:30:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:03.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:04 compute-0 nova_compute[256940]: 2025-10-02 13:30:04.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:04 compute-0 nova_compute[256940]: 2025-10-02 13:30:04.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 707 KiB/s wr, 40 op/s
Oct 02 13:30:05 compute-0 podman[417921]: 2025-10-02 13:30:05.41761814 +0000 UTC m=+0.084148868 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 13:30:05 compute-0 podman[417922]: 2025-10-02 13:30:05.420812872 +0000 UTC m=+0.086402156 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:30:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:06 compute-0 ceph-mon[73668]: pgmap v3587: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 707 KiB/s wr, 40 op/s
Oct 02 13:30:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:30:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:30:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 02 13:30:07 compute-0 ceph-mon[73668]: pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 02 13:30:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3192368789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:08.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:09 compute-0 nova_compute[256940]: 2025-10-02 13:30:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:09 compute-0 ceph-mon[73668]: pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:09 compute-0 nova_compute[256940]: 2025-10-02 13:30:09.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:10.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:11 compute-0 ceph-mon[73668]: pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1655923150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:30:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.312337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813312397, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1703, "num_deletes": 257, "total_data_size": 2927198, "memory_usage": 2987192, "flush_reason": "Manual Compaction"}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct 02 13:30:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813465706, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 2871132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78029, "largest_seqno": 79731, "table_properties": {"data_size": 2863316, "index_size": 4693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16490, "raw_average_key_size": 20, "raw_value_size": 2847513, "raw_average_value_size": 3481, "num_data_blocks": 206, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411651, "oldest_key_time": 1759411651, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 153406 microseconds, and 6260 cpu microseconds.
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.465752) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 2871132 bytes OK
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.465772) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.533061) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.533158) EVENT_LOG_v1 {"time_micros": 1759411813533147, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.533186) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2920024, prev total WAL file size 2920024, number of live WAL files 2.
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.534436) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353231' seq:0, type:0; will stop at (end)
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(2803KB)], [179(10MB)]
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813534482, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 14378534, "oldest_snapshot_seqno": -1}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10652 keys, 14240827 bytes, temperature: kUnknown
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813696182, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 14240827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14171370, "index_size": 41696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26693, "raw_key_size": 281019, "raw_average_key_size": 26, "raw_value_size": 13984583, "raw_average_value_size": 1312, "num_data_blocks": 1592, "num_entries": 10652, "num_filter_entries": 10652, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.696457) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14240827 bytes
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.771796) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.9 rd, 88.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.0 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11183, records dropped: 531 output_compression: NoCompression
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.771835) EVENT_LOG_v1 {"time_micros": 1759411813771820, "job": 112, "event": "compaction_finished", "compaction_time_micros": 161792, "compaction_time_cpu_micros": 42494, "output_level": 6, "num_output_files": 1, "total_output_size": 14240827, "num_input_records": 11183, "num_output_records": 10652, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813772930, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813775041, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.534237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.775334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.775344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.775346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.775349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:30:13.775351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-0 ceph-mon[73668]: pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:30:14 compute-0 nova_compute[256940]: 2025-10-02 13:30:14.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:14.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:14 compute-0 nova_compute[256940]: 2025-10-02 13:30:14.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Oct 02 13:30:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:15.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:15 compute-0 ceph-mon[73668]: pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Oct 02 13:30:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:16.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct 02 13:30:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:18 compute-0 ceph-mon[73668]: pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct 02 13:30:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-0 nova_compute[256940]: 2025-10-02 13:30:19.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:19 compute-0 ceph-mon[73668]: pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:19 compute-0 nova_compute[256940]: 2025-10-02 13:30:19.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:20 compute-0 podman[417974]: 2025-10-02 13:30:20.423448267 +0000 UTC m=+0.068627660 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:30:20 compute-0 podman[417975]: 2025-10-02 13:30:20.442173786 +0000 UTC m=+0.089965876 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:30:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3095014498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:22 compute-0 sudo[418016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:22 compute-0 sudo[418016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:22 compute-0 sudo[418016]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:22.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:22 compute-0 sudo[418041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:22 compute-0 sudo[418041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:22 compute-0 sudo[418041]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:22 compute-0 ceph-mon[73668]: pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:30:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:23.138 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:30:23 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:23.138 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:30:23 compute-0 nova_compute[256940]: 2025-10-02 13:30:23.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:23 compute-0 ceph-mon[73668]: pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:30:24 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:24.142 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:30:24 compute-0 nova_compute[256940]: 2025-10-02 13:30:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:24 compute-0 nova_compute[256940]: 2025-10-02 13:30:24.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 13:30:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:25 compute-0 ceph-mon[73668]: pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 13:30:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:26.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:26.526 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:26.527 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:30:26.527 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:27.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:28 compute-0 ceph-mon[73668]: pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:30:28
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 02 13:30:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:30:29 compute-0 nova_compute[256940]: 2025-10-02 13:30:29.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:29 compute-0 ceph-mon[73668]: pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:29.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:29 compute-0 nova_compute[256940]: 2025-10-02 13:30:29.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:30:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:32 compute-0 ceph-mon[73668]: pgmap v3600: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:33 compute-0 ceph-mon[73668]: pgmap v3601: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:34 compute-0 nova_compute[256940]: 2025-10-02 13:30:34.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:34 compute-0 nova_compute[256940]: 2025-10-02 13:30:34.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:36 compute-0 ceph-mon[73668]: pgmap v3602: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:36 compute-0 podman[418073]: 2025-10-02 13:30:36.412590635 +0000 UTC m=+0.076241765 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:30:36 compute-0 podman[418074]: 2025-10-02 13:30:36.422848648 +0000 UTC m=+0.087245647 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:30:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:36.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 290 KiB/s wr, 78 op/s
Oct 02 13:30:37 compute-0 ceph-mon[73668]: pgmap v3603: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 290 KiB/s wr, 78 op/s
Oct 02 13:30:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 205 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct 02 13:30:39 compute-0 nova_compute[256940]: 2025-10-02 13:30:39.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:39.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:39 compute-0 nova_compute[256940]: 2025-10-02 13:30:39.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:40 compute-0 ceph-mon[73668]: pgmap v3604: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 205 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct 02 13:30:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3749824698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1847762052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:40.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004292840068397543 of space, bias 1.0, pg target 1.287852020519263 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:30:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:30:41 compute-0 ceph-mon[73668]: pgmap v3605: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:30:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:42 compute-0 sudo[418120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:42 compute-0 sudo[418120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:42 compute-0 sudo[418120]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:42 compute-0 sudo[418145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:42 compute-0 sudo[418145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:42 compute-0 sudo[418145]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.237 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.238 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.238 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511948820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.720 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.882 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.883 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4138MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.883 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.883 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.943 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:30:43 compute-0 nova_compute[256940]: 2025-10-02 13:30:43.943 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:30:43 compute-0 ceph-mon[73668]: pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:43 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1511948820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.012 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749386317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.463 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.468 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.486 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.488 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.488 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:44.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:44 compute-0 nova_compute[256940]: 2025-10-02 13:30:44.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1749386317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:45.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:46 compute-0 ceph-mon[73668]: pgmap v3607: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2790356428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:46.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:30:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1118045396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:48 compute-0 ceph-mon[73668]: pgmap v3608: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:30:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:48 compute-0 nova_compute[256940]: 2025-10-02 13:30:48.491 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:48.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 283 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 13:30:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3144257203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:49 compute-0 nova_compute[256940]: 2025-10-02 13:30:49.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:49 compute-0 nova_compute[256940]: 2025-10-02 13:30:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:49 compute-0 nova_compute[256940]: 2025-10-02 13:30:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:49 compute-0 nova_compute[256940]: 2025-10-02 13:30:49.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:30:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:49 compute-0 nova_compute[256940]: 2025-10-02 13:30:49.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:50 compute-0 ceph-mon[73668]: pgmap v3609: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 283 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 13:30:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:50.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Oct 02 13:30:51 compute-0 ceph-mon[73668]: pgmap v3610: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Oct 02 13:30:51 compute-0 podman[418219]: 2025-10-02 13:30:51.388031373 +0000 UTC m=+0.055093013 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid)
Oct 02 13:30:51 compute-0 podman[418220]: 2025-10-02 13:30:51.394381045 +0000 UTC m=+0.058324995 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:30:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.246 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.246 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.267 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.344 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.345 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.350 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.351 2 INFO nova.compute.claims [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.455 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:30:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:52.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:30:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550152388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.897 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 77 KiB/s wr, 25 op/s
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.902 2 DEBUG nova.compute.provider_tree [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.916 2 DEBUG nova.scheduler.client.report [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.938 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.939 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:30:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2550152388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.984 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:30:52 compute-0 nova_compute[256940]: 2025-10-02 13:30:52.984 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.003 2 INFO nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.019 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.060 2 INFO nova.virt.block_device [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Booting with volume c345d50c-19ab-4a23-a4a8-f7c734528d26 at /dev/vda
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.216 2 DEBUG os_brick.utils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.217 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.226 1002 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.227 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[11493512-94b7-4c49-a329-e2e85f57ddf0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.228 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.234 1002 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.234 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[b45afab2-a198-404a-b1e9-3e89f1dbc666]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b6d21f9028d8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.235 1002 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.241 1002 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.242 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1582d6a7-9d6a-4e3a-966f-92cd067611a8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.243 1002 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed0395e-c871-4fcb-83d9-f8505d37b804]: (4, '8a1e3318-b91c-48d1-8474-e3593dbdcd45') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.243 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.272 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.274 2 DEBUG os_brick.initiator.connectors.lightos [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.274 2 DEBUG os_brick.initiator.connectors.lightos [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.274 2 DEBUG os_brick.initiator.connectors.lightos [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.275 2 DEBUG os_brick.utils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b6d21f9028d8', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '8a1e3318-b91c-48d1-8474-e3593dbdcd45', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.275 2 DEBUG nova.virt.block_device [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating existing volume attachment record: 88440515-4864-4dd0-8ca6-82833afd669f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:30:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:53.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:53 compute-0 nova_compute[256940]: 2025-10-02 13:30:53.959 2 DEBUG nova.policy [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:30:54 compute-0 ceph-mon[73668]: pgmap v3611: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 77 KiB/s wr, 25 op/s
Oct 02 13:30:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1104312888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.230 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.232 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.232 2 INFO nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Creating image(s)
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.233 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.233 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Ensure instance console log exists: /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.234 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.234 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.234 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:54.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.721 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Successfully created port: 47596c7d-5357-4ca4-967d-cefac652d34f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:30:54 compute-0 nova_compute[256940]: 2025-10-02 13:30:54.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 16 op/s
Oct 02 13:30:55 compute-0 nova_compute[256940]: 2025-10-02 13:30:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:55 compute-0 nova_compute[256940]: 2025-10-02 13:30:55.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:56 compute-0 ceph-mon[73668]: pgmap v3612: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 16 op/s
Oct 02 13:30:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:56.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 29 KiB/s wr, 15 op/s
Oct 02 13:30:57 compute-0 nova_compute[256940]: 2025-10-02 13:30:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:58 compute-0 ceph-mon[73668]: pgmap v3613: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 29 KiB/s wr, 15 op/s
Oct 02 13:30:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:30:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:58.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.052 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Successfully updated port: 47596c7d-5357-4ca4-967d-cefac652d34f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.067 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.067 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.068 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:30:59 compute-0 ceph-mon[73668]: pgmap v3614: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.166 2 DEBUG nova.compute.manager [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.167 2 DEBUG nova.compute.manager [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing instance network info cache due to event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.167 2 DEBUG oslo_concurrency.lockutils [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.296 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:30:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:30:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:30:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.891 2 DEBUG nova.network.neutron [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.908 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.909 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Instance network_info: |[{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.909 2 DEBUG oslo_concurrency.lockutils [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.909 2 DEBUG nova.network.neutron [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.912 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Start _get_guest_xml network_info=[{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c345d50c-19ab-4a23-a4a8-f7c734528d26', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c345d50c-19ab-4a23-a4a8-f7c734528d26', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '800b27d2-30f8-4e7e-9411-bbdae9fd342b', 'attached_at': '', 'detached_at': '', 'volume_id': 'c345d50c-19ab-4a23-a4a8-f7c734528d26', 'serial': 'c345d50c-19ab-4a23-a4a8-f7c734528d26'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '88440515-4864-4dd0-8ca6-82833afd669f', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.916 2 WARNING nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.920 2 DEBUG nova.virt.libvirt.host [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.921 2 DEBUG nova.virt.libvirt.host [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.923 2 DEBUG nova.virt.libvirt.host [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.923 2 DEBUG nova.virt.libvirt.host [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.924 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.925 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.925 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.925 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.925 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.926 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.926 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.926 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.926 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.926 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.927 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.927 2 DEBUG nova.virt.hardware [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.953 2 DEBUG nova.storage.rbd_utils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:30:59 compute-0 nova_compute[256940]: 2025-10-02 13:30:59.958 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.226 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.226 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:31:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:31:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971268917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.428 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.450 2 DEBUG nova.virt.libvirt.vif [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-117715933',display_name='tempest-TestVolumeBootPattern-server-117715933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-117715933',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-fvcmtygl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:30:53Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=800b27d2-30f8-4e7e-9411-bbdae9fd342b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.451 2 DEBUG nova.network.os_vif_util [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.451 2 DEBUG nova.network.os_vif_util [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.453 2 DEBUG nova.objects.instance [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 800b27d2-30f8-4e7e-9411-bbdae9fd342b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:31:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/971268917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.469 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <uuid>800b27d2-30f8-4e7e-9411-bbdae9fd342b</uuid>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <name>instance-000000df</name>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <memory>131072</memory>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <vcpu>1</vcpu>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <metadata>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:name>tempest-TestVolumeBootPattern-server-117715933</nova:name>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:creationTime>2025-10-02 13:30:59</nova:creationTime>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:flavor name="m1.nano">
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:memory>128</nova:memory>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:disk>1</nova:disk>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:swap>0</nova:swap>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </nova:flavor>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:owner>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </nova:owner>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <nova:ports>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <nova:port uuid="47596c7d-5357-4ca4-967d-cefac652d34f">
Oct 02 13:31:00 compute-0 nova_compute[256940]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         </nova:port>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </nova:ports>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </nova:instance>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </metadata>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <sysinfo type="smbios">
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <system>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="serial">800b27d2-30f8-4e7e-9411-bbdae9fd342b</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="uuid">800b27d2-30f8-4e7e-9411-bbdae9fd342b</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </system>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </sysinfo>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <os>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <boot dev="hd"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <smbios mode="sysinfo"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </os>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <features>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <acpi/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <apic/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <vmcoreinfo/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </features>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <clock offset="utc">
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <timer name="hpet" present="no"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </clock>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <cpu mode="custom" match="exact">
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <model>Nehalem</model>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </cpu>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   <devices>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <disk type="network" device="cdrom">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <driver type="raw" cache="none"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <source protocol="rbd" name="vms/800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config">
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </source>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <target dev="sda" bus="sata"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <disk type="network" device="disk">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <source protocol="rbd" name="volumes/volume-c345d50c-19ab-4a23-a4a8-f7c734528d26">
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </source>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <auth username="openstack">
Oct 02 13:31:00 compute-0 nova_compute[256940]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       </auth>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <target dev="vda" bus="virtio"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <serial>c345d50c-19ab-4a23-a4a8-f7c734528d26</serial>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </disk>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <interface type="ethernet">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <mac address="fa:16:3e:e3:78:d9"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <mtu size="1442"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <target dev="tap47596c7d-53"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </interface>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <serial type="pty">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <log file="/var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/console.log" append="off"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </serial>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <video>
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <model type="virtio"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </video>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <input type="tablet" bus="usb"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <rng model="virtio">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </rng>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <controller type="usb" index="0"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     <memballoon model="virtio">
Oct 02 13:31:00 compute-0 nova_compute[256940]:       <stats period="10"/>
Oct 02 13:31:00 compute-0 nova_compute[256940]:     </memballoon>
Oct 02 13:31:00 compute-0 nova_compute[256940]:   </devices>
Oct 02 13:31:00 compute-0 nova_compute[256940]: </domain>
Oct 02 13:31:00 compute-0 nova_compute[256940]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.470 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Preparing to wait for external event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.470 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.470 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.470 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.471 2 DEBUG nova.virt.libvirt.vif [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-117715933',display_name='tempest-TestVolumeBootPattern-server-117715933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-117715933',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-fvcmtygl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:30:53Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=800b27d2-30f8-4e7e-9411-bbdae9fd342b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.471 2 DEBUG nova.network.os_vif_util [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.472 2 DEBUG nova.network.os_vif_util [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.472 2 DEBUG os_vif [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47596c7d-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47596c7d-53, col_values=(('external_ids', {'iface-id': '47596c7d-5357-4ca4-967d-cefac652d34f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:78:d9', 'vm-uuid': '800b27d2-30f8-4e7e-9411-bbdae9fd342b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:00 compute-0 NetworkManager[44981]: <info>  [1759411860.4808] manager: (tap47596c7d-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.488 2 INFO os_vif [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53')
Oct 02 13:31:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:00.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.531 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.531 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.531 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:e3:78:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.532 2 INFO nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Using config drive
Oct 02 13:31:00 compute-0 nova_compute[256940]: 2025-10-02 13:31:00.560 2 DEBUG nova.storage.rbd_utils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:31:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:31:01 compute-0 sudo[418351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-0 sudo[418351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-0 sudo[418351]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.184 2 INFO nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Creating config drive at /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.189 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1kyzxms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:01 compute-0 sudo[418376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:01 compute-0 sudo[418376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-0 sudo[418376]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-0 sudo[418404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-0 sudo[418404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-0 sudo[418404]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.334 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1kyzxms" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.372 2 DEBUG nova.storage.rbd_utils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.378 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config 800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:01 compute-0 sudo[418429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:31:01 compute-0 sudo[418429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:01.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:01 compute-0 ceph-mon[73668]: pgmap v3615: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:31:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:31:01 compute-0 sudo[418429]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:31:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:31:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.789 2 DEBUG oslo_concurrency.processutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config 800b27d2-30f8-4e7e-9411-bbdae9fd342b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.790 2 INFO nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Deleting local config drive /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b/disk.config because it was imported into RBD.
Oct 02 13:31:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:31:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:01 compute-0 kernel: tap47596c7d-53: entered promiscuous mode
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:01 compute-0 NetworkManager[44981]: <info>  [1759411861.8476] manager: (tap47596c7d-53): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 02 13:31:01 compute-0 ovn_controller[148123]: 2025-10-02T13:31:01Z|01093|binding|INFO|Claiming lport 47596c7d-5357-4ca4-967d-cefac652d34f for this chassis.
Oct 02 13:31:01 compute-0 ovn_controller[148123]: 2025-10-02T13:31:01Z|01094|binding|INFO|47596c7d-5357-4ca4-967d-cefac652d34f: Claiming fa:16:3e:e3:78:d9 10.100.0.6
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.861 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:78:d9 10.100.0.6'], port_security=['fa:16:3e:e3:78:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '800b27d2-30f8-4e7e-9411-bbdae9fd342b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=47596c7d-5357-4ca4-967d-cefac652d34f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.862 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 47596c7d-5357-4ca4-967d-cefac652d34f in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.863 158104 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.876 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d31158c6-62d7-4d26-a889-a85cd52c9368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.877 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.879 264216 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.879 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a5f7cc-ab55-4056-978b-6e74a9cb525d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.880 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e70d3ae5-31bf-400b-a025-b15dbb35e0ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 systemd-udevd[418525]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:31:01 compute-0 systemd-machined[210927]: New machine qemu-111-instance-000000df.
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.894 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[756c38d6-fa46-48cc-b4fc-a581297f9b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 NetworkManager[44981]: <info>  [1759411861.9025] device (tap47596c7d-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:31:01 compute-0 NetworkManager[44981]: <info>  [1759411861.9034] device (tap47596c7d-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:31:01 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:01 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-000000df.
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:01 compute-0 ovn_controller[148123]: 2025-10-02T13:31:01Z|01095|binding|INFO|Setting lport 47596c7d-5357-4ca4-967d-cefac652d34f ovn-installed in OVS
Oct 02 13:31:01 compute-0 ovn_controller[148123]: 2025-10-02T13:31:01Z|01096|binding|INFO|Setting lport 47596c7d-5357-4ca4-967d-cefac652d34f up in Southbound
Oct 02 13:31:01 compute-0 nova_compute[256940]: 2025-10-02 13:31:01.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.920 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[9d16ff0a-a532-467a-b04c-33273757bb83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct 02 13:31:01 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:01.945561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:31:01 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct 02 13:31:01 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861945635, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 692, "num_deletes": 251, "total_data_size": 917331, "memory_usage": 930008, "flush_reason": "Manual Compaction"}
Oct 02 13:31:01 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.950 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[e44466fc-a900-42de-94d6-022fb23b711d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 NetworkManager[44981]: <info>  [1759411861.9561] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 02 13:31:01 compute-0 systemd-udevd[418528]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.954 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[68776f9b-08a4-4d2d-b7cc-9063ed5247b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 sudo[418529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-0 sudo[418529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-0 sudo[418529]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.991 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[f03f9529-9ab2-4e9c-b8e1-7a25b467dc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:01 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:01.998 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[cea8cbff-bbf9-4f90-98e3-0f6f324fe9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861999614, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 907628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79732, "largest_seqno": 80423, "table_properties": {"data_size": 903979, "index_size": 1492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8226, "raw_average_key_size": 19, "raw_value_size": 896732, "raw_average_value_size": 2119, "num_data_blocks": 65, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411813, "oldest_key_time": 1759411813, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 54088 microseconds, and 2919 cpu microseconds.
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:02 compute-0 NetworkManager[44981]: <info>  [1759411862.0234] device (tap858f2b6f-80): carrier: link connected
Oct 02 13:31:02 compute-0 sudo[418573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:02 compute-0 sudo[418573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.028 264450 DEBUG oslo.privsep.daemon [-] privsep: reply[0f385fe3-fd7e-40e1-86cf-27cf048f8499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 sudo[418573]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:01.999655) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 907628 bytes OK
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:01.999674) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.045557) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.045595) EVENT_LOG_v1 {"time_micros": 1759411862045587, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.045616) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 913808, prev total WAL file size 913808, number of live WAL files 2.
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.046151) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(886KB)], [182(13MB)]
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862046183, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 15148455, "oldest_snapshot_seqno": -1}
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.046 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca94d88-fcae-469a-a397-b7d8b8282c5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975750, 'reachable_time': 33224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418606, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.063 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[932e10d5-ab2e-434b-930e-3093528b9a93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 975750, 'tstamp': 975750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418613, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.080 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[82744226-25f3-454b-93e9-376b89bacae6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975750, 'reachable_time': 33224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418626, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 sudo[418607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-0 sudo[418607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 sudo[418607]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.111 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[e97306a2-37c3-485b-a2f0-27add6488d3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 sudo[418651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:31:02 compute-0 sudo[418651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.171 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[12ce6878-2f67-448c-aee8-4b848d428893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.172 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.173 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:02 compute-0 NetworkManager[44981]: <info>  [1759411862.1750] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-0 kernel: tap858f2b6f-80: entered promiscuous mode
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.176 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-0 ovn_controller[148123]: 2025-10-02T13:31:02Z|01097|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.194 158104 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.195 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4261e123-1614-4d86-87a7-72c103762379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.195 158104 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: global
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     log         /dev/log local0 debug
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     user        root
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     group       root
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     maxconn     1024
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     daemon
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: defaults
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     log global
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     mode http
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     option httplog
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     option dontlognull
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     option http-server-close
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     option forwardfor
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     retries                 3
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     timeout http-request    30s
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     timeout connect         30s
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     timeout client          32s
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     timeout server          32s
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     timeout http-keep-alive 30s
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: listen listener
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     bind 169.254.169.254:80
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:     http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.196 158104 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.242 2 DEBUG nova.compute.manager [req-6aa8f412-e2af-44db-96d9-c10bc5cd4a74 req-c52287e7-f015-4e03-abda-752ba87c0e0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.243 2 DEBUG oslo_concurrency.lockutils [req-6aa8f412-e2af-44db-96d9-c10bc5cd4a74 req-c52287e7-f015-4e03-abda-752ba87c0e0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.244 2 DEBUG oslo_concurrency.lockutils [req-6aa8f412-e2af-44db-96d9-c10bc5cd4a74 req-c52287e7-f015-4e03-abda-752ba87c0e0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.244 2 DEBUG oslo_concurrency.lockutils [req-6aa8f412-e2af-44db-96d9-c10bc5cd4a74 req-c52287e7-f015-4e03-abda-752ba87c0e0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.245 2 DEBUG nova.compute.manager [req-6aa8f412-e2af-44db-96d9-c10bc5cd4a74 req-c52287e7-f015-4e03-abda-752ba87c0e0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Processing event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10559 keys, 13195482 bytes, temperature: kUnknown
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862276317, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13195482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13127546, "index_size": 40431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 279726, "raw_average_key_size": 26, "raw_value_size": 12943181, "raw_average_value_size": 1225, "num_data_blocks": 1532, "num_entries": 10559, "num_filter_entries": 10559, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759411862, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.276581) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13195482 bytes
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.298001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.8 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.6 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(31.2) write-amplify(14.5) OK, records in: 11075, records dropped: 516 output_compression: NoCompression
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.298034) EVENT_LOG_v1 {"time_micros": 1759411862298022, "job": 114, "event": "compaction_finished", "compaction_time_micros": 230214, "compaction_time_cpu_micros": 30318, "output_level": 6, "num_output_files": 1, "total_output_size": 13195482, "num_input_records": 11075, "num_output_records": 10559, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862298342, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862300660, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.046081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.300689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.300693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.300695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.300696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:31:02.300698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:02.358 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:02.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:02 compute-0 sudo[418651]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:31:02 compute-0 podman[418751]: 2025-10-02 13:31:02.555226904 +0000 UTC m=+0.025499115 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.688 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.690 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411862.690131, 800b27d2-30f8-4e7e-9411-bbdae9fd342b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.691 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] VM Started (Lifecycle Event)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.695 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:31:02 compute-0 sudo[418775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.701 2 INFO nova.virt.libvirt.driver [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Instance spawned successfully.
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.702 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:31:02 compute-0 sudo[418775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 sudo[418775]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.735 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.741 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.741 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.742 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.742 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.742 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.743 2 DEBUG nova.virt.libvirt.driver [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.747 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:31:02 compute-0 sudo[418800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-0 sudo[418800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 sudo[418800]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 63d8f080-7ad6-4971-aafa-410b8776b476 does not exist
Oct 02 13:31:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e55f8420-146c-44b4-9281-79c50c250225 does not exist
Oct 02 13:31:02 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 215f06d6-1f36-4821-a233-2bfa770d9e1a does not exist
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.798 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.799 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411862.6902769, 800b27d2-30f8-4e7e-9411-bbdae9fd342b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.799 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] VM Paused (Lifecycle Event)
Oct 02 13:31:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:31:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.821 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.824 2 DEBUG nova.virt.driver [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] Emitting event <LifecycleEvent: 1759411862.694671, 800b27d2-30f8-4e7e-9411-bbdae9fd342b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.824 2 INFO nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] VM Resumed (Lifecycle Event)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.827 2 INFO nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Took 8.60 seconds to spawn the instance on the hypervisor.
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.827 2 DEBUG nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.835 2 DEBUG nova.network.neutron [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updated VIF entry in instance network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.835 2 DEBUG nova.network.neutron [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:31:02 compute-0 podman[418751]: 2025-10-02 13:31:02.837894569 +0000 UTC m=+0.308166750 container create 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:02 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.867 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:02 compute-0 systemd[1]: Started libpod-conmon-7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7.scope.
Oct 02 13:31:02 compute-0 sudo[418825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.877 2 DEBUG oslo_concurrency.lockutils [req-a2679544-6e46-4ff8-818f-52eefd59b7cc req-54527082-7a70-47ac-8d95-65daebfecdae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:31:02 compute-0 sudo[418825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.878 2 DEBUG nova.compute.manager [None req-def6e6b3-dd36-4327-a3e5-4266b2d1ed28 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:31:02 compute-0 sudo[418825]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.901 2 INFO nova.compute.manager [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Took 10.58 seconds to build instance.
Oct 02 13:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a3a1ab3a2820b6b1f91d77c7e087653e3b93c1d4b53d1f17625dbad11d5bce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:02 compute-0 nova_compute[256940]: 2025-10-02 13:31:02.917 2 DEBUG oslo_concurrency.lockutils [None req-5e4a9577-a4c2-40ea-bdd8-0883ebbdd005 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:02 compute-0 sudo[418857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:02 compute-0 sudo[418857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-0 sudo[418857]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-0 podman[418751]: 2025-10-02 13:31:02.974780967 +0000 UTC m=+0.445053178 container init 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:02 compute-0 podman[418751]: 2025-10-02 13:31:02.982131086 +0000 UTC m=+0.452403267 container start 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:31:02 compute-0 sudo[418882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-0 sudo[418882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:03 compute-0 sudo[418882]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:03 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [NOTICE]   (418906) : New worker (418911) forked
Oct 02 13:31:03 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [NOTICE]   (418906) : Loading success.
Oct 02 13:31:03 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:03.059 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:31:03 compute-0 sudo[418910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:31:03 compute-0 sudo[418910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:03 compute-0 nova_compute[256940]: 2025-10-02 13:31:03.222 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.432878308 +0000 UTC m=+0.055036671 container create fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:03 compute-0 systemd[1]: Started libpod-conmon-fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0.scope.
Oct 02 13:31:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.406910533 +0000 UTC m=+0.029068926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.518193755 +0000 UTC m=+0.140352118 container init fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.524400594 +0000 UTC m=+0.146558937 container start fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.527858703 +0000 UTC m=+0.150017086 container attach fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:31:03 compute-0 clever_williamson[419001]: 167 167
Oct 02 13:31:03 compute-0 systemd[1]: libpod-fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0.scope: Deactivated successfully.
Oct 02 13:31:03 compute-0 conmon[419001]: conmon fa677f6cd3f7e45ee1ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0.scope/container/memory.events
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.530985943 +0000 UTC m=+0.153144296 container died fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:31:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:03.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-72dd5c438c8b5c42df009ca696fbc8cfd2d731e5ffa4393ce2fe8bf5bd17a55c-merged.mount: Deactivated successfully.
Oct 02 13:31:03 compute-0 podman[418985]: 2025-10-02 13:31:03.57807629 +0000 UTC m=+0.200234643 container remove fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:31:03 compute-0 systemd[1]: libpod-conmon-fa677f6cd3f7e45ee1efca6451f1c1a645bb21d505a444c24911e56d538006c0.scope: Deactivated successfully.
Oct 02 13:31:03 compute-0 podman[419026]: 2025-10-02 13:31:03.732251512 +0000 UTC m=+0.022599851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:04 compute-0 podman[419026]: 2025-10-02 13:31:04.042438332 +0000 UTC m=+0.332786651 container create 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:31:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:31:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:31:04 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:04 compute-0 ceph-mon[73668]: pgmap v3616: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.314 2 DEBUG nova.compute.manager [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.316 2 DEBUG oslo_concurrency.lockutils [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.318 2 DEBUG oslo_concurrency.lockutils [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.318 2 DEBUG oslo_concurrency.lockutils [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.318 2 DEBUG nova.compute.manager [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] No waiting events found dispatching network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.319 2 WARNING nova.compute.manager [req-4eef047f-afa4-47a9-95f3-686937889ef5 req-d7db2ed1-4b60-451b-96b2-1850c5cd109f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received unexpected event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f for instance with vm_state active and task_state None.
Oct 02 13:31:04 compute-0 systemd[1]: Started libpod-conmon-42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63.scope.
Oct 02 13:31:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:04 compute-0 podman[419026]: 2025-10-02 13:31:04.416964941 +0000 UTC m=+0.707313340 container init 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:04 compute-0 podman[419026]: 2025-10-02 13:31:04.425078549 +0000 UTC m=+0.715426868 container start 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:31:04 compute-0 podman[419026]: 2025-10-02 13:31:04.452923583 +0000 UTC m=+0.743271912 container attach 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:31:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:04.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:04 compute-0 nova_compute[256940]: 2025-10-02 13:31:04.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:31:05 compute-0 bold_shockley[419042]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:31:05 compute-0 bold_shockley[419042]: --> relative data size: 1.0
Oct 02 13:31:05 compute-0 bold_shockley[419042]: --> All data devices are unavailable
Oct 02 13:31:05 compute-0 systemd[1]: libpod-42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63.scope: Deactivated successfully.
Oct 02 13:31:05 compute-0 podman[419026]: 2025-10-02 13:31:05.227389192 +0000 UTC m=+1.517737511 container died 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-00a952a40149898219d1b00dbfc0e665a2b2d8ba74e3d710260e2aa7ce91d0eb-merged.mount: Deactivated successfully.
Oct 02 13:31:05 compute-0 podman[419026]: 2025-10-02 13:31:05.382509128 +0000 UTC m=+1.672857447 container remove 42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:31:05 compute-0 systemd[1]: libpod-conmon-42476bcf19885053407d6cd02bfca34c0516f2783dcacca95ea20f7bae4fdd63.scope: Deactivated successfully.
Oct 02 13:31:05 compute-0 sudo[418910]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:05 compute-0 ceph-mon[73668]: pgmap v3617: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:31:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:31:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:31:05 compute-0 nova_compute[256940]: 2025-10-02 13:31:05.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:05 compute-0 sudo[419073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:05 compute-0 sudo[419073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:05 compute-0 sudo[419073]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:05.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:05 compute-0 sudo[419098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:05 compute-0 sudo[419098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:05 compute-0 sudo[419098]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:05 compute-0 sudo[419123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:05 compute-0 sudo[419123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:05 compute-0 sudo[419123]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:05 compute-0 sudo[419148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:31:05 compute-0 sudo[419148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.033346499 +0000 UTC m=+0.040817617 container create f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:06 compute-0 systemd[1]: Started libpod-conmon-f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165.scope.
Oct 02 13:31:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.01427118 +0000 UTC m=+0.021742328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.141275166 +0000 UTC m=+0.148746304 container init f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.149170058 +0000 UTC m=+0.156641206 container start f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:31:06 compute-0 suspicious_mendeleev[419227]: 167 167
Oct 02 13:31:06 compute-0 systemd[1]: libpod-f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165.scope: Deactivated successfully.
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.165385984 +0000 UTC m=+0.172857122 container attach f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.16721026 +0000 UTC m=+0.174681418 container died f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a48aad760a089f8d1a26bdb956238267d24c99eed018c68ad14e019617c679-merged.mount: Deactivated successfully.
Oct 02 13:31:06 compute-0 podman[419211]: 2025-10-02 13:31:06.216984386 +0000 UTC m=+0.224455504 container remove f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:31:06 compute-0 systemd[1]: libpod-conmon-f134bfc084666d5f8f0962c9e836e0296ee465af88569f6712615f5c9c381165.scope: Deactivated successfully.
Oct 02 13:31:06 compute-0 podman[419251]: 2025-10-02 13:31:06.387537118 +0000 UTC m=+0.044456331 container create 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 13:31:06 compute-0 systemd[1]: Started libpod-conmon-943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a.scope.
Oct 02 13:31:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36f9a5e369fc565a92fbb4651a03ebfbfd8c3b3678ae04b873ce177e75976c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36f9a5e369fc565a92fbb4651a03ebfbfd8c3b3678ae04b873ce177e75976c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36f9a5e369fc565a92fbb4651a03ebfbfd8c3b3678ae04b873ce177e75976c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36f9a5e369fc565a92fbb4651a03ebfbfd8c3b3678ae04b873ce177e75976c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:06 compute-0 podman[419251]: 2025-10-02 13:31:06.366923079 +0000 UTC m=+0.023842302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:06 compute-0 podman[419251]: 2025-10-02 13:31:06.470546425 +0000 UTC m=+0.127465648 container init 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:31:06 compute-0 podman[419251]: 2025-10-02 13:31:06.47734806 +0000 UTC m=+0.134267263 container start 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:31:06 compute-0 podman[419251]: 2025-10-02 13:31:06.482143292 +0000 UTC m=+0.139062515 container attach 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:31:06 compute-0 podman[419269]: 2025-10-02 13:31:06.512349537 +0000 UTC m=+0.058829199 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:31:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:06.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:06 compute-0 podman[419271]: 2025-10-02 13:31:06.569701687 +0000 UTC m=+0.116087847 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:31:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:07 compute-0 hungry_colden[419267]: {
Oct 02 13:31:07 compute-0 hungry_colden[419267]:     "1": [
Oct 02 13:31:07 compute-0 hungry_colden[419267]:         {
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "devices": [
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "/dev/loop3"
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             ],
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "lv_name": "ceph_lv0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "lv_size": "7511998464",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "name": "ceph_lv0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "tags": {
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.cluster_name": "ceph",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.crush_device_class": "",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.encrypted": "0",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.osd_id": "1",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.type": "block",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:                 "ceph.vdo": "0"
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             },
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "type": "block",
Oct 02 13:31:07 compute-0 hungry_colden[419267]:             "vg_name": "ceph_vg0"
Oct 02 13:31:07 compute-0 hungry_colden[419267]:         }
Oct 02 13:31:07 compute-0 hungry_colden[419267]:     ]
Oct 02 13:31:07 compute-0 hungry_colden[419267]: }
Oct 02 13:31:07 compute-0 systemd[1]: libpod-943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a.scope: Deactivated successfully.
Oct 02 13:31:07 compute-0 podman[419251]: 2025-10-02 13:31:07.260439161 +0000 UTC m=+0.917358394 container died 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:07 compute-0 ceph-mon[73668]: pgmap v3618: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36f9a5e369fc565a92fbb4651a03ebfbfd8c3b3678ae04b873ce177e75976c4-merged.mount: Deactivated successfully.
Oct 02 13:31:07 compute-0 podman[419251]: 2025-10-02 13:31:07.377333327 +0000 UTC m=+1.034252530 container remove 943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:31:07 compute-0 systemd[1]: libpod-conmon-943c70d43f2d101d37d92e8ecc5db0e5311d458e0fd388c447f006645a7a7e0a.scope: Deactivated successfully.
Oct 02 13:31:07 compute-0 sudo[419148]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:07 compute-0 sudo[419333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:07 compute-0 sudo[419333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:07 compute-0 sudo[419333]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:07 compute-0 sudo[419358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:07 compute-0 sudo[419358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:07 compute-0 sudo[419358]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:07.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:07 compute-0 sudo[419383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:07 compute-0 sudo[419383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:07 compute-0 sudo[419383]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:07 compute-0 sudo[419408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:31:07 compute-0 sudo[419408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.01496038 +0000 UTC m=+0.055256518 container create 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:31:08 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:08.062 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:07.988913522 +0000 UTC m=+0.029209690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:08 compute-0 systemd[1]: Started libpod-conmon-08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56.scope.
Oct 02 13:31:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.230268318 +0000 UTC m=+0.270564496 container init 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.240684765 +0000 UTC m=+0.280980913 container start 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:31:08 compute-0 competent_cannon[419490]: 167 167
Oct 02 13:31:08 compute-0 systemd[1]: libpod-08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56.scope: Deactivated successfully.
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.271760302 +0000 UTC m=+0.312056460 container attach 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.272398468 +0000 UTC m=+0.312694616 container died 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:31:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-567ba366e8a7d6338e9109f72e90b5e32edf9c7517ccc9def2d0a4149a0c311a-merged.mount: Deactivated successfully.
Oct 02 13:31:08 compute-0 podman[419473]: 2025-10-02 13:31:08.332297762 +0000 UTC m=+0.372593940 container remove 08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:08 compute-0 systemd[1]: libpod-conmon-08741bdb3fd8cd565776551d9e075cecee1c1b2d9671f82da4bedafd60337a56.scope: Deactivated successfully.
Oct 02 13:31:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:31:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:31:08 compute-0 podman[419516]: 2025-10-02 13:31:08.561342173 +0000 UTC m=+0.061855807 container create a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:08 compute-0 systemd[1]: Started libpod-conmon-a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303.scope.
Oct 02 13:31:08 compute-0 podman[419516]: 2025-10-02 13:31:08.539023031 +0000 UTC m=+0.039536685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:31:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d00096cd172063a3e98d611b9eac5b250ae4d93ae2abe9554d1519f77b437b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d00096cd172063a3e98d611b9eac5b250ae4d93ae2abe9554d1519f77b437b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d00096cd172063a3e98d611b9eac5b250ae4d93ae2abe9554d1519f77b437b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d00096cd172063a3e98d611b9eac5b250ae4d93ae2abe9554d1519f77b437b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:08 compute-0 podman[419516]: 2025-10-02 13:31:08.677584472 +0000 UTC m=+0.178098126 container init a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:08 compute-0 podman[419516]: 2025-10-02 13:31:08.691463938 +0000 UTC m=+0.191977562 container start a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:31:08 compute-0 podman[419516]: 2025-10-02 13:31:08.695004559 +0000 UTC m=+0.195518223 container attach a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:31:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:09 compute-0 ceph-mon[73668]: pgmap v3619: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:31:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:31:09 compute-0 silly_curie[419532]: {
Oct 02 13:31:09 compute-0 silly_curie[419532]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:31:09 compute-0 silly_curie[419532]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:31:09 compute-0 silly_curie[419532]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:31:09 compute-0 silly_curie[419532]:         "osd_id": 1,
Oct 02 13:31:09 compute-0 silly_curie[419532]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:31:09 compute-0 silly_curie[419532]:         "type": "bluestore"
Oct 02 13:31:09 compute-0 silly_curie[419532]:     }
Oct 02 13:31:09 compute-0 silly_curie[419532]: }
Oct 02 13:31:09 compute-0 systemd[1]: libpod-a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303.scope: Deactivated successfully.
Oct 02 13:31:09 compute-0 podman[419516]: 2025-10-02 13:31:09.623844535 +0000 UTC m=+1.124358169 container died a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:31:09 compute-0 nova_compute[256940]: 2025-10-02 13:31:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d00096cd172063a3e98d611b9eac5b250ae4d93ae2abe9554d1519f77b437b-merged.mount: Deactivated successfully.
Oct 02 13:31:10 compute-0 NetworkManager[44981]: <info>  [1759411870.2795] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Oct 02 13:31:10 compute-0 NetworkManager[44981]: <info>  [1759411870.2823] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 02 13:31:10 compute-0 nova_compute[256940]: 2025-10-02 13:31:10.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:10 compute-0 ovn_controller[148123]: 2025-10-02T13:31:10Z|01098|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct 02 13:31:10 compute-0 nova_compute[256940]: 2025-10-02 13:31:10.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:10 compute-0 nova_compute[256940]: 2025-10-02 13:31:10.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:10 compute-0 podman[419516]: 2025-10-02 13:31:10.40939147 +0000 UTC m=+1.909905104 container remove a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:31:10 compute-0 sudo[419408]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:31:10 compute-0 nova_compute[256940]: 2025-10-02 13:31:10.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:10 compute-0 systemd[1]: libpod-conmon-a0b893715ad65808f986e42e489ed0e7aa6cf251b9aa71f5ae27a6e3119e9303.scope: Deactivated successfully.
Oct 02 13:31:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:10.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:31:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:11 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 67bdf401-fb8f-4b67-acfc-b4e4ab06d1be does not exist
Oct 02 13:31:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 91c6b7a2-7f7b-4abe-86cb-d7dee999801a does not exist
Oct 02 13:31:11 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ff4b92b8-e5c4-4bcb-8f61-281a3f11ec53 does not exist
Oct 02 13:31:11 compute-0 sudo[419571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:11 compute-0 sudo[419571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:11 compute-0 sudo[419571]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:11 compute-0 sudo[419596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:31:11 compute-0 sudo[419596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.161 2 DEBUG nova.compute.manager [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.162 2 DEBUG nova.compute.manager [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing instance network info cache due to event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.162 2 DEBUG oslo_concurrency.lockutils [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.162 2 DEBUG oslo_concurrency.lockutils [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.162 2 DEBUG nova.network.neutron [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:31:11 compute-0 sudo[419596]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:11 compute-0 nova_compute[256940]: 2025-10-02 13:31:11.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:11.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:12 compute-0 ceph-mon[73668]: pgmap v3620: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:12 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:12.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:13 compute-0 nova_compute[256940]: 2025-10-02 13:31:13.239 2 DEBUG nova.network.neutron [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updated VIF entry in instance network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:31:13 compute-0 nova_compute[256940]: 2025-10-02 13:31:13.240 2 DEBUG nova.network.neutron [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:31:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:13 compute-0 nova_compute[256940]: 2025-10-02 13:31:13.283 2 DEBUG oslo_concurrency.lockutils [req-133eb8ea-93f7-41ec-ba12-8e904eaef19c req-3e6c7c24-5b43-4f73-8fa4-2f641ee226d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:31:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:13.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:13 compute-0 ceph-mon[73668]: pgmap v3621: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:14.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:31:14 compute-0 nova_compute[256940]: 2025-10-02 13:31:14.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:15 compute-0 nova_compute[256940]: 2025-10-02 13:31:15.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:15 compute-0 ceph-mon[73668]: pgmap v3622: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:31:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:16.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 88 op/s
Oct 02 13:31:17 compute-0 ovn_controller[148123]: 2025-10-02T13:31:17Z|00134|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Oct 02 13:31:17 compute-0 ovn_controller[148123]: 2025-10-02T13:31:17Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e3:78:d9 10.100.0.6
Oct 02 13:31:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:17 compute-0 ceph-mon[73668]: pgmap v3623: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 88 op/s
Oct 02 13:31:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:18.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 12 KiB/s wr, 22 op/s
Oct 02 13:31:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:19 compute-0 ceph-mon[73668]: pgmap v3624: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 12 KiB/s wr, 22 op/s
Oct 02 13:31:19 compute-0 nova_compute[256940]: 2025-10-02 13:31:19.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:20 compute-0 ovn_controller[148123]: 2025-10-02T13:31:20Z|00136|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Oct 02 13:31:20 compute-0 ovn_controller[148123]: 2025-10-02T13:31:20Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e3:78:d9 10.100.0.6
Oct 02 13:31:20 compute-0 nova_compute[256940]: 2025-10-02 13:31:20.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:20.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:21 compute-0 ceph-mon[73668]: pgmap v3625: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:22 compute-0 ovn_controller[148123]: 2025-10-02T13:31:22Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:78:d9 10.100.0.6
Oct 02 13:31:22 compute-0 ovn_controller[148123]: 2025-10-02T13:31:22Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:78:d9 10.100.0.6
Oct 02 13:31:22 compute-0 podman[419629]: 2025-10-02 13:31:22.418786686 +0000 UTC m=+0.067765377 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:31:22 compute-0 podman[419628]: 2025-10-02 13:31:22.442258658 +0000 UTC m=+0.098227449 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:31:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:22.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:22 compute-0 sudo[419668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:22 compute-0 sudo[419668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:22 compute-0 sudo[419668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:22 compute-0 sudo[419694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:22 compute-0 sudo[419694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:22 compute-0 sudo[419694]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:23 compute-0 ceph-mon[73668]: pgmap v3626: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:24.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:24 compute-0 nova_compute[256940]: 2025-10-02 13:31:24.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:25 compute-0 nova_compute[256940]: 2025-10-02 13:31:25.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:26 compute-0 ceph-mon[73668]: pgmap v3627: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:26.529 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:26.530 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:31:26.530 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 518 KiB/s rd, 25 KiB/s wr, 42 op/s
Oct 02 13:31:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Oct 02 13:31:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Oct 02 13:31:28 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Oct 02 13:31:28 compute-0 ceph-mon[73668]: pgmap v3628: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 518 KiB/s rd, 25 KiB/s wr, 42 op/s
Oct 02 13:31:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:31:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 16 KiB/s wr, 27 op/s
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:31:28
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.log', 'images']
Oct 02 13:31:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:31:29 compute-0 ceph-mon[73668]: osdmap e428: 3 total, 3 up, 3 in
Oct 02 13:31:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:29 compute-0 nova_compute[256940]: 2025-10-02 13:31:29.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:31:30 compute-0 ceph-mon[73668]: pgmap v3630: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 16 KiB/s wr, 27 op/s
Oct 02 13:31:30 compute-0 nova_compute[256940]: 2025-10-02 13:31:30.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 236 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Oct 02 13:31:31 compute-0 ceph-mon[73668]: pgmap v3631: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 236 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Oct 02 13:31:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:32.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 15 KiB/s wr, 33 op/s
Oct 02 13:31:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:33 compute-0 ceph-mon[73668]: pgmap v3632: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 15 KiB/s wr, 33 op/s
Oct 02 13:31:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:34.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 15 KiB/s wr, 34 op/s
Oct 02 13:31:34 compute-0 nova_compute[256940]: 2025-10-02 13:31:34.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/541340958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:35 compute-0 nova_compute[256940]: 2025-10-02 13:31:35.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:35 compute-0 ovn_controller[148123]: 2025-10-02T13:31:35Z|01099|memory|INFO|peak resident set size grew 50% in last 5749.8 seconds, from 16256 kB to 24396 kB
Oct 02 13:31:35 compute-0 ovn_controller[148123]: 2025-10-02T13:31:35Z|01100|memory|INFO|idl-cells-OVN_Southbound:5024 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:120 lflow-cache-entries-cache-matches:214 lflow-cache-size-KB:462 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:228 ofctrl_installed_flow_usage-KB:166 ofctrl_sb_flow_ref_usage-KB:94
Oct 02 13:31:36 compute-0 ceph-mon[73668]: pgmap v3633: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 15 KiB/s wr, 34 op/s
Oct 02 13:31:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3826796133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:36.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 20 KiB/s wr, 36 op/s
Oct 02 13:31:37 compute-0 ceph-mon[73668]: pgmap v3634: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 20 KiB/s wr, 36 op/s
Oct 02 13:31:37 compute-0 podman[419726]: 2025-10-02 13:31:37.426209053 +0000 UTC m=+0.074384847 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:31:37 compute-0 podman[419727]: 2025-10-02 13:31:37.466781383 +0000 UTC m=+0.114829364 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 13:31:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:37.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:38 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3281413989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 18 KiB/s wr, 33 op/s
Oct 02 13:31:39 compute-0 ceph-mon[73668]: pgmap v3635: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 18 KiB/s wr, 33 op/s
Oct 02 13:31:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3871989529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:39.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:39 compute-0 nova_compute[256940]: 2025-10-02 13:31:39.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:40 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1750166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:40 compute-0 nova_compute[256940]: 2025-10-02 13:31:40.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:40.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 17 KiB/s wr, 34 op/s
Oct 02 13:31:40 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.270147031453564e-06 of space, bias 1.0, pg target 0.002181044109436069 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004335552182207333 of space, bias 1.0, pg target 1.3006656546622 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:31:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Oct 02 13:31:41 compute-0 ceph-mon[73668]: pgmap v3636: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 17 KiB/s wr, 34 op/s
Oct 02 13:31:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:41.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 22 KiB/s wr, 41 op/s
Oct 02 13:31:43 compute-0 sudo[419775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:43 compute-0 sudo[419775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:43 compute-0 sudo[419775]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:43 compute-0 sudo[419800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:43 compute-0 sudo[419800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:43 compute-0 sudo[419800]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:43.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:43 compute-0 ceph-mon[73668]: pgmap v3637: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 22 KiB/s wr, 41 op/s
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.248 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.248 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774053728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.712 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.809 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.810 2 DEBUG nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:31:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 16 KiB/s wr, 28 op/s
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.989 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:31:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3774053728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.990 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3931MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.991 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:44 compute-0 nova_compute[256940]: 2025-10-02 13:31:44.991 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.074 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Instance 800b27d2-30f8-4e7e-9411-bbdae9fd342b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.075 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.075 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.089 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.109 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.109 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.124 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.141 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.179 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351703209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:45.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.644 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.649 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.785 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.814 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:31:45 compute-0 nova_compute[256940]: 2025-10-02 13:31:45.814 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:46 compute-0 ceph-mon[73668]: pgmap v3638: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 16 KiB/s wr, 28 op/s
Oct 02 13:31:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/349842219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3351703209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Oct 02 13:31:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3626880089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:47.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:48 compute-0 ceph-mon[73668]: pgmap v3639: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Oct 02 13:31:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:48 compute-0 nova_compute[256940]: 2025-10-02 13:31:48.816 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:49 compute-0 nova_compute[256940]: 2025-10-02 13:31:49.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:49 compute-0 nova_compute[256940]: 2025-10-02 13:31:49.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:31:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:49 compute-0 nova_compute[256940]: 2025-10-02 13:31:49.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:50 compute-0 ceph-mon[73668]: pgmap v3640: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:50 compute-0 nova_compute[256940]: 2025-10-02 13:31:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:50.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:51 compute-0 nova_compute[256940]: 2025-10-02 13:31:51.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:51.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:52 compute-0 ceph-mon[73668]: pgmap v3641: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:52.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:31:53 compute-0 ceph-mon[73668]: pgmap v3642: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:31:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:53 compute-0 podman[419875]: 2025-10-02 13:31:53.399928748 +0000 UTC m=+0.062770660 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 13:31:53 compute-0 podman[419876]: 2025-10-02 13:31:53.40781654 +0000 UTC m=+0.066649809 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:31:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:54.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 211 KiB/s wr, 70 op/s
Oct 02 13:31:54 compute-0 nova_compute[256940]: 2025-10-02 13:31:54.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:55 compute-0 nova_compute[256940]: 2025-10-02 13:31:55.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:55.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:55 compute-0 ceph-mon[73668]: pgmap v3643: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 211 KiB/s wr, 70 op/s
Oct 02 13:31:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 499 KiB/s wr, 105 op/s
Oct 02 13:31:57 compute-0 nova_compute[256940]: 2025-10-02 13:31:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:57 compute-0 nova_compute[256940]: 2025-10-02 13:31:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:57.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:58 compute-0 ceph-mon[73668]: pgmap v3644: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 499 KiB/s wr, 105 op/s
Oct 02 13:31:58 compute-0 nova_compute[256940]: 2025-10-02 13:31:58.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:31:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:31:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:58.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:31:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Oct 02 13:31:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:31:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:59.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:59 compute-0 nova_compute[256940]: 2025-10-02 13:31:59.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:00 compute-0 ceph-mon[73668]: pgmap v3645: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Oct 02 13:32:00 compute-0 nova_compute[256940]: 2025-10-02 13:32:00.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:00.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 56 op/s
Oct 02 13:32:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:01.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:02 compute-0 ceph-mon[73668]: pgmap v3646: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 56 op/s
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.439 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.439 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquired lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.439 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:32:02 compute-0 nova_compute[256940]: 2025-10-02 13:32:02.440 2 DEBUG nova.objects.instance [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 800b27d2-30f8-4e7e-9411-bbdae9fd342b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:32:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:02.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:03 compute-0 sudo[419919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:03 compute-0 sudo[419919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:03 compute-0 sudo[419919]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:03 compute-0 sudo[419944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:03 compute-0 sudo[419944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:03 compute-0 ceph-mon[73668]: pgmap v3647: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:03 compute-0 sudo[419944]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:03.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:05 compute-0 nova_compute[256940]: 2025-10-02 13:32:05.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:05 compute-0 nova_compute[256940]: 2025-10-02 13:32:05.172 2 DEBUG nova.network.neutron [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:05 compute-0 nova_compute[256940]: 2025-10-02 13:32:05.211 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Releasing lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:32:05 compute-0 nova_compute[256940]: 2025-10-02 13:32:05.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:32:05 compute-0 nova_compute[256940]: 2025-10-02 13:32:05.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:05.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:06 compute-0 ceph-mon[73668]: pgmap v3648: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:06 compute-0 nova_compute[256940]: 2025-10-02 13:32:06.207 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 371 KiB/s wr, 46 op/s
Oct 02 13:32:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:07.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:08 compute-0 ceph-mon[73668]: pgmap v3649: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 371 KiB/s wr, 46 op/s
Oct 02 13:32:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:08 compute-0 podman[419971]: 2025-10-02 13:32:08.374828771 +0000 UTC m=+0.049672784 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:32:08 compute-0 podman[419972]: 2025-10-02 13:32:08.403030644 +0000 UTC m=+0.074444719 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:32:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 83 KiB/s wr, 3 op/s
Oct 02 13:32:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:09.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:10 compute-0 nova_compute[256940]: 2025-10-02 13:32:10.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:10 compute-0 ceph-mon[73668]: pgmap v3650: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 83 KiB/s wr, 3 op/s
Oct 02 13:32:10 compute-0 nova_compute[256940]: 2025-10-02 13:32:10.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 125 KiB/s wr, 6 op/s
Oct 02 13:32:11 compute-0 ceph-mon[73668]: pgmap v3651: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 125 KiB/s wr, 6 op/s
Oct 02 13:32:11 compute-0 sudo[420019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:11 compute-0 sudo[420019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-0 sudo[420019]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-0 sudo[420044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:11 compute-0 sudo[420044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-0 sudo[420044]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:11.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:11 compute-0 sudo[420069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:11 compute-0 sudo[420069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-0 sudo[420069]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-0 sudo[420094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:32:11 compute-0 sudo[420094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:12 compute-0 sudo[420094]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:12.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4e6c1b3e-1933-4fbb-bd51-8c0ea6046ac4 does not exist
Oct 02 13:32:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6bc6f732-d954-43e9-98e3-f83def0e9b16 does not exist
Oct 02 13:32:12 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7880b7df-1158-4f98-956a-f43b156bedba does not exist
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:32:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:32:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:12 compute-0 sudo[420151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:12 compute-0 sudo[420151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:12 compute-0 sudo[420151]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 119 KiB/s wr, 5 op/s
Oct 02 13:32:12 compute-0 ovn_controller[148123]: 2025-10-02T13:32:12Z|01101|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 02 13:32:12 compute-0 sudo[420177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:12 compute-0 sudo[420177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:12 compute-0 sudo[420177]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:13 compute-0 sudo[420202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:13 compute-0 sudo[420202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:13 compute-0 sudo[420202]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:32:13 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:13 compute-0 sudo[420227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:32:13 compute-0 sudo[420227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.44140424 +0000 UTC m=+0.050739702 container create 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:32:13 compute-0 systemd[1]: Started libpod-conmon-175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5.scope.
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.415697711 +0000 UTC m=+0.025033193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.569301958 +0000 UTC m=+0.178637460 container init 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.577625931 +0000 UTC m=+0.186961393 container start 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.582360852 +0000 UTC m=+0.191696334 container attach 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:32:13 compute-0 agitated_goldstine[420310]: 167 167
Oct 02 13:32:13 compute-0 systemd[1]: libpod-175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5.scope: Deactivated successfully.
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.58538019 +0000 UTC m=+0.194715652 container died 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:32:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a59626c1b9f064313c975d24cf79155d6fa6e5a8615427b318015c2da1310d6-merged.mount: Deactivated successfully.
Oct 02 13:32:13 compute-0 podman[420294]: 2025-10-02 13:32:13.629722386 +0000 UTC m=+0.239057848 container remove 175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:32:13 compute-0 systemd[1]: libpod-conmon-175400246f34d707f1cd83034f71227bb914b13cf8c4f7a39626c3a9728e3cd5.scope: Deactivated successfully.
Oct 02 13:32:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:13.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:13 compute-0 podman[420333]: 2025-10-02 13:32:13.841897155 +0000 UTC m=+0.053182644 container create f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:32:13 compute-0 systemd[1]: Started libpod-conmon-f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb.scope.
Oct 02 13:32:13 compute-0 podman[420333]: 2025-10-02 13:32:13.817894439 +0000 UTC m=+0.029180018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:13 compute-0 podman[420333]: 2025-10-02 13:32:13.934052737 +0000 UTC m=+0.145338236 container init f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:32:13 compute-0 podman[420333]: 2025-10-02 13:32:13.943568381 +0000 UTC m=+0.154853870 container start f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:32:13 compute-0 podman[420333]: 2025-10-02 13:32:13.946424564 +0000 UTC m=+0.157710053 container attach f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:14 compute-0 ceph-mon[73668]: pgmap v3652: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 119 KiB/s wr, 5 op/s
Oct 02 13:32:14 compute-0 nova_compute[256940]: 2025-10-02 13:32:14.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:14 compute-0 nova_compute[256940]: 2025-10-02 13:32:14.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:32:14 compute-0 nova_compute[256940]: 2025-10-02 13:32:14.237 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:32:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:14 compute-0 funny_wright[420349]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:32:14 compute-0 funny_wright[420349]: --> relative data size: 1.0
Oct 02 13:32:14 compute-0 funny_wright[420349]: --> All data devices are unavailable
Oct 02 13:32:14 compute-0 systemd[1]: libpod-f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb.scope: Deactivated successfully.
Oct 02 13:32:14 compute-0 conmon[420349]: conmon f5b3b6d890a5a6c0b535 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb.scope/container/memory.events
Oct 02 13:32:14 compute-0 podman[420333]: 2025-10-02 13:32:14.796507112 +0000 UTC m=+1.007792611 container died f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e54470abbbe3e8fc4a67fdaaf31284a21877b77706b9b4e9d6026341c6d9b2-merged.mount: Deactivated successfully.
Oct 02 13:32:14 compute-0 podman[420333]: 2025-10-02 13:32:14.85885688 +0000 UTC m=+1.070142379 container remove f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:32:14 compute-0 systemd[1]: libpod-conmon-f5b3b6d890a5a6c0b5356bd5b381f2cc4d6aa6bef919fd3d7a75e3ee3be15ffb.scope: Deactivated successfully.
Oct 02 13:32:14 compute-0 sudo[420227]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 49 KiB/s wr, 4 op/s
Oct 02 13:32:14 compute-0 sudo[420376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:14 compute-0 sudo[420376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:14 compute-0 sudo[420376]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:15 compute-0 sudo[420401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:15 compute-0 sudo[420401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:15 compute-0 sudo[420401]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:15 compute-0 nova_compute[256940]: 2025-10-02 13:32:15.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:15 compute-0 sudo[420426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:15 compute-0 sudo[420426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:15 compute-0 sudo[420426]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:15 compute-0 sudo[420451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:32:15 compute-0 sudo[420451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.522963492 +0000 UTC m=+0.052973629 container create bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:32:15 compute-0 nova_compute[256940]: 2025-10-02 13:32:15.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:15 compute-0 systemd[1]: Started libpod-conmon-bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567.scope.
Oct 02 13:32:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.600395266 +0000 UTC m=+0.130405443 container init bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.505822242 +0000 UTC m=+0.035832399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.607034456 +0000 UTC m=+0.137044603 container start bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.610487875 +0000 UTC m=+0.140498042 container attach bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:32:15 compute-0 friendly_chatelet[420534]: 167 167
Oct 02 13:32:15 compute-0 systemd[1]: libpod-bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567.scope: Deactivated successfully.
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.6153562 +0000 UTC m=+0.145366367 container died bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84d3ef3e4265f3a8eaad91af2b063a61b25d0ac7273d39c7a119caa7d737f53-merged.mount: Deactivated successfully.
Oct 02 13:32:15 compute-0 podman[420518]: 2025-10-02 13:32:15.656308529 +0000 UTC m=+0.186318666 container remove bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:32:15 compute-0 systemd[1]: libpod-conmon-bc980e8d1abb022f83842266d0df31fab7746f1af27c5816f605bf7927dc8567.scope: Deactivated successfully.
Oct 02 13:32:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:15.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:15 compute-0 podman[420556]: 2025-10-02 13:32:15.813026266 +0000 UTC m=+0.036887946 container create 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:15 compute-0 systemd[1]: Started libpod-conmon-22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638.scope.
Oct 02 13:32:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:15 compute-0 podman[420556]: 2025-10-02 13:32:15.799307714 +0000 UTC m=+0.023169414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83fa707a0bd49adfd01f532125deb7704aa0d97f18a257ff01d8318f6541314/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83fa707a0bd49adfd01f532125deb7704aa0d97f18a257ff01d8318f6541314/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83fa707a0bd49adfd01f532125deb7704aa0d97f18a257ff01d8318f6541314/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83fa707a0bd49adfd01f532125deb7704aa0d97f18a257ff01d8318f6541314/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:15 compute-0 podman[420556]: 2025-10-02 13:32:15.915845521 +0000 UTC m=+0.139707221 container init 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:32:15 compute-0 podman[420556]: 2025-10-02 13:32:15.924593486 +0000 UTC m=+0.148455166 container start 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:32:15 compute-0 podman[420556]: 2025-10-02 13:32:15.928431454 +0000 UTC m=+0.152293134 container attach 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:32:16 compute-0 ceph-mon[73668]: pgmap v3653: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 49 KiB/s wr, 4 op/s
Oct 02 13:32:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:16.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]: {
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:     "1": [
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:         {
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "devices": [
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "/dev/loop3"
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             ],
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "lv_name": "ceph_lv0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "lv_size": "7511998464",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "name": "ceph_lv0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "tags": {
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.cluster_name": "ceph",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.crush_device_class": "",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.encrypted": "0",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.osd_id": "1",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.type": "block",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:                 "ceph.vdo": "0"
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             },
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "type": "block",
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:             "vg_name": "ceph_vg0"
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:         }
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]:     ]
Oct 02 13:32:16 compute-0 wizardly_grothendieck[420573]: }
Oct 02 13:32:16 compute-0 systemd[1]: libpod-22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638.scope: Deactivated successfully.
Oct 02 13:32:16 compute-0 conmon[420573]: conmon 22c84ddcb6ce2f979d0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638.scope/container/memory.events
Oct 02 13:32:16 compute-0 podman[420556]: 2025-10-02 13:32:16.697551866 +0000 UTC m=+0.921413576 container died 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83fa707a0bd49adfd01f532125deb7704aa0d97f18a257ff01d8318f6541314-merged.mount: Deactivated successfully.
Oct 02 13:32:16 compute-0 podman[420556]: 2025-10-02 13:32:16.758687923 +0000 UTC m=+0.982549613 container remove 22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:16 compute-0 systemd[1]: libpod-conmon-22c84ddcb6ce2f979d0fdf8c510e4b53e228600a16939940cbcf56596cf47638.scope: Deactivated successfully.
Oct 02 13:32:16 compute-0 sudo[420451]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:16 compute-0 sudo[420597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:16 compute-0 sudo[420597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:16 compute-0 sudo[420597]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 49 KiB/s wr, 8 op/s
Oct 02 13:32:16 compute-0 sudo[420623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:16 compute-0 sudo[420623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:16 compute-0 sudo[420623]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:17 compute-0 sudo[420648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:17 compute-0 sudo[420648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:17 compute-0 sudo[420648]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:17 compute-0 sudo[420673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:32:17 compute-0 sudo[420673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.438732663 +0000 UTC m=+0.044673996 container create 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 13:32:17 compute-0 systemd[1]: Started libpod-conmon-124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d.scope.
Oct 02 13:32:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.418409632 +0000 UTC m=+0.024350975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.521650518 +0000 UTC m=+0.127591871 container init 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.529306635 +0000 UTC m=+0.135247998 container start 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.533627085 +0000 UTC m=+0.139568448 container attach 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:32:17 compute-0 vigorous_knuth[420756]: 167 167
Oct 02 13:32:17 compute-0 systemd[1]: libpod-124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d.scope: Deactivated successfully.
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.53692518 +0000 UTC m=+0.142866523 container died 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-27d23c29d17bc9d01c17545ed02d53bf3a1547629eed10a6646d81fc3e6dbe24-merged.mount: Deactivated successfully.
Oct 02 13:32:17 compute-0 podman[420739]: 2025-10-02 13:32:17.581748189 +0000 UTC m=+0.187689522 container remove 124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_knuth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:17 compute-0 systemd[1]: libpod-conmon-124123bbba954bfb413b55a79a88ba45344f235fb52d72b1c09cfe2b55ab023d.scope: Deactivated successfully.
Oct 02 13:32:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:17.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:17 compute-0 podman[420780]: 2025-10-02 13:32:17.738074996 +0000 UTC m=+0.040253213 container create 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:32:17 compute-0 systemd[1]: Started libpod-conmon-406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2.scope.
Oct 02 13:32:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775f08e5fad4ce62e0e009ced32c73c76b8fbe882a97d1eeee1e60b18905b575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775f08e5fad4ce62e0e009ced32c73c76b8fbe882a97d1eeee1e60b18905b575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775f08e5fad4ce62e0e009ced32c73c76b8fbe882a97d1eeee1e60b18905b575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775f08e5fad4ce62e0e009ced32c73c76b8fbe882a97d1eeee1e60b18905b575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:32:17 compute-0 podman[420780]: 2025-10-02 13:32:17.808519211 +0000 UTC m=+0.110697438 container init 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:32:17 compute-0 podman[420780]: 2025-10-02 13:32:17.719383446 +0000 UTC m=+0.021561683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:32:17 compute-0 podman[420780]: 2025-10-02 13:32:17.817412449 +0000 UTC m=+0.119590706 container start 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:32:17 compute-0 podman[420780]: 2025-10-02 13:32:17.842902542 +0000 UTC m=+0.145080789 container attach 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:32:18 compute-0 ceph-mon[73668]: pgmap v3654: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 49 KiB/s wr, 8 op/s
Oct 02 13:32:18 compute-0 nova_compute[256940]: 2025-10-02 13:32:18.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:18.108 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:18 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:18.113 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:32:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:18.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:18 compute-0 gallant_ride[420797]: {
Oct 02 13:32:18 compute-0 gallant_ride[420797]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:32:18 compute-0 gallant_ride[420797]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:32:18 compute-0 gallant_ride[420797]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:32:18 compute-0 gallant_ride[420797]:         "osd_id": 1,
Oct 02 13:32:18 compute-0 gallant_ride[420797]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:32:18 compute-0 gallant_ride[420797]:         "type": "bluestore"
Oct 02 13:32:18 compute-0 gallant_ride[420797]:     }
Oct 02 13:32:18 compute-0 gallant_ride[420797]: }
Oct 02 13:32:18 compute-0 systemd[1]: libpod-406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2.scope: Deactivated successfully.
Oct 02 13:32:18 compute-0 podman[420780]: 2025-10-02 13:32:18.740764945 +0000 UTC m=+1.042943162 container died 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-775f08e5fad4ce62e0e009ced32c73c76b8fbe882a97d1eeee1e60b18905b575-merged.mount: Deactivated successfully.
Oct 02 13:32:18 compute-0 podman[420780]: 2025-10-02 13:32:18.798280259 +0000 UTC m=+1.100458476 container remove 406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:32:18 compute-0 systemd[1]: libpod-conmon-406943e85fe7fc0d02fdffb916f055d172ed5b9957b84ab21345c9334ec27da2.scope: Deactivated successfully.
Oct 02 13:32:18 compute-0 sudo[420673]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:32:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:32:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 97d604c4-2d8b-4436-86c8-752a72584e9d does not exist
Oct 02 13:32:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ac2df5c0-d014-4ae0-88d7-40ec17968afb does not exist
Oct 02 13:32:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1c3a33ca-a442-402b-8c97-b1009648ce84 does not exist
Oct 02 13:32:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 47 KiB/s wr, 7 op/s
Oct 02 13:32:18 compute-0 sudo[420832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:18 compute-0 sudo[420832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:18 compute-0 sudo[420832]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:19 compute-0 sudo[420857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:32:19 compute-0 sudo[420857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:19 compute-0 sudo[420857]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.003000077s ======
Oct 02 13:32:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:19.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Oct 02 13:32:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:19 compute-0 ceph-mon[73668]: pgmap v3655: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 47 KiB/s wr, 7 op/s
Oct 02 13:32:20 compute-0 nova_compute[256940]: 2025-10-02 13:32:20.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:20 compute-0 nova_compute[256940]: 2025-10-02 13:32:20.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:20.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 254 KiB/s rd, 47 KiB/s wr, 18 op/s
Oct 02 13:32:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:22 compute-0 ceph-mon[73668]: pgmap v3656: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 254 KiB/s rd, 47 KiB/s wr, 18 op/s
Oct 02 13:32:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3910490391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:22 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:22.116 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:22.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 6.0 KiB/s wr, 19 op/s
Oct 02 13:32:23 compute-0 sudo[420884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:23 compute-0 sudo[420884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:23 compute-0 sudo[420884]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:23 compute-0 sudo[420909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:23 compute-0 sudo[420909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:23 compute-0 sudo[420909]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:24 compute-0 ceph-mon[73668]: pgmap v3657: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 6.0 KiB/s wr, 19 op/s
Oct 02 13:32:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:24 compute-0 podman[420934]: 2025-10-02 13:32:24.40022648 +0000 UTC m=+0.068916298 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:32:24 compute-0 podman[420935]: 2025-10-02 13:32:24.40529172 +0000 UTC m=+0.071259588 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:32:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.6 KiB/s wr, 20 op/s
Oct 02 13:32:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Oct 02 13:32:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Oct 02 13:32:25 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Oct 02 13:32:25 compute-0 nova_compute[256940]: 2025-10-02 13:32:25.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:25 compute-0 nova_compute[256940]: 2025-10-02 13:32:25.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:26 compute-0 ceph-mon[73668]: pgmap v3658: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.6 KiB/s wr, 20 op/s
Oct 02 13:32:26 compute-0 ceph-mon[73668]: osdmap e429: 3 total, 3 up, 3 in
Oct 02 13:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:26.530 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:26.530 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:26.531 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:26.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:27 compute-0 nova_compute[256940]: 2025-10-02 13:32:27.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:27 compute-0 nova_compute[256940]: 2025-10-02 13:32:27.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:32:27 compute-0 ceph-mon[73668]: pgmap v3660: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.363 2 DEBUG nova.compute.manager [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.364 2 DEBUG nova.compute.manager [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing instance network info cache due to event network-changed-47596c7d-5357-4ca4-967d-cefac652d34f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.365 2 DEBUG oslo_concurrency.lockutils [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.365 2 DEBUG oslo_concurrency.lockutils [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.365 2 DEBUG nova.network.neutron [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Refreshing network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.483 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.484 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.484 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.484 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.484 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.485 2 INFO nova.compute.manager [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Terminating instance
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.487 2 DEBUG nova.compute.manager [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:28 compute-0 kernel: tap47596c7d-53 (unregistering): left promiscuous mode
Oct 02 13:32:28 compute-0 NetworkManager[44981]: <info>  [1759411948.5503] device (tap47596c7d-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:32:28 compute-0 ovn_controller[148123]: 2025-10-02T13:32:28Z|01102|binding|INFO|Releasing lport 47596c7d-5357-4ca4-967d-cefac652d34f from this chassis (sb_readonly=0)
Oct 02 13:32:28 compute-0 ovn_controller[148123]: 2025-10-02T13:32:28Z|01103|binding|INFO|Setting lport 47596c7d-5357-4ca4-967d-cefac652d34f down in Southbound
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 ovn_controller[148123]: 2025-10-02T13:32:28Z|01104|binding|INFO|Removing iface tap47596c7d-53 ovn-installed in OVS
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:28.569 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:78:d9 10.100.0.6'], port_security=['fa:16:3e:e3:78:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '800b27d2-30f8-4e7e-9411-bbdae9fd342b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>], logical_port=47596c7d-5357-4ca4-967d-cefac652d34f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f66b80ad700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:28.571 158104 INFO neutron.agent.ovn.metadata.agent [-] Port 47596c7d-5357-4ca4-967d-cefac652d34f in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis
Oct 02 13:32:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:28.572 158104 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:32:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:28.574 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4f26670a-f3a8-4c52-9193-3d48dd8c026f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:28 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:28.574 158104 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:28 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000df.scope: Deactivated successfully.
Oct 02 13:32:28 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000df.scope: Consumed 16.169s CPU time.
Oct 02 13:32:28 compute-0 systemd-machined[210927]: Machine qemu-111-instance-000000df terminated.
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.720 2 INFO nova.virt.libvirt.driver [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Instance destroyed successfully.
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.720 2 DEBUG nova.objects.instance [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 800b27d2-30f8-4e7e-9411-bbdae9fd342b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.738 2 DEBUG nova.virt.libvirt.vif [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-117715933',display_name='tempest-TestVolumeBootPattern-server-117715933',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-117715933',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:31:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-fvcmtygl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:31:02Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=800b27d2-30f8-4e7e-9411-bbdae9fd342b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.739 2 DEBUG nova.network.os_vif_util [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.739 2 DEBUG nova.network.os_vif_util [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.740 2 DEBUG os_vif [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47596c7d-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.747 2 INFO os_vif [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:78:d9,bridge_name='br-int',has_traffic_filtering=True,id=47596c7d-5357-4ca4-967d-cefac652d34f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47596c7d-53')
Oct 02 13:32:28 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [NOTICE]   (418906) : haproxy version is 2.8.14-c23fe91
Oct 02 13:32:28 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [NOTICE]   (418906) : path to executable is /usr/sbin/haproxy
Oct 02 13:32:28 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [WARNING]  (418906) : Exiting Master process...
Oct 02 13:32:28 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [ALERT]    (418906) : Current worker (418911) exited with code 143 (Terminated)
Oct 02 13:32:28 compute-0 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[418853]: [WARNING]  (418906) : All workers exited. Exiting... (0)
Oct 02 13:32:28 compute-0 systemd[1]: libpod-7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7.scope: Deactivated successfully.
Oct 02 13:32:28 compute-0 podman[421003]: 2025-10-02 13:32:28.918593788 +0000 UTC m=+0.258082407 container died 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:32:28
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms']
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:32:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.956 2 DEBUG nova.compute.manager [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-unplugged-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.956 2 DEBUG oslo_concurrency.lockutils [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.957 2 DEBUG oslo_concurrency.lockutils [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.957 2 DEBUG oslo_concurrency.lockutils [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.957 2 DEBUG nova.compute.manager [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] No waiting events found dispatching network-vif-unplugged-47596c7d-5357-4ca4-967d-cefac652d34f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:32:28 compute-0 nova_compute[256940]: 2025-10-02 13:32:28.958 2 DEBUG nova.compute.manager [req-d1c22526-b710-4242-9776-5644b9fe38a6 req-7a555018-296f-427b-ace6-a9830165ba89 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-unplugged-47596c7d-5357-4ca4-967d-cefac652d34f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:32:29 compute-0 ceph-mon[73668]: pgmap v3661: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7-userdata-shm.mount: Deactivated successfully.
Oct 02 13:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a3a1ab3a2820b6b1f91d77c7e087653e3b93c1d4b53d1f17625dbad11d5bce-merged.mount: Deactivated successfully.
Oct 02 13:32:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:29 compute-0 podman[421003]: 2025-10-02 13:32:29.789842508 +0000 UTC m=+1.129331117 container cleanup 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:32:29 compute-0 systemd[1]: libpod-conmon-7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7.scope: Deactivated successfully.
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:32:30 compute-0 podman[421063]: 2025-10-02 13:32:30.600872265 +0000 UTC m=+0.788974453 container remove 7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:32:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:30.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.607 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee74827-38ca-4ffe-8fcd-e0fc07e567d2]: (4, ('Thu Oct  2 01:32:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7)\n7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7\nThu Oct  2 01:32:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7)\n7f9ef4737ae62dbcd1d622111b12782403eba79003a15963f55c866122a4cdf7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.610 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fc6266-85f8-4a25-af7b-5e8f7513a592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.611 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:30 compute-0 kernel: tap858f2b6f-80: left promiscuous mode
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.674 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[5847ae88-63f1-47ea-8981-a42162ca04eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.699 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[fb42a5ef-4be5-4c5f-8838-14845a753fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.700 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[d70836dc-e3ce-4e1f-9b30-8c5a14138466]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.714 264216 DEBUG oslo.privsep.daemon [-] privsep: reply[554f94a2-785e-4af9-b8ab-1fd96b5caed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975742, 'reachable_time': 22220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421080, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.716 158301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:32:30 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:32:30.716 158301 DEBUG oslo.privsep.daemon [-] privsep: reply[3596946d-a2f4-4c9f-a4c8-6869756cb8fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.905 2 INFO nova.virt.libvirt.driver [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Deleting instance files /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b_del
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.906 2 INFO nova.virt.libvirt.driver [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Deletion of /var/lib/nova/instances/800b27d2-30f8-4e7e-9411-bbdae9fd342b_del complete
Oct 02 13:32:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 44 op/s
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.977 2 INFO nova.compute.manager [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Took 2.49 seconds to destroy the instance on the hypervisor.
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.978 2 DEBUG oslo.service.loopingcall [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.978 2 DEBUG nova.compute.manager [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:32:30 compute-0 nova_compute[256940]: 2025-10-02 13:32:30.978 2 DEBUG nova.network.neutron [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.082 2 DEBUG nova.compute.manager [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.083 2 DEBUG oslo_concurrency.lockutils [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.084 2 DEBUG oslo_concurrency.lockutils [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.085 2 DEBUG oslo_concurrency.lockutils [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.085 2 DEBUG nova.compute.manager [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] No waiting events found dispatching network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.086 2 WARNING nova.compute.manager [req-77bb4e15-ea98-49c0-af83-c8c6277a67b8 req-00aae83f-7118-4572-9ffb-ee9cec1fa1b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received unexpected event network-vif-plugged-47596c7d-5357-4ca4-967d-cefac652d34f for instance with vm_state active and task_state deleting.
Oct 02 13:32:31 compute-0 ceph-mon[73668]: pgmap v3662: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 44 op/s
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.417 2 DEBUG nova.network.neutron [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updated VIF entry in instance network info cache for port 47596c7d-5357-4ca4-967d-cefac652d34f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.418 2 DEBUG nova.network.neutron [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [{"id": "47596c7d-5357-4ca4-967d-cefac652d34f", "address": "fa:16:3e:e3:78:d9", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47596c7d-53", "ovs_interfaceid": "47596c7d-5357-4ca4-967d-cefac652d34f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:31 compute-0 nova_compute[256940]: 2025-10-02 13:32:31.446 2 DEBUG oslo_concurrency.lockutils [req-c7b65552-9aac-4606-8895-10fcf4cabe72 req-d39e90b9-5974-4216-a4dc-498884a173ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-800b27d2-30f8-4e7e-9411-bbdae9fd342b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:32:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.405 2 DEBUG nova.network.neutron [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.423 2 INFO nova.compute.manager [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Took 1.44 seconds to deallocate network for instance.
Oct 02 13:32:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:32.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.707 2 INFO nova.compute.manager [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Took 0.28 seconds to detach 1 volumes for instance.
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.765 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.765 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:32 compute-0 nova_compute[256940]: 2025-10-02 13:32:32.835 2 DEBUG oslo_concurrency.processutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.183 2 DEBUG nova.compute.manager [req-4169b28d-2831-4e1a-af75-19616b4314b6 req-b8e3ce96-eccb-4003-960b-14213c68f83d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Received event network-vif-deleted-47596c7d-5357-4ca4-967d-cefac652d34f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731007096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.291 2 DEBUG oslo_concurrency.processutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.300 2 DEBUG nova.compute.provider_tree [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:32:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Oct 02 13:32:33 compute-0 ceph-mon[73668]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.320 2 DEBUG nova.scheduler.client.report [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.351 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.386 2 INFO nova.scheduler.client.report [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 800b27d2-30f8-4e7e-9411-bbdae9fd342b
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.464 2 DEBUG oslo_concurrency.lockutils [None req-587072ce-ea54-4513-9cb3-d72a213b192b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "800b27d2-30f8-4e7e-9411-bbdae9fd342b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:33 compute-0 nova_compute[256940]: 2025-10-02 13:32:33.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:34 compute-0 ceph-mon[73668]: pgmap v3663: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:32:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/731007096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:34 compute-0 ceph-mon[73668]: osdmap e430: 3 total, 3 up, 3 in
Oct 02 13:32:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct 02 13:32:35 compute-0 nova_compute[256940]: 2025-10-02 13:32:35.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:36 compute-0 ceph-mon[73668]: pgmap v3665: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct 02 13:32:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:36 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:32:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:32:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:38 compute-0 ceph-mon[73668]: pgmap v3666: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:38.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:38 compute-0 nova_compute[256940]: 2025-10-02 13:32:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:39 compute-0 ceph-mon[73668]: pgmap v3667: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:39 compute-0 podman[421108]: 2025-10-02 13:32:39.428585613 +0000 UTC m=+0.091188558 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:32:39 compute-0 podman[421109]: 2025-10-02 13:32:39.441504074 +0000 UTC m=+0.096314789 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:32:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:39.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:40 compute-0 nova_compute[256940]: 2025-10-02 13:32:40.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:32:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:40.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:32:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1023 B/s wr, 33 op/s
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:32:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:32:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:42 compute-0 ceph-mon[73668]: pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1023 B/s wr, 33 op/s
Oct 02 13:32:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3679501352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/556147337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 716 B/s wr, 21 op/s
Oct 02 13:32:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:43 compute-0 sudo[421155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:43 compute-0 sudo[421155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 sudo[421155]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 sudo[421180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:43 compute-0 sudo[421180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:43 compute-0 sudo[421180]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:43 compute-0 nova_compute[256940]: 2025-10-02 13:32:43.719 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411948.718055, 800b27d2-30f8-4e7e-9411-bbdae9fd342b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:32:43 compute-0 nova_compute[256940]: 2025-10-02 13:32:43.720 2 INFO nova.compute.manager [-] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] VM Stopped (Lifecycle Event)
Oct 02 13:32:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:43.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:43 compute-0 nova_compute[256940]: 2025-10-02 13:32:43.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:43 compute-0 nova_compute[256940]: 2025-10-02 13:32:43.756 2 DEBUG nova.compute.manager [None req-47a707d1-e679-4894-9e35-62f25caad4b4 - - - - - -] [instance: 800b27d2-30f8-4e7e-9411-bbdae9fd342b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:32:44 compute-0 ceph-mon[73668]: pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 716 B/s wr, 21 op/s
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.228 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.271 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.272 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.272 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.272 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.273 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:44.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663759352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.691 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.861 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.863 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4112MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.863 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.864 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.923 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.923 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:32:44 compute-0 nova_compute[256940]: 2025-10-02 13:32:44.943 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 616 B/s wr, 18 op/s
Oct 02 13:32:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2663759352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863069546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.371 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.377 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.404 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.435 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:32:45 compute-0 nova_compute[256940]: 2025-10-02 13:32:45.436 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:45.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:46 compute-0 ceph-mon[73668]: pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 616 B/s wr, 18 op/s
Oct 02 13:32:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/863069546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:32:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:47.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:48 compute-0 ceph-mon[73668]: pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:32:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3673655604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:48.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:48 compute-0 nova_compute[256940]: 2025-10-02 13:32:48.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3453873877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:49.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:50 compute-0 nova_compute[256940]: 2025-10-02 13:32:50.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:50 compute-0 ceph-mon[73668]: pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:50 compute-0 nova_compute[256940]: 2025-10-02 13:32:50.419 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:50 compute-0 nova_compute[256940]: 2025-10-02 13:32:50.419 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:50 compute-0 nova_compute[256940]: 2025-10-02 13:32:50.420 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:32:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:51 compute-0 nova_compute[256940]: 2025-10-02 13:32:51.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-0 ceph-mon[73668]: pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:52.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:53 compute-0 ceph-mon[73668]: pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:53.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:53 compute-0 nova_compute[256940]: 2025-10-02 13:32:53.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:54.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:55 compute-0 nova_compute[256940]: 2025-10-02 13:32:55.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:55 compute-0 nova_compute[256940]: 2025-10-02 13:32:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:55 compute-0 podman[421258]: 2025-10-02 13:32:55.438785304 +0000 UTC m=+0.089200367 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:32:55 compute-0 podman[421257]: 2025-10-02 13:32:55.468422213 +0000 UTC m=+0.126389200 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:32:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:56 compute-0 ceph-mon[73668]: pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:32:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:56.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:32:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:58 compute-0 ceph-mon[73668]: pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:32:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:58 compute-0 nova_compute[256940]: 2025-10-02 13:32:58.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:59 compute-0 nova_compute[256940]: 2025-10-02 13:32:59.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:59 compute-0 nova_compute[256940]: 2025-10-02 13:32:59.233 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:32:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:59.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:00 compute-0 ceph-mon[73668]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:00 compute-0 nova_compute[256940]: 2025-10-02 13:33:00.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:00 compute-0 nova_compute[256940]: 2025-10-02 13:33:00.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:33:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:33:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:02 compute-0 ceph-mon[73668]: pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:02 compute-0 nova_compute[256940]: 2025-10-02 13:33:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:02 compute-0 nova_compute[256940]: 2025-10-02 13:33:02.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:33:02 compute-0 nova_compute[256940]: 2025-10-02 13:33:02.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:33:02 compute-0 nova_compute[256940]: 2025-10-02 13:33:02.244 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:33:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:03 compute-0 ceph-mon[73668]: pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:03 compute-0 sudo[421298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:03 compute-0 sudo[421298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:03 compute-0 sudo[421298]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:03 compute-0 sudo[421323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:03 compute-0 sudo[421323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:03 compute-0 sudo[421323]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:03 compute-0 nova_compute[256940]: 2025-10-02 13:33:03.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:04.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:05 compute-0 nova_compute[256940]: 2025-10-02 13:33:05.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:05 compute-0 nova_compute[256940]: 2025-10-02 13:33:05.240 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:33:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:33:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:33:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:33:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:06 compute-0 ceph-mon[73668]: pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:33:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:33:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:06.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:07.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:08 compute-0 ceph-mon[73668]: pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:08 compute-0 nova_compute[256940]: 2025-10-02 13:33:08.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:09.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:10 compute-0 nova_compute[256940]: 2025-10-02 13:33:10.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:10 compute-0 ceph-mon[73668]: pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:10 compute-0 podman[421351]: 2025-10-02 13:33:10.387943135 +0000 UTC m=+0.059594109 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:33:10 compute-0 podman[421352]: 2025-10-02 13:33:10.411577621 +0000 UTC m=+0.083104971 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:33:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:10.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:11 compute-0 ceph-mon[73668]: pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:12 compute-0 nova_compute[256940]: 2025-10-02 13:33:12.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:12.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:13.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:13 compute-0 nova_compute[256940]: 2025-10-02 13:33:13.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:14 compute-0 ceph-mon[73668]: pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:14.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:15 compute-0 nova_compute[256940]: 2025-10-02 13:33:15.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:16 compute-0 ceph-mon[73668]: pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:16.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:18 compute-0 ceph-mon[73668]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:18.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:18 compute-0 nova_compute[256940]: 2025-10-02 13:33:18.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:19 compute-0 ceph-mon[73668]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:19 compute-0 sudo[421401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:19 compute-0 sudo[421401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-0 sudo[421401]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-0 sudo[421426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:19 compute-0 sudo[421426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-0 sudo[421426]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-0 sudo[421451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:19 compute-0 sudo[421451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-0 sudo[421451]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-0 sudo[421476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:33:19 compute-0 sudo[421476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:19 compute-0 ovn_controller[148123]: 2025-10-02T13:33:19Z|01105|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 02 13:33:20 compute-0 sudo[421476]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:20 compute-0 nova_compute[256940]: 2025-10-02 13:33:20.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev da037bd5-c191-42f2-88aa-0e3a93dbdd6f does not exist
Oct 02 13:33:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev baf6db34-6c4c-4aef-b422-d51e9370d556 does not exist
Oct 02 13:33:20 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8e758e2f-1077-4801-ade3-3dfe5dec6bcc does not exist
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:33:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:33:20 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-0 sudo[421532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:20 compute-0 sudo[421532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:20 compute-0 sudo[421532]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:20 compute-0 sudo[421557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:20 compute-0 sudo[421557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:20 compute-0 sudo[421557]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:20 compute-0 sudo[421582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:20 compute-0 sudo[421582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:20 compute-0 sudo[421582]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:20 compute-0 sudo[421607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:33:20 compute-0 sudo[421607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:33:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.831504398 +0000 UTC m=+0.042275324 container create 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:33:20 compute-0 systemd[1]: Started libpod-conmon-00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766.scope.
Oct 02 13:33:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.809973627 +0000 UTC m=+0.020744563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.917944834 +0000 UTC m=+0.128715840 container init 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.926227276 +0000 UTC m=+0.136998192 container start 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:33:20 compute-0 systemd[1]: libpod-00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766.scope: Deactivated successfully.
Oct 02 13:33:20 compute-0 reverent_einstein[421689]: 167 167
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.9334005 +0000 UTC m=+0.144171436 container attach 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:33:20 compute-0 conmon[421689]: conmon 00612828d843bceb4913 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766.scope/container/memory.events
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.93494805 +0000 UTC m=+0.145718996 container died 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:33:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-61dc3159ccc629bf646aaea378ac230c07cf43f78faf39cb867f97e788ad5857-merged.mount: Deactivated successfully.
Oct 02 13:33:20 compute-0 podman[421672]: 2025-10-02 13:33:20.987200579 +0000 UTC m=+0.197971485 container remove 00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:33:21 compute-0 systemd[1]: libpod-conmon-00612828d843bceb4913e7ae935308882d13e210aaab0dab1632adec97308766.scope: Deactivated successfully.
Oct 02 13:33:21 compute-0 podman[421712]: 2025-10-02 13:33:21.149523099 +0000 UTC m=+0.040609271 container create 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:33:21 compute-0 systemd[1]: Started libpod-conmon-2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66.scope.
Oct 02 13:33:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:21 compute-0 podman[421712]: 2025-10-02 13:33:21.133130819 +0000 UTC m=+0.024217021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:21 compute-0 podman[421712]: 2025-10-02 13:33:21.254375047 +0000 UTC m=+0.145461479 container init 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:33:21 compute-0 podman[421712]: 2025-10-02 13:33:21.260697119 +0000 UTC m=+0.151783291 container start 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:33:21 compute-0 podman[421712]: 2025-10-02 13:33:21.2646233 +0000 UTC m=+0.155709492 container attach 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:33:21 compute-0 ceph-mon[73668]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:22 compute-0 mystifying_goodall[421728]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:33:22 compute-0 mystifying_goodall[421728]: --> relative data size: 1.0
Oct 02 13:33:22 compute-0 mystifying_goodall[421728]: --> All data devices are unavailable
Oct 02 13:33:22 compute-0 systemd[1]: libpod-2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66.scope: Deactivated successfully.
Oct 02 13:33:22 compute-0 podman[421712]: 2025-10-02 13:33:22.097253891 +0000 UTC m=+0.988340093 container died 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f895443bbcadbeb10195f74639a828c6215ef0a3138459a581ff2bf2e9e0641-merged.mount: Deactivated successfully.
Oct 02 13:33:22 compute-0 podman[421712]: 2025-10-02 13:33:22.162887163 +0000 UTC m=+1.053973365 container remove 2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:33:22 compute-0 systemd[1]: libpod-conmon-2c1b417b3e30d391507426adf9b63c6ff46291f457e046209cc1db60f891cb66.scope: Deactivated successfully.
Oct 02 13:33:22 compute-0 sudo[421607]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:22 compute-0 sudo[421755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:22 compute-0 sudo[421755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:22 compute-0 sudo[421755]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:22 compute-0 sudo[421780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:22 compute-0 sudo[421780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:22 compute-0 sudo[421780]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:22 compute-0 sudo[421805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:22 compute-0 sudo[421805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:22 compute-0 sudo[421805]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:22 compute-0 sudo[421830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:33:22 compute-0 sudo[421830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.83306933 +0000 UTC m=+0.047242252 container create 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:33:22 compute-0 systemd[1]: Started libpod-conmon-834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479.scope.
Oct 02 13:33:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.903593188 +0000 UTC m=+0.117766140 container init 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.811037545 +0000 UTC m=+0.025210517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.909381836 +0000 UTC m=+0.123554758 container start 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.912964748 +0000 UTC m=+0.127137690 container attach 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:33:22 compute-0 gifted_goldwasser[421914]: 167 167
Oct 02 13:33:22 compute-0 systemd[1]: libpod-834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479.scope: Deactivated successfully.
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.915054801 +0000 UTC m=+0.129227723 container died 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4c6f58f0071cb567915d321a43b3b89b4aaa350ea03cef6c241563bca6b9bd-merged.mount: Deactivated successfully.
Oct 02 13:33:22 compute-0 podman[421897]: 2025-10-02 13:33:22.952767258 +0000 UTC m=+0.166940180 container remove 834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:33:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:22 compute-0 systemd[1]: libpod-conmon-834b1b955e03aa0a8124709036bb4208fc342e849e55dde2f40ed198c5840479.scope: Deactivated successfully.
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.103285706 +0000 UTC m=+0.040030337 container create e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:23 compute-0 systemd[1]: Started libpod-conmon-e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e.scope.
Oct 02 13:33:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063c63ed161b71757c8f96e543a6c4286055905a7d068d26fc1d9fffe145935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063c63ed161b71757c8f96e543a6c4286055905a7d068d26fc1d9fffe145935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063c63ed161b71757c8f96e543a6c4286055905a7d068d26fc1d9fffe145935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063c63ed161b71757c8f96e543a6c4286055905a7d068d26fc1d9fffe145935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.164441603 +0000 UTC m=+0.101186264 container init e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.174127132 +0000 UTC m=+0.110871773 container start e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.179593492 +0000 UTC m=+0.116338183 container attach e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.085135311 +0000 UTC m=+0.021879972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:23 compute-0 sudo[421961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:23 compute-0 sudo[421961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:23 compute-0 sudo[421961]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:23 compute-0 sudo[421986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:23 compute-0 sudo[421986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:23 compute-0 sudo[421986]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:33:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:33:23 compute-0 nova_compute[256940]: 2025-10-02 13:33:23.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:23 compute-0 jolly_kirch[421956]: {
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:     "1": [
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:         {
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "devices": [
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "/dev/loop3"
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             ],
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "lv_name": "ceph_lv0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "lv_size": "7511998464",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "name": "ceph_lv0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "tags": {
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.cluster_name": "ceph",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.crush_device_class": "",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.encrypted": "0",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.osd_id": "1",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.type": "block",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:                 "ceph.vdo": "0"
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             },
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "type": "block",
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:             "vg_name": "ceph_vg0"
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:         }
Oct 02 13:33:23 compute-0 jolly_kirch[421956]:     ]
Oct 02 13:33:23 compute-0 jolly_kirch[421956]: }
Oct 02 13:33:23 compute-0 systemd[1]: libpod-e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e.scope: Deactivated successfully.
Oct 02 13:33:23 compute-0 podman[421939]: 2025-10-02 13:33:23.972975627 +0000 UTC m=+0.909720268 container died e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1063c63ed161b71757c8f96e543a6c4286055905a7d068d26fc1d9fffe145935-merged.mount: Deactivated successfully.
Oct 02 13:33:24 compute-0 podman[421939]: 2025-10-02 13:33:24.134664911 +0000 UTC m=+1.071409552 container remove e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:33:24 compute-0 ceph-mon[73668]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:24 compute-0 systemd[1]: libpod-conmon-e9f691ecac5bacab927cc08b2e9d71f60e3754e9ccabd9b2e199d252bdfb737e.scope: Deactivated successfully.
Oct 02 13:33:24 compute-0 sudo[421830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:24 compute-0 sudo[422027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:24 compute-0 sudo[422027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:24 compute-0 sudo[422027]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:24 compute-0 sudo[422052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:24 compute-0 sudo[422052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:24 compute-0 sudo[422052]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:24 compute-0 sudo[422077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:24 compute-0 sudo[422077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:24 compute-0 sudo[422077]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:24 compute-0 sudo[422102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:33:24 compute-0 sudo[422102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:24.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.831926851 +0000 UTC m=+0.056974191 container create 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:33:24 compute-0 systemd[1]: Started libpod-conmon-028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e.scope.
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.812992816 +0000 UTC m=+0.038040266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.927058679 +0000 UTC m=+0.152106029 container init 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.935696511 +0000 UTC m=+0.160743841 container start 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.939735704 +0000 UTC m=+0.164783054 container attach 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:33:24 compute-0 pensive_darwin[422184]: 167 167
Oct 02 13:33:24 compute-0 systemd[1]: libpod-028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e.scope: Deactivated successfully.
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.942997038 +0000 UTC m=+0.168044398 container died 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:33:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-471c7aef2db7f07ebdcb59dab118687a298683c0900128a20450643606358d45-merged.mount: Deactivated successfully.
Oct 02 13:33:24 compute-0 podman[422166]: 2025-10-02 13:33:24.981133796 +0000 UTC m=+0.206181136 container remove 028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:33:24 compute-0 systemd[1]: libpod-conmon-028f64ac8ec63d5099744021f1e2a48dd5ed15b4d8a4b180ee126c4340c3267e.scope: Deactivated successfully.
Oct 02 13:33:25 compute-0 nova_compute[256940]: 2025-10-02 13:33:25.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:25 compute-0 podman[422208]: 2025-10-02 13:33:25.151892242 +0000 UTC m=+0.049428458 container create 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:33:25 compute-0 systemd[1]: Started libpod-conmon-87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12.scope.
Oct 02 13:33:25 compute-0 ceph-mon[73668]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:33:25 compute-0 podman[422208]: 2025-10-02 13:33:25.129755745 +0000 UTC m=+0.027291981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfde4fa2b4e510d7f88af642ccd64c1051158f66ed036e9e9ffdab170cec3fb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfde4fa2b4e510d7f88af642ccd64c1051158f66ed036e9e9ffdab170cec3fb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfde4fa2b4e510d7f88af642ccd64c1051158f66ed036e9e9ffdab170cec3fb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfde4fa2b4e510d7f88af642ccd64c1051158f66ed036e9e9ffdab170cec3fb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:33:25 compute-0 podman[422208]: 2025-10-02 13:33:25.245756198 +0000 UTC m=+0.143292434 container init 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:33:25 compute-0 podman[422208]: 2025-10-02 13:33:25.257677214 +0000 UTC m=+0.155213430 container start 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:33:25 compute-0 podman[422208]: 2025-10-02 13:33:25.260910716 +0000 UTC m=+0.158446942 container attach 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:33:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:33:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:33:26 compute-0 priceless_poitras[422224]: {
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:         "osd_id": 1,
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:         "type": "bluestore"
Oct 02 13:33:26 compute-0 priceless_poitras[422224]:     }
Oct 02 13:33:26 compute-0 priceless_poitras[422224]: }
Oct 02 13:33:26 compute-0 systemd[1]: libpod-87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12.scope: Deactivated successfully.
Oct 02 13:33:26 compute-0 podman[422208]: 2025-10-02 13:33:26.182500477 +0000 UTC m=+1.080036693 container died 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfde4fa2b4e510d7f88af642ccd64c1051158f66ed036e9e9ffdab170cec3fb5-merged.mount: Deactivated successfully.
Oct 02 13:33:26 compute-0 podman[422208]: 2025-10-02 13:33:26.243645095 +0000 UTC m=+1.141181311 container remove 87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_poitras, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:33:26 compute-0 systemd[1]: libpod-conmon-87af35a241a4f0023d3655954f238bde1aaa7e42da471863afca3b97eff25c12.scope: Deactivated successfully.
Oct 02 13:33:26 compute-0 sudo[422102]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:26 compute-0 podman[422246]: 2025-10-02 13:33:26.278712023 +0000 UTC m=+0.066511555 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:33:26 compute-0 podman[422254]: 2025-10-02 13:33:26.278936539 +0000 UTC m=+0.068593089 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:33:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:33:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:33:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev db4d262f-1230-4708-aa37-d0165825f3ba does not exist
Oct 02 13:33:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8023e01b-2d16-49d4-bd45-fd81c87a5a55 does not exist
Oct 02 13:33:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5a8a4ee9-0e86-4547-ac51-dc3b57925774 does not exist
Oct 02 13:33:26 compute-0 sudo[422300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:26 compute-0 sudo[422300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:26 compute-0 sudo[422300]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:26 compute-0 sudo[422325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:33:26 compute-0 sudo[422325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:26 compute-0 sudo[422325]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:33:26.531 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:33:26.532 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:33:26.532 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:27 compute-0 ceph-mon[73668]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:33:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:27.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:33:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:28.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:28 compute-0 nova_compute[256940]: 2025-10-02 13:33:28.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:33:28
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.mgr']
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:33:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:33:30 compute-0 nova_compute[256940]: 2025-10-02 13:33:30.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:30 compute-0 ceph-mon[73668]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:33:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:30.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:33:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:31 compute-0 ceph-mon[73668]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:33 compute-0 nova_compute[256940]: 2025-10-02 13:33:33.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:34 compute-0 ceph-mon[73668]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:34.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:35 compute-0 nova_compute[256940]: 2025-10-02 13:33:35.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:35.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:36 compute-0 ceph-mon[73668]: pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:36.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:38 compute-0 ceph-mon[73668]: pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:38.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:38 compute-0 nova_compute[256940]: 2025-10-02 13:33:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:33:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 81K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1599 writes, 6777 keys, 1599 commit groups, 1.0 writes per commit group, ingest: 10.62 MB, 0.02 MB/s
                                           Interval WAL: 1599 writes, 1599 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     35.6      3.07              0.32        57    0.054       0      0       0.0       0.0
                                             L6      1/0   12.58 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     76.3     65.5      8.96              1.71        56    0.160    434K    30K       0.0       0.0
                                            Sum      1/0   12.58 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     56.8     57.9     12.03              2.03       113    0.106    434K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.2     74.8     73.2      0.94              0.22        10    0.094     54K   2573       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     76.3     65.5      8.96              1.71        56    0.160    434K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.7      3.07              0.32        56    0.055       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.107, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.68 GB write, 0.11 MB/s write, 0.67 GB read, 0.10 MB/s read, 12.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 73.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000652 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4279,70.78 MB,23.2822%) FilterBlock(114,1.16 MB,0.383211%) IndexBlock(114,1.94 MB,0.639634%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:33:39 compute-0 ceph-mon[73668]: pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:40 compute-0 nova_compute[256940]: 2025-10-02 13:33:40.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:33:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:33:41 compute-0 podman[422358]: 2025-10-02 13:33:41.391085351 +0000 UTC m=+0.063609001 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:33:41 compute-0 podman[422359]: 2025-10-02 13:33:41.454463465 +0000 UTC m=+0.122906411 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:33:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:33:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:33:42 compute-0 ceph-mon[73668]: pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:33:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:33:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:43 compute-0 ceph-mon[73668]: pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:43 compute-0 nova_compute[256940]: 2025-10-02 13:33:43.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:43 compute-0 sudo[422404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:43 compute-0 sudo[422404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:43 compute-0 sudo[422404]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:43 compute-0 sudo[422429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:43 compute-0 sudo[422429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:43 compute-0 sudo[422429]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2390697569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3466644390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:44.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:45 compute-0 nova_compute[256940]: 2025-10-02 13:33:45.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:45 compute-0 ceph-mon[73668]: pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.268 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.269 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.269 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.269 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.269 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:46.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699282246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.722 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3699282246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.920 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.921 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4154MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.921 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:46 compute-0 nova_compute[256940]: 2025-10-02 13:33:46.922 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.052 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.053 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.083 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823414242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.535 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.541 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.642 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.644 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:33:47 compute-0 nova_compute[256940]: 2025-10-02 13:33:47.645 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:47 compute-0 ceph-mon[73668]: pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2823414242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:48.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:48 compute-0 nova_compute[256940]: 2025-10-02 13:33:48.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:49.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:50 compute-0 ceph-mon[73668]: pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3420975852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:50 compute-0 nova_compute[256940]: 2025-10-02 13:33:50.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:50.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:50 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:51 compute-0 ceph-mon[73668]: pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:51.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/796460566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:52 compute-0 nova_compute[256940]: 2025-10-02 13:33:52.645 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:52 compute-0 nova_compute[256940]: 2025-10-02 13:33:52.646 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:52 compute-0 nova_compute[256940]: 2025-10-02 13:33:52.646 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:33:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:52.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:52 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:53 compute-0 nova_compute[256940]: 2025-10-02 13:33:53.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:53 compute-0 ceph-mon[73668]: pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:53 compute-0 nova_compute[256940]: 2025-10-02 13:33:53.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:53.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:54.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:54 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:55 compute-0 nova_compute[256940]: 2025-10-02 13:33:55.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:55.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:56 compute-0 ceph-mon[73668]: pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:56 compute-0 podman[422504]: 2025-10-02 13:33:56.378848255 +0000 UTC m=+0.050996078 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:56 compute-0 podman[422505]: 2025-10-02 13:33:56.386381438 +0000 UTC m=+0.058647835 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:33:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:56 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:58 compute-0 ceph-mon[73668]: pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.334325) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038334365, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1792, "num_deletes": 252, "total_data_size": 3193993, "memory_usage": 3246072, "flush_reason": "Manual Compaction"}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038345456, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1865443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80425, "largest_seqno": 82215, "table_properties": {"data_size": 1859405, "index_size": 3048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16082, "raw_average_key_size": 21, "raw_value_size": 1845919, "raw_average_value_size": 2422, "num_data_blocks": 136, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411862, "oldest_key_time": 1759411862, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 11172 microseconds, and 5171 cpu microseconds.
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.345496) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1865443 bytes OK
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.345515) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.348306) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.348322) EVENT_LOG_v1 {"time_micros": 1759412038348317, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.348339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3186585, prev total WAL file size 3186585, number of live WAL files 2.
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.349172) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1821KB)], [185(12MB)]
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038349209, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 15060925, "oldest_snapshot_seqno": -1}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10880 keys, 12449407 bytes, temperature: kUnknown
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038449011, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12449407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12381659, "index_size": 39475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 286670, "raw_average_key_size": 26, "raw_value_size": 12194140, "raw_average_value_size": 1120, "num_data_blocks": 1496, "num_entries": 10880, "num_filter_entries": 10880, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.449285) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12449407 bytes
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.450341) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.8 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 11321, records dropped: 441 output_compression: NoCompression
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.450358) EVENT_LOG_v1 {"time_micros": 1759412038450350, "job": 116, "event": "compaction_finished", "compaction_time_micros": 99876, "compaction_time_cpu_micros": 29359, "output_level": 6, "num_output_files": 1, "total_output_size": 12449407, "num_input_records": 11321, "num_output_records": 10880, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038450762, "job": 116, "event": "table_file_deletion", "file_number": 187}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038453121, "job": 116, "event": "table_file_deletion", "file_number": 185}
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.349093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.453191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.453197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.453199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.453201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:33:58.453203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:33:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:58 compute-0 nova_compute[256940]: 2025-10-02 13:33:58.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:58 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:59 compute-0 ceph-mon[73668]: pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:33:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:59.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:00 compute-0 nova_compute[256940]: 2025-10-02 13:34:00.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:00 compute-0 nova_compute[256940]: 2025-10-02 13:34:00.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:00 compute-0 nova_compute[256940]: 2025-10-02 13:34:00.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:00 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:01 compute-0 nova_compute[256940]: 2025-10-02 13:34:01.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:01.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:02 compute-0 sshd-session[422546]: Accepted publickey for zuul from 192.168.122.10 port 42934 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:34:02 compute-0 systemd-logind[820]: New session 75 of user zuul.
Oct 02 13:34:02 compute-0 systemd[1]: Started Session 75 of User zuul.
Oct 02 13:34:02 compute-0 sshd-session[422546]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:34:02 compute-0 ceph-mon[73668]: pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:02 compute-0 sudo[422550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:34:02 compute-0 sudo[422550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:34:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:34:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:02.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:34:02 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:03 compute-0 nova_compute[256940]: 2025-10-02 13:34:03.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:03 compute-0 nova_compute[256940]: 2025-10-02 13:34:03.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:34:03 compute-0 nova_compute[256940]: 2025-10-02 13:34:03.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:34:03 compute-0 nova_compute[256940]: 2025-10-02 13:34:03.227 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:34:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:03 compute-0 nova_compute[256940]: 2025-10-02 13:34:03.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:34:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:34:03 compute-0 sudo[422631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:04 compute-0 sudo[422631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:04 compute-0 sudo[422631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:04 compute-0 sudo[422674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:04 compute-0 sudo[422674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:04 compute-0 sudo[422674]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:04 compute-0 ceph-mon[73668]: pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:04.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:04 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.36975 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:04 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:05 compute-0 nova_compute[256940]: 2025-10-02 13:34:05.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:05 compute-0 ceph-mon[73668]: from='client.36975 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mon[73668]: pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:05 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.36981 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:05.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772241408' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:05 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.45911 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73668]: from='client.36981 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2772241408' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mon[73668]: from='client.45911 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.45917 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:06.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:06 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46774 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:07 compute-0 nova_compute[256940]: 2025-10-02 13:34:07.222 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46780 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73668]: from='client.45917 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73668]: from='client.46774 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-0 ceph-mon[73668]: pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:07 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3064877731' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:07.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:08 compute-0 ceph-mon[73668]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/630458637' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 13:34:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 13:34:08 compute-0 nova_compute[256940]: 2025-10-02 13:34:08.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:08 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:09 compute-0 ovs-vsctl[422888]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:34:09 compute-0 ceph-mon[73668]: pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:09.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:10 compute-0 nova_compute[256940]: 2025-10-02 13:34:10.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:10 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:34:10 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:34:10 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:34:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:10 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:11 compute-0 lvm[423238]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:34:11 compute-0 lvm[423238]: VG ceph_vg0 finished
Oct 02 13:34:11 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct 02 13:34:11 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37002 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:11.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:34:11 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145955627' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37014 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.45932 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73668]: pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2145955627' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 podman[423404]: 2025-10-02 13:34:12.406914881 +0000 UTC m=+0.064368371 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:34:12 compute-0 podman[423405]: 2025-10-02 13:34:12.432775513 +0000 UTC m=+0.089931056 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:34:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219750363' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.45944 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 13:34:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149323355' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:34:12 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37038 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:12 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:12.945+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:12 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:13 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: ops {prefix=ops} (starting...)
Oct 02 13:34:13 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999954966' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.37002 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.37014 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.45932 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2219750363' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.45944 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/535649708' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2149323355' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.37038 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1976143621' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/999954966' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986665879' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.45971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:13.462+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:13 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46798 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37059 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491187411' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:34:13 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-0 nova_compute[256940]: 2025-10-02 13:34:13.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:13.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: status {prefix=status} (starting...)
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37071 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715324174' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3986665879' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.45971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.46798 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/42129484' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.37059 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/25412291' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/491187411' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2805152507' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.46813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2149402501' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2033142661' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1715324174' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2900548446' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337664208' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46019 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140374564' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:14.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46843 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:14.718+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:14 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059518549' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3004068556' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:15 compute-0 nova_compute[256940]: 2025-10-02 13:34:15.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.37071 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.46004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1531075006' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2337664208' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.46019 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/140374564' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1477980080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.46843 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/133173445' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1059518549' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/731540033' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1353089157' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/278414120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3004068556' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37110 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:15 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:15.431+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290070505' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46882 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:15.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629866350' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46058 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:15.970+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:15 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183441616' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213820232' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37146 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.37110 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.46864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/290070505' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/966138210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4036943251' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2678827406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.46882 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2629866350' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.46058 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4183441616' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/888571476' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3869755304' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2540074245' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/213820232' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:16.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:16 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37158 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46106 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477531783' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:16 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37164 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:17 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:17.195+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46115 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810063346' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3713185548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.37146 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1298687505' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/977154715' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3242153421' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.37158 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.46106 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3477531783' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1460685059' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2460489288' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.37164 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/810063346' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3363334927' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2199224408' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:34.649297+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374317056 unmapped: 53067776 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4205845 data_alloc: 251658240 data_used: 47316992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:35.649442+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374317056 unmapped: 53067776 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a6405000/0x0/0x1bfc00000, data 0x3ff1342/0x41c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1562f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:36.649623+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374317056 unmapped: 53067776 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:37.649796+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374317056 unmapped: 53067776 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:38.649944+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.984443665s of 19.040981293s, submitted: 8
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 381730816 unmapped: 45654016 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:39.650068+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a474f000/0x0/0x1bfc00000, data 0x4b07342/0x4cdf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 382656512 unmapped: 44728320 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4304155 data_alloc: 251658240 data_used: 48361472
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:40.650224+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383827968 unmapped: 43556864 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:41.650362+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:42.650500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a4715000/0x0/0x1bfc00000, data 0x4b40342/0x4d18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:43.650681+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:44.650873+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a4715000/0x0/0x1bfc00000, data 0x4b40342/0x4d18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a4715000/0x0/0x1bfc00000, data 0x4b40342/0x4d18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4324555 data_alloc: 251658240 data_used: 48631808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:45.651039+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:46.651212+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 43540480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:47.651292+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06dde3400 session 0x55a0752483c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383320064 unmapped: 44064768 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:48.651431+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383320064 unmapped: 44064768 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:49.651566+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07049ac00 session 0x55a0706bef00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074afa000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a074afa000 session 0x55a06ec6d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383320064 unmapped: 44064768 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a071d3ec00 session 0x55a06dd55680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a071d3ec00 session 0x55a07015b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.342293739s of 11.659924507s, submitted: 107
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347349 data_alloc: 251658240 data_used: 48705536
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06dde3400 session 0x55a06de9a5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:50.651690+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07049ac00 session 0x55a06de9b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a070891c00 session 0x55a06f9e50e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074afa000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a074afa000 session 0x55a071dd1a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06dde3400 session 0x55a071dd1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a4464000/0x0/0x1bfc00000, data 0x4df1352/0x4fca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383320064 unmapped: 44064768 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:51.651885+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06fa1bc00 session 0x55a070c06780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383328256 unmapped: 44056576 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07049ac00 session 0x55a0708edc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a070891c00 session 0x55a075249e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a071d3ec00 session 0x55a07015b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:52.652018+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06dde3400 session 0x55a070b41860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06fa1bc00 session 0x55a0709ee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 52764672 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a071d3ec00 session 0x55a0709efc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:53.652225+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 52764672 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:54.652383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 52764672 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a070891c00 session 0x55a071dd05a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07375b800 session 0x55a06dd510e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06dde3400 session 0x55a0752485a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06fa1bc00 session 0x55a07085a1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4237311 data_alloc: 251658240 data_used: 36995072
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:55.652532+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a070891c00 session 0x55a071dd01e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 52756480 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a4928000/0x0/0x1bfc00000, data 0x492d352/0x4b06000, compress 0x0/0x0/0x0, omap 0x639, meta 0x167cf9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a071d3ec00 session 0x55a070def860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:56.652669+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374792192 unmapped: 52592640 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07375b800 session 0x55a070a2ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:57.652806+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07036d800 session 0x55a070ac2b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a075699000 session 0x55a070bf43c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374849536 unmapped: 52535296 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:58.652953+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a07049ac00 session 0x55a075248780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a06fa1bc00 session 0x55a0706bed20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374890496 unmapped: 52494336 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:04:59.653093+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 ms_handle_reset con 0x55a070891c00 session 0x55a070c07860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374898688 unmapped: 52486144 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 heartbeat osd_stat(store_statfs(0x1a5e37000/0x0/0x1bfc00000, data 0x445d3b4/0x4637000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4192273 data_alloc: 234881024 data_used: 34680832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:00.653333+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374898688 unmapped: 52486144 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.369946480s of 11.025991440s, submitted: 97
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 354 ms_handle_reset con 0x55a070891c00 session 0x55a07068d0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:01.653568+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 354 ms_handle_reset con 0x55a06fa1bc00 session 0x55a07068da40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 354 ms_handle_reset con 0x55a07036d800 session 0x55a06f9f4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 381911040 unmapped: 45473792 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 354 ms_handle_reset con 0x55a07049ac00 session 0x55a06dd872c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:02.653831+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 381919232 unmapped: 45465600 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 ms_handle_reset con 0x55a075699000 session 0x55a06d238960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:03.653945+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 heartbeat osd_stat(store_statfs(0x1a4947000/0x0/0x1bfc00000, data 0x5947d3f/0x5b26000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 382787584 unmapped: 44597248 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:04.654176+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 heartbeat osd_stat(store_statfs(0x1a4943000/0x0/0x1bfc00000, data 0x59499b4/0x5b29000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 43114496 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 ms_handle_reset con 0x55a070668c00 session 0x55a07087e3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4506377 data_alloc: 268435456 data_used: 55173120
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:05.654311+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 heartbeat osd_stat(store_statfs(0x1a4943000/0x0/0x1bfc00000, data 0x59499b4/0x5b29000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 46546944 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:06.654502+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 ms_handle_reset con 0x55a06fa1bc00 session 0x55a070a3d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380846080 unmapped: 46538752 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:07.654776+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380846080 unmapped: 46538752 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:08.654998+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380846080 unmapped: 46538752 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 47K writes, 190K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.93 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6725 writes, 25K keys, 6725 commit groups, 1.0 writes per commit group, ingest: 26.21 MB, 0.04 MB/s
                                           Interval WAL: 6725 writes, 2604 syncs, 2.58 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:09.655192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a075699000 session 0x55a06f9b1a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383025152 unmapped: 44359680 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:10.655336+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4571585 data_alloc: 268435456 data_used: 55164928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380542976 unmapped: 46841856 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.089349747s of 10.002647400s, submitted: 123
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a3fa0000/0x0/0x1bfc00000, data 0x5dba4f3/0x5f9b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:11.655459+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07036d800 session 0x55a070ba6f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 48775168 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:12.655672+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07049ac00 session 0x55a075249c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 52740096 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:13.655863+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 51355648 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:14.656090+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 379076608 unmapped: 48308224 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a071d3ec00 session 0x55a07087ef00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1a400 session 0x55a075249860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a4245000/0x0/0x1bfc00000, data 0x604b45e/0x6229000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:15.656246+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4474212 data_alloc: 251658240 data_used: 42131456
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374996992 unmapped: 52387840 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1bc00 session 0x55a070966780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:16.656413+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:17.656636+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a59ad000/0x0/0x1bfc00000, data 0x48e345e/0x4ac1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:18.656840+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a59ad000/0x0/0x1bfc00000, data 0x48e345e/0x4ac1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:19.656999+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:20.657371+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4204140 data_alloc: 234881024 data_used: 29642752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:21.657597+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a59ad000/0x0/0x1bfc00000, data 0x48e345e/0x4ac1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:22.657784+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07375b800 session 0x55a070e10000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.233114243s of 11.301034927s, submitted: 141
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06dde3400 session 0x55a0706be780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374734848 unmapped: 52649984 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:23.657938+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374743040 unmapped: 52641792 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06dde3400 session 0x55a0709efa40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:24.658089+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a668d000/0x0/0x1bfc00000, data 0x3c0444e/0x3de1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374751232 unmapped: 52633600 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:25.658350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082930 data_alloc: 234881024 data_used: 26685440
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374751232 unmapped: 52633600 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a66b1000/0x0/0x1bfc00000, data 0x3be044e/0x3dbd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:26.658521+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374751232 unmapped: 52633600 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:27.658652+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374751232 unmapped: 52633600 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:28.658804+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374759424 unmapped: 52625408 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:29.658964+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374759424 unmapped: 52625408 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:30.659135+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082930 data_alloc: 234881024 data_used: 26685440
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1a400 session 0x55a07015b860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374759424 unmapped: 52625408 heap: 427384832 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1bc00 session 0x55a06dd32780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a071d3ec00 session 0x55a06de9ab40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07375b800 session 0x55a06f9e50e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:31.659245+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07375b800 session 0x55a06de9b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a66b1000/0x0/0x1bfc00000, data 0x3be044e/0x3dbd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06dde3400 session 0x55a070ac2b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390668288 unmapped: 40697856 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1a400 session 0x55a070dee780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:32.659393+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06fa1bc00 session 0x55a06de9a1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390668288 unmapped: 40697856 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:33.659521+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3ec00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a071d3ec00 session 0x55a075248780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.381121635s of 11.616708755s, submitted: 44
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390668288 unmapped: 40697856 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a06dde3400 session 0x55a06f9f45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:34.659688+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390668288 unmapped: 40697856 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:35.659850+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4261002 data_alloc: 251658240 data_used: 40935424
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 ms_handle_reset con 0x55a07375b800 session 0x55a07087e5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385269760 unmapped: 46096384 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:36.660016+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a57fc000/0x0/0x1bfc00000, data 0x4a9445e/0x4c72000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385269760 unmapped: 46096384 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:37.660178+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385269760 unmapped: 46096384 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:38.660308+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385269760 unmapped: 46096384 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:39.660441+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385269760 unmapped: 46096384 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:40.660569+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4259024 data_alloc: 251658240 data_used: 40935424
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a57fc000/0x0/0x1bfc00000, data 0x4a9445e/0x4c72000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385277952 unmapped: 46088192 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:41.660751+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385277952 unmapped: 46088192 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:42.660927+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 385294336 unmapped: 46071808 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:43.661136+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a57fc000/0x0/0x1bfc00000, data 0x4a9445e/0x4c72000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.966307640s of 10.034378052s, submitted: 22
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 51634176 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a07036d800 session 0x55a06eb08d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:44.661677+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374800384 unmapped: 56565760 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:45.662151+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4109802 data_alloc: 234881024 data_used: 27271168
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374800384 unmapped: 56565760 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:46.662769+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374800384 unmapped: 56565760 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:47.663318+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a070709000 session 0x55a070b40780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a072d03000 session 0x55a06ec6cb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a072d03000 session 0x55a070bf5680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a06dde3400 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374800384 unmapped: 56565760 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:48.663460+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6ce0000/0x0/0x1bfc00000, data 0x35af0a9/0x378d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a07036d800 session 0x55a06fca4780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a070709000 session 0x55a071dd1c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a07375b800 session 0x55a06f9b12c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a06dde3400 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 ms_handle_reset con 0x55a07036d800 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 56311808 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:49.663600+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a070b40780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 56000512 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:50.663737+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4168927 data_alloc: 234881024 data_used: 27279360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070706800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 56000512 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:51.664071+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 56000512 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:52.664398+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6759000/0x0/0x1bfc00000, data 0x3b33bf8/0x3d14000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375242752 unmapped: 56123392 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:53.664587+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:54.664722+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:55.664855+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4171867 data_alloc: 234881024 data_used: 27701248
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:56.665060+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a675a000/0x0/0x1bfc00000, data 0x3b33bf8/0x3d14000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:57.665487+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:58.665834+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.725045204s of 14.943206787s, submitted: 67
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:59.666010+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db91000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07db91000 session 0x55a06e6aa3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375177216 unmapped: 56188928 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:00.666203+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a06de9a000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4172651 data_alloc: 234881024 data_used: 27697152
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6756000/0x0/0x1bfc00000, data 0x3b37bf8/0x3d18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1bc00 session 0x55a0707c52c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a07087eb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a072d03000 session 0x55a07087e5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070706800 session 0x55a06ec3bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375185408 unmapped: 56180736 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:01.666349+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375185408 unmapped: 56180736 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:02.666553+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde3400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06dde3400 session 0x55a06f9b1a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a070a3d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070706800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375341056 unmapped: 56025088 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:03.666744+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6732000/0x0/0x1bfc00000, data 0x3b5bbf8/0x3d3c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375275520 unmapped: 56090624 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:04.666981+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:05.667142+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4208048 data_alloc: 234881024 data_used: 31997952
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:06.667250+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:07.667432+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:08.667654+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:09.667797+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.353804588s of 10.453245163s, submitted: 18
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6732000/0x0/0x1bfc00000, data 0x3b5bbf8/0x3d3c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:10.667916+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4208340 data_alloc: 234881024 data_used: 32002048
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db91000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:11.668252+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:12.668498+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6732000/0x0/0x1bfc00000, data 0x3b5bbf8/0x3d3c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:13.668758+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:14.668975+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375291904 unmapped: 56074240 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:15.669192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4234956 data_alloc: 234881024 data_used: 32071680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:16.669353+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 54509568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:17.669481+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:18.669633+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 54509568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a63d0000/0x0/0x1bfc00000, data 0x3ebdbf8/0x409e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:19.669816+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 54509568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993698120s of 10.100827217s, submitted: 32
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:20.669956+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375242752 unmapped: 56123392 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4262722 data_alloc: 234881024 data_used: 32124928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a070b41860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07db91000 session 0x55a06ec6de00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:21.670091+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375242752 unmapped: 56123392 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:22.670246+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375250944 unmapped: 56115200 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a600b000/0x0/0x1bfc00000, data 0x4281bf8/0x4462000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:23.670422+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:24.670607+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:25.670749+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a600c000/0x0/0x1bfc00000, data 0x4281bf8/0x4462000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4270038 data_alloc: 234881024 data_used: 32239616
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:26.670943+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:27.671126+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:28.671285+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 54853632 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:29.671463+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376807424 unmapped: 54558720 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.300975800s of 10.068059921s, submitted: 40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a5d62000/0x0/0x1bfc00000, data 0x4525bf8/0x4706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:30.671607+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 54075392 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4296022 data_alloc: 234881024 data_used: 32358400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:31.671744+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377421824 unmapped: 53944320 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:32.671871+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377421824 unmapped: 53944320 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a5d3e000/0x0/0x1bfc00000, data 0x4540bf8/0x4721000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:33.672050+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377421824 unmapped: 53944320 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a075fe4c00 session 0x55a06f9e4000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a0707c4000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:34.672313+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377421824 unmapped: 53944320 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:35.672499+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 53026816 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4257640 data_alloc: 234881024 data_used: 32153600
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a06dd86000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:36.672615+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a06f9e54a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 53026816 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a06d562f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a075fe4c00 session 0x55a071dd1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6122000/0x0/0x1bfc00000, data 0x416abf8/0x434b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db91000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07db91000 session 0x55a070ac3a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a070def680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a07015b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a070def860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a578f000/0x0/0x1bfc00000, data 0x4afcc31/0x4cdf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a075fe4c00 session 0x55a06dd49e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709400 session 0x55a07085b860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:37.672752+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 52994048 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:38.672885+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 52994048 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a070ac3680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:39.673016+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 52994048 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.608272552s of 10.055900574s, submitted: 76
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a06dd554a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a075249860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:40.673221+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378437632 unmapped: 52928512 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4328706 data_alloc: 234881024 data_used: 32153600
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:41.673371+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378437632 unmapped: 52928512 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a5792000/0x0/0x1bfc00000, data 0x4afcb97/0x4cdc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a072d03000 session 0x55a0708ec000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07036d800 session 0x55a070e11860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:42.673514+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 52912128 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:43.673647+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376455168 unmapped: 54910976 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a06ec3b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:44.673803+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376455168 unmapped: 54910976 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6183000/0x0/0x1bfc00000, data 0x3d5fb87/0x3f3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:45.673935+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 59899904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4166539 data_alloc: 234881024 data_used: 24043520
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6530000/0x0/0x1bfc00000, data 0x3d5fb87/0x3f3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [0,2] op hist [0,1,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:46.674086+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371499008 unmapped: 59867136 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:47.675194+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371638272 unmapped: 59727872 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a06f9b12c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:48.675363+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371679232 unmapped: 59686912 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:49.675479+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371679232 unmapped: 59686912 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:50.675620+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371679232 unmapped: 59686912 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4168528 data_alloc: 234881024 data_used: 24047616
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a611f000/0x0/0x1bfc00000, data 0x3d5fbaa/0x3f3f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:51.675745+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371679232 unmapped: 59686912 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:52.675900+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371802112 unmapped: 59564032 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:53.676054+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371802112 unmapped: 59564032 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.198473930s of 13.882845879s, submitted: 412
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:54.676234+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a611f000/0x0/0x1bfc00000, data 0x3d5fbaa/0x3f3f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371802112 unmapped: 59564032 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:55.676376+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240352 data_alloc: 251658240 data_used: 34086912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:56.676536+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:57.676699+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:58.676845+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a611f000/0x0/0x1bfc00000, data 0x3d5fbaa/0x3f3f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a611f000/0x0/0x1bfc00000, data 0x3d5fbaa/0x3f3f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:59.676972+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:00.677148+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 59555840 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a611f000/0x0/0x1bfc00000, data 0x3d5fbaa/0x3f3f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240352 data_alloc: 251658240 data_used: 34086912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:01.677324+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371843072 unmapped: 59523072 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:02.677451+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 376471552 unmapped: 54894592 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a55cd000/0x0/0x1bfc00000, data 0x48b1baa/0x4a91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:03.677656+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375095296 unmapped: 56270848 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.268042564s of 10.500041962s, submitted: 84
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:04.677936+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:05.678169+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4335330 data_alloc: 251658240 data_used: 34488320
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:06.678357+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:07.678584+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:08.678763+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a55bb000/0x0/0x1bfc00000, data 0x48c3baa/0x4aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:09.679005+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:10.679279+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375119872 unmapped: 56246272 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4333710 data_alloc: 251658240 data_used: 34492416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07036d800 session 0x55a070def4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:11.680108+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 375128064 unmapped: 56238080 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a075248960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:12.680261+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6aa9000/0x0/0x1bfc00000, data 0x33d6b15/0x35b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:13.680452+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6aa6000/0x0/0x1bfc00000, data 0x33d9b15/0x35b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:14.680781+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:15.680910+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4107004 data_alloc: 234881024 data_used: 24043520
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:16.681092+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:17.681350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 60317696 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.813339233s of 14.006870270s, submitted: 49
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1bc00 session 0x55a06f9f4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070706800 session 0x55a06eb0b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:18.681476+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371056640 unmapped: 60309504 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:19.681603+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369639424 unmapped: 61726720 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a7762000/0x0/0x1bfc00000, data 0x2720b05/0x28fc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1bc00 session 0x55a07085a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:20.681782+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970658 data_alloc: 234881024 data_used: 19558400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:21.681964+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:22.682154+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a7786000/0x0/0x1bfc00000, data 0x26fcb05/0x28d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:23.682344+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:24.682519+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:25.682691+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970658 data_alloc: 234881024 data_used: 19558400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:26.682847+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 61718528 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:27.683033+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 369655808 unmapped: 61710336 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.026583672s of 10.133852005s, submitted: 42
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a070a26d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a7786000/0x0/0x1bfc00000, data 0x26fcb05/0x28d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:28.683179+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a07015b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:29.683311+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:30.683426+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a7786000/0x0/0x1bfc00000, data 0x26fcb05/0x28d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a07036d800 session 0x55a07085a5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3975000 data_alloc: 234881024 data_used: 23228416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a06dd6ab40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:31.683556+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:32.683695+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:33.683825+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a7785000/0x0/0x1bfc00000, data 0x26fcb05/0x28d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370712576 unmapped: 60653568 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:34.684011+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a06dd47860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371761152 unmapped: 59604992 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:35.684175+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371908608 unmapped: 59457536 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1bc00 session 0x55a06de9b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4026662 data_alloc: 234881024 data_used: 23228416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070706800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:36.684308+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070706800 session 0x55a06d562780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 59572224 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a070e103c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070709000 session 0x55a070defe00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:37.684485+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371777536 unmapped: 59588608 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:38.684627+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371777536 unmapped: 59588608 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0d000/0x0/0x1bfc00000, data 0x3475b05/0x3651000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0d000/0x0/0x1bfc00000, data 0x3475b05/0x3651000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:39.684754+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371785728 unmapped: 59580416 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a071dd0780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:40.684920+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371785728 unmapped: 59580416 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1a400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1a400 session 0x55a070a26f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079905 data_alloc: 234881024 data_used: 23228416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:41.685015+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06fa1bc00 session 0x55a070e11a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070706800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.605986595s of 13.453388214s, submitted: 128
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 59572224 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070706800 session 0x55a070ba6b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:42.685192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 59572224 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0c000/0x0/0x1bfc00000, data 0x3475b28/0x3652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:43.685325+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371761152 unmapped: 59604992 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0c000/0x0/0x1bfc00000, data 0x3475b28/0x3652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:44.685593+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:45.685770+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4127124 data_alloc: 234881024 data_used: 29687808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0c000/0x0/0x1bfc00000, data 0x3475b28/0x3652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:46.686006+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:47.686208+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070e14c00 session 0x55a070ac2b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:48.686404+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0c000/0x0/0x1bfc00000, data 0x3475b28/0x3652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a0708eb400 session 0x55a06debbc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:49.686595+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a6a0c000/0x0/0x1bfc00000, data 0x3475b28/0x3652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a075699400 session 0x55a075248b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:50.686747+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06eabcc00 session 0x55a070a27e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 60923904 heap: 431366144 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f80800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a072f80800 session 0x55a06de9bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f80800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4128853 data_alloc: 234881024 data_used: 29691904
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06eabcc00 session 0x55a075248780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a0708eb400 session 0x55a070e11860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a070e14c00 session 0x55a06de9a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:51.686876+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a075699400 session 0x55a070a3d4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 370753536 unmapped: 64282624 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:52.687025+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371941376 unmapped: 63094784 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:53.687195+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 371941376 unmapped: 63094784 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a5e10000/0x0/0x1bfc00000, data 0x4071b28/0x424e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:54.687640+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447444916s of 12.893205643s, submitted: 24
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 373219328 unmapped: 61816832 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:55.687907+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 60604416 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4323297 data_alloc: 251658240 data_used: 33107968
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:56.688210+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374849536 unmapped: 60186624 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:57.688324+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374849536 unmapped: 60186624 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:58.688648+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374046720 unmapped: 60989440 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:59.688787+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a568c000/0x0/0x1bfc00000, data 0x47f5b28/0x49d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,1,0,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374472704 unmapped: 60563456 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:00.688966+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 60547072 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4327009 data_alloc: 251658240 data_used: 33402880
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:01.689127+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 60547072 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:02.689263+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 60547072 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06de08400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:03.689467+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 55369728 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a502b000/0x0/0x1bfc00000, data 0x4e56b28/0x5033000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,21,15])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:04.689686+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.688902855s of 10.145482063s, submitted: 102
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377389056 unmapped: 57647104 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:05.689851+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 57180160 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06de08400 session 0x55a0707c5c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4385140 data_alloc: 251658240 data_used: 33615872
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:06.689966+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a4fcd000/0x0/0x1bfc00000, data 0x4eaeb28/0x508b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 57180160 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06de08400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:07.690201+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 381739008 unmapped: 53297152 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a4fc6000/0x0/0x1bfc00000, data 0x4ebbb28/0x5098000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:08.690474+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a0708eb400 session 0x55a071dd12c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 57065472 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:09.690590+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a4c09000/0x0/0x1bfc00000, data 0x5276b28/0x5453000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380379136 unmapped: 54657024 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 ms_handle_reset con 0x55a06ddeb000 session 0x55a07085be00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:10.690747+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380379136 unmapped: 54657024 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4478586 data_alloc: 251658240 data_used: 42147840
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:11.690914+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 54591488 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:12.691080+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 380452864 unmapped: 54583296 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:13.691341+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 386834432 unmapped: 48201728 heap: 435036160 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 360 heartbeat osd_stat(store_statfs(0x1a4bf7000/0x0/0x1bfc00000, data 0x5288781/0x5466000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,3])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:14.691595+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.481998682s of 10.008502960s, submitted: 63
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 49881088 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:15.691848+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 49872896 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 360 ms_handle_reset con 0x55a070e14c00 session 0x55a07068c3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651982 data_alloc: 251658240 data_used: 48521216
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:16.692028+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 49872896 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:17.692239+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 360 heartbeat osd_stat(store_statfs(0x1a3956000/0x0/0x1bfc00000, data 0x652a781/0x6708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 49872896 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 361 ms_handle_reset con 0x55a075699400 session 0x55a0706be000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:18.692372+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 387612672 unmapped: 56549376 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:19.692516+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 391995392 unmapped: 52166656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:20.692660+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390373376 unmapped: 53788672 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706044 data_alloc: 268435456 data_used: 52338688
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:21.692828+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390373376 unmapped: 53788672 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 362 heartbeat osd_stat(store_statfs(0x1a32c9000/0x0/0x1bfc00000, data 0x6bac0a3/0x6d8c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,13])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:22.692989+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 52985856 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:23.693210+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392224768 unmapped: 51937280 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:24.693460+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 391651328 unmapped: 52510720 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.403481960s of 10.258595467s, submitted: 56
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:25.693611+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 391667712 unmapped: 52494336 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4727002 data_alloc: 268435456 data_used: 52412416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:26.693799+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 391667712 unmapped: 52494336 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a32a7000/0x0/0x1bfc00000, data 0x6bd3be2/0x6db5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:27.693956+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe5400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a075fe5400 session 0x55a070a27e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06ddeb000 session 0x55a075248b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a0708eb400 session 0x55a06debbc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a070e14c00 session 0x55a070ac2b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 51511296 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:28.694283+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a075699400 session 0x55a070ba6b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a071d3fc00 session 0x55a070e103c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a071d3fc00 session 0x55a06de9b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06ddeb000 session 0x55a06dd47860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a0708eb400 session 0x55a06dd6ab40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 57016320 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a072d03400 session 0x55a06e6aa1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:29.694423+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 387153920 unmapped: 57008128 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:30.694559+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383705088 unmapped: 60456960 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4654351 data_alloc: 251658240 data_used: 37855232
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:31.694697+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a34c2000/0x0/0x1bfc00000, data 0x69babcf/0x6b9c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383713280 unmapped: 60448768 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:32.694835+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383713280 unmapped: 60448768 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:33.694965+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383721472 unmapped: 60440576 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06de08400 session 0x55a0709ee5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06eabcc00 session 0x55a06dd55680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:34.695119+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06ddeb000 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383827968 unmapped: 60334080 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:35.695288+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.322427750s of 10.595580101s, submitted: 109
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a071d3fc00 session 0x55a071dd1860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383844352 unmapped: 60317696 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4450246 data_alloc: 234881024 data_used: 27746304
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:36.695426+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a070e14c00 session 0x55a071dd05a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 383852544 unmapped: 60309504 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a3ecf000/0x0/0x1bfc00000, data 0x5747bf2/0x592a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a075699400 session 0x55a06f9e54a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:37.695576+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06ddeb000 session 0x55a07087f680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401350656 unmapped: 42811392 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a06eabcc00 session 0x55a06d1d6f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:38.695717+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401350656 unmapped: 42811392 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x6590bf2/0x6773000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:39.695855+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a070e14c00 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 ms_handle_reset con 0x55a071d3fc00 session 0x55a071dd1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401350656 unmapped: 42811392 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:40.695986+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 42803200 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4667197 data_alloc: 251658240 data_used: 50122752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:41.696149+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 42803200 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:42.696318+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 42803200 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074fdf800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:43.696540+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a38ea000/0x0/0x1bfc00000, data 0x6590c15/0x6774000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 42803200 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:44.696705+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 364 ms_handle_reset con 0x55a074fdf800 session 0x55a06da481e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390496256 unmapped: 53665792 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:45.696894+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.284109116s of 10.195640564s, submitted: 65
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 364 ms_handle_reset con 0x55a06ddeb000 session 0x55a070bf5e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390496256 unmapped: 53665792 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 364 ms_handle_reset con 0x55a06eabcc00 session 0x55a070bf4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4473049 data_alloc: 251658240 data_used: 33062912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:46.697204+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a4b87000/0x0/0x1bfc00000, data 0x52f08d2/0x54d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 390717440 unmapped: 53444608 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:47.697366+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 52068352 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:48.697764+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 52060160 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:49.697892+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393371648 unmapped: 50790400 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:50.698099+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 50987008 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4516479 data_alloc: 251658240 data_used: 33398784
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:51.698262+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 50987008 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x56e9411/0x58d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:52.698404+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 50987008 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:53.698587+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070e14c00 session 0x55a0706be3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393183232 unmapped: 50978816 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:54.698742+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 50970624 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a071d3fc00 session 0x55a06de9b4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:55.698901+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a476c000/0x0/0x1bfc00000, data 0x570b411/0x58f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4535815 data_alloc: 251658240 data_used: 35319808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:56.699368+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a476c000/0x0/0x1bfc00000, data 0x570b411/0x58f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:57.700568+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a476c000/0x0/0x1bfc00000, data 0x570b411/0x58f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:58.700702+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:59.700992+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.802832603s of 13.976032257s, submitted: 89
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393453568 unmapped: 50708480 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:00.701753+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393453568 unmapped: 50708480 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537677 data_alloc: 234881024 data_used: 35315712
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:01.702318+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074fdf800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a074fdf800 session 0x55a06dd50960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddeb000 session 0x55a06dd49e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06eabcc00 session 0x55a06dd6ad20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070e14c00 session 0x55a070e114a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a071d3fc00 session 0x55a06dd554a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4767000/0x0/0x1bfc00000, data 0x5710411/0x58f7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394092544 unmapped: 50069504 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:02.702497+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a06dd70d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394027008 unmapped: 50135040 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:03.702674+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a0706d9800 session 0x55a0708ec000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394027008 unmapped: 50135040 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:04.702910+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddeb000 session 0x55a06dd554a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394043392 unmapped: 50118656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:05.703092+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394043392 unmapped: 50118656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461541 data_alloc: 234881024 data_used: 32514048
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:06.703469+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a0706be3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394043392 unmapped: 50118656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4c65000/0x0/0x1bfc00000, data 0x4f063ee/0x50ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:07.703789+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06eabcc00 session 0x55a070bf5e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394043392 unmapped: 50118656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:08.704241+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e14c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070e14c00 session 0x55a071dd1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394043392 unmapped: 50118656 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:09.704491+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddeb000 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4f6d000/0x0/0x1bfc00000, data 0x4f09421/0x50f1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a06dd44b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394215424 unmapped: 49946624 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.828692436s of 10.493861198s, submitted: 58
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:10.704618+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394215424 unmapped: 49946624 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4f49000/0x0/0x1bfc00000, data 0x4f2d421/0x5115000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476131 data_alloc: 234881024 data_used: 33312768
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:11.704757+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394403840 unmapped: 49758208 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:12.704870+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06dce2800 session 0x55a06d562f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 49528832 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:13.704980+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a0706f0c00 session 0x55a070e01680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392937472 unmapped: 51224576 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:14.705133+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4f49000/0x0/0x1bfc00000, data 0x4f2d421/0x5115000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070845800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070845800 session 0x55a070dee000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392953856 unmapped: 51208192 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:15.705269+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392953856 unmapped: 51208192 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:16.705402+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4516409 data_alloc: 234881024 data_used: 38776832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 392953856 unmapped: 51208192 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:17.705523+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06dce2800 session 0x55a071dd0b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400269312 unmapped: 43892736 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:18.705652+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddeb000 session 0x55a07085ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a070def680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 50970624 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:19.705787+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a0706f0c00 session 0x55a075249e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:20.705996+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a45ce000/0x0/0x1bfc00000, data 0x58a8421/0x5a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:21.706161+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4595527 data_alloc: 234881024 data_used: 38776832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 393199616 unmapped: 50962432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.943366051s of 12.172684669s, submitted: 41
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:22.706330+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070d6d000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070d6d000 session 0x55a06dd44780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a4206000/0x0/0x1bfc00000, data 0x5c70421/0x5e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,2,0,4])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06dce2800 session 0x55a07068c780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 399409152 unmapped: 44752896 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:23.706496+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddeb000 session 0x55a07085bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 44744704 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:24.706690+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a07085a960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 399810560 unmapped: 44351488 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:25.706887+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a375b000/0x0/0x1bfc00000, data 0x6712431/0x68fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 44130304 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:26.707075+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4725538 data_alloc: 234881024 data_used: 40144896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400236544 unmapped: 43925504 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:27.707201+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400236544 unmapped: 43925504 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a375b000/0x0/0x1bfc00000, data 0x6712431/0x68fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:28.707349+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a375b000/0x0/0x1bfc00000, data 0x6712431/0x68fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400236544 unmapped: 43925504 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:29.707563+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 400367616 unmapped: 43794432 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a07049b800 session 0x55a06d1d7680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:30.707728+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a375b000/0x0/0x1bfc00000, data 0x6712431/0x68fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06dde1c00 session 0x55a071dd0f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 40378368 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:31.707868+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4788275 data_alloc: 251658240 data_used: 48852992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 40378368 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:32.708024+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 40378368 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:33.708225+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 40378368 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:34.708427+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 40378368 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.163222313s of 12.874962807s, submitted: 163
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:35.708609+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 40353792 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a373e000/0x0/0x1bfc00000, data 0x6736441/0x6920000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:36.708856+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4789638 data_alloc: 251658240 data_used: 48869376
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408985600 unmapped: 35176448 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:37.708996+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 404832256 unmapped: 39329792 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:38.709176+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x707f441/0x7269000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405987328 unmapped: 38174720 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:39.709302+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06ddebc00 session 0x55a075248000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406372352 unmapped: 37789696 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:40.709457+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409960448 unmapped: 34201600 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:41.709592+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004648 data_alloc: 251658240 data_used: 49745920
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410083328 unmapped: 34078720 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:42.709732+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409829376 unmapped: 34332672 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a1dba000/0x0/0x1bfc00000, data 0x80b2441/0x829c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:43.709953+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a07049b800 session 0x55a07085af00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409837568 unmapped: 34324480 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a075fe4800 session 0x55a070e103c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:44.710194+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a07375ac00 session 0x55a06de9a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409837568 unmapped: 34324480 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e15800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.550899506s of 10.028506279s, submitted: 193
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a070e15800 session 0x55a070a27e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:45.710403+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06eabd000 session 0x55a06dd50960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a0706d9800 session 0x55a0752492c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 ms_handle_reset con 0x55a06dce2800 session 0x55a06f9e50e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e15800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 34291712 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:46.710590+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5013019 data_alloc: 251658240 data_used: 50180096
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a1d9c000/0x0/0x1bfc00000, data 0x80d7451/0x82c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 34291712 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:47.710752+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409878528 unmapped: 34283520 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:48.710905+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 heartbeat osd_stat(store_statfs(0x1a1d9c000/0x0/0x1bfc00000, data 0x80d7451/0x82c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 34275328 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:49.711175+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411967488 unmapped: 32194560 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:50.711340+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 31039488 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:51.711509+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5053071 data_alloc: 251658240 data_used: 54190080
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 ms_handle_reset con 0x55a07049b800 session 0x55a070bf45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 30998528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 ms_handle_reset con 0x55a07375ac00 session 0x55a0706bef00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:52.711669+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 heartbeat osd_stat(store_statfs(0x1a1d98000/0x0/0x1bfc00000, data 0x80db451/0x82c6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 ms_handle_reset con 0x55a06dce2800 session 0x55a0709665a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 ms_handle_reset con 0x55a07036b800 session 0x55a06ec3b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413171712 unmapped: 30990336 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:53.711911+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413171712 unmapped: 30990336 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:54.712146+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:55.712311+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.444748878s of 10.629464149s, submitted: 49
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:56.712445+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5053097 data_alloc: 251658240 data_used: 54296576
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 heartbeat osd_stat(store_statfs(0x1a1d94000/0x0/0x1bfc00000, data 0x80dd10c/0x82ca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:57.712576+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 heartbeat osd_stat(store_statfs(0x1a1d90000/0x0/0x1bfc00000, data 0x80e110c/0x82ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:58.712693+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:59.712825+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 30261248 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:00.713025+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 29392896 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:01.713235+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5072963 data_alloc: 251658240 data_used: 54292480
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 heartbeat osd_stat(store_statfs(0x1a1b38000/0x0/0x1bfc00000, data 0x833110c/0x851e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15b9f9c7), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 29229056 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:02.713451+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 29581312 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:03.713614+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 ms_handle_reset con 0x55a075fe4800 session 0x55a070ba85a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 ms_handle_reset con 0x55a07375ac00 session 0x55a070dee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 heartbeat osd_stat(store_statfs(0x1a06b5000/0x0/0x1bfc00000, data 0x861ad57/0x8808000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 29573120 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:04.713810+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415694848 unmapped: 28467200 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:05.713941+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 heartbeat osd_stat(store_statfs(0x1a0687000/0x0/0x1bfc00000, data 0x8649d57/0x8837000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.905332565s of 10.024168015s, submitted: 104
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415760384 unmapped: 28401664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:06.714088+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5122727 data_alloc: 251658240 data_used: 54984704
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415760384 unmapped: 28401664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:07.714258+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415760384 unmapped: 28401664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:08.714413+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415760384 unmapped: 28401664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:09.714545+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 ms_handle_reset con 0x55a06ddebc00 session 0x55a0709ef860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 ms_handle_reset con 0x55a070e15800 session 0x55a06da49e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415760384 unmapped: 28401664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:10.714664+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 32735232 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:11.714791+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5011019 data_alloc: 251658240 data_used: 49532928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a0fe7000/0x0/0x1bfc00000, data 0x7c30896/0x7e1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 32735232 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:12.714944+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406904832 unmapped: 37257216 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:13.715087+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a06ddebc00 session 0x55a0709665a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406904832 unmapped: 37257216 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:14.715314+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a10c4000/0x0/0x1bfc00000, data 0x7c0c886/0x7dfa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406913024 unmapped: 37249024 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:15.715446+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 37150720 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:16.715569+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5000618 data_alloc: 251658240 data_used: 51716096
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a071d3fc00 session 0x55a071dd1860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a06eabcc00 session 0x55a06f9e54a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 37150720 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:17.715681+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.558176041s of 11.921883583s, submitted: 45
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407019520 unmapped: 37142528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:18.715799+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a07036b800 session 0x55a070a2b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:19.715904+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18d3000/0x0/0x1bfc00000, data 0x73fd886/0x75eb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f8000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:20.716016+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f8000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:21.716158+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913039 data_alloc: 251658240 data_used: 49455104
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:22.716292+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:23.716435+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a075699000 session 0x55a070ac30e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a0706f0c00 session 0x55a070a3d4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 ms_handle_reset con 0x55a06ddeb000 session 0x55a070a26780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:24.716588+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:25.716714+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f8000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:26.716861+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 38248448 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914791 data_alloc: 251658240 data_used: 49500160
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f9000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:27.716993+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:28.717274+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:29.717418+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f9000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:30.717539+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:31.717721+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4920247 data_alloc: 251658240 data_used: 50208768
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.121734619s of 14.377829552s, submitted: 50
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:32.717868+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 38363136 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f9000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:33.718046+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 38289408 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a18f9000/0x0/0x1bfc00000, data 0x73d9853/0x75c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 ms_handle_reset con 0x55a06ddebc00 session 0x55a070967a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:34.718275+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405880832 unmapped: 38281216 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:35.718421+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 ms_handle_reset con 0x55a06eabcc00 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405880832 unmapped: 38281216 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:36.718558+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 38264832 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 heartbeat osd_stat(store_statfs(0x1a18f4000/0x0/0x1bfc00000, data 0x73db50e/0x75c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926733 data_alloc: 251658240 data_used: 50216960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 ms_handle_reset con 0x55a06ddeb000 session 0x55a07085b860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:37.718728+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 38264832 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:38.718886+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 38264832 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:39.719028+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 heartbeat osd_stat(store_statfs(0x1a18f5000/0x0/0x1bfc00000, data 0x73db50e/0x75c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:40.719270+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:41.719454+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925617 data_alloc: 251658240 data_used: 50245632
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:42.719584+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:43.719718+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:44.719936+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:45.724199+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 heartbeat osd_stat(store_statfs(0x1a18f5000/0x0/0x1bfc00000, data 0x73db50e/0x75c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.777255058s of 14.080558777s, submitted: 10
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:46.724352+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e15800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925623 data_alloc: 251658240 data_used: 50241536
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 heartbeat osd_stat(store_statfs(0x1a18f6000/0x0/0x1bfc00000, data 0x73db4ac/0x75c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:47.724519+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 38240256 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 heartbeat osd_stat(store_statfs(0x1a18f6000/0x0/0x1bfc00000, data 0x73db4ac/0x75c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:48.724690+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405946368 unmapped: 38215680 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 369 ms_handle_reset con 0x55a070e15800 session 0x55a070a2a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:49.724886+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405962752 unmapped: 38199296 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:50.725040+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405962752 unmapped: 38199296 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 heartbeat osd_stat(store_statfs(0x1a18f0000/0x0/0x1bfc00000, data 0x73dd159/0x75cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:51.725148+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 38084608 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944249 data_alloc: 251658240 data_used: 52879360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a06dce2800 session 0x55a06dd44780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:52.725314+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 38084608 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a07375ac00 session 0x55a070e003c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a075fe4800 session 0x55a070e11a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a06dce2800 session 0x55a06da48f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a06ddeb000 session 0x55a071dd0780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e15800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:53.725440+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 38084608 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:54.725610+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410255360 unmapped: 33906688 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 ms_handle_reset con 0x55a070e15800 session 0x55a070dee5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:55.725778+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 37511168 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 heartbeat osd_stat(store_statfs(0x1a1500000/0x0/0x1bfc00000, data 0x77d0159/0x79be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.387188911s of 10.036653519s, submitted: 41
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a071d3fc00 session 0x55a07085b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:56.725907+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 37494784 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4983115 data_alloc: 251658240 data_used: 52891648
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:57.726041+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 37494784 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:58.726170+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 37494784 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:59.726406+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 37494784 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06eabd000 session 0x55a070ba8780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:00.734902+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 37494784 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06eabd000 session 0x55a07068cd20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:01.735037+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406675456 unmapped: 37486592 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a14f8000/0x0/0x1bfc00000, data 0x77d5c98/0x79c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4769836 data_alloc: 234881024 data_used: 41975808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06dce2800 session 0x55a070a3cf00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:02.735206+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddebc00 session 0x55a070ba61e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:03.735374+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:04.735628+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:05.735782+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddeb000 session 0x55a07087e5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a27cb000/0x0/0x1bfc00000, data 0x6503c98/0x66f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:06.735965+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770094 data_alloc: 234881024 data_used: 41971712
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:07.736203+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a27ca000/0x0/0x1bfc00000, data 0x6503cbb/0x66f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 43089920 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070e15800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a071d3fc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.072452545s of 11.793210983s, submitted: 35
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a071d3fc00 session 0x55a06dd6d4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:08.736350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 42393600 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06dce2800 session 0x55a07068dc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:09.736539+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:10.736711+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:11.736900+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4800728 data_alloc: 251658240 data_used: 45936640
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddeb000 session 0x55a06eb08960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:12.737020+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddebc00 session 0x55a07087f680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a27c3000/0x0/0x1bfc00000, data 0x6509cbb/0x66fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:13.737184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:14.737345+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:15.737511+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 42262528 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a27c3000/0x0/0x1bfc00000, data 0x6509cbb/0x66fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:16.737721+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 42246144 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4801237 data_alloc: 251658240 data_used: 45936640
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:17.737906+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 42246144 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.438075066s of 10.077000618s, submitted: 32
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:18.738034+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401997824 unmapped: 42164224 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:19.738210+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406544384 unmapped: 37617664 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:20.738392+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405430272 unmapped: 38731776 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a22c5000/0x0/0x1bfc00000, data 0x6a00cbb/0x6bf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x16d3f9c7), peers [0,2] op hist [0,0,0,0,0,1,0,1,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:21.738556+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06eabd000 session 0x55a070e103c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 404348928 unmapped: 39813120 heap: 444162048 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4857842 data_alloc: 251658240 data_used: 45928448
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375ac00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:22.738754+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411664384 unmapped: 36700160 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a07375ac00 session 0x55a07087eb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06dce2800 session 0x55a07068c1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddeb000 session 0x55a070ac3680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:23.738937+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405970944 unmapped: 42393600 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee7000/0x0/0x1bfc00000, data 0x7c3ecbb/0x7e2f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:24.739139+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 42213376 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:25.739290+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee3000/0x0/0x1bfc00000, data 0x7c4acbb/0x7e3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:26.739461+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4987557 data_alloc: 251658240 data_used: 46735360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:27.739628+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:28.739741+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:29.739887+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.071774483s of 11.738485336s, submitted: 178
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee3000/0x0/0x1bfc00000, data 0x7c4acbb/0x7e3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:30.740054+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee3000/0x0/0x1bfc00000, data 0x7c4acbb/0x7e3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:31.740152+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4987573 data_alloc: 251658240 data_used: 46735360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:32.740251+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee3000/0x0/0x1bfc00000, data 0x7c4acbb/0x7e3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:33.740394+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:34.740608+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 42205184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:35.740770+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406167552 unmapped: 42196992 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:36.740916+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19fee3000/0x0/0x1bfc00000, data 0x7c4acbb/0x7e3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406323200 unmapped: 42041344 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 ms_handle_reset con 0x55a06ddebc00 session 0x55a06dd55680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988505 data_alloc: 251658240 data_used: 46735360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:37.741067+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 406323200 unmapped: 42041344 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:38.741181+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 37986304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:39.741394+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 31899648 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 heartbeat osd_stat(store_statfs(0x19febf000/0x0/0x1bfc00000, data 0x7c6ecbb/0x7e5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:40.741510+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 31899648 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.120629311s of 11.135842323s, submitted: 3
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:41.741664+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 31899648 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07084bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5105249 data_alloc: 268435456 data_used: 63004672
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 372 ms_handle_reset con 0x55a072f36000 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:42.741865+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eb97400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416514048 unmapped: 31850496 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a06eb97400 session 0x55a070a2b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a07084bc00 session 0x55a0707c5860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:43.742076+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a06dce2800 session 0x55a070dee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417038336 unmapped: 31326208 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:44.742303+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a06eabd000 session 0x55a06f9f45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a0708dd400 session 0x55a0752481e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417038336 unmapped: 31326208 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:45.742470+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddeb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 31293440 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 heartbeat osd_stat(store_statfs(0x19f47b000/0x0/0x1bfc00000, data 0x86ab5fd/0x88a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:46.742598+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 heartbeat osd_stat(store_statfs(0x19f47b000/0x0/0x1bfc00000, data 0x86ab5fd/0x88a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417136640 unmapped: 31227904 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5199914 data_alloc: 268435456 data_used: 63008768
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a06ddeb000 session 0x55a070a2be00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:47.742784+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417144832 unmapped: 31219712 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:48.742949+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417144832 unmapped: 31219712 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:49.743266+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417144832 unmapped: 31219712 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:50.743416+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 ms_handle_reset con 0x55a06dce2800 session 0x55a06dd48f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 heartbeat osd_stat(store_statfs(0x19f4a1000/0x0/0x1bfc00000, data 0x86875fd/0x887d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417161216 unmapped: 31203328 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 heartbeat osd_stat(store_statfs(0x19f4a1000/0x0/0x1bfc00000, data 0x86875fd/0x887d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:51.743547+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.032499313s of 10.363330841s, submitted: 54
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 374 ms_handle_reset con 0x55a06eabd000 session 0x55a07087f0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07084bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 374 ms_handle_reset con 0x55a07084bc00 session 0x55a07068cb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 30138368 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5200796 data_alloc: 268435456 data_used: 63016960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 374 ms_handle_reset con 0x55a0708dd400 session 0x55a070a26d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:52.743663+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 33595392 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddebc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 374 heartbeat osd_stat(store_statfs(0x1a0041000/0x0/0x1bfc00000, data 0x76d52aa/0x78cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:53.743767+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 33595392 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 375 ms_handle_reset con 0x55a06ddebc00 session 0x55a06ec3bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:54.743959+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 375 ms_handle_reset con 0x55a06dce2800 session 0x55a06ec6cf00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:55.744194+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:56.744350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971127 data_alloc: 251658240 data_used: 46751744
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:57.744475+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07084bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:58.744668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a0042000/0x0/0x1bfc00000, data 0x76d6f35/0x78cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 33456128 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:59.744834+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 33431552 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:00.744991+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 33431552 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:01.745141+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.554318428s of 10.063819885s, submitted: 234
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 375 ms_handle_reset con 0x55a0708dd400 session 0x55a07015a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414973952 unmapped: 33390592 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052251 data_alloc: 251658240 data_used: 54996992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:02.745347+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414973952 unmapped: 33390592 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a003d000/0x0/0x1bfc00000, data 0x76d8b0e/0x78d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:03.745515+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414990336 unmapped: 33374208 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:04.745655+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414990336 unmapped: 33374208 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:05.745761+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 33366016 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:06.745899+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a003e000/0x0/0x1bfc00000, data 0x76d8b0e/0x78d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 33349632 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5051195 data_alloc: 251658240 data_used: 54996992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:07.746040+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415023104 unmapped: 33341440 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:08.746175+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415023104 unmapped: 33341440 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06eabd000 session 0x55a06dd874a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:09.746297+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415023104 unmapped: 33341440 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a003d000/0x0/0x1bfc00000, data 0x76d8b37/0x78d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:10.746427+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 33013760 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:11.746612+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 32989184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137408 data_alloc: 251658240 data_used: 55001088
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a075699000 session 0x55a07068d0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:12.746848+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.630032539s of 10.896110535s, submitted: 99
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706f0c00 session 0x55a070e10000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x81c0b37/0x83b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 32989184 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:13.747017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 28762112 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a072f36000 session 0x55a070bf41e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:14.747379+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 28663808 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06eabd000 session 0x55a06dd52780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:15.747547+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 30646272 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:16.747652+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 30318592 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5064959 data_alloc: 251658240 data_used: 51191808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06dce2800 session 0x55a06f7863c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:17.747821+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706f0c00 session 0x55a070ac2b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 30318592 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:18.747955+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x19faf9000/0x0/0x1bfc00000, data 0x7c0ab60/0x7e02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 30318592 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:19.748124+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x19faf9000/0x0/0x1bfc00000, data 0x7c0ab60/0x7e02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418054144 unmapped: 30310400 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:20.748670+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706d9800 session 0x55a070ba8000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a07049b800 session 0x55a06f786d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 29908992 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:21.748798+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419586048 unmapped: 28778496 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5144156 data_alloc: 268435456 data_used: 62226432
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:22.749007+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.964372635s of 10.025998116s, submitted: 135
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 28770304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:23.749217+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 28770304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a07084bc00 session 0x55a070e114a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:24.749462+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 28770304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:25.749606+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a07e1000/0x0/0x1bfc00000, data 0x6f35b60/0x712d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 28770304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706d9800 session 0x55a070966000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:26.749755+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 28770304 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5008807 data_alloc: 251658240 data_used: 56999936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:27.749964+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 32530432 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:28.750135+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 32530432 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:29.750332+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 32530432 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:30.750500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 32530432 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:31.750671+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06dce2800 session 0x55a06fca61e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a174b000/0x0/0x1bfc00000, data 0x5f65afe/0x615c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06eabd000 session 0x55a070ba41e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417062912 unmapped: 31301632 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4840729 data_alloc: 251658240 data_used: 45948928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:32.750804+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.021047592s of 10.046791077s, submitted: 79
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 30089216 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a13c4000/0x0/0x1bfc00000, data 0x6353afe/0x654a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:33.750954+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419282944 unmapped: 29081600 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:34.751180+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 29007872 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06dce2800 session 0x55a070ac3e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:35.751379+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0708eb400 session 0x55a070e112c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a072d03400 session 0x55a06dd47a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a070e15800 session 0x55a06dd494a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 29007872 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:36.751525+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049b800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 28418048 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4888173 data_alloc: 251658240 data_used: 46342144
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:37.751646+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a0fdf000/0x0/0x1bfc00000, data 0x6731aee/0x6927000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 28409856 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:38.751768+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 33521664 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:39.751898+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a2532000/0x0/0x1bfc00000, data 0x51e7ade/0x53dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:40.752062+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a07049b800 session 0x55a06de9a5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706d9800 session 0x55a070ac21e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:41.752178+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4645151 data_alloc: 234881024 data_used: 36757504
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:42.752434+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a255c000/0x0/0x1bfc00000, data 0x51bda98/0x53b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.512210846s of 10.207167625s, submitted: 206
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:43.752559+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:44.752729+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:45.752868+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:46.752957+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a070709000 session 0x55a07068c960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 33579008 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a253a000/0x0/0x1bfc00000, data 0x51e1a98/0x53d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646563 data_alloc: 234881024 data_used: 36757504
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:47.753076+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a253a000/0x0/0x1bfc00000, data 0x51e1a98/0x53d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:48.753178+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:49.753342+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06dce2800 session 0x55a06dd474a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a2f3e000/0x0/0x1bfc00000, data 0x47dda98/0x49d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:50.753465+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:51.753574+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541259 data_alloc: 234881024 data_used: 34967552
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:52.753730+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:53.753925+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.629728317s of 10.762213707s, submitted: 12
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:54.754163+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a2f3b000/0x0/0x1bfc00000, data 0x47e0a98/0x49d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:55.754307+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c43c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a072f80800 session 0x55a075248d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 33562624 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:56.754449+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f80800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a072f80800 session 0x55a070bf5a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 33538048 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540767 data_alloc: 234881024 data_used: 34975744
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:57.754592+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 33538048 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:58.754717+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 33538048 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:59.754865+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06dce2800 session 0x55a070e012c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a2f3b000/0x0/0x1bfc00000, data 0x47e0a98/0x49d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 33521664 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:00.754985+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a06fabb000 session 0x55a07085a5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 33447936 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:01.755179+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 ms_handle_reset con 0x55a0706d9800 session 0x55a06f9f4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 33447936 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4426764 data_alloc: 234881024 data_used: 34332672
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:02.755333+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 33447936 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:03.755466+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.541580200s of 10.056720734s, submitted: 152
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 377 ms_handle_reset con 0x55a070709000 session 0x55a0752494a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 377 heartbeat osd_stat(store_statfs(0x1a3ce6000/0x0/0x1bfc00000, data 0x3a336f1/0x3c27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 33570816 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:04.755641+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 33497088 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:05.755787+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 33497088 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:06.755923+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 33488896 heap: 448364544 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4444338 data_alloc: 234881024 data_used: 34336768
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:07.756083+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a3ce6000/0x0/0x1bfc00000, data 0x3a336f1/0x3c27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 378 ms_handle_reset con 0x55a070709000 session 0x55a06de9ab40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432005120 unmapped: 23748608 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:08.756285+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 378 ms_handle_reset con 0x55a06dce2800 session 0x55a06dd510e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422502400 unmapped: 33251328 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:09.756448+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 378 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd87e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422502400 unmapped: 33251328 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:10.756551+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 handle_osd_map epochs [380,380], i have 380, src has [1,380]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a0706d9800 session 0x55a070e00b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 33226752 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:11.756716+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 heartbeat osd_stat(store_statfs(0x1a2c3d000/0x0/0x1bfc00000, data 0x4ad7c6c/0x4ccf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f80800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a072f80800 session 0x55a07015a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a06dce2800 session 0x55a07085ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 33226752 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd705a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4605843 data_alloc: 234881024 data_used: 38445056
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:12.756907+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422764544 unmapped: 32989184 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:13.757080+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.798730850s of 10.226181030s, submitted: 56
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 42033152 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a0706d9800 session 0x55a06e7d1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:14.757251+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 heartbeat osd_stat(store_statfs(0x1a2c3f000/0x0/0x1bfc00000, data 0x4ad7c6c/0x4ccf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 42033152 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:15.757381+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 42033152 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:16.757571+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 42033152 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4583203 data_alloc: 234881024 data_used: 38445056
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:17.757684+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a070709000 session 0x55a071dd1680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 42033152 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a0708eb400 session 0x55a071dd0000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a0708eb400 session 0x55a070ba65a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:18.757812+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a06dce2800 session 0x55a070a2af00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a06fabb000 session 0x55a070e10b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 42016768 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:19.757929+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706d9800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 heartbeat osd_stat(store_statfs(0x1a2c3f000/0x0/0x1bfc00000, data 0x4ad7c6c/0x4ccf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a0706d9800 session 0x55a070ac3a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 ms_handle_reset con 0x55a070709000 session 0x55a0752490e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 42663936 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:20.758260+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 42655744 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:21.758397+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 42655744 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4587057 data_alloc: 234881024 data_used: 38453248
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:22.758516+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 42655744 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 heartbeat osd_stat(store_statfs(0x1a2c3b000/0x0/0x1bfc00000, data 0x4ad97ab/0x4cd2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:23.758660+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 42655744 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:24.758842+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413130752 unmapped: 42622976 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:25.758964+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.890002251s of 11.739995956s, submitted: 20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 ms_handle_reset con 0x55a06dce2800 session 0x55a07068da40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413155328 unmapped: 42598400 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 ms_handle_reset con 0x55a06fabb000 session 0x55a0706be1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:26.759065+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 ms_handle_reset con 0x55a075698c00 session 0x55a07068cb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 heartbeat osd_stat(store_statfs(0x1a2c3c000/0x0/0x1bfc00000, data 0x4ad97ab/0x4cd2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 ms_handle_reset con 0x55a072f36000 session 0x55a06e644d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 42491904 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4773038 data_alloc: 251658240 data_used: 45166592
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:27.759171+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 42491904 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:28.759300+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 42491904 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:29.759476+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f37400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dfc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 ms_handle_reset con 0x55a07049bc00 session 0x55a07085b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 42491904 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:30.759643+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 382 ms_handle_reset con 0x55a07049bc00 session 0x55a070a274a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 42475520 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 382 ms_handle_reset con 0x55a06dce2800 session 0x55a070ba90e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:31.759797+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 382 ms_handle_reset con 0x55a06fabb000 session 0x55a070a27680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 383 ms_handle_reset con 0x55a0708dfc00 session 0x55a070bf45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 383 heartbeat osd_stat(store_statfs(0x1a1c54000/0x0/0x1bfc00000, data 0x5aba0e4/0x5cb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 42278912 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4792225 data_alloc: 251658240 data_used: 45170688
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:32.759957+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070708000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 384 ms_handle_reset con 0x55a070708000 session 0x55a070c06d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414670848 unmapped: 41082880 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:33.760159+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 39747584 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 ms_handle_reset con 0x55a0708df400 session 0x55a070bf54a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:34.760325+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 ms_handle_reset con 0x55a072f37400 session 0x55a0706bfa40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416071680 unmapped: 39682048 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:35.760563+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.946635246s of 10.511343956s, submitted: 89
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 ms_handle_reset con 0x55a06dce2800 session 0x55a071dd1a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416071680 unmapped: 39682048 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:36.760719+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417406976 unmapped: 38346752 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4932515 data_alloc: 251658240 data_used: 50991104
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:37.760858+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 ms_handle_reset con 0x55a07049bc00 session 0x55a075248000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dfc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a1123000/0x0/0x1bfc00000, data 0x65ea9de/0x67eb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 ms_handle_reset con 0x55a0708dfc00 session 0x55a07085bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 38076416 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:38.760983+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 386 ms_handle_reset con 0x55a0706f1800 session 0x55a07068de00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 386 ms_handle_reset con 0x55a06dce2800 session 0x55a070ac21e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 386 ms_handle_reset con 0x55a06fabb000 session 0x55a06e7d0d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 39313408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:39.761229+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 39313408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:40.761402+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416456704 unmapped: 39297024 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:41.761575+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416456704 unmapped: 39297024 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:42.761733+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4984544 data_alloc: 251658240 data_used: 52748288
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416456704 unmapped: 39297024 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:43.761869+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0d9c000/0x0/0x1bfc00000, data 0x696e192/0x6b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 37863424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:44.762042+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422313984 unmapped: 33439744 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:45.770004+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.615744591s of 10.218914986s, submitted: 50
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420601856 unmapped: 35151872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:46.770169+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 35823616 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:47.770417+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5044084 data_alloc: 251658240 data_used: 53325824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 35823616 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:48.770549+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0698000/0x0/0x1bfc00000, data 0x7072192/0x7275000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 35823616 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:49.770658+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0699000/0x0/0x1bfc00000, data 0x7072192/0x7275000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 35823616 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:50.770828+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0699000/0x0/0x1bfc00000, data 0x7072192/0x7275000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0693000/0x0/0x1bfc00000, data 0x7078192/0x727b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 35823616 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:51.770950+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a07049bc00 session 0x55a070c070e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:52.771332+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5047784 data_alloc: 251658240 data_used: 53325824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a0708df400 session 0x55a0708ed2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:53.771687+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:54.773429+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:55.773576+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:56.773706+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0687000/0x0/0x1bfc00000, data 0x7081192/0x7284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:57.773865+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5047784 data_alloc: 251658240 data_used: 53325824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.429454327s of 11.743603706s, submitted: 36
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a0708df400 session 0x55a0708ec3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:58.774025+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a06dce2800 session 0x55a0708ed860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 35815424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:59.774169+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 35807232 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:00.774268+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:01.774383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:02.774529+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5076438 data_alloc: 251658240 data_used: 54034432
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:03.774669+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:04.774861+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:05.775016+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:06.775175+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:07.775308+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5080849 data_alloc: 251658240 data_used: 54910976
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a0706f1800 session 0x55a06dd33860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:08.775438+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:09.775539+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a070709000 session 0x55a06ec3b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:10.775676+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f37400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.969857216s of 13.105335236s, submitted: 9
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:11.775806+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a0686000/0x0/0x1bfc00000, data 0x70841a2/0x7288000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:12.775928+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 35528704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5080969 data_alloc: 251658240 data_used: 54915072
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:13.776067+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420626432 unmapped: 35127296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:14.776303+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421773312 unmapped: 33980416 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:15.776433+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421781504 unmapped: 33972224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:16.776560+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421781504 unmapped: 33972224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a04c9000/0x0/0x1bfc00000, data 0x75c41a2/0x7442000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:17.776690+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a04c9000/0x0/0x1bfc00000, data 0x75c41a2/0x7442000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421789696 unmapped: 33964032 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5122105 data_alloc: 251658240 data_used: 55025664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:18.776821+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421789696 unmapped: 33964032 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a04c9000/0x0/0x1bfc00000, data 0x75c41a2/0x7442000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a072f37400 session 0x55a070a26000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:19.776970+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421789696 unmapped: 33964032 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:20.777195+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421789696 unmapped: 33964032 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:21.777324+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a04c9000/0x0/0x1bfc00000, data 0x75c41a2/0x7442000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421789696 unmapped: 33964032 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a06dce2800 session 0x55a0709eeb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a0706f1800 session 0x55a070b405a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a070709000 session 0x55a070defe00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a0708df400 session 0x55a0707c4780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.567829132s of 10.856853485s, submitted: 38
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a06ddea400 session 0x55a0708ec960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:22.777482+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 33497088 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5174655 data_alloc: 251658240 data_used: 55185408
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a06ddea400 session 0x55a070966b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 heartbeat osd_stat(store_statfs(0x19ff1f000/0x0/0x1bfc00000, data 0x7b711a2/0x79ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 ms_handle_reset con 0x55a06dce2800 session 0x55a0752494a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:23.777627+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 33480704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:24.777824+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422281216 unmapped: 33472512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:25.777937+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 33456128 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a0706f1800 session 0x55a06de9be00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:26.778086+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419545088 unmapped: 36208640 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a070709000 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a06fabb000 session 0x55a0708ed4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a07049bc00 session 0x55a0708ec780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a06fabb000 session 0x55a06f787a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:27.778246+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 36192256 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977967 data_alloc: 251658240 data_used: 43360256
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 heartbeat osd_stat(store_statfs(0x1a0f8d000/0x0/0x1bfc00000, data 0x6b00e3f/0x697f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 ms_handle_reset con 0x55a06dce2800 session 0x55a06f9b12c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:28.778403+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419577856 unmapped: 36175872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 389 ms_handle_reset con 0x55a0708df400 session 0x55a07015b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:29.778583+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419577856 unmapped: 36175872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 389 heartbeat osd_stat(store_statfs(0x1a0f8b000/0x0/0x1bfc00000, data 0x6b02b08/0x6982000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:30.778698+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419586048 unmapped: 36167680 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:31.778832+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 36151296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:32.778971+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 36151296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4991770 data_alloc: 251658240 data_used: 44417024
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 390 ms_handle_reset con 0x55a075699000 session 0x55a070deef00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.894825935s of 10.867634773s, submitted: 120
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 390 ms_handle_reset con 0x55a0708dd400 session 0x55a071dd01e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a0f88000/0x0/0x1bfc00000, data 0x6b04663/0x6985000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:33.779188+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 36151296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 390 heartbeat osd_stat(store_statfs(0x1a0f88000/0x0/0x1bfc00000, data 0x6b04663/0x6985000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:34.779394+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419627008 unmapped: 36126720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:35.779561+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 42385408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 392 ms_handle_reset con 0x55a06fabb000 session 0x55a070ac32c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 392 ms_handle_reset con 0x55a0708df400 session 0x55a070e010e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 ms_handle_reset con 0x55a07049bc00 session 0x55a06ec6d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036d000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 ms_handle_reset con 0x55a07036d000 session 0x55a06dd86780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 ms_handle_reset con 0x55a070709000 session 0x55a070def4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 ms_handle_reset con 0x55a06dce2800 session 0x55a070967860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:36.779691+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412950528 unmapped: 42803200 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:37.779850+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412950528 unmapped: 42803200 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525364 data_alloc: 234881024 data_used: 31002624
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 heartbeat osd_stat(store_statfs(0x1a3c1a000/0x0/0x1bfc00000, data 0x3ae7afa/0x3cf0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:38.780012+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412950528 unmapped: 42803200 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:39.780188+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412950528 unmapped: 42803200 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:40.780300+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412950528 unmapped: 42803200 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 ms_handle_reset con 0x55a06fabb000 session 0x55a0706bfc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:41.780441+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 43319296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:42.780645+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412876800 unmapped: 42876928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604300 data_alloc: 234881024 data_used: 31289344
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a3503000/0x0/0x1bfc00000, data 0x41ff5f3/0x4408000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:43.780857+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412876800 unmapped: 42876928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:44.781057+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412876800 unmapped: 42876928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:45.781184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412876800 unmapped: 42876928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.128514290s of 13.056159973s, submitted: 260
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 394 handle_osd_map epochs [395,395], i have 395, src has [1,395]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a3503000/0x0/0x1bfc00000, data 0x41ff5f3/0x4408000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:46.781318+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 43204608 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:47.781458+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3502000/0x0/0x1bfc00000, data 0x4201132/0x440b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 43204608 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601242 data_alloc: 234881024 data_used: 31293440
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:48.781671+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 43196416 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a0708ec5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a07068dc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:49.781769+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 43196416 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:50.781948+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 48480256 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:51.782099+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:52.782270+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4464792 data_alloc: 234881024 data_used: 26546176
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a41c2000/0x0/0x1bfc00000, data 0x3542132/0x374c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:53.782450+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a070c06960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3ddc000/0x0/0x1bfc00000, data 0x3518132/0x3722000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:54.782656+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:55.782804+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:56.782966+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:57.783086+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070a2a1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456486 data_alloc: 234881024 data_used: 26411008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:58.783278+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a070a2b4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.273836136s of 12.963358879s, submitted: 48
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070ac2f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.783425+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3ddc000/0x0/0x1bfc00000, data 0x3518132/0x3722000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.783598+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46130 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.783693+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4b8e000/0x0/0x1bfc00000, data 0x2766132/0x2970000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401588224 unmapped: 54165504 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.783820+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401604608 unmapped: 54149120 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4b8e000/0x0/0x1bfc00000, data 0x2766132/0x2970000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311350 data_alloc: 218103808 data_used: 20557824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.783982+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 54140928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dce2800 session 0x55a0709ee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.784156+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.784304+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.784453+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.784599+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301623 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb9000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.784737+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 54K writes, 215K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
                                           Cumulative WAL: 54K writes, 19K syncs, 2.88 writes per sync, written: 0.20 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7295 writes, 25K keys, 7295 commit groups, 1.0 writes per commit group, ingest: 25.84 MB, 0.04 MB/s
                                           Interval WAL: 7295 writes, 2838 syncs, 2.57 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.784912+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets getting new tickets!
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.785120+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _finish_auth 0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.785754+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.785211+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.785351+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301623 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.785455+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb9000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.785703+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc ms_handle_reset ms_handle_reset con 0x55a0708e1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: get_auth_request con 0x55a075699000 auth_method 0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.785900+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dce2800 session 0x55a06fca5680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070e10780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a071dd0780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a07087e000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 54042624 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.029085159s of 17.251653671s, submitted: 36
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.786072+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 52994048 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706d8400 session 0x55a070b41c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.786237+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070b414a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402751488 unmapped: 53002240 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338959 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a06dd474a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a0707c5c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a07015b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a07085a3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.786395+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402767872 unmapped: 52985856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47b7000/0x0/0x1bfc00000, data 0x2b3e161/0x2d47000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.786516+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402767872 unmapped: 52985856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.786618+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402776064 unmapped: 52977664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.786711+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402776064 unmapped: 52977664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a07068d860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.786854+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403128320 unmapped: 52625408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4341645 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.787017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403128320 unmapped: 52625408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.787237+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.787437+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.787598+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.787759+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4372045 data_alloc: 218103808 data_used: 24625152
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.787956+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.788215+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.788358+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.788500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.788663+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4372045 data_alloc: 218103808 data_used: 24625152
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.788816+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.274131775s of 18.334613800s, submitted: 38
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.789608+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405454848 unmapped: 50298880 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.789749+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408371200 unmapped: 47382528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e6b000/0x0/0x1bfc00000, data 0x348a161/0x3693000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.789871+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.790000+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4457259 data_alloc: 218103808 data_used: 24952832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.790137+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.790276+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3f000/0x0/0x1bfc00000, data 0x34b6161/0x36bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.790411+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.790530+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.790646+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4451799 data_alloc: 218103808 data_used: 24956928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a071dd10e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.790870+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3b000/0x0/0x1bfc00000, data 0x34b81d3/0x36c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:44.791043+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 47456256 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.791199+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447254181s of 11.133079529s, submitted: 94
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 47439872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.791456+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 47439872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.791626+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 46391296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456859 data_alloc: 218103808 data_used: 24956928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a06f9f45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.791763+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 46391296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3a000/0x0/0x1bfc00000, data 0x34b91d3/0x36c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070ba6f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a07068d0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.791967+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3a000/0x0/0x1bfc00000, data 0x34b91d3/0x36c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 46366720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.792164+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070a2b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 46358528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.792287+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.792488+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.792702+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.792871+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.793409+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.793913+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.794428+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.794766+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.795193+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.795461+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.795828+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.795970+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.796268+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.796454+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408190976 unmapped: 47562752 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.548774719s of 19.727108002s, submitted: 40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.796590+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a075249e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 47218688 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.796810+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.797079+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361005 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.797307+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.797564+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.797756+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.797927+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a06f786780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408715264 unmapped: 47038464 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.798095+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408715264 unmapped: 47038464 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363639 data_alloc: 218103808 data_used: 20422656
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.798300+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.798474+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.798605+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.798756+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.798915+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4388919 data_alloc: 218103808 data_used: 23932928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.799085+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.799288+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.799455+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.799602+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.799735+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408739840 unmapped: 47013888 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4388919 data_alloc: 218103808 data_used: 23932928
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.799909+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408739840 unmapped: 47013888 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.091243744s of 19.163955688s, submitted: 16
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.800074+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 43646976 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.800207+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f39000/0x0/0x1bfc00000, data 0x33ba171/0x35c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.800345+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.800472+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461973 data_alloc: 234881024 data_used: 25284608
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.800650+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.800820+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.800970+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.801157+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c6171/0x35d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.801319+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461989 data_alloc: 234881024 data_used: 25284608
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.801453+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a070a2b860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.801619+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.801751+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.801906+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.064500809s of 12.350695610s, submitted: 83
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.802017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4463005 data_alloc: 234881024 data_used: 25284608
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c61d3/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.802186+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a07049bc00 session 0x55a0707c54a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.802316+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411451392 unmapped: 44302336 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070a2a960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06fabb000 session 0x55a06ec6cb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.802467+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c61d3/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.802618+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a06f9f4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.802750+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4328499 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.802896+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.803288+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.803499+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408682496 unmapped: 47071232 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.803628+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.785040855s of 10.047825813s, submitted: 25
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408698880 unmapped: 47054848 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a0752483c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,3,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.803798+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a06dd554a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408797184 unmapped: 46956544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327235 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.803974+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a070def860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a07087e5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 46874624 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.804121+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.804241+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a0752494a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.804447+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.804739+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325249 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.804867+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.805017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.805147+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.805273+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.805818+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325249 data_alloc: 218103808 data_used: 20418560
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.805939+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.806268+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085711479s of 13.555994987s, submitted: 392
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.806407+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.806580+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.806739+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 396 ms_handle_reset con 0x55a075698c00 session 0x55a070b405a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408969216 unmapped: 46784512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330920 data_alloc: 218103808 data_used: 20426752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.806878+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 396 ms_handle_reset con 0x55a075698c00 session 0x55a0708ec3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 46776320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06dde1400 session 0x55a070ac21e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.807197+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06ddea400 session 0x55a0706bfa40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.807339+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.807498+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.807625+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4332582 data_alloc: 218103808 data_used: 20430848
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.807834+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.807993+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.808140+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 46833664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.808300+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.481833458s of 11.654898643s, submitted: 48
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.808447+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.808603+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 ms_handle_reset con 0x55a06faa6c00 session 0x55a06de9ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.808804+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.808994+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.809172+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 45768704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.809344+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 45768704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.809493+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409993216 unmapped: 45760512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.809633+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409993216 unmapped: 45760512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.809842+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.809997+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.810165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.810320+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.810493+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.810626+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.810768+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.810896+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.811033+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 45744128 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.811207+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.665527344s of 17.685497284s, submitted: 14
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 55238656 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.811334+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a070dee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.811610+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.811860+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429647 data_alloc: 218103808 data_used: 20447232
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.812171+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.812443+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd47860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a075698c00 session 0x55a070a3c3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a06eb0b680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 55214080 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.812700+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a072f36000 session 0x55a0708ecf00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a07087eb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418308096 unmapped: 45842432 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.812899+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418308096 unmapped: 45842432 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.813176+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf41e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a06dd49c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a075698c00 session 0x55a070a3c3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a0708dd400 session 0x55a06de9ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a070bf45a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4520248 data_alloc: 234881024 data_used: 30212096
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.813383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.813582+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.813820+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.814007+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.814165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.855843544s of 13.100606918s, submitted: 40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a070def860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418619392 unmapped: 45531136 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.814302+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521449 data_alloc: 234881024 data_used: 30220288
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 45580288 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.814522+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.814671+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.814956+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.815143+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.815251+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4567209 data_alloc: 234881024 data_used: 36593664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.815386+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375a000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a07375a000 session 0x55a0709ee960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.815531+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.815690+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.815844+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708e5c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.815973+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568991 data_alloc: 234881024 data_used: 36593664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.379405022s of 11.574952126s, submitted: 8
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.816190+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 ms_handle_reset con 0x55a072d03800 session 0x55a070bf50e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.816317+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420610048 unmapped: 43540480 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a3149000/0x0/0x1bfc00000, data 0x41a2eae/0x43b5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.816466+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422387712 unmapped: 41762816 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.816615+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.816790+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.816962+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.817130+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.817257+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.817412+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.817531+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.817763+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.817950+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.818149+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.818305+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.818496+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.818622+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.818806+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.818931+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.468400955s of 17.711011887s, submitted: 78
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.819070+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 ms_handle_reset con 0x55a06ddea400 session 0x55a06e6aa3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30ba000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a06fabb000 session 0x55a06fca43c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.819215+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422461440 unmapped: 41689088 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651650 data_alloc: 234881024 data_used: 38731776
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30ba000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.819433+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422461440 unmapped: 41689088 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.819612+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.819781+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.819919+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.820076+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4650418 data_alloc: 234881024 data_used: 38731776
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a075698c00 session 0x55a06ec6cb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a0708df400 session 0x55a07085b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a0708e5c00 session 0x55a070a2b4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.820248+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a06ddea400 session 0x55a070e00b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.820383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.820595+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 41664512 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.820741+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 41664512 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.820909+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655300 data_alloc: 234881024 data_used: 38797312
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.821152+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.821281+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.821493+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.821625+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.821766+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655300 data_alloc: 234881024 data_used: 38797312
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.821906+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.822050+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.822200+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.822353+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422502400 unmapped: 41648128 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.606676102s of 21.199974060s, submitted: 36
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.822523+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4659344 data_alloc: 234881024 data_used: 39182336
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.822695+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.822857+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.822985+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.823089+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.823261+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661904 data_alloc: 234881024 data_used: 39387136
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.823387+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.830775+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.830901+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.831061+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.831208+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662672 data_alloc: 234881024 data_used: 39415808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.831417+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.831576+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.831733+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.831873+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.832008+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662672 data_alloc: 234881024 data_used: 39415808
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.832342+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.832452+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.832566+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a0708df400 session 0x55a070a2b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422518784 unmapped: 41631744 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.832701+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.710981369s of 20.009254456s, submitted: 5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.832825+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a070709000 session 0x55a0708ed4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661800 data_alloc: 234881024 data_used: 39411712
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.833024+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a06fabb000 session 0x55a07068c5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a075698c00 session 0x55a070e01680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.833171+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a06ddea400 session 0x55a070ba41e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.833333+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.833500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3f33000/0x0/0x1bfc00000, data 0x33b8628/0x35cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.833649+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4480010 data_alloc: 234881024 data_used: 30236672
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.833875+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.834089+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.834321+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 42745856 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a3f31000/0x0/0x1bfc00000, data 0x33ba2b2/0x35cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a3f31000/0x0/0x1bfc00000, data 0x33ba2b2/0x35cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.834488+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 403 ms_handle_reset con 0x55a06fabb000 session 0x55a070a2b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 49537024 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a4731000/0x0/0x1bfc00000, data 0x274a28f/0x295c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.192452431s of 10.011085510s, submitted: 87
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.834698+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 57925632 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456133 data_alloc: 218103808 data_used: 20021248
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 404 ms_handle_reset con 0x55a070709000 session 0x55a070deeb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.834888+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.835006+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.835187+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.835383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.835585+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461899 data_alloc: 218103808 data_used: 20029440
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.835757+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.835999+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.836239+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.836534+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.836807+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464345 data_alloc: 218103808 data_used: 20029440
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.417022705s of 10.853311539s, submitted: 21
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a071dd12c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375a000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f29000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.837042+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a07375a000 session 0x55a06dd6d4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.837258+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.837506+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f2a000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.837768+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.837969+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422281216 unmapped: 50266112 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4499663 data_alloc: 234881024 data_used: 31506432
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a07087fa40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a0708ed2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070709000 session 0x55a070b41a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a07068d0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.838161+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a075fe4c00 session 0x55a06d1d6f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.838365+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.838580+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.838737+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.838907+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421871616 unmapped: 50675712 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521353 data_alloc: 234881024 data_used: 31506432
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.839163+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.839349+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.839565+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.839746+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.839974+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540713 data_alloc: 234881024 data_used: 34197504
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.272889137s of 14.806378365s, submitted: 38
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a06ec6c960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.840233+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.840396+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.840643+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.840830+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.841044+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c96000/0x0/0x1bfc00000, data 0x364fae8/0x3868000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541725 data_alloc: 234881024 data_used: 34197504
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.841381+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.841567+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424501248 unmapped: 48046080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.841769+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424714240 unmapped: 47833088 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.841942+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 47742976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.842184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4587119 data_alloc: 234881024 data_used: 34201600
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.842350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c7000/0x0/0x1bfc00000, data 0x3c1eae8/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.107211113s of 11.299701691s, submitted: 54
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.842543+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c7000/0x0/0x1bfc00000, data 0x3c1eae8/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.842757+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c5000/0x0/0x1bfc00000, data 0x3c20ae8/0x3e39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,3])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.842932+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.843216+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4590479 data_alloc: 234881024 data_used: 34512896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36aa000/0x0/0x1bfc00000, data 0x3c3bae8/0x3e54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.843456+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 48136192 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.850543+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.850691+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.850836+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.851026+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594277 data_alloc: 234881024 data_used: 34508800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.851182+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36a2000/0x0/0x1bfc00000, data 0x3c43ae8/0x3e5c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.851348+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.851494+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.190292358s of 11.736324310s, submitted: 18
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36a2000/0x0/0x1bfc00000, data 0x3c43ae8/0x3e5c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.851633+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.851802+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369f000/0x0/0x1bfc00000, data 0x3c46ae8/0x3e5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594765 data_alloc: 234881024 data_used: 34508800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.851993+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.852162+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.852307+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369f000/0x0/0x1bfc00000, data 0x3c46ae8/0x3e5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.852432+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.852564+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34512896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.852686+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.852867+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.853007+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.853156+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.853364+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34512896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.853567+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.853752+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.853951+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.854073+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.854323+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.575996399s of 16.818254471s, submitted: 3
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593881 data_alloc: 234881024 data_used: 34512896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.854483+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.854627+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 47046656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.854823+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a0709ef860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070709000 session 0x55a06dd6ad20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.854975+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070849400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.855194+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593525 data_alloc: 234881024 data_used: 34516992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.855459+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.855618+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.855752+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.855899+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.856075+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34545664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.856217+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.856386+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.856512+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:22.856612+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:23.856796+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34545664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.512025833s of 15.597708702s, submitted: 6
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:24.856942+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:25.857152+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:26.857280+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:27.857378+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:28.857517+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4598009 data_alloc: 234881024 data_used: 35102720
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:29.857677+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:30.857846+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:31.858007+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:32.858202+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:33.858378+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4600341 data_alloc: 234881024 data_used: 35106816
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:34.858620+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:35.858780+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:36.858970+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:37.859186+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:38.859431+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4600341 data_alloc: 234881024 data_used: 35106816
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:39.859622+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.862111092s of 16.040271759s, submitted: 8
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:40.859816+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070849400 session 0x55a07087fc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:41.859978+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fb47c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fb47c00 session 0x55a070ac2f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a070c06960
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:42.860160+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a070ac2000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:43.860268+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4504435 data_alloc: 234881024 data_used: 31506432
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a0708ecb40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:44.860722+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f27000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:45.860840+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:46.861057+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:47.861187+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f27000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:48.861290+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4503609 data_alloc: 234881024 data_used: 31502336
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:49.861411+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 52436992 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566287041s of 10.001653671s, submitted: 74
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:50.861540+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 406 ms_handle_reset con 0x55a070709000 session 0x55a0708ed4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:51.861686+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a4b99000/0x0/0x1bfc00000, data 0x274f6f0/0x2965000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:52.861842+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:53.861984+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389934 data_alloc: 218103808 data_used: 20037632
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:54.862171+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a4b99000/0x0/0x1bfc00000, data 0x274f6f0/0x2965000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:55.862325+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:56.862471+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:57.862604+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:58.862711+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:59.862824+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:00.863071+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:01.863247+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:02.863372+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:03.863500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:04.863662+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:05.863788+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:06.863934+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:07.864080+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:08.864231+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:09.864427+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:10.864582+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:11.864717+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:12.864930+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:13.865068+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:14.865231+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:15.865360+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:16.865465+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:17.865575+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:18.865713+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:19.865852+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:20.865960+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:21.866136+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:22.866224+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:23.866389+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:24.866569+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:25.866722+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:26.866860+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:27.867020+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:28.867135+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:29.867289+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:30.867428+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:31.867565+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:32.867686+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:33.867837+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:34.868056+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:35.868205+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:36.868369+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:37.868565+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:38.868728+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:39.868934+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:40.869192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070849400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.996181488s of 51.030910492s, submitted: 17
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:41.869375+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a070849400 session 0x55a070a27e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 60252160 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:42.869532+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412303360 unmapped: 60243968 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:43.869679+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428190 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:44.869882+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:45.870049+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:46.870184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:47.870378+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:48.870515+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428190 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:49.870992+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:50.872151+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:51.873234+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:52.874157+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:53.874453+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456510 data_alloc: 218103808 data_used: 23871488
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:54.875321+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:55.875983+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:56.876264+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:57.876640+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:58.876854+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456510 data_alloc: 218103808 data_used: 23871488
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:59.877204+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.503690720s of 18.558368683s, submitted: 8
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 59154432 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:00.877360+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:01.877618+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 59154432 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:02.877848+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:03.878072+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:04.878335+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476274 data_alloc: 218103808 data_used: 23891968
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4575000/0x0/0x1bfc00000, data 0x2d6a22f/0x2f81000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:05.878529+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:06.878654+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:07.878871+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 59351040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a455b000/0x0/0x1bfc00000, data 0x2d8c22f/0x2fa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:08.879003+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:09.879215+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4472746 data_alloc: 218103808 data_used: 23891968
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:10.879469+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a455b000/0x0/0x1bfc00000, data 0x2d8c22f/0x2fa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:11.879600+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:12.879752+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.390573502s of 12.535014153s, submitted: 38
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:13.879883+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:14.880062+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4473622 data_alloc: 218103808 data_used: 23900160
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd505a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:15.880185+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4550000/0x0/0x1bfc00000, data 0x2d9722f/0x2fae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:16.880323+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:17.880484+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:18.880661+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:19.880864+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:20.881024+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:21.881811+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:22.882022+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:23.882490+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:24.883561+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:25.884017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:26.884343+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:27.884565+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:28.884925+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:29.885566+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:30.885860+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:31.886014+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:32.886303+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:33.886566+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:34.886841+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:35.887165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:36.887582+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:37.887786+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:38.887961+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:39.888245+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:40.888423+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:41.888607+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:42.888763+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:43.888944+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:44.889164+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:45.889371+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:46.889578+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:47.889780+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:48.889952+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:49.890185+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:50.890351+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413450240 unmapped: 59097088 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:51.890511+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:52.890680+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:53.890800+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:54.890949+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:55.891063+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:56.891134+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:57.891279+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:58.891456+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:59.891611+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:00.891757+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:01.891919+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:02.892053+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 59072512 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:03.892263+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 59072512 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:04.892484+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:05.892647+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:06.892776+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:07.892911+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:08.893049+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:09.893161+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:10.893322+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:11.893473+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:12.893863+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:13.893991+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:14.894186+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.355045319s of 62.387897491s, submitted: 8
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a070709000 session 0x55a06de9b860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:15.894331+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:16.894463+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:17.894611+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:18.894805+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:19.894946+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429796 data_alloc: 218103808 data_used: 20045824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:20.895093+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:21.895268+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 59711488 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:22.895435+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:23.895561+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:24.895726+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451236 data_alloc: 218103808 data_used: 22974464
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:25.895849+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:26.895991+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:27.896209+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:28.896346+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:29.896475+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451236 data_alloc: 218103808 data_used: 22974464
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:30.896603+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:31.896738+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.108156204s of 17.133529663s, submitted: 12
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414777344 unmapped: 57769984 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:32.896884+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 56041472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:33.897019+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:34.897250+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4532264 data_alloc: 218103808 data_used: 24707072
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:35.897481+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:36.897668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3b0e000/0x0/0x1bfc00000, data 0x33c922f/0x35e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:37.897850+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:38.897999+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:39.898202+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3ada000/0x0/0x1bfc00000, data 0x33fd22f/0x3614000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4533836 data_alloc: 234881024 data_used: 24936448
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:40.898383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:41.898533+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.606778145s of 10.013916016s, submitted: 99
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:42.898758+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3ada000/0x0/0x1bfc00000, data 0x33fd22f/0x3614000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:43.899005+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:44.899312+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4534560 data_alloc: 234881024 data_used: 24973312
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070707400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:45.899528+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a06eabd400 session 0x55a06fca4780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a070707400 session 0x55a070e005a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417710080 unmapped: 54837248 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:46.899699+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 439975936 unmapped: 39198720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 408 heartbeat osd_stat(store_statfs(0x1a2d9d000/0x0/0x1bfc00000, data 0x4134f5c/0x4350000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [0,0,0,0,0,1,0,0,2,6])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:47.899966+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a06ddea400 session 0x55a0708ede00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:48.900183+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 409 ms_handle_reset con 0x55a06eabd400 session 0x55a06e6aad20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416997376 unmapped: 62177280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:49.900399+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811272 data_alloc: 234881024 data_used: 33505280
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06fabb000 session 0x55a07087fe00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417038336 unmapped: 62136320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1fbd000/0x0/0x1bfc00000, data 0x4f1187e/0x512f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:50.900602+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a070709000 session 0x55a071dd05a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddefc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06ddefc00 session 0x55a07068c1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06ddea400 session 0x55a070a2bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417054720 unmapped: 62119936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:51.900763+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417062912 unmapped: 62111744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:52.900980+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:53.901164+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:54.901400+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1fbd000/0x0/0x1bfc00000, data 0x4f1187e/0x512f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4810976 data_alloc: 234881024 data_used: 33505280
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:55.901542+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:56.901730+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591431618s of 14.673515320s, submitted: 91
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd474a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf5c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a06e5b7860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 69304320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070707000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070707000 session 0x55a06dd48f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06fca6780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca61e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:57.901911+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a070b40f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a070a27c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe5800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a075fe5800 session 0x55a0709670e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409968640 unmapped: 69206016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:58.902172+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409968640 unmapped: 69206016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:59.902336+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4793558 data_alloc: 234881024 data_used: 33509376
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 69189632 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:00.902687+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fbb000/0x0/0x1bfc00000, data 0x4f133cd/0x5133000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:01.902847+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06ddea400 session 0x55a071dd14a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:02.903046+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd47a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:03.903199+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a0706be5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a06ec3bc20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411074560 unmapped: 68100096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:04.903371+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4795396 data_alloc: 234881024 data_used: 33509376
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411074560 unmapped: 68100096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:05.903646+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412622848 unmapped: 66551808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:06.903849+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:07.904042+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:08.904219+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:09.904355+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886756 data_alloc: 251658240 data_used: 44552192
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:10.904547+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:11.904678+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:12.904838+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:13.904975+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:14.905216+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886756 data_alloc: 251658240 data_used: 44552192
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:15.905350+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:16.905532+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.954988480s of 20.212965012s, submitted: 12
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:17.905767+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:18.905997+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418545664 unmapped: 60628992 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:19.906209+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934062 data_alloc: 251658240 data_used: 50483200
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:20.906353+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:21.906546+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:22.906698+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:23.906926+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:24.907161+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934766 data_alloc: 251658240 data_used: 50483200
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:25.907302+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:26.907450+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:27.907587+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:28.907878+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.713869095s of 11.773273468s, submitted: 7
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:29.908044+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934414 data_alloc: 251658240 data_used: 50483200
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:30.908189+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:31.908361+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:32.908492+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420143104 unmapped: 59031552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:33.908634+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:34.908819+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934238 data_alloc: 251658240 data_used: 50483200
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:35.908994+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:36.909185+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:37.909315+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:38.909437+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420216832 unmapped: 58957824 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:39.909577+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934238 data_alloc: 251658240 data_used: 50483200
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 58941440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:40.909729+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420495360 unmapped: 58679296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:41.909857+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a07036c800 session 0x55a070bf4d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420560896 unmapped: 58613760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:42.909992+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.115175247s of 14.167896271s, submitted: 6
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06ddea400 session 0x55a070e01e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd44b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06eabd400 session 0x55a06f9f4f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 440082432 unmapped: 39092224 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:43.910182+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a07036c800 session 0x55a070e00780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a070709000 session 0x55a06dd55e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 47538176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:44.910402+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 handle_osd_map epochs [414,414], i have 412, src has [1,414]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 412 handle_osd_map epochs [413,414], i have 412, src has [1,414]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd86b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5240270 data_alloc: 268435456 data_used: 62140416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f93f000/0x0/0x1bfc00000, data 0x75869ba/0x77ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 431718400 unmapped: 47456256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:45.910524+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06eabd400 session 0x55a070deed20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06fabb000 session 0x55a070b41a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a07036c800 session 0x55a0709ee1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432889856 unmapped: 46284800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:46.910668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dcc00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432889856 unmapped: 46284800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:47.910833+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432922624 unmapped: 46252032 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:48.910950+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0708dcc00 session 0x55a0706be5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:49.911080+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1d40000/0x0/0x1bfc00000, data 0x5188621/0x53ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0706f0400 session 0x55a06da49e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0706ef000 session 0x55a070e11860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4996972 data_alloc: 268435456 data_used: 62132224
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:50.911230+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:51.911366+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:52.911645+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 416 ms_handle_reset con 0x55a06ddea400 session 0x55a070def4a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1fab000/0x0/0x1bfc00000, data 0x4f1c16c/0x5142000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1fac000/0x0/0x1bfc00000, data 0x4f1c15c/0x5141000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:53.911777+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:54.912000+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.380544662s of 12.168486595s, submitted: 103
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4968539 data_alloc: 251658240 data_used: 59604992
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432963584 unmapped: 46211072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:55.912173+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432988160 unmapped: 46186496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:56.912280+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a06eabd400 session 0x55a06d1d6f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 54345728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:57.912400+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a47b6000/0x0/0x1bfc00000, data 0x3439890/0x365e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a0708df400 session 0x55a06de9b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 54345728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:58.912509+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:59.912644+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469598 data_alloc: 218103808 data_used: 18591744
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca61e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:00.912848+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:01.913000+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a57a5000/0x0/0x1bfc00000, data 0x2764890/0x2989000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:02.913201+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:03.913348+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:04.913529+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469598 data_alloc: 218103808 data_used: 18591744
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:05.913679+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:06.913811+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.185250282s of 11.454839706s, submitted: 82
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:07.913949+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:08.914090+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 57K writes, 226K keys, 57K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s
                                           Cumulative WAL: 57K writes, 20K syncs, 2.85 writes per sync, written: 0.21 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2950 writes, 10K keys, 2950 commit groups, 1.0 writes per commit group, ingest: 9.51 MB, 0.02 MB/s
                                           Interval WAL: 2950 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:09.914251+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:10.914380+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:11.914529+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:12.914653+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:13.914776+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:14.914989+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:15.915174+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:16.915298+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:17.915429+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:18.915578+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:19.915734+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:20.915896+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:21.916031+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:22.916154+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:23.916306+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:24.916482+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:25.916679+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:26.916868+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:27.917019+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:28.917192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:29.917308+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:30.917499+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:31.917668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:32.917805+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:33.917938+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:34.918521+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:35.919355+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:36.919518+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:37.920035+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:38.920450+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:39.920661+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:40.921414+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:41.921552+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.615318298s of 35.640926361s, submitted: 13
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:42.921784+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663de/0x298d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:43.922090+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a07087fe00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:44.922670+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479523 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:45.922810+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:46.922974+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:47.923113+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:48.923436+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:49.923557+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479523 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:50.923921+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:51.924086+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:52.924288+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:53.924488+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:54.924683+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.724076271s of 12.753091812s, submitted: 7
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a0707c4000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481350 data_alloc: 218103808 data_used: 18599936
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:55.924843+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:56.925004+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:57.925231+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:58.925377+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:59.925527+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706f0400 session 0x55a070a27e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482262 data_alloc: 218103808 data_used: 18870272
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:00.925636+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a070ac2f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:01.925797+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:02.926055+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:03.926267+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06ddea400 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:04.926469+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.236925125s of 10.163439751s, submitted: 26
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480511 data_alloc: 218103808 data_used: 18862080
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:05.926674+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5762000/0x0/0x1bfc00000, data 0x27a63cf/0x29cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:06.926839+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd49e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:07.926966+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:08.927156+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:09.927310+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a07085ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a06dd54b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474155 data_alloc: 218103808 data_used: 21745664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:10.927448+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:11.927560+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706f0400 session 0x55a070e00000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:12.927710+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:13.927839+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:14.928019+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06ddea400 session 0x55a06e7d0d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.611102104s of 10.015885353s, submitted: 25
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:15.928151+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475892 data_alloc: 218103808 data_used: 21745664
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:16.928265+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a0708ec780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:17.928423+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a07068d860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a06f9e4000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:18.928560+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:19.928667+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a07036c800 session 0x55a070a2ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:20.928801+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539361 data_alloc: 218103808 data_used: 21753856
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:21.928959+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5009000/0x0/0x1bfc00000, data 0x2efc08a/0x3124000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:22.929082+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:23.929282+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:24.929516+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:25.929677+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539361 data_alloc: 218103808 data_used: 21753856
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5009000/0x0/0x1bfc00000, data 0x2efc08a/0x3124000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:26.929917+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:27.930074+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.243913651s of 12.955587387s, submitted: 41
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:28.930227+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06ddea400 session 0x55a070defa40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:29.930383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:30.930543+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541143 data_alloc: 218103808 data_used: 21753856
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:31.930676+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc0ec/0x3125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:32.930880+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:33.931062+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06eabd400 session 0x55a070966b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:34.931378+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:35.931596+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541900 data_alloc: 218103808 data_used: 21757952
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416047104 unmapped: 63127552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:36.931741+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc10f/0x3126000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:37.931890+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06fabb000 session 0x55a070e114a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.679343224s of 10.163791656s, submitted: 5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:38.932183+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a0706ef000 session 0x55a070ac2000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:39.932490+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc0ec/0x3125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a072d03400 session 0x55a06d5623c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:40.932644+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4588443 data_alloc: 234881024 data_used: 28573696
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06ddea400 session 0x55a06f9f5e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:41.933150+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a06eabd400 session 0x55a07015b2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:42.933495+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:43.933700+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a06fabb000 session 0x55a070ac3860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 66928640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:44.934436+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 66928640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579b000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:45.934626+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4489055 data_alloc: 218103808 data_used: 21762048
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 66912256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:46.934795+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412303360 unmapped: 66871296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:47.934983+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 66830336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.170576572s of 10.021444321s, submitted: 241
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:48.935144+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,2,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412327936 unmapped: 66846720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a0706ef000 session 0x55a070e101e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db91800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:49.935305+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a07db91800 session 0x55a06f7870e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,1,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 65814528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,1,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:50.935466+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4488807 data_alloc: 218103808 data_used: 21762048
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 65781760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:51.935659+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd543c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 65798144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:52.935777+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:53.935929+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:54.936165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:55.936311+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4495387 data_alloc: 218103808 data_used: 21774336
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06eabd400 session 0x55a070e01a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:56.936502+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:57.936728+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.733456612s of 10.062119484s, submitted: 194
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06fabb000 session 0x55a06e6aa3c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:58.936869+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f9000/0x0/0x1bfc00000, data 0x276b814/0x2995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:59.937017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a0706ef000 session 0x55a06e5b7e00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:00.937183+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a070891400 session 0x55a070ba4d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:01.937372+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:02.937524+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:03.937665+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:04.937864+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:05.938009+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:06.938153+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:07.938290+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:08.938395+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:09.938523+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:10.938662+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:11.938866+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:12.939003+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:13.939196+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:14.939376+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:15.939531+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:16.939704+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:17.939830+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:18.939957+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.165819168s of 20.921848297s, submitted: 22
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca65a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:19.940076+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412450816 unmapped: 66723840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:20.940217+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547958 data_alloc: 218103808 data_used: 21774336
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 66330624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:21.940331+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:22.940468+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:23.940591+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:24.940741+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:25.940900+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596438 data_alloc: 234881024 data_used: 28655616
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:26.941046+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:27.941180+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:28.941349+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:29.941488+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:30.941661+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596438 data_alloc: 234881024 data_used: 28655616
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:31.941825+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.558998108s of 12.588422775s, submitted: 5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:32.941982+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:33.942160+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:34.942383+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418168832 unmapped: 61005824 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:35.942557+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4664224 data_alloc: 234881024 data_used: 28803072
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:36.942725+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:37.942935+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37e2000/0x0/0x1bfc00000, data 0x3580846/0x37ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:38.943160+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:39.943352+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:40.943503+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4664224 data_alloc: 234881024 data_used: 28803072
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:41.943633+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:42.943757+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37df000/0x0/0x1bfc00000, data 0x3583846/0x37af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.099544525s of 10.999442101s, submitted: 52
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:43.943978+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37df000/0x0/0x1bfc00000, data 0x3583846/0x37af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 422 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:44.944133+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06fabb000 session 0x55a071dd0780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:45.944275+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668876 data_alloc: 234881024 data_used: 28811264
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:46.944446+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:47.944672+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:48.944875+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:49.945034+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:50.945227+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668876 data_alloc: 234881024 data_used: 28811264
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:51.945375+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:52.945649+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:53.945838+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160892487s of 11.599664688s, submitted: 7
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:54.946014+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a072d03c00 session 0x55a070e103c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a0706ef000 session 0x55a07085b0e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:55.946192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669648 data_alloc: 234881024 data_used: 28811264
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:56.946419+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:57.946591+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:58.946733+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:59.946878+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:00.947075+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669648 data_alloc: 234881024 data_used: 28811264
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:01.947247+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:02.947432+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:03.947560+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:04.947709+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:05.947867+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.391080856s of 11.623667717s, submitted: 3
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670128 data_alloc: 234881024 data_used: 28868608
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:06.948001+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:07.948155+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:08.948324+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:09.948458+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:10.948623+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669952 data_alloc: 234881024 data_used: 28868608
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:11.948746+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:12.948868+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:13.948987+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419209216 unmapped: 59965440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:14.949165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,2])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 61759488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:15.949282+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4737360 data_alloc: 234881024 data_used: 29294592
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 61759488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:16.949415+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:17.949673+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:18.949824+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:19.949971+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:20.950152+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742480 data_alloc: 234881024 data_used: 30220288
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:21.950331+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:22.950592+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:23.950902+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:24.951179+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:25.951428+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742480 data_alloc: 234881024 data_used: 30220288
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:26.951658+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:27.951951+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:28.952166+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:29.952362+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.808780670s of 23.742141724s, submitted: 5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 57483264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:30.952606+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419414016 unmapped: 59760640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:31.952837+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:32.953019+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:33.953206+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:34.953419+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:35.953636+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:36.953815+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:37.953990+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:38.954207+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06eabc400 session 0x55a07085af00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:39.954397+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:40.954553+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290309906s of 11.381837845s, submitted: 6
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06ddea400 session 0x55a0709ee5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:41.954774+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:42.954990+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:43.955228+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:44.955535+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06eabc400 session 0x55a0706bed20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:45.955684+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4704432 data_alloc: 234881024 data_used: 31891456
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:46.955901+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06fabb000 session 0x55a06e6aba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:47.956257+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:48.956462+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a0706ef000 session 0x55a06e6450e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03c00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:49.956651+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:50.956822+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4703691 data_alloc: 234881024 data_used: 31887360
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.943298340s of 10.327015877s, submitted: 15
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37db000/0x0/0x1bfc00000, data 0x37b05f1/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:51.956952+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417193984 unmapped: 61980672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a072d03c00 session 0x55a06debb680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:52.957172+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417193984 unmapped: 61980672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:53.957441+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a06eabd400 session 0x55a07068d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:54.957737+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:55.957885+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4695700 data_alloc: 234881024 data_used: 31891456
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:56.958087+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a06ddea400 session 0x55a06de9af00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x358727b/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:57.958354+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:58.958590+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:59.958753+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 ms_handle_reset con 0x55a06eabc400 session 0x55a070ac30e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:00.958955+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4699921 data_alloc: 234881024 data_used: 31899648
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3588dba/0x37b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:01.959207+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.219140530s of 10.156544685s, submitted: 50
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:02.959424+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:03.959571+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a37d8000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,8])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:04.959839+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 ms_handle_reset con 0x55a06fabb000 session 0x55a070e01860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:05.960022+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521128 data_alloc: 218103808 data_used: 21794816
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:06.960256+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a45f0000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a45f0000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:07.960452+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:08.960652+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:09.960873+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:10.961066+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 63373312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525302 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:11.961284+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.447560787s of 10.297828674s, submitted: 36
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a45ec000/0x0/0x1bfc00000, data 0x2772a58/0x29a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:12.973881+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 426 ms_handle_reset con 0x55a0706ef000 session 0x55a070a2a1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:13.974053+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:14.974282+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:15.974471+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a45ec000/0x0/0x1bfc00000, data 0x2772a58/0x29a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525302 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:16.974654+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:17.974888+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:18.975173+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:19.975412+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:20.975623+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528276 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:21.975814+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:22.975995+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:23.976165+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 63348736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541569710s of 12.292300224s, submitted: 15
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a070deed20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:24.976337+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45ea000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:25.976500+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528196 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a06eb121e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:26.976692+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:27.976916+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 63266816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x27745a7/0x29a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:28.977059+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 63258624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fabb000 session 0x55a070a3c1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1b000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x27745a7/0x29a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fa1b000 session 0x55a070a3c1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:29.977176+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 63250432 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a070e11680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:30.977314+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a070e01860
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:31.977464+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:32.977592+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:33.977813+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:34.978017+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:35.978184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:36.978392+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:37.978574+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:38.978714+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:39.978867+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:40.979044+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:41.979178+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:42.979307+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:43.979456+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:44.979614+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:45.979880+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:46.980021+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:47.980163+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:48.980302+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a07068d2c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:49.980523+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fabb000 session 0x55a06debb680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070842800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a070842800 session 0x55a06e6450e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:50.980715+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:51.980879+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a06e6aba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:52.981149+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:53.981409+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [1])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:54.981668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 64258048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:55.981822+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713894 data_alloc: 234881024 data_used: 32583680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:56.981973+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:57.982153+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:58.982315+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:59.982443+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:00.982589+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:01.982771+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713894 data_alloc: 234881024 data_used: 32583680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:02.982955+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:03.983163+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:04.983373+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.079414368s of 40.885025024s, submitted: 18
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,10,17,3])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:05.983530+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 59334656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:06.983668+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815398 data_alloc: 234881024 data_used: 33091584
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:07.983863+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:08.984008+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:09.984163+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c42000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:10.984298+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:11.984450+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824148 data_alloc: 234881024 data_used: 32972800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:12.984613+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:13.984748+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c42000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:14.985019+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a0709ee5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc ms_handle_reset ms_handle_reset con 0x55a075699000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: get_auth_request con 0x55a070842800 auth_method 0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.103345871s of 10.333523750s, submitted: 100
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a070a2ba40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:15.985187+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:16.985391+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06dce2800 session 0x55a070b41680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:17.985569+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:18.985706+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:19.985930+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:20.986199+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:21.986400+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:22.986546+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:23.986782+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:24.986964+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:25.987141+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:26.987327+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:27.987468+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:28.987644+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:29.987768+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:30.987915+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.739142418s of 15.747713089s, submitted: 1
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a075699800 session 0x55a070ba63c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:31.988039+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816663 data_alloc: 234881024 data_used: 32956416
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db93000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:32.988179+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418881536 unmapped: 60293120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:33.988255+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:34.988420+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:35.988548+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:36.988711+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816183 data_alloc: 234881024 data_used: 33579008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:37.988897+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:38.989183+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:39.989333+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:40.989467+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:41.989611+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816183 data_alloc: 234881024 data_used: 33579008
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:42.989772+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.251277924s of 12.339883804s, submitted: 4
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:43.989912+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:44.990095+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:45.990276+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:46.990435+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824999 data_alloc: 234881024 data_used: 34119680
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:47.990606+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:48.990774+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:49.990906+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:50.991089+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:51.991266+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4828535 data_alloc: 234881024 data_used: 34115584
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:52.991389+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:53.991506+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:54.991699+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:55.991839+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.064384460s of 12.121880531s, submitted: 26
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:56.991951+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4836247 data_alloc: 234881024 data_used: 35917824
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074fdf000
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:57.992098+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a074fdf000 session 0x55a070a2a5a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:58.992282+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:59.992438+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a0752483c0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a070b40f00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c40000/0x0/0x1bfc00000, data 0x41c8223/0x434d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:00.992562+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c3f000/0x0/0x1bfc00000, data 0x41c8295/0x434f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:01.992698+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4867651 data_alloc: 234881024 data_used: 35926016
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c35000/0x0/0x1bfc00000, data 0x427e295/0x4359000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:02.992853+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:03.993036+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:04.993272+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:05.993446+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:06.993601+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4867651 data_alloc: 234881024 data_used: 35926016
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a075248d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:07.993781+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c35000/0x0/0x1bfc00000, data 0x427e295/0x4359000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a075699800 session 0x55a070b40b40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:08.993934+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a0708eb800 session 0x55a070b401e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.019895554s of 13.079443932s, submitted: 13
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd44780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:09.994069+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:10.994195+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419241984 unmapped: 59932672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:11.994437+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4870129 data_alloc: 234881024 data_used: 35942400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:12.994591+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:13.994777+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:14.994948+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:15.995183+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:16.995324+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4870129 data_alloc: 234881024 data_used: 35942400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:17.995462+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:18.995669+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:19.995817+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:20.995979+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:21.996180+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.991645813s of 13.058052063s, submitted: 1
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4887340 data_alloc: 234881024 data_used: 36724736
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:22.996330+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:23.996501+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:24.996658+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:25.996848+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:26.997055+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889596 data_alloc: 234881024 data_used: 36868096
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:27.997315+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:28.997528+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:29.997709+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:30.997986+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:31.998184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889628 data_alloc: 234881024 data_used: 36868096
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.439553261s of 10.471271515s, submitted: 10
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:32.998360+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:33.998563+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:34.998751+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:35.998911+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:36.999044+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892634 data_alloc: 234881024 data_used: 36872192
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:37.999192+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:38.999380+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:39.999475+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:40.999629+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:41.999779+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4894314 data_alloc: 234881024 data_used: 37044224
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:42.999920+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:44.000091+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:45.000305+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:46.000460+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:47.000627+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4894314 data_alloc: 234881024 data_used: 37044224
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:48.000766+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a07085a780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.469573975s of 15.737050056s, submitted: 14
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a07068c1e0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a075699800 session 0x55a070defe00
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:49.000929+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be8000/0x0/0x1bfc00000, data 0x42c4295/0x43a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:50.001227+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:51.001412+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:52.001591+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be8000/0x0/0x1bfc00000, data 0x42c4295/0x43a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892077 data_alloc: 234881024 data_used: 37048320
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070843400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a070843400 session 0x55a06f786d20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:53.001776+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a070e005a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a06da485a0
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:54.001906+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:55.002139+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a07068da40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:56.002359+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:57.002534+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4877579 data_alloc: 234881024 data_used: 36933632
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a2833000/0x0/0x1bfc00000, data 0x4115ed0/0x434a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:58.002726+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 ms_handle_reset con 0x55a07db93000 session 0x55a070ba8780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.269157410s of 10.401799202s, submitted: 32
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:59.002862+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:00.003018+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 ms_handle_reset con 0x55a075699800 session 0x55a06dd54780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:01.003259+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:02.003409+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4876004 data_alloc: 234881024 data_used: 36929536
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:03.003531+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a2834000/0x0/0x1bfc00000, data 0x4115ead/0x4349000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 429 handle_osd_map epochs [430,430], i have 430, src has [1,430]
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419397632 unmapped: 59777024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:04.003679+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419397632 unmapped: 59777024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd47a40
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:05.003891+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06eabc400 session 0x55a075248780
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2831000/0x0/0x1bfc00000, data 0x41179ec/0x434c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416833536 unmapped: 62341120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:06.004166+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416833536 unmapped: 62341120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:07.004294+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:08.004499+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:09.004665+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:10.004930+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:11.005277+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:12.005481+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:13.005696+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:14.005842+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06dde1400 session 0x55a070b41c20
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:15.006006+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:16.006207+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:17.006368+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:18.006489+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:19.006688+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:20.006841+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:21.007054+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:22.007213+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:23.007417+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:24.007607+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:25.007779+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:26.007973+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:27.008184+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:28.008369+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:29.008523+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:30.008667+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:31.008875+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416866304 unmapped: 62308352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:32.009053+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:33.009244+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:34.009370+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:35.009580+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:36.009722+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:37.009874+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:38.010150+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:39.010332+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:40.010468+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:41.010642+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:42.010804+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:43.010945+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:44.011159+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:45.011356+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 62283776 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:46.011547+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 62283776 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:47.011710+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416899072 unmapped: 62275584 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:48.011883+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:49.012075+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:50.012228+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:51.012541+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:52.012678+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:53.012811+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:54.012948+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:55.013164+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:56.013333+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:57.013458+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:58.013645+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:59.013771+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:00.013950+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:01.014074+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:02.014229+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:03.014452+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416915456 unmapped: 62259200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:04.014620+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:05.014759+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:06.014897+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:07.015053+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:08.015237+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:09.015476+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:10.015653+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:11.015809+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:12.015965+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:13.016130+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:14.016318+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:15.016535+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:16.016692+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:17.016832+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:18.016977+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:19.017091+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:20.017239+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:21.017375+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:22.017582+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:23.017732+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:24.017896+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:25.018073+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:26.018209+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:27.018399+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:28.018529+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:29.018665+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:30.018762+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:31.018863+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:32.019033+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:33.019160+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:34.019298+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:35.019522+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416980992 unmapped: 62193664 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:36.019791+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:37.019955+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:38.020285+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:39.020422+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:40.020551+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:41.020678+0000)
Oct 02 13:34:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:42.020826+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:43.021154+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:44.021341+0000)
Oct 02 13:34:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252430662' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416997376 unmapped: 62177280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:45.021591+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}'
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416825344 unmapped: 62349312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:46.021708+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416612352 unmapped: 62562304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:47.021832+0000)
Oct 02 13:34:17 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 62521344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:34:17 compute-0 ceph-osd[84115]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:34:17 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:34:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46148 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944900759' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46978 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46160 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.46933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.46115 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.37185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.46130 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3429833707' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/252430662' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1208016185' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/113161508' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.46148 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.37194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2595115548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/944900759' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/874236552' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37206 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46993 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704768393' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:18.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:18 compute-0 nova_compute[256940]: 2025-10-02 13:34:18.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46178 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:34:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46193 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37248 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 crontab[424578]: (root) LIST (root)
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47017 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.46978 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.46160 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.37206 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2588293621' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.46993 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/704768393' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1184456124' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.46178 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.37224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.47005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1278800428' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/262734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/584366054' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/585650995' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46205 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37260 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47029 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:19.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46220 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:34:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860676982' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-0 nova_compute[256940]: 2025-10-02 13:34:20.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:34:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977002523' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:20 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:20.592+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.46193 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.37248 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.47017 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2205226372' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.46205 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.37260 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.47029 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.46220 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/847947838' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2860676982' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3914800062' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2831220437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/977002523' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47053 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46241 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:20.694+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:34:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903563452' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:20 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47068 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807069971' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841789548' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47080 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669040533' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.47041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.37284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.47053 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.46241 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/388357450' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3903563452' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1738238686' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.47068 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/807069971' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1289503258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2579968663' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3841789548' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1520222773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2065690386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:34:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031844630' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:21.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374216749' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434328923' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47101 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:22 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:34:22.245+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:34:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591116761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148650609' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.47080 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1669040533' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3031844630' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3386889593' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1792309887' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1117553503' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2374216749' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1434328923' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1179389264' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3583909278' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2658690698' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2591116761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/148650609' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/209078430' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2701917429' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 13:34:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:34:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/856630200' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:22 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:23 compute-0 systemd[1]: Started Hostname Service.
Oct 02 13:34:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4127737275' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128338992' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252421204' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.47101 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/564963111' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2972436643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/856630200' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2796163003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4127737275' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/990446423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3813509163' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2317346084' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2128338992' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3027924521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3780658384' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/252421204' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3331320437' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1810802137' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46361 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:23 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37386 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:23 compute-0 nova_compute[256940]: 2025-10-02 13:34:23.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:34:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619739455' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46367 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 sudo[425214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:24 compute-0 sudo[425214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:24 compute-0 sudo[425214]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:24 compute-0 sudo[425248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46373 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 sudo[425248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:24 compute-0 sudo[425248]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37398 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46382 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.46361 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.37386 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2877372303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3619739455' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.46367 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3258619475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/11611765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3605085164' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2676800153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46403 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:34:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473643529' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 nova_compute[256940]: 2025-10-02 13:34:25.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47206 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:34:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145739425' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47212 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.46373 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.37398 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.47167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.46382 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.37434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.46403 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1683149492' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.37446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1263180927' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1049783261' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2473643529' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3341206882' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1875449801' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4145739425' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37464 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47218 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285745496' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47233 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37479 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46466 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044801920' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:34:26.532 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:34:26.533 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:34:26.533 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37491 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 sudo[425587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:26 compute-0 sudo[425587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-0 sudo[425587]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-0 sudo[425631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.46424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.37452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.47206 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.46445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.47212 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.37464 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.47218 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.46457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3910303546' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/285745496' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1068626528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2044801920' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-0 sudo[425631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-0 sudo[425631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-0 podman[425614]: 2025-10-02 13:34:26.866203329 +0000 UTC m=+0.099903512 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:34:26 compute-0 podman[425613]: 2025-10-02 13:34:26.872956922 +0000 UTC m=+0.107060615 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:34:26 compute-0 sudo[425702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:26 compute-0 sudo[425702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-0 sudo[425702]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:26 compute-0 sudo[425732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:34:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 sudo[425732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:34:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681771291' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47290 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 podman[425885]: 2025-10-02 13:34:27.549543253 +0000 UTC m=+0.059108256 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:34:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46517 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 podman[425885]: 2025-10-02 13:34:27.690732072 +0000 UTC m=+0.200297095 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:34:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37533 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47302 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.47233 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.37479 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.46466 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.37491 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1392752194' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.47263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4044084775' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2709166255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/681771291' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2576054269' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584179053' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:28 compute-0 podman[426064]: 2025-10-02 13:34:28.203010002 +0000 UTC m=+0.063429907 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:34:28 compute-0 podman[426064]: 2025-10-02 13:34:28.214391294 +0000 UTC m=+0.074811189 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47317 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:28 compute-0 podman[426164]: 2025-10-02 13:34:28.471899004 +0000 UTC m=+0.078714558 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived)
Oct 02 13:34:28 compute-0 podman[426164]: 2025-10-02 13:34:28.482941617 +0000 UTC m=+0.089757181 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, vcs-type=git, name=keepalived, release=1793, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875532775' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:28 compute-0 sudo[425732]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:34:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:28.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:28 compute-0 sudo[426241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:28 compute-0 sudo[426241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-0 sudo[426241]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-0 sudo[426269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:28 compute-0 sudo[426269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-0 sudo[426269]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-0 nova_compute[256940]: 2025-10-02 13:34:28.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:28 compute-0 sudo[426316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:28 compute-0 sudo[426316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-0 sudo[426316]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:34:28
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'vms']
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:34:28 compute-0 sudo[426342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:34:28 compute-0 sudo[426342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.47290 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.46517 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.37533 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.47302 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1619898129' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2584179053' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1544442175' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1130845526' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1875532775' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3337079194' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:29 compute-0 sudo[426342]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a3a5949c-41af-41ef-8e96-a1eb8d71c32f does not exist
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d122ea8a-f329-4c4a-951b-357a54708988 does not exist
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e1780c82-5754-4aa3-af31-58ea0106e6f0 does not exist
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.483989) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069484061, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 810, "num_deletes": 251, "total_data_size": 834399, "memory_usage": 849088, "flush_reason": "Manual Compaction"}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069498233, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 825939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82216, "largest_seqno": 83025, "table_properties": {"data_size": 821354, "index_size": 1980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13647, "raw_average_key_size": 22, "raw_value_size": 811086, "raw_average_value_size": 1320, "num_data_blocks": 84, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412039, "oldest_key_time": 1759412039, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 14280 microseconds, and 3439 cpu microseconds.
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.498277) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 825939 bytes OK
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.498298) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.500193) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.500221) EVENT_LOG_v1 {"time_micros": 1759412069500214, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.500241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 829654, prev total WAL file size 829654, number of live WAL files 2.
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.500692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(806KB)], [188(11MB)]
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069500732, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 13275346, "oldest_snapshot_seqno": -1}
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47359 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:34:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791472702' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:29 compute-0 sudo[426452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:29 compute-0 sudo[426452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:29 compute-0 sudo[426452]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 10978 keys, 11368662 bytes, temperature: kUnknown
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069590242, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11368662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11301463, "index_size": 38670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290771, "raw_average_key_size": 26, "raw_value_size": 11113420, "raw_average_value_size": 1012, "num_data_blocks": 1453, "num_entries": 10978, "num_filter_entries": 10978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.590523) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11368662 bytes
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.592051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.2 rd, 126.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(29.8) write-amplify(13.8) OK, records in: 11494, records dropped: 516 output_compression: NoCompression
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.592069) EVENT_LOG_v1 {"time_micros": 1759412069592060, "job": 118, "event": "compaction_finished", "compaction_time_micros": 89580, "compaction_time_cpu_micros": 26946, "output_level": 6, "num_output_files": 1, "total_output_size": 11368662, "num_input_records": 11494, "num_output_records": 10978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069592290, "job": 118, "event": "table_file_deletion", "file_number": 190}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069594126, "job": 118, "event": "table_file_deletion", "file_number": 188}
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.500626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.594180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.594187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.594188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.594190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:34:29.594193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-0 sudo[426487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:29 compute-0 sudo[426487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:29 compute-0 sudo[426487]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:29 compute-0 sudo[426518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:29 compute-0 sudo[426518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:29 compute-0 sudo[426518]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:29 compute-0 sudo[426567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:34:29 compute-0 sudo[426567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46589 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.47317 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2241500397' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1577056816' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2024816665' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/791472702' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4115939123' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.106246723 +0000 UTC m=+0.060984554 container create 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:34:30 compute-0 systemd[1]: Started libpod-conmon-5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a.scope.
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.071341038 +0000 UTC m=+0.026078889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.250319085 +0000 UTC m=+0.205056916 container init 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.262229621 +0000 UTC m=+0.216967452 container start 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.269181349 +0000 UTC m=+0.223919200 container attach 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:34:30 compute-0 conmon[426697]: conmon 5efd528296788916b135 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a.scope/container/memory.events
Oct 02 13:34:30 compute-0 focused_neumann[426697]: 167 167
Oct 02 13:34:30 compute-0 systemd[1]: libpod-5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a.scope: Deactivated successfully.
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.274338111 +0000 UTC m=+0.229075942 container died 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:34:30 compute-0 nova_compute[256940]: 2025-10-02 13:34:30.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bce77d26ef4cc48d65032ac58d659cf59c62c9737d1c1dd6b9bd5cca07f1a25-merged.mount: Deactivated successfully.
Oct 02 13:34:30 compute-0 podman[426650]: 2025-10-02 13:34:30.334045031 +0000 UTC m=+0.288782862 container remove 5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:34:30 compute-0 systemd[1]: libpod-conmon-5efd528296788916b135fe5d3a71b3bbe3533b85af454c65c54a03d923f8355a.scope: Deactivated successfully.
Oct 02 13:34:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:34:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79628961' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:30 compute-0 podman[426744]: 2025-10-02 13:34:30.533272018 +0000 UTC m=+0.063372346 container create 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:34:30 compute-0 systemd[1]: Started libpod-conmon-12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282.scope.
Oct 02 13:34:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:30 compute-0 podman[426744]: 2025-10-02 13:34:30.515663916 +0000 UTC m=+0.045764264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:30 compute-0 podman[426744]: 2025-10-02 13:34:30.647009463 +0000 UTC m=+0.177109801 container init 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:34:30 compute-0 podman[426744]: 2025-10-02 13:34:30.658562909 +0000 UTC m=+0.188663237 container start 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:34:30 compute-0 podman[426744]: 2025-10-02 13:34:30.665289461 +0000 UTC m=+0.195389789 container attach 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:34:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:30.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:30 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:31 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46610 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.47359 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.46589 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.37590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2142265629' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/79628961' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3804244376' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4641624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2364329330' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3245435999' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37605 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 epic_hertz[426782]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:34:31 compute-0 epic_hertz[426782]: --> relative data size: 1.0
Oct 02 13:34:31 compute-0 epic_hertz[426782]: --> All data devices are unavailable
Oct 02 13:34:31 compute-0 systemd[1]: libpod-12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282.scope: Deactivated successfully.
Oct 02 13:34:31 compute-0 podman[426744]: 2025-10-02 13:34:31.494047453 +0000 UTC m=+1.024147781 container died 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:34:31 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:34:31 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819595688' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:31 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:31.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e458e994d3de15248b1536bacbebceba7c12e44df40085b80343d9ddc0f830-merged.mount: Deactivated successfully.
Oct 02 13:34:32 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37620 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 podman[426744]: 2025-10-02 13:34:32.208181947 +0000 UTC m=+1.738282275 container remove 12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:34:32 compute-0 systemd[1]: libpod-conmon-12837db63491b0836107fb5a02e8c10222df196b1d4d21b6624e4169200ba282.scope: Deactivated successfully.
Oct 02 13:34:32 compute-0 sudo[426567]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:32 compute-0 sudo[427064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:32 compute-0 sudo[427064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:32 compute-0 sudo[427064]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:32 compute-0 ceph-mon[73668]: pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.46610 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1326738878' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.37605 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3819595688' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.47389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2703532931' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.37620 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1448114110' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:32 compute-0 sudo[427108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:32 compute-0 sudo[427108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:32 compute-0 sudo[427108]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:32 compute-0 sudo[427164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:32 compute-0 sudo[427164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:32 compute-0 sudo[427164]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:32 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 sudo[427204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:34:32 compute-0 sudo[427204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:32 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37626 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:32 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46634 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-0 podman[427354]: 2025-10-02 13:34:32.829394298 +0000 UTC m=+0.021745118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:32 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:34:32 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175671781' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:32 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.082596578 +0000 UTC m=+0.274947388 container create eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:34:33 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47410 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 systemd[1]: Started libpod-conmon-eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b.scope.
Oct 02 13:34:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 13:34:33 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428714164' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.39520139 +0000 UTC m=+0.587552220 container init eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.403473882 +0000 UTC m=+0.595824682 container start eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:34:33 compute-0 eloquent_bose[427504]: 167 167
Oct 02 13:34:33 compute-0 systemd[1]: libpod-eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b.scope: Deactivated successfully.
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.423029873 +0000 UTC m=+0.615380693 container attach eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.423348641 +0000 UTC m=+0.615699431 container died eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.46625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.37626 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3814933560' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.46634 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3175671781' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.47410 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4169906255' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:33 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/428714164' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f0adf1da5470ea2f5c316df23466163d12fc8c39348bd3af792a331343566f-merged.mount: Deactivated successfully.
Oct 02 13:34:33 compute-0 podman[427354]: 2025-10-02 13:34:33.64761418 +0000 UTC m=+0.839964980 container remove eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:34:33 compute-0 systemd[1]: libpod-conmon-eb0e5ac9f2e32e4508eb8eb6ba67962e2c31517c42e4924d1d882d7701ea511b.scope: Deactivated successfully.
Oct 02 13:34:33 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37647 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-0 nova_compute[256940]: 2025-10-02 13:34:33.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:33 compute-0 podman[427646]: 2025-10-02 13:34:33.867572927 +0000 UTC m=+0.073727350 container create 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:34:33 compute-0 systemd[1]: Started libpod-conmon-704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e.scope.
Oct 02 13:34:33 compute-0 podman[427646]: 2025-10-02 13:34:33.819872365 +0000 UTC m=+0.026026808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f1d41a0c885e7fd7a72cdcd1f6d2d00bdce2fb3f4f825a8dc7dbabefec3f60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f1d41a0c885e7fd7a72cdcd1f6d2d00bdce2fb3f4f825a8dc7dbabefec3f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f1d41a0c885e7fd7a72cdcd1f6d2d00bdce2fb3f4f825a8dc7dbabefec3f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f1d41a0c885e7fd7a72cdcd1f6d2d00bdce2fb3f4f825a8dc7dbabefec3f60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:33 compute-0 podman[427646]: 2025-10-02 13:34:33.997835826 +0000 UTC m=+0.203990269 container init 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:34:34 compute-0 podman[427646]: 2025-10-02 13:34:34.007469913 +0000 UTC m=+0.213624336 container start 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:34:34 compute-0 podman[427646]: 2025-10-02 13:34:34.027169688 +0000 UTC m=+0.233324111 container attach 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37653 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 13:34:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519601793' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47425 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46658 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3043116930' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2495208596' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mon[73668]: from='client.37647 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mon[73668]: from='client.37653 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2519601793' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:34.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 13:34:34 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012681020' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:34 compute-0 nice_jemison[427705]: {
Oct 02 13:34:34 compute-0 nice_jemison[427705]:     "1": [
Oct 02 13:34:34 compute-0 nice_jemison[427705]:         {
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "devices": [
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "/dev/loop3"
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             ],
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "lv_name": "ceph_lv0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "lv_size": "7511998464",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "name": "ceph_lv0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "tags": {
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.cluster_name": "ceph",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.crush_device_class": "",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.encrypted": "0",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.osd_id": "1",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.type": "block",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:                 "ceph.vdo": "0"
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             },
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "type": "block",
Oct 02 13:34:34 compute-0 nice_jemison[427705]:             "vg_name": "ceph_vg0"
Oct 02 13:34:34 compute-0 nice_jemison[427705]:         }
Oct 02 13:34:34 compute-0 nice_jemison[427705]:     ]
Oct 02 13:34:34 compute-0 nice_jemison[427705]: }
Oct 02 13:34:34 compute-0 systemd[1]: libpod-704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e.scope: Deactivated successfully.
Oct 02 13:34:34 compute-0 podman[427646]: 2025-10-02 13:34:34.900578704 +0000 UTC m=+1.106733127 container died 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-88f1d41a0c885e7fd7a72cdcd1f6d2d00bdce2fb3f4f825a8dc7dbabefec3f60-merged.mount: Deactivated successfully.
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46664 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:34 compute-0 podman[427646]: 2025-10-02 13:34:34.968278799 +0000 UTC m=+1.174433222 container remove 704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:34:34 compute-0 systemd[1]: libpod-conmon-704c64e1703310012c16405be361b038f015b3c3a47c1479519f57b761c83e6e.scope: Deactivated successfully.
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:35 compute-0 sudo[427204]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:35 compute-0 sudo[428087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:35 compute-0 sudo[428087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:35 compute-0 sudo[428087]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:35 compute-0 sudo[428145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:35 compute-0 sudo[428145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:35 compute-0 sudo[428145]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:35 compute-0 sudo[428193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:35 compute-0 sudo[428193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:35 compute-0 sudo[428193]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:35 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37674 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 nova_compute[256940]: 2025-10-02 13:34:35.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:35 compute-0 sudo[428230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:34:35 compute-0 sudo[428230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:35 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37683 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 ovs-appctl[428392]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.663692343 +0000 UTC m=+0.047913489 container create d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.47425 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.46658 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3012681020' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.46664 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/159196570' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:35 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3885141461' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:35 compute-0 ovs-appctl[428402]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-0 ovs-appctl[428409]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-0 systemd[1]: Started libpod-conmon-d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6.scope.
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.639440531 +0000 UTC m=+0.023661697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.760832493 +0000 UTC m=+0.145053649 container init d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.773559579 +0000 UTC m=+0.157780715 container start d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:34:35 compute-0 systemd[1]: libpod-d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6.scope: Deactivated successfully.
Oct 02 13:34:35 compute-0 blissful_banzai[428413]: 167 167
Oct 02 13:34:35 compute-0 conmon[428413]: conmon d469898d222c2cd0dd97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6.scope/container/memory.events
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.779985654 +0000 UTC m=+0.164206800 container attach d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.780526368 +0000 UTC m=+0.164747524 container died d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb88c73418db138fb5e31ada71bd24820831779952734897d6fc887192bc3740-merged.mount: Deactivated successfully.
Oct 02 13:34:35 compute-0 podman[428379]: 2025-10-02 13:34:35.844439096 +0000 UTC m=+0.228660252 container remove d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_banzai, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:34:35 compute-0 systemd[1]: libpod-conmon-d469898d222c2cd0dd97e7eb89ef3ae619076058fe85a04c1fffddf17a9060c6.scope: Deactivated successfully.
Oct 02 13:34:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46694 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:36 compute-0 podman[428504]: 2025-10-02 13:34:36.058424539 +0000 UTC m=+0.055092193 container create b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:34:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058153738' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:36 compute-0 systemd[1]: Started libpod-conmon-b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d.scope.
Oct 02 13:34:36 compute-0 podman[428504]: 2025-10-02 13:34:36.036485997 +0000 UTC m=+0.033153681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:34:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87abdad392d7c64d2a430223830a545405cb2e917f67f8d115431d4fb35b51d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87abdad392d7c64d2a430223830a545405cb2e917f67f8d115431d4fb35b51d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87abdad392d7c64d2a430223830a545405cb2e917f67f8d115431d4fb35b51d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87abdad392d7c64d2a430223830a545405cb2e917f67f8d115431d4fb35b51d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47452 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:36 compute-0 podman[428504]: 2025-10-02 13:34:36.173481048 +0000 UTC m=+0.170148722 container init b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:34:36 compute-0 podman[428504]: 2025-10-02 13:34:36.181567886 +0000 UTC m=+0.178235540 container start b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:34:36 compute-0 podman[428504]: 2025-10-02 13:34:36.186513292 +0000 UTC m=+0.183180946 container attach b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46700 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47461 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 13:34:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818688740' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:36.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:36 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457941525' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:36 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:37 compute-0 trusting_turing[428543]: {
Oct 02 13:34:37 compute-0 trusting_turing[428543]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:34:37 compute-0 trusting_turing[428543]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:34:37 compute-0 trusting_turing[428543]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:34:37 compute-0 trusting_turing[428543]:         "osd_id": 1,
Oct 02 13:34:37 compute-0 trusting_turing[428543]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:34:37 compute-0 trusting_turing[428543]:         "type": "bluestore"
Oct 02 13:34:37 compute-0 trusting_turing[428543]:     }
Oct 02 13:34:37 compute-0 trusting_turing[428543]: }
Oct 02 13:34:37 compute-0 systemd[1]: libpod-b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d.scope: Deactivated successfully.
Oct 02 13:34:37 compute-0 podman[428504]: 2025-10-02 13:34:37.09704108 +0000 UTC m=+1.093708744 container died b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.37674 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.37683 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2244708378' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2454552929' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.46694 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3058153738' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.47452 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3818688740' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37710 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:37 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4261411917' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:37.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:38 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46742 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:38 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47473 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:38 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47479 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:38.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 13:34:38 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166609430' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:38 compute-0 nova_compute[256940]: 2025-10-02 13:34:38.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:38 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/662320132' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462444664' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.46700 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.47461 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3348321435' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2985896459' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3457941525' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1199591392' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2982575882' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/534626633' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:39 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4261411917' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b87abdad392d7c64d2a430223830a545405cb2e917f67f8d115431d4fb35b51d-merged.mount: Deactivated successfully.
Oct 02 13:34:40 compute-0 nova_compute[256940]: 2025-10-02 13:34:40.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:40 compute-0 podman[428504]: 2025-10-02 13:34:40.286920768 +0000 UTC m=+4.283588422 container remove b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_turing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:34:40 compute-0 sudo[428230]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:34:40 compute-0 systemd[1]: libpod-conmon-b0d6760eb2c411424c73f05b68fd53d8b934bf87764b2d77a3bf0bf8be42141d.scope: Deactivated successfully.
Oct 02 13:34:40 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37740 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:40 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47503 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:40.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:40 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710434068' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.37710 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.46742 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.47473 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3211568006' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.47479 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2166609430' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1232340834' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1102273128' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/662320132' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1855449270' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2047886250' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/462444664' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1969284233' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c94d767e-80fd-4b4f-a4b6-87042b387f23 does not exist
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1ad8ec7c-2490-462f-ae23-4bf5312f49f4 does not exist
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f87c313e-11c1-4c5f-b669-8fbf53a3374c does not exist
Oct 02 13:34:41 compute-0 sudo[429234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:41 compute-0 sudo[429234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:41 compute-0 sudo[429234]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:41 compute-0 sudo[429270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:34:41 compute-0 sudo[429270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:41 compute-0 sudo[429270]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:41 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:41 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285501038' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46778 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:42 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37764 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055223004' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:42.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.37740 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.47503 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/369431865' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/710434068' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3643781993' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4127268521' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/285501038' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.46778 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/594778116' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/753413889' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1645206274' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47533 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:43 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37776 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46796 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37782 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-0 podman[429743]: 2025-10-02 13:34:43.396879837 +0000 UTC m=+0.062442911 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:34:43 compute-0 podman[429744]: 2025-10-02 13:34:43.429992086 +0000 UTC m=+0.098515246 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:34:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:43 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726113376' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-0 nova_compute[256940]: 2025-10-02 13:34:43.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46811 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.37764 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3055223004' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3820141928' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.47533 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.37776 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2475723777' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/957710283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3726113376' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:44 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695014501' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47554 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 sudo[429985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:44 compute-0 sudo[429985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:44 compute-0 sudo[429985]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:44 compute-0 sudo[430013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:44 compute-0 sudo[430013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:44 compute-0 sudo[430013]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46817 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37803 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:44.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37809 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47566 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.46796 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.37782 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4235838842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.46811 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2695014501' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2540540668' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2314818960' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 nova_compute[256940]: 2025-10-02 13:34:45.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426669717' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47581 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46847 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:45 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819816859' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46856 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:45 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37833 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.269 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.269 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.269 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.269 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.270 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.47554 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.46817 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.37803 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.37809 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.47566 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1588464893' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3215334327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1426669717' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1819816859' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2736487162' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4129652381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2347324164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.37839 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47614 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392394444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.733 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.881 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.882 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.882 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:46 compute-0 nova_compute[256940]: 2025-10-02 13:34:46.882 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47620 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46877 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217139599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.47581 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.46847 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.46856 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.37833 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/10710595' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.37839 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.47614 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3352078029' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/392394444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.47620 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.46877 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/217139599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mon[73668]: pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1946319729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.46886 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 nova_compute[256940]: 2025-10-02 13:34:47.386 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:34:47 compute-0 nova_compute[256940]: 2025-10-02 13:34:47.387 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:34:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682664001' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-0 nova_compute[256940]: 2025-10-02 13:34:47.802 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:47.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:48 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47638 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625609811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.264 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.272 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.334 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.336 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.336 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:48 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47647 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.46886 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/682664001' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2707107688' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2260468465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.47638 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1666422989' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3625609811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:34:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:48.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:34:48 compute-0 nova_compute[256940]: 2025-10-02 13:34:48.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:48 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:49 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:34:49 compute-0 ceph-mon[73668]: from='client.47647 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/229797035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-0 ceph-mon[73668]: pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/241776807' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:50 compute-0 nova_compute[256940]: 2025-10-02 13:34:50.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:51.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:52 compute-0 ceph-mon[73668]: pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1433318012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:52.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:52 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 13:34:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:53 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 13:34:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 13:34:53 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 13:34:53 compute-0 systemd[1]: Started Hostname Service.
Oct 02 13:34:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3555859192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:53 compute-0 nova_compute[256940]: 2025-10-02 13:34:53.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:54 compute-0 ceph-mon[73668]: pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:54.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:55 compute-0 nova_compute[256940]: 2025-10-02 13:34:55.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:55 compute-0 nova_compute[256940]: 2025-10-02 13:34:55.336 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:55 compute-0 nova_compute[256940]: 2025-10-02 13:34:55.337 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:55 compute-0 nova_compute[256940]: 2025-10-02 13:34:55.337 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:55 compute-0 nova_compute[256940]: 2025-10-02 13:34:55.337 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:34:55 compute-0 ceph-mon[73668]: pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:34:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:34:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:34:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:34:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:57 compute-0 podman[430772]: 2025-10-02 13:34:57.393849606 +0000 UTC m=+0.066339340 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 13:34:57 compute-0 podman[430773]: 2025-10-02 13:34:57.416985269 +0000 UTC m=+0.088729764 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Oct 02 13:34:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:58 compute-0 ceph-mon[73668]: pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:34:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:34:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:58.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:58 compute-0 nova_compute[256940]: 2025-10-02 13:34:58.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:34:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:00 compute-0 ceph-mon[73668]: pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:00 compute-0 nova_compute[256940]: 2025-10-02 13:35:00.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:01 compute-0 nova_compute[256940]: 2025-10-02 13:35:01.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:01 compute-0 ceph-mon[73668]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:02 compute-0 nova_compute[256940]: 2025-10-02 13:35:02.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:02 compute-0 nova_compute[256940]: 2025-10-02 13:35:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:35:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:02.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:35:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:03 compute-0 nova_compute[256940]: 2025-10-02 13:35:03.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:04 compute-0 ceph-mon[73668]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:04 compute-0 sudo[430814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:04 compute-0 sudo[430814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:04 compute-0 sudo[430814]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:04 compute-0 sudo[430839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:04 compute-0 sudo[430839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:04 compute-0 sudo[430839]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:04.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:05 compute-0 nova_compute[256940]: 2025-10-02 13:35:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:05 compute-0 nova_compute[256940]: 2025-10-02 13:35:05.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:35:05 compute-0 nova_compute[256940]: 2025-10-02 13:35:05.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:35:05 compute-0 nova_compute[256940]: 2025-10-02 13:35:05.231 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:35:05 compute-0 nova_compute[256940]: 2025-10-02 13:35:05.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:06 compute-0 ceph-mon[73668]: pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:35:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:35:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:06.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:07 compute-0 ceph-mon[73668]: pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:35:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:35:08 compute-0 nova_compute[256940]: 2025-10-02 13:35:08.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:09 compute-0 nova_compute[256940]: 2025-10-02 13:35:09.226 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:10 compute-0 ceph-mon[73668]: pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:10 compute-0 nova_compute[256940]: 2025-10-02 13:35:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:11 compute-0 ceph-mon[73668]: pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:11.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:13 compute-0 nova_compute[256940]: 2025-10-02 13:35:13.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:13 compute-0 podman[430869]: 2025-10-02 13:35:13.931238028 +0000 UTC m=+0.093230611 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:35:13 compute-0 podman[430870]: 2025-10-02 13:35:13.972579667 +0000 UTC m=+0.131831580 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:35:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:13.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:14 compute-0 ceph-mon[73668]: pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:15 compute-0 nova_compute[256940]: 2025-10-02 13:35:15.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:15 compute-0 nova_compute[256940]: 2025-10-02 13:35:15.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:15 compute-0 ceph-mon[73668]: pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:16.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:18 compute-0 ceph-mon[73668]: pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:18 compute-0 nova_compute[256940]: 2025-10-02 13:35:18.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:20 compute-0 ceph-mon[73668]: pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:20 compute-0 nova_compute[256940]: 2025-10-02 13:35:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:20.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:21 compute-0 ceph-mon[73668]: pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:22.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:22.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:23 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 13:35:23 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 13:35:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:23 compute-0 nova_compute[256940]: 2025-10-02 13:35:23.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:24.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:24 compute-0 ceph-mon[73668]: pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:24 compute-0 sudo[430918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:24 compute-0 sudo[430918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:24 compute-0 sudo[430918]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:24 compute-0 sudo[430943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:24 compute-0 sudo[430943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:24 compute-0 sudo[430943]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:25 compute-0 nova_compute[256940]: 2025-10-02 13:35:25.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:26.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:26 compute-0 ceph-mon[73668]: pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:35:26.533 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:35:26.535 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:35:26.535 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:28.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:28 compute-0 ceph-mon[73668]: pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:28 compute-0 podman[430970]: 2025-10-02 13:35:28.401390094 +0000 UTC m=+0.075755972 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:35:28 compute-0 podman[430971]: 2025-10-02 13:35:28.401908078 +0000 UTC m=+0.074600524 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:28 compute-0 nova_compute[256940]: 2025-10-02 13:35:28.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:35:28
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Oct 02 13:35:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:35:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:29 compute-0 ceph-mon[73668]: pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:35:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:35:30 compute-0 nova_compute[256940]: 2025-10-02 13:35:30.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:32.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:32 compute-0 ceph-mon[73668]: pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:32.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:33 compute-0 nova_compute[256940]: 2025-10-02 13:35:33.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:34.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:34 compute-0 ceph-mon[73668]: pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:35 compute-0 ceph-mon[73668]: pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:35 compute-0 nova_compute[256940]: 2025-10-02 13:35:35.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:35:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:36.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:35:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:36.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:38 compute-0 ceph-mon[73668]: pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:38 compute-0 nova_compute[256940]: 2025-10-02 13:35:38.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:39 compute-0 ceph-mon[73668]: pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:35:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 59K writes, 231K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 59K writes, 21K syncs, 2.83 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1753 writes, 5167 keys, 1753 commit groups, 1.0 writes per commit group, ingest: 3.97 MB, 0.01 MB/s
                                           Interval WAL: 1753 writes, 742 syncs, 2.36 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:35:39 compute-0 sudo[422550]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:39 compute-0 sshd-session[422549]: Received disconnect from 192.168.122.10 port 42934:11: disconnected by user
Oct 02 13:35:39 compute-0 sshd-session[422549]: Disconnected from user zuul 192.168.122.10 port 42934
Oct 02 13:35:39 compute-0 sshd-session[422546]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:39 compute-0 systemd[1]: session-75.scope: Deactivated successfully.
Oct 02 13:35:39 compute-0 systemd[1]: session-75.scope: Consumed 2min 51.857s CPU time, 960.6M memory peak, read 347.8M from disk, written 397.1M to disk.
Oct 02 13:35:39 compute-0 systemd-logind[820]: Session 75 logged out. Waiting for processes to exit.
Oct 02 13:35:39 compute-0 systemd-logind[820]: Removed session 75.
Oct 02 13:35:39 compute-0 sshd-session[431014]: Accepted publickey for zuul from 192.168.122.10 port 40502 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:35:39 compute-0 systemd-logind[820]: New session 76 of user zuul.
Oct 02 13:35:39 compute-0 systemd[1]: Started Session 76 of User zuul.
Oct 02 13:35:39 compute-0 sshd-session[431014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:35:39 compute-0 sudo[431018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-02-rlgjnxf.tar.xz
Oct 02 13:35:39 compute-0 sudo[431018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:35:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:40.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:40 compute-0 sudo[431018]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:40 compute-0 sshd-session[431017]: Received disconnect from 192.168.122.10 port 40502:11: disconnected by user
Oct 02 13:35:40 compute-0 sshd-session[431017]: Disconnected from user zuul 192.168.122.10 port 40502
Oct 02 13:35:40 compute-0 sshd-session[431014]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:40 compute-0 systemd[1]: session-76.scope: Deactivated successfully.
Oct 02 13:35:40 compute-0 systemd-logind[820]: Session 76 logged out. Waiting for processes to exit.
Oct 02 13:35:40 compute-0 systemd-logind[820]: Removed session 76.
Oct 02 13:35:40 compute-0 sshd-session[431043]: Accepted publickey for zuul from 192.168.122.10 port 40508 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:35:40 compute-0 systemd-logind[820]: New session 77 of user zuul.
Oct 02 13:35:40 compute-0 systemd[1]: Started Session 77 of User zuul.
Oct 02 13:35:40 compute-0 sshd-session[431043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:35:40 compute-0 nova_compute[256940]: 2025-10-02 13:35:40.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:40 compute-0 sudo[431047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 13:35:40 compute-0 sudo[431047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:35:40 compute-0 sudo[431047]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:40 compute-0 sshd-session[431046]: Received disconnect from 192.168.122.10 port 40508:11: disconnected by user
Oct 02 13:35:40 compute-0 sshd-session[431046]: Disconnected from user zuul 192.168.122.10 port 40508
Oct 02 13:35:40 compute-0 sshd-session[431043]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:40 compute-0 systemd[1]: session-77.scope: Deactivated successfully.
Oct 02 13:35:40 compute-0 systemd-logind[820]: Session 77 logged out. Waiting for processes to exit.
Oct 02 13:35:40 compute-0 systemd-logind[820]: Removed session 77.
Oct 02 13:35:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:40.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:35:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:35:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:42.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:42 compute-0 ceph-mon[73668]: pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:42 compute-0 sudo[431073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:42 compute-0 sudo[431073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-0 sudo[431073]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-0 sudo[431098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:35:42 compute-0 sudo[431098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-0 sudo[431098]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-0 sudo[431123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:42 compute-0 sudo[431123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-0 sudo[431123]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-0 sudo[431148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:35:42 compute-0 sudo[431148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:42.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:42 compute-0 sudo[431148]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 779c61d8-b7bf-4b91-aa1d-8c0d8a460432 does not exist
Oct 02 13:35:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0e948b4b-3729-4571-af0b-04f93bd393f0 does not exist
Oct 02 13:35:42 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 381f5630-7538-46b5-baaf-5956ef048094 does not exist
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:35:42 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:35:42 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:43 compute-0 sudo[431205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:43 compute-0 sudo[431205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:43 compute-0 sudo[431205]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:35:43 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:43 compute-0 sudo[431230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:35:43 compute-0 sudo[431230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:43 compute-0 sudo[431230]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:43 compute-0 sudo[431255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:43 compute-0 sudo[431255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:43 compute-0 sudo[431255]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:43 compute-0 sudo[431280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:35:43 compute-0 sudo[431280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.699308257 +0000 UTC m=+0.053646046 container create 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 13:35:43 compute-0 systemd[1]: Started libpod-conmon-183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019.scope.
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.671028623 +0000 UTC m=+0.025366482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.81293146 +0000 UTC m=+0.167269239 container init 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.820165885 +0000 UTC m=+0.174503704 container start 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.823736707 +0000 UTC m=+0.178074506 container attach 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:35:43 compute-0 festive_allen[431362]: 167 167
Oct 02 13:35:43 compute-0 systemd[1]: libpod-183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019.scope: Deactivated successfully.
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.829591257 +0000 UTC m=+0.183929066 container died 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b430ba64739329fc7ef713a98fec9c2d1dc766660c74bdb17a30d3555c6bf99-merged.mount: Deactivated successfully.
Oct 02 13:35:43 compute-0 podman[431346]: 2025-10-02 13:35:43.872222729 +0000 UTC m=+0.226560518 container remove 183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:35:43 compute-0 systemd[1]: libpod-conmon-183888ef61793ae68e97043f89c43db2112fb636c3d5e58127bf2a05e8264019.scope: Deactivated successfully.
Oct 02 13:35:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:43 compute-0 nova_compute[256940]: 2025-10-02 13:35:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.046803783 +0000 UTC m=+0.044582754 container create aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:35:44 compute-0 systemd[1]: Started libpod-conmon-aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d.scope.
Oct 02 13:35:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:44 compute-0 ceph-mon[73668]: pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.118420479 +0000 UTC m=+0.116199480 container init aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.027428916 +0000 UTC m=+0.025207907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.129950824 +0000 UTC m=+0.127729795 container start aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.134306446 +0000 UTC m=+0.132085417 container attach aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:35:44 compute-0 podman[431398]: 2025-10-02 13:35:44.147525225 +0000 UTC m=+0.062904764 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 13:35:44 compute-0 podman[431402]: 2025-10-02 13:35:44.212158041 +0000 UTC m=+0.121882955 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:35:44 compute-0 sudo[431448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:44 compute-0 sudo[431448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:44 compute-0 sudo[431448]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:44 compute-0 sudo[431473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:44 compute-0 sudo[431473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:44 compute-0 sudo[431473]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:44.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:44 compute-0 elated_shirley[431403]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:35:44 compute-0 elated_shirley[431403]: --> relative data size: 1.0
Oct 02 13:35:44 compute-0 elated_shirley[431403]: --> All data devices are unavailable
Oct 02 13:35:44 compute-0 systemd[1]: libpod-aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d.scope: Deactivated successfully.
Oct 02 13:35:44 compute-0 podman[431384]: 2025-10-02 13:35:44.994500233 +0000 UTC m=+0.992279224 container died aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 13:35:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4356a81088b17cfc3d43de7a7aec03280b819cb12f1410d493c123b80132029b-merged.mount: Deactivated successfully.
Oct 02 13:35:45 compute-0 podman[431384]: 2025-10-02 13:35:45.052844628 +0000 UTC m=+1.050623599 container remove aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:35:45 compute-0 systemd[1]: libpod-conmon-aa3f045e30df91040581f6d6791f7df0aeb2db9bc145a048af5a8193cc44976d.scope: Deactivated successfully.
Oct 02 13:35:45 compute-0 sudo[431280]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:45 compute-0 sudo[431520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:45 compute-0 sudo[431520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:45 compute-0 sudo[431520]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:45 compute-0 sudo[431545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:35:45 compute-0 sudo[431545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:45 compute-0 sudo[431545]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:45 compute-0 sudo[431570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:45 compute-0 sudo[431570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:45 compute-0 sudo[431570]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:45 compute-0 nova_compute[256940]: 2025-10-02 13:35:45.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:45 compute-0 sudo[431595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:35:45 compute-0 sudo[431595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.743336096 +0000 UTC m=+0.050655589 container create 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:35:45 compute-0 systemd[1]: Started libpod-conmon-6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4.scope.
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.720953733 +0000 UTC m=+0.028273246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.852218967 +0000 UTC m=+0.159538480 container init 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.85779429 +0000 UTC m=+0.165113773 container start 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:35:45 compute-0 distracted_swirles[431677]: 167 167
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.862900911 +0000 UTC m=+0.170220394 container attach 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:35:45 compute-0 systemd[1]: libpod-6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4.scope: Deactivated successfully.
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.863971648 +0000 UTC m=+0.171291131 container died 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea033e0491a51435dc24d549c5ee7d257cd0d21d1d4a9037acc60bdc9b539b2d-merged.mount: Deactivated successfully.
Oct 02 13:35:45 compute-0 podman[431661]: 2025-10-02 13:35:45.908974422 +0000 UTC m=+0.216293905 container remove 6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:35:45 compute-0 systemd[1]: libpod-conmon-6008cb722f0b883fdabe2a59a002400de8ce32266aad82cf4978dbfe8726a0d4.scope: Deactivated successfully.
Oct 02 13:35:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:46 compute-0 podman[431701]: 2025-10-02 13:35:46.075394147 +0000 UTC m=+0.054735584 container create aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:35:46 compute-0 systemd[1]: Started libpod-conmon-aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea.scope.
Oct 02 13:35:46 compute-0 ceph-mon[73668]: pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:46 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2360798426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:46 compute-0 podman[431701]: 2025-10-02 13:35:46.048090267 +0000 UTC m=+0.027431724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac757b8b420002ca4af1fb4af279025b1ae0bd8a76b24aef176c0ae712a0bb01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac757b8b420002ca4af1fb4af279025b1ae0bd8a76b24aef176c0ae712a0bb01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac757b8b420002ca4af1fb4af279025b1ae0bd8a76b24aef176c0ae712a0bb01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac757b8b420002ca4af1fb4af279025b1ae0bd8a76b24aef176c0ae712a0bb01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:46 compute-0 podman[431701]: 2025-10-02 13:35:46.181553958 +0000 UTC m=+0.160895485 container init aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:35:46 compute-0 podman[431701]: 2025-10-02 13:35:46.191795181 +0000 UTC m=+0.171136618 container start aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:35:46 compute-0 podman[431701]: 2025-10-02 13:35:46.196633895 +0000 UTC m=+0.175975372 container attach aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.260 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.261 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.262 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.262 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.263 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:35:46 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:35:46 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077650695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.699 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:35:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:46.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.900 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.901 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4017MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.902 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:46 compute-0 nova_compute[256940]: 2025-10-02 13:35:46.902 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]: {
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:     "1": [
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:         {
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "devices": [
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "/dev/loop3"
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             ],
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "lv_name": "ceph_lv0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "lv_size": "7511998464",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "name": "ceph_lv0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "tags": {
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.cluster_name": "ceph",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.crush_device_class": "",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.encrypted": "0",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.osd_id": "1",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.type": "block",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:                 "ceph.vdo": "0"
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             },
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "type": "block",
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:             "vg_name": "ceph_vg0"
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:         }
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]:     ]
Oct 02 13:35:46 compute-0 blissful_varahamihira[431717]: }
Oct 02 13:35:47 compute-0 systemd[1]: libpod-aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea.scope: Deactivated successfully.
Oct 02 13:35:47 compute-0 podman[431701]: 2025-10-02 13:35:47.007886138 +0000 UTC m=+0.987227575 container died aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 13:35:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac757b8b420002ca4af1fb4af279025b1ae0bd8a76b24aef176c0ae712a0bb01-merged.mount: Deactivated successfully.
Oct 02 13:35:47 compute-0 podman[431701]: 2025-10-02 13:35:47.090472724 +0000 UTC m=+1.069814161 container remove aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 13:35:47 compute-0 systemd[1]: libpod-conmon-aa7e973a1505f45f4eaa84b434363d18e8969c02ce9967625cce2db7fefeaaea.scope: Deactivated successfully.
Oct 02 13:35:47 compute-0 sudo[431595]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:47 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2077650695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:47 compute-0 sudo[431761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:47 compute-0 sudo[431761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:47 compute-0 sudo[431761]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:47 compute-0 sudo[431786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:35:47 compute-0 sudo[431786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:47 compute-0 sudo[431786]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:47 compute-0 nova_compute[256940]: 2025-10-02 13:35:47.297 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:35:47 compute-0 nova_compute[256940]: 2025-10-02 13:35:47.299 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:35:47 compute-0 sudo[431811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:47 compute-0 sudo[431811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:47 compute-0 sudo[431811]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:47 compute-0 sudo[431836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:35:47 compute-0 sudo[431836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:47 compute-0 nova_compute[256940]: 2025-10-02 13:35:47.492 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.79537022 +0000 UTC m=+0.042965202 container create 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:35:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 13:35:47 compute-0 systemd[1]: Started libpod-conmon-7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c.scope.
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.775576573 +0000 UTC m=+0.023171575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.897905648 +0000 UTC m=+0.145500650 container init 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.904533058 +0000 UTC m=+0.152128040 container start 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:35:47 compute-0 trusting_taussig[431937]: 167 167
Oct 02 13:35:47 compute-0 systemd[1]: libpod-7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c.scope: Deactivated successfully.
Oct 02 13:35:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:35:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108203446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.917230504 +0000 UTC m=+0.164825486 container attach 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:35:47 compute-0 podman[431921]: 2025-10-02 13:35:47.917616864 +0000 UTC m=+0.165211846 container died 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:35:47 compute-0 nova_compute[256940]: 2025-10-02 13:35:47.932 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:35:47 compute-0 nova_compute[256940]: 2025-10-02 13:35:47.939 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-344a4d673dd872e9aaf8bf5af1903a11057d951af16c2682668f438b9062fbba-merged.mount: Deactivated successfully.
Oct 02 13:35:48 compute-0 podman[431921]: 2025-10-02 13:35:48.014414865 +0000 UTC m=+0.262009847 container remove 7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:35:48 compute-0 systemd[1]: libpod-conmon-7f0db8d9849141b7902f99b47b8121af7150b0ea10bf71385083b7305a1a4d7c.scope: Deactivated successfully.
Oct 02 13:35:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:48 compute-0 nova_compute[256940]: 2025-10-02 13:35:48.113 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:35:48 compute-0 nova_compute[256940]: 2025-10-02 13:35:48.115 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:35:48 compute-0 nova_compute[256940]: 2025-10-02 13:35:48.116 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:48 compute-0 ceph-mon[73668]: pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/970104711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2108203446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:48 compute-0 podman[431963]: 2025-10-02 13:35:48.205563964 +0000 UTC m=+0.062014070 container create 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:35:48 compute-0 systemd[1]: Started libpod-conmon-277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3.scope.
Oct 02 13:35:48 compute-0 podman[431963]: 2025-10-02 13:35:48.171039289 +0000 UTC m=+0.027489475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:35:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24781135f12049cf67fb54eac1fe2f0f59a89bf982326b3b7a1d81dfef9c1266/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24781135f12049cf67fb54eac1fe2f0f59a89bf982326b3b7a1d81dfef9c1266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24781135f12049cf67fb54eac1fe2f0f59a89bf982326b3b7a1d81dfef9c1266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24781135f12049cf67fb54eac1fe2f0f59a89bf982326b3b7a1d81dfef9c1266/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:35:48 compute-0 podman[431963]: 2025-10-02 13:35:48.306040509 +0000 UTC m=+0.162490625 container init 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:35:48 compute-0 podman[431963]: 2025-10-02 13:35:48.315212054 +0000 UTC m=+0.171662160 container start 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:35:48 compute-0 podman[431963]: 2025-10-02 13:35:48.323579899 +0000 UTC m=+0.180030035 container attach 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:35:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:48.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:48 compute-0 nova_compute[256940]: 2025-10-02 13:35:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:49 compute-0 happy_darwin[431980]: {
Oct 02 13:35:49 compute-0 happy_darwin[431980]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:35:49 compute-0 happy_darwin[431980]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:35:49 compute-0 happy_darwin[431980]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:35:49 compute-0 happy_darwin[431980]:         "osd_id": 1,
Oct 02 13:35:49 compute-0 happy_darwin[431980]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:35:49 compute-0 happy_darwin[431980]:         "type": "bluestore"
Oct 02 13:35:49 compute-0 happy_darwin[431980]:     }
Oct 02 13:35:49 compute-0 happy_darwin[431980]: }
Oct 02 13:35:49 compute-0 systemd[1]: libpod-277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3.scope: Deactivated successfully.
Oct 02 13:35:49 compute-0 podman[431963]: 2025-10-02 13:35:49.171088231 +0000 UTC m=+1.027538337 container died 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 13:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-24781135f12049cf67fb54eac1fe2f0f59a89bf982326b3b7a1d81dfef9c1266-merged.mount: Deactivated successfully.
Oct 02 13:35:49 compute-0 podman[431963]: 2025-10-02 13:35:49.220942749 +0000 UTC m=+1.077392855 container remove 277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:35:49 compute-0 systemd[1]: libpod-conmon-277579c6c0f088833b812771368cd31eed8f0d24e3f6d8d3e578962b8f532cc3.scope: Deactivated successfully.
Oct 02 13:35:49 compute-0 sudo[431836]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:35:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:35:49 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:49 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 0b1088bf-1444-4b5a-b4f1-536872222ec8 does not exist
Oct 02 13:35:49 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7b939273-1f15-473e-8c28-33cc9773ca81 does not exist
Oct 02 13:35:49 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f822dd2b-4a5e-4a34-b82e-adb44bbf75e8 does not exist
Oct 02 13:35:49 compute-0 sudo[432016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:49 compute-0 sudo[432016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:49 compute-0 sudo[432016]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:49 compute-0 sudo[432041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:35:49 compute-0 sudo[432041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:49 compute-0 sudo[432041]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:50 compute-0 ceph-mon[73668]: pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:50 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:50 compute-0 nova_compute[256940]: 2025-10-02 13:35:50.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:51 compute-0 ceph-mon[73668]: pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:35:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:52.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:35:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1326945611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:53 compute-0 ceph-mon[73668]: pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2167374719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:53 compute-0 nova_compute[256940]: 2025-10-02 13:35:53.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:54.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:55 compute-0 nova_compute[256940]: 2025-10-02 13:35:55.117 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:55 compute-0 nova_compute[256940]: 2025-10-02 13:35:55.117 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:55 compute-0 nova_compute[256940]: 2025-10-02 13:35:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:55 compute-0 nova_compute[256940]: 2025-10-02 13:35:55.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:35:55 compute-0 nova_compute[256940]: 2025-10-02 13:35:55.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:35:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:35:56 compute-0 ceph-mon[73668]: pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:56.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:58 compute-0 ceph-mon[73668]: pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:35:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:35:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:35:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:58 compute-0 nova_compute[256940]: 2025-10-02 13:35:58.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:59 compute-0 podman[432071]: 2025-10-02 13:35:59.407908816 +0000 UTC m=+0.072669443 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 13:35:59 compute-0 podman[432072]: 2025-10-02 13:35:59.418951169 +0000 UTC m=+0.068104106 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd)
Oct 02 13:36:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:00 compute-0 ceph-mon[73668]: pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:00 compute-0 nova_compute[256940]: 2025-10-02 13:36:00.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:00.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:01 compute-0 nova_compute[256940]: 2025-10-02 13:36:01.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:36:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:02.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:36:02 compute-0 ceph-mon[73668]: pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:02 compute-0 nova_compute[256940]: 2025-10-02 13:36:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:03 compute-0 nova_compute[256940]: 2025-10-02 13:36:03.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:03 compute-0 nova_compute[256940]: 2025-10-02 13:36:03.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:04.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:04 compute-0 ceph-mon[73668]: pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:04 compute-0 sudo[432113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:04 compute-0 sudo[432113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:04 compute-0 sudo[432113]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:04 compute-0 sudo[432139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:04 compute-0 sudo[432139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:04 compute-0 sudo[432139]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:05 compute-0 nova_compute[256940]: 2025-10-02 13:36:05.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:05 compute-0 ceph-mon[73668]: pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:36:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:36:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:36:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:36:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:06.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:06 compute-0 nova_compute[256940]: 2025-10-02 13:36:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:06 compute-0 nova_compute[256940]: 2025-10-02 13:36:06.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:36:06 compute-0 nova_compute[256940]: 2025-10-02 13:36:06.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:36:06 compute-0 nova_compute[256940]: 2025-10-02 13:36:06.234 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:36:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:36:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:36:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:07 compute-0 ceph-mon[73668]: pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:36:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:08.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:36:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:08.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:08 compute-0 nova_compute[256940]: 2025-10-02 13:36:08.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:10.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:10 compute-0 ceph-mon[73668]: pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:10 compute-0 nova_compute[256940]: 2025-10-02 13:36:10.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:11 compute-0 nova_compute[256940]: 2025-10-02 13:36:11.230 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:12 compute-0 ceph-mon[73668]: pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:13 compute-0 ceph-mon[73668]: pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:13 compute-0 nova_compute[256940]: 2025-10-02 13:36:13.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:14 compute-0 podman[432168]: 2025-10-02 13:36:14.427167047 +0000 UTC m=+0.081989212 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:36:14 compute-0 podman[432169]: 2025-10-02 13:36:14.473472544 +0000 UTC m=+0.123414724 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:36:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:15 compute-0 nova_compute[256940]: 2025-10-02 13:36:15.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:16 compute-0 ceph-mon[73668]: pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:16.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:18.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:18 compute-0 ceph-mon[73668]: pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:18.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:18 compute-0 nova_compute[256940]: 2025-10-02 13:36:18.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:20 compute-0 ceph-mon[73668]: pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:20 compute-0 nova_compute[256940]: 2025-10-02 13:36:20.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:20.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:21 compute-0 ceph-mon[73668]: pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:23 compute-0 ceph-mon[73668]: pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:23 compute-0 nova_compute[256940]: 2025-10-02 13:36:23.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:24.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:25 compute-0 sudo[432215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:25 compute-0 sudo[432215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:25 compute-0 sudo[432215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:25 compute-0 sudo[432240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:25 compute-0 sudo[432240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:25 compute-0 sudo[432240]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:25 compute-0 nova_compute[256940]: 2025-10-02 13:36:25.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:26 compute-0 ceph-mon[73668]: pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:36:26.535 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:36:26.535 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:36:26.536 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:26.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:28 compute-0 ceph-mon[73668]: pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:28.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:36:28
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images']
Oct 02 13:36:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:36:28 compute-0 nova_compute[256940]: 2025-10-02 13:36:28.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:36:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:36:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:30 compute-0 ceph-mon[73668]: pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:30 compute-0 nova_compute[256940]: 2025-10-02 13:36:30.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:30 compute-0 podman[432268]: 2025-10-02 13:36:30.401928899 +0000 UTC m=+0.067773708 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 13:36:30 compute-0 podman[432267]: 2025-10-02 13:36:30.41328337 +0000 UTC m=+0.089539126 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:36:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:30.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:32 compute-0 ceph-mon[73668]: pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:32.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:33 compute-0 nova_compute[256940]: 2025-10-02 13:36:33.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:34.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:34 compute-0 ceph-mon[73668]: pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:34.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:35 compute-0 ceph-mon[73668]: pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:35 compute-0 nova_compute[256940]: 2025-10-02 13:36:35.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:36.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:37 compute-0 ceph-mon[73668]: pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:38.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:38 compute-0 nova_compute[256940]: 2025-10-02 13:36:38.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:40 compute-0 ceph-mon[73668]: pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:40 compute-0 nova_compute[256940]: 2025-10-02 13:36:40.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:40.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:36:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:36:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:42 compute-0 ceph-mon[73668]: pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:42.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:43 compute-0 nova_compute[256940]: 2025-10-02 13:36:43.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:44 compute-0 ceph-mon[73668]: pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:44.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:45 compute-0 sudo[432314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:45 compute-0 sudo[432314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:45 compute-0 sudo[432314]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:45 compute-0 podman[432338]: 2025-10-02 13:36:45.239980524 +0000 UTC m=+0.059141977 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:36:45 compute-0 sudo[432353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:45 compute-0 sudo[432353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:45 compute-0 sudo[432353]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:45 compute-0 podman[432339]: 2025-10-02 13:36:45.26560025 +0000 UTC m=+0.079309843 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 13:36:45 compute-0 ceph-mon[73668]: pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:45 compute-0 nova_compute[256940]: 2025-10-02 13:36:45.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:36:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:46.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:36:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.241 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:36:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:36:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708338914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.705 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.865 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.866 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.866 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.866 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.948 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:36:47 compute-0 nova_compute[256940]: 2025-10-02 13:36:47.948 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.003 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.036 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.037 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.055 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.086 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.102 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:36:48 compute-0 ceph-mon[73668]: pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2708338914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1800985970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:36:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80985682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.599 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.605 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.648 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.651 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.652 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:48.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:48 compute-0 nova_compute[256940]: 2025-10-02 13:36:48.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/80985682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:49 compute-0 sudo[432450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:49 compute-0 sudo[432450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:49 compute-0 sudo[432450]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:49 compute-0 sudo[432475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:36:49 compute-0 sudo[432475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:49 compute-0 sudo[432475]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 sudo[432500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:50 compute-0 sudo[432500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 sudo[432500]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 sudo[432525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:36:50 compute-0 sudo[432525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:50 compute-0 ceph-mon[73668]: pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3282365555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:50 compute-0 nova_compute[256940]: 2025-10-02 13:36:50.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:50 compute-0 sudo[432525]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2cac6329-7e91-4fbc-b396-6dbbf3ff0436 does not exist
Oct 02 13:36:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2a2b1bb3-f758-4c4d-ad5a-46275ffa8c71 does not exist
Oct 02 13:36:50 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e452a508-1a44-4826-a899-e6349d7307d4 does not exist
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:36:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:36:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:50 compute-0 sudo[432581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:50 compute-0 sudo[432581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 sudo[432581]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 sudo[432606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:36:50 compute-0 sudo[432606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 sudo[432606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 sudo[432631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:50 compute-0 sudo[432631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 sudo[432631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-0 sudo[432656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:36:50 compute-0 sudo[432656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:36:51 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.220935799 +0000 UTC m=+0.046442572 container create 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:36:51 compute-0 systemd[1]: Started libpod-conmon-091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180.scope.
Oct 02 13:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.197668812 +0000 UTC m=+0.023175605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.301055872 +0000 UTC m=+0.126562675 container init 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.309861738 +0000 UTC m=+0.135368511 container start 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.312875015 +0000 UTC m=+0.138381788 container attach 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:36:51 compute-0 pensive_kilby[432736]: 167 167
Oct 02 13:36:51 compute-0 systemd[1]: libpod-091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180.scope: Deactivated successfully.
Oct 02 13:36:51 compute-0 conmon[432736]: conmon 091300a06d46188235a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180.scope/container/memory.events
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.316757054 +0000 UTC m=+0.142263827 container died 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d25633633ef68048e12652cee6307a136223e488fd50d8f377d35e4ba49dca3-merged.mount: Deactivated successfully.
Oct 02 13:36:51 compute-0 podman[432720]: 2025-10-02 13:36:51.35442852 +0000 UTC m=+0.179935293 container remove 091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:36:51 compute-0 systemd[1]: libpod-conmon-091300a06d46188235a3d21bb407f49717e456dcd168d2365c156e4d98a83180.scope: Deactivated successfully.
Oct 02 13:36:51 compute-0 podman[432762]: 2025-10-02 13:36:51.530407031 +0000 UTC m=+0.043802984 container create 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:36:51 compute-0 systemd[1]: Started libpod-conmon-229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4.scope.
Oct 02 13:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:51 compute-0 podman[432762]: 2025-10-02 13:36:51.510792348 +0000 UTC m=+0.024188291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:51 compute-0 podman[432762]: 2025-10-02 13:36:51.617837701 +0000 UTC m=+0.131233644 container init 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:36:51 compute-0 podman[432762]: 2025-10-02 13:36:51.624180294 +0000 UTC m=+0.137576207 container start 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:36:51 compute-0 podman[432762]: 2025-10-02 13:36:51.628072854 +0000 UTC m=+0.141468777 container attach 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 13:36:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:52.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:52 compute-0 ceph-mon[73668]: pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:52 compute-0 nostalgic_ardinghelli[432779]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:36:52 compute-0 nostalgic_ardinghelli[432779]: --> relative data size: 1.0
Oct 02 13:36:52 compute-0 nostalgic_ardinghelli[432779]: --> All data devices are unavailable
Oct 02 13:36:52 compute-0 systemd[1]: libpod-229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4.scope: Deactivated successfully.
Oct 02 13:36:52 compute-0 podman[432762]: 2025-10-02 13:36:52.46407078 +0000 UTC m=+0.977466713 container died 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a049f2918574e274b95dde43fdcfee2c218b5e7a4b548929276acd4a48ab49-merged.mount: Deactivated successfully.
Oct 02 13:36:52 compute-0 podman[432762]: 2025-10-02 13:36:52.52221277 +0000 UTC m=+1.035608683 container remove 229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:36:52 compute-0 systemd[1]: libpod-conmon-229dcc6ab897df278d3df593ae9235caeb0e98b8d122c15f0bff7a0fda0f3da4.scope: Deactivated successfully.
Oct 02 13:36:52 compute-0 sudo[432656]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:52 compute-0 sudo[432805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:52 compute-0 sudo[432805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:52 compute-0 sudo[432805]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:52 compute-0 sudo[432830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:36:52 compute-0 sudo[432830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:52 compute-0 sudo[432830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:52 compute-0 sudo[432855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:52 compute-0 sudo[432855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:52 compute-0 sudo[432855]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:52 compute-0 sudo[432880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:36:52 compute-0 sudo[432880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:52.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.168270459 +0000 UTC m=+0.054314643 container create 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 13:36:53 compute-0 systemd[1]: Started libpod-conmon-062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e.scope.
Oct 02 13:36:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.143762301 +0000 UTC m=+0.029806575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.242741328 +0000 UTC m=+0.128785532 container init 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.252384215 +0000 UTC m=+0.138428399 container start 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.255653379 +0000 UTC m=+0.141697573 container attach 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:36:53 compute-0 elated_pascal[432964]: 167 167
Oct 02 13:36:53 compute-0 systemd[1]: libpod-062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e.scope: Deactivated successfully.
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.258714597 +0000 UTC m=+0.144758781 container died 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e156fea3b12d2dca3c0aee2235f36befc860ec49e21ca375fd43939e070cb42f-merged.mount: Deactivated successfully.
Oct 02 13:36:53 compute-0 podman[432947]: 2025-10-02 13:36:53.3025248 +0000 UTC m=+0.188568994 container remove 062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:36:53 compute-0 systemd[1]: libpod-conmon-062b286e1316e36097ef6e9923233583f0dfc1472025d25caa9541d9ab6a491e.scope: Deactivated successfully.
Oct 02 13:36:53 compute-0 podman[432988]: 2025-10-02 13:36:53.464220535 +0000 UTC m=+0.045827196 container create 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:36:53 compute-0 systemd[1]: Started libpod-conmon-847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36.scope.
Oct 02 13:36:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:53 compute-0 podman[432988]: 2025-10-02 13:36:53.444477399 +0000 UTC m=+0.026084080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ab83d5a193af1b7a85e63d3c0f6bbe199f6b67111bcb296dabc1abb6d36a25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ab83d5a193af1b7a85e63d3c0f6bbe199f6b67111bcb296dabc1abb6d36a25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ab83d5a193af1b7a85e63d3c0f6bbe199f6b67111bcb296dabc1abb6d36a25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ab83d5a193af1b7a85e63d3c0f6bbe199f6b67111bcb296dabc1abb6d36a25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:53 compute-0 podman[432988]: 2025-10-02 13:36:53.564159256 +0000 UTC m=+0.145765987 container init 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:36:53 compute-0 podman[432988]: 2025-10-02 13:36:53.572180362 +0000 UTC m=+0.153787053 container start 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:36:53 compute-0 podman[432988]: 2025-10-02 13:36:53.577073297 +0000 UTC m=+0.158679988 container attach 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 13:36:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:53 compute-0 nova_compute[256940]: 2025-10-02 13:36:53.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:54.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:54 compute-0 ceph-mon[73668]: pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2432501427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]: {
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:     "1": [
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:         {
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "devices": [
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "/dev/loop3"
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             ],
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "lv_name": "ceph_lv0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "lv_size": "7511998464",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "name": "ceph_lv0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "tags": {
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.cluster_name": "ceph",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.crush_device_class": "",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.encrypted": "0",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.osd_id": "1",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.type": "block",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:                 "ceph.vdo": "0"
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             },
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "type": "block",
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:             "vg_name": "ceph_vg0"
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:         }
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]:     ]
Oct 02 13:36:54 compute-0 optimistic_sammet[433004]: }
Oct 02 13:36:54 compute-0 systemd[1]: libpod-847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36.scope: Deactivated successfully.
Oct 02 13:36:54 compute-0 podman[432988]: 2025-10-02 13:36:54.348648343 +0000 UTC m=+0.930255004 container died 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-50ab83d5a193af1b7a85e63d3c0f6bbe199f6b67111bcb296dabc1abb6d36a25-merged.mount: Deactivated successfully.
Oct 02 13:36:54 compute-0 podman[432988]: 2025-10-02 13:36:54.425093052 +0000 UTC m=+1.006699713 container remove 847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:36:54 compute-0 systemd[1]: libpod-conmon-847ad43dcf56f08e0e2ca1cc13c90446579e19a2e338c58c6493a04017ecaa36.scope: Deactivated successfully.
Oct 02 13:36:54 compute-0 sudo[432880]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:54 compute-0 sudo[433025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:54 compute-0 sudo[433025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:54 compute-0 sudo[433025]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:54 compute-0 sudo[433050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:36:54 compute-0 sudo[433050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:54 compute-0 sudo[433050]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:54 compute-0 sudo[433075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:54 compute-0 sudo[433075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:54 compute-0 sudo[433075]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:54 compute-0 sudo[433100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:36:54 compute-0 sudo[433100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:54.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.004358589 +0000 UTC m=+0.040230982 container create ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:36:55 compute-0 systemd[1]: Started libpod-conmon-ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41.scope.
Oct 02 13:36:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:54.986403209 +0000 UTC m=+0.022275612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.089366198 +0000 UTC m=+0.125238601 container init ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.097264131 +0000 UTC m=+0.133136534 container start ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:36:55 compute-0 confident_mclaren[433184]: 167 167
Oct 02 13:36:55 compute-0 systemd[1]: libpod-ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41.scope: Deactivated successfully.
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.113982209 +0000 UTC m=+0.149854622 container attach ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.114348949 +0000 UTC m=+0.150221332 container died ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e3244275278469a42ca945d7f7a3f73c743d6883ec63d571c333f48317669c-merged.mount: Deactivated successfully.
Oct 02 13:36:55 compute-0 podman[433167]: 2025-10-02 13:36:55.158677645 +0000 UTC m=+0.194550028 container remove ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:36:55 compute-0 systemd[1]: libpod-conmon-ab7a320069c82b67c652bfdce95a2b577e68a77154dd1ae829da339d1f132f41.scope: Deactivated successfully.
Oct 02 13:36:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1900774806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:55 compute-0 podman[433210]: 2025-10-02 13:36:55.34733713 +0000 UTC m=+0.047822026 container create 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:36:55 compute-0 nova_compute[256940]: 2025-10-02 13:36:55.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:55 compute-0 systemd[1]: Started libpod-conmon-9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62.scope.
Oct 02 13:36:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:36:55 compute-0 podman[433210]: 2025-10-02 13:36:55.326628289 +0000 UTC m=+0.027113215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dee201dd08004d5b3f937686ccc4399b71d3addc3d77e9a4162ad5642496840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dee201dd08004d5b3f937686ccc4399b71d3addc3d77e9a4162ad5642496840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dee201dd08004d5b3f937686ccc4399b71d3addc3d77e9a4162ad5642496840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dee201dd08004d5b3f937686ccc4399b71d3addc3d77e9a4162ad5642496840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:36:55 compute-0 podman[433210]: 2025-10-02 13:36:55.452737242 +0000 UTC m=+0.153222158 container init 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:36:55 compute-0 podman[433210]: 2025-10-02 13:36:55.459626008 +0000 UTC m=+0.160110904 container start 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:36:55 compute-0 podman[433210]: 2025-10-02 13:36:55.463228971 +0000 UTC m=+0.163713907 container attach 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 13:36:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:56.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:56 compute-0 ceph-mon[73668]: pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:56 compute-0 condescending_robinson[433228]: {
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:         "osd_id": 1,
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:         "type": "bluestore"
Oct 02 13:36:56 compute-0 condescending_robinson[433228]:     }
Oct 02 13:36:56 compute-0 condescending_robinson[433228]: }
Oct 02 13:36:56 compute-0 systemd[1]: libpod-9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62.scope: Deactivated successfully.
Oct 02 13:36:56 compute-0 podman[433210]: 2025-10-02 13:36:56.366661465 +0000 UTC m=+1.067146381 container died 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dee201dd08004d5b3f937686ccc4399b71d3addc3d77e9a4162ad5642496840-merged.mount: Deactivated successfully.
Oct 02 13:36:56 compute-0 podman[433210]: 2025-10-02 13:36:56.423905262 +0000 UTC m=+1.124390158 container remove 9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_robinson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 13:36:56 compute-0 systemd[1]: libpod-conmon-9d03492da51adc2234e3e4e3697e3cbb13ae7b7ecfa2e73d7ee7058ec4757c62.scope: Deactivated successfully.
Oct 02 13:36:56 compute-0 sudo[433100]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:36:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:36:56 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 27d6917e-4cda-4f35-a5d7-56208069d613 does not exist
Oct 02 13:36:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b840f95f-6476-4ed5-a3b1-0e1f16fc2fbc does not exist
Oct 02 13:36:56 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 4fc7d46e-8fda-4dc1-9711-ec7c18a8f3d0 does not exist
Oct 02 13:36:56 compute-0 sudo[433264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:56 compute-0 sudo[433264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:56 compute-0 sudo[433264]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:56 compute-0 sudo[433289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:36:56 compute-0 sudo[433289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:56 compute-0 sudo[433289]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:56 compute-0 nova_compute[256940]: 2025-10-02 13:36:56.654 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:56 compute-0 nova_compute[256940]: 2025-10-02 13:36:56.655 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:56 compute-0 nova_compute[256940]: 2025-10-02 13:36:56.655 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:56 compute-0 nova_compute[256940]: 2025-10-02 13:36:56.655 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:36:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:57 compute-0 ceph-mon[73668]: pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:36:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:36:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:36:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:58 compute-0 nova_compute[256940]: 2025-10-02 13:36:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:00 compute-0 ceph-mon[73668]: pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:00.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:00 compute-0 nova_compute[256940]: 2025-10-02 13:37:00.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:00.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:01 compute-0 podman[433317]: 2025-10-02 13:37:01.410239825 +0000 UTC m=+0.071468913 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:37:01 compute-0 podman[433318]: 2025-10-02 13:37:01.411431096 +0000 UTC m=+0.072445108 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:37:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:02.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:02 compute-0 nova_compute[256940]: 2025-10-02 13:37:02.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:02 compute-0 ceph-mon[73668]: pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:03 compute-0 ceph-mon[73668]: pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:03 compute-0 nova_compute[256940]: 2025-10-02 13:37:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:04 compute-0 nova_compute[256940]: 2025-10-02 13:37:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:04 compute-0 nova_compute[256940]: 2025-10-02 13:37:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:05 compute-0 sudo[433359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:05 compute-0 sudo[433359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:05 compute-0 sudo[433359]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:05 compute-0 nova_compute[256940]: 2025-10-02 13:37:05.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:05 compute-0 sudo[433384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:05 compute-0 sudo[433384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:05 compute-0 sudo[433384]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:05 compute-0 ceph-mon[73668]: pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:06.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:37:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:37:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:07 compute-0 nova_compute[256940]: 2025-10-02 13:37:07.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:07 compute-0 nova_compute[256940]: 2025-10-02 13:37:07.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:37:07 compute-0 nova_compute[256940]: 2025-10-02 13:37:07.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:37:07 compute-0 nova_compute[256940]: 2025-10-02 13:37:07.230 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:37:07 compute-0 ceph-mon[73668]: pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.947383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228947418, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1735, "num_deletes": 256, "total_data_size": 2918209, "memory_usage": 2975528, "flush_reason": "Manual Compaction"}
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Oct 02 13:37:08 compute-0 nova_compute[256940]: 2025-10-02 13:37:08.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228979710, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 2863530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83026, "largest_seqno": 84760, "table_properties": {"data_size": 2855298, "index_size": 4917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18571, "raw_average_key_size": 20, "raw_value_size": 2838286, "raw_average_value_size": 3174, "num_data_blocks": 216, "num_entries": 894, "num_filter_entries": 894, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412069, "oldest_key_time": 1759412069, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 32374 microseconds, and 6592 cpu microseconds.
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.979754) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 2863530 bytes OK
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.979774) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.985666) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.985698) EVENT_LOG_v1 {"time_micros": 1759412228985690, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.985719) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2910584, prev total WAL file size 2910584, number of live WAL files 2.
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.987163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353230' seq:72057594037927935, type:22 .. '6C6F676D0033373733' seq:0, type:0; will stop at (end)
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(2796KB)], [191(10MB)]
Oct 02 13:37:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228987222, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 14232192, "oldest_snapshot_seqno": -1}
Oct 02 13:37:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11347 keys, 14109605 bytes, temperature: kUnknown
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229084169, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 14109605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14036992, "index_size": 43124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300173, "raw_average_key_size": 26, "raw_value_size": 13839567, "raw_average_value_size": 1219, "num_data_blocks": 1641, "num_entries": 11347, "num_filter_entries": 11347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.084563) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14109605 bytes
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.112924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 145.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 11872, records dropped: 525 output_compression: NoCompression
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.112989) EVENT_LOG_v1 {"time_micros": 1759412229112944, "job": 120, "event": "compaction_finished", "compaction_time_micros": 97069, "compaction_time_cpu_micros": 32144, "output_level": 6, "num_output_files": 1, "total_output_size": 14109605, "num_input_records": 11872, "num_output_records": 11347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229114064, "job": 120, "event": "table_file_deletion", "file_number": 193}
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229118207, "job": 120, "event": "table_file_deletion", "file_number": 191}
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:08.987058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.118448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.118456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.118459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.118462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:37:09.118465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-0 ceph-mon[73668]: pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:10 compute-0 nova_compute[256940]: 2025-10-02 13:37:10.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:10.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:12.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:12 compute-0 ceph-mon[73668]: pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:12.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:13 compute-0 nova_compute[256940]: 2025-10-02 13:37:13.226 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:13 compute-0 ceph-mon[73668]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:13 compute-0 nova_compute[256940]: 2025-10-02 13:37:13.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:14 compute-0 nova_compute[256940]: 2025-10-02 13:37:14.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:14 compute-0 nova_compute[256940]: 2025-10-02 13:37:14.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:37:14 compute-0 nova_compute[256940]: 2025-10-02 13:37:14.265 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:37:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:37:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:14.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:37:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:15 compute-0 nova_compute[256940]: 2025-10-02 13:37:15.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:15 compute-0 podman[433414]: 2025-10-02 13:37:15.394762175 +0000 UTC m=+0.060069580 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:37:15 compute-0 podman[433415]: 2025-10-02 13:37:15.475376281 +0000 UTC m=+0.136376047 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:37:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:16 compute-0 ceph-mon[73668]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:18.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:18 compute-0 ceph-mon[73668]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:18 compute-0 nova_compute[256940]: 2025-10-02 13:37:18.261 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:18 compute-0 nova_compute[256940]: 2025-10-02 13:37:18.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:20.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:20 compute-0 ceph-mon[73668]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:20 compute-0 nova_compute[256940]: 2025-10-02 13:37:20.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:20.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 13:37:21 compute-0 ceph-mon[73668]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 13:37:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:37:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:22.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:37:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 89 op/s
Oct 02 13:37:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:24 compute-0 nova_compute[256940]: 2025-10-02 13:37:24.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:24.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:24 compute-0 ceph-mon[73668]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 89 op/s
Oct 02 13:37:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Oct 02 13:37:25 compute-0 nova_compute[256940]: 2025-10-02 13:37:25.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:25 compute-0 ceph-mon[73668]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Oct 02 13:37:25 compute-0 sudo[433463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:25 compute-0 sudo[433463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:25 compute-0 sudo[433463]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:25 compute-0 sudo[433488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:25 compute-0 sudo[433488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:25 compute-0 sudo[433488]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:26.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:37:26.536 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:37:26.537 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:37:26.537 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:28 compute-0 ceph-mon[73668]: pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:28.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:37:28
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', 'backups', '.rgw.root', 'default.rgw.control']
Oct 02 13:37:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:37:29 compute-0 nova_compute[256940]: 2025-10-02 13:37:29.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:29 compute-0 nova_compute[256940]: 2025-10-02 13:37:29.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:29 compute-0 nova_compute[256940]: 2025-10-02 13:37:29.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:37:29 compute-0 ceph-mon[73668]: pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:37:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:37:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:30.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:30 compute-0 nova_compute[256940]: 2025-10-02 13:37:30.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:37:32 compute-0 ceph-mon[73668]: pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:37:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:32 compute-0 podman[433517]: 2025-10-02 13:37:32.407790956 +0000 UTC m=+0.063532819 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 13:37:32 compute-0 podman[433516]: 2025-10-02 13:37:32.425836529 +0000 UTC m=+0.076538403 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:37:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:32.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s
Oct 02 13:37:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:34 compute-0 nova_compute[256940]: 2025-10-02 13:37:34.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:34 compute-0 ceph-mon[73668]: pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s
Oct 02 13:37:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Oct 02 13:37:35 compute-0 nova_compute[256940]: 2025-10-02 13:37:35.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:37:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:37:36 compute-0 ceph-mon[73668]: pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Oct 02 13:37:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Oct 02 13:37:37 compute-0 ceph-mon[73668]: pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Oct 02 13:37:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:38.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:38.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:39 compute-0 nova_compute[256940]: 2025-10-02 13:37:39.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:40 compute-0 ceph-mon[73668]: pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:40 compute-0 nova_compute[256940]: 2025-10-02 13:37:40.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:37:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:42 compute-0 ceph-mon[73668]: pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:42.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:42 compute-0 nova_compute[256940]: 2025-10-02 13:37:42.531 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 13:37:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:44 compute-0 nova_compute[256940]: 2025-10-02 13:37:44.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:44 compute-0 ceph-mon[73668]: pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 13:37:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:37:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:37:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:45 compute-0 ceph-mon[73668]: pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:45 compute-0 nova_compute[256940]: 2025-10-02 13:37:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:45 compute-0 sudo[433563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:45 compute-0 sudo[433563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:45 compute-0 sudo[433563]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:45 compute-0 podman[433587]: 2025-10-02 13:37:45.699242642 +0000 UTC m=+0.068549688 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:37:45 compute-0 sudo[433601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:45 compute-0 sudo[433601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:45 compute-0 sudo[433601]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:45 compute-0 podman[433588]: 2025-10-02 13:37:45.733514551 +0000 UTC m=+0.096022262 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:37:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:37:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:37:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.258 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.258 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.259 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.259 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.259 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:37:47 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:37:47 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208392920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.754 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.934 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.936 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4086MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.936 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:47 compute-0 nova_compute[256940]: 2025-10-02 13:37:47.936 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.058 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.059 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.075 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:37:48 compute-0 ceph-mon[73668]: pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:48 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3208392920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:48.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:37:48 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385079418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.522 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.527 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.556 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.557 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:37:48 compute-0 nova_compute[256940]: 2025-10-02 13:37:48.557 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:49 compute-0 nova_compute[256940]: 2025-10-02 13:37:49.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/385079418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:49 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3332069202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:37:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:37:50 compute-0 ceph-mon[73668]: pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1459108784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:50 compute-0 nova_compute[256940]: 2025-10-02 13:37:50.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:51 compute-0 ceph-mon[73668]: pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:54 compute-0 nova_compute[256940]: 2025-10-02 13:37:54.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:54 compute-0 ceph-mon[73668]: pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:55 compute-0 nova_compute[256940]: 2025-10-02 13:37:55.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:55 compute-0 nova_compute[256940]: 2025-10-02 13:37:55.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:56 compute-0 nova_compute[256940]: 2025-10-02 13:37:56.223 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:56.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:56 compute-0 ceph-mon[73668]: pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/439213855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:56.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:56 compute-0 sudo[433710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:56 compute-0 sudo[433710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:56 compute-0 sudo[433710]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-0 sudo[433735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:37:57 compute-0 sudo[433735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-0 sudo[433735]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-0 sudo[433760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:57 compute-0 sudo[433760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-0 sudo[433760]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:57 compute-0 sudo[433785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:37:57 compute-0 sudo[433785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-0 nova_compute[256940]: 2025-10-02 13:37:57.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3149066955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:57 compute-0 ceph-mon[73668]: pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:57 compute-0 sudo[433785]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:58 compute-0 nova_compute[256940]: 2025-10-02 13:37:58.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:58 compute-0 nova_compute[256940]: 2025-10-02 13:37:58.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:37:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:37:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:37:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:37:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:37:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:37:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:37:59 compute-0 nova_compute[256940]: 2025-10-02 13:37:59.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:59 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-0 ceph-mon[73668]: pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:00 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7650db7e-836b-45da-b5b1-6eeed11a79c3 does not exist
Oct 02 13:38:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bfa5392c-4d90-463c-9a4d-0d4a4f5f1209 does not exist
Oct 02 13:38:00 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev cebbe27e-01e7-42f9-9711-fd291e8d8ad9 does not exist
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:38:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:38:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:00 compute-0 sudo[433842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:00 compute-0 sudo[433842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:00 compute-0 sudo[433842]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:00 compute-0 sudo[433867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:38:00 compute-0 sudo[433867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:00 compute-0 sudo[433867]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:00 compute-0 nova_compute[256940]: 2025-10-02 13:38:00.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:00 compute-0 sudo[433892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:00 compute-0 sudo[433892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:00 compute-0 sudo[433892]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:00 compute-0 sudo[433917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:38:00 compute-0 sudo[433917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.882852706 +0000 UTC m=+0.098142986 container create 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.809367642 +0000 UTC m=+0.024657942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:00 compute-0 systemd[1]: Started libpod-conmon-3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229.scope.
Oct 02 13:38:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:00.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.97664456 +0000 UTC m=+0.191934860 container init 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.982924541 +0000 UTC m=+0.198214821 container start 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.987913019 +0000 UTC m=+0.203203319 container attach 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:38:00 compute-0 trusting_margulis[433999]: 167 167
Oct 02 13:38:00 compute-0 systemd[1]: libpod-3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229.scope: Deactivated successfully.
Oct 02 13:38:00 compute-0 podman[433981]: 2025-10-02 13:38:00.988785121 +0000 UTC m=+0.204075401 container died 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe8dfa9b7d1cf8bd2027329815487c9dbeb0635fe133742b7f574cd5c37928e-merged.mount: Deactivated successfully.
Oct 02 13:38:01 compute-0 podman[433981]: 2025-10-02 13:38:01.039560023 +0000 UTC m=+0.254850303 container remove 3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 13:38:01 compute-0 systemd[1]: libpod-conmon-3db08c7bc7c4bda702de97b3939b882ec01c49f97f29db12e99cefc7bac7a229.scope: Deactivated successfully.
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:38:01 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:01 compute-0 podman[434025]: 2025-10-02 13:38:01.246812524 +0000 UTC m=+0.047777436 container create 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 13:38:01 compute-0 systemd[1]: Started libpod-conmon-854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87.scope.
Oct 02 13:38:01 compute-0 podman[434025]: 2025-10-02 13:38:01.226137335 +0000 UTC m=+0.027102276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:01 compute-0 podman[434025]: 2025-10-02 13:38:01.351062016 +0000 UTC m=+0.152026947 container init 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:38:01 compute-0 podman[434025]: 2025-10-02 13:38:01.367362324 +0000 UTC m=+0.168327235 container start 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:38:01 compute-0 podman[434025]: 2025-10-02 13:38:01.373987594 +0000 UTC m=+0.174952535 container attach 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 13:38:02 compute-0 ceph-mon[73668]: pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:02 compute-0 recursing_driscoll[434042]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:38:02 compute-0 recursing_driscoll[434042]: --> relative data size: 1.0
Oct 02 13:38:02 compute-0 recursing_driscoll[434042]: --> All data devices are unavailable
Oct 02 13:38:02 compute-0 systemd[1]: libpod-854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87.scope: Deactivated successfully.
Oct 02 13:38:02 compute-0 podman[434025]: 2025-10-02 13:38:02.250169211 +0000 UTC m=+1.051134142 container died 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:38:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-820599523d3ad0283db9ba4712660d54c0ec555ea7f5272a89634375724fea26-merged.mount: Deactivated successfully.
Oct 02 13:38:02 compute-0 podman[434025]: 2025-10-02 13:38:02.335422026 +0000 UTC m=+1.136386937 container remove 854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:38:02 compute-0 systemd[1]: libpod-conmon-854540eea960f500b816982aa3603bd4cdff24ebbbf5874da9f3626f9e253f87.scope: Deactivated successfully.
Oct 02 13:38:02 compute-0 sudo[433917]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:02 compute-0 sudo[434071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:02 compute-0 sudo[434071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:02 compute-0 sudo[434071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:02 compute-0 podman[434096]: 2025-10-02 13:38:02.554387969 +0000 UTC m=+0.067788739 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 13:38:02 compute-0 sudo[434108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:38:02 compute-0 sudo[434108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:02 compute-0 sudo[434108]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:02 compute-0 podman[434095]: 2025-10-02 13:38:02.577224494 +0000 UTC m=+0.097952082 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:38:02 compute-0 sudo[434160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:02 compute-0 sudo[434160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:02 compute-0 sudo[434160]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:02 compute-0 sudo[434185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:38:02 compute-0 sudo[434185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:02.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.122353066 +0000 UTC m=+0.050946977 container create e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:38:03 compute-0 systemd[1]: Started libpod-conmon-e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad.scope.
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.095640571 +0000 UTC m=+0.024234492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:03 compute-0 nova_compute[256940]: 2025-10-02 13:38:03.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.234173722 +0000 UTC m=+0.162767623 container init e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.248777836 +0000 UTC m=+0.177371707 container start e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:38:03 compute-0 lucid_bassi[434268]: 167 167
Oct 02 13:38:03 compute-0 systemd[1]: libpod-e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad.scope: Deactivated successfully.
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.259357308 +0000 UTC m=+0.187951179 container attach e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.260057095 +0000 UTC m=+0.188650986 container died e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b3dd6babff41a68df5386e188a4520184999a6915840a4300405e23491812a-merged.mount: Deactivated successfully.
Oct 02 13:38:03 compute-0 podman[434252]: 2025-10-02 13:38:03.32930265 +0000 UTC m=+0.257896511 container remove e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:38:03 compute-0 systemd[1]: libpod-conmon-e73c4b4f4cf69d6d45866f01f2ed766a194b84c9fa668e4baef8a54558921cad.scope: Deactivated successfully.
Oct 02 13:38:03 compute-0 podman[434292]: 2025-10-02 13:38:03.569830824 +0000 UTC m=+0.063977701 container create 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:38:03 compute-0 systemd[1]: Started libpod-conmon-1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab.scope.
Oct 02 13:38:03 compute-0 podman[434292]: 2025-10-02 13:38:03.545750957 +0000 UTC m=+0.039897834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb52e4f0147e3becf107a949830ab892023adb82569801f04e8ffa879804b628/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb52e4f0147e3becf107a949830ab892023adb82569801f04e8ffa879804b628/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb52e4f0147e3becf107a949830ab892023adb82569801f04e8ffa879804b628/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb52e4f0147e3becf107a949830ab892023adb82569801f04e8ffa879804b628/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:03 compute-0 podman[434292]: 2025-10-02 13:38:03.667962829 +0000 UTC m=+0.162109696 container init 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:38:03 compute-0 podman[434292]: 2025-10-02 13:38:03.67580555 +0000 UTC m=+0.169952437 container start 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:38:03 compute-0 podman[434292]: 2025-10-02 13:38:03.680367627 +0000 UTC m=+0.174514504 container attach 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:38:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:04 compute-0 nova_compute[256940]: 2025-10-02 13:38:04.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:04 compute-0 ceph-mon[73668]: pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:04.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]: {
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:     "1": [
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:         {
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "devices": [
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "/dev/loop3"
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             ],
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "lv_name": "ceph_lv0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "lv_size": "7511998464",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "name": "ceph_lv0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "tags": {
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.cluster_name": "ceph",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.crush_device_class": "",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.encrypted": "0",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.osd_id": "1",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.type": "block",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:                 "ceph.vdo": "0"
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             },
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "type": "block",
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:             "vg_name": "ceph_vg0"
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:         }
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]:     ]
Oct 02 13:38:04 compute-0 quirky_lehmann[434308]: }
Oct 02 13:38:04 compute-0 systemd[1]: libpod-1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab.scope: Deactivated successfully.
Oct 02 13:38:04 compute-0 podman[434317]: 2025-10-02 13:38:04.48574612 +0000 UTC m=+0.032839203 container died 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb52e4f0147e3becf107a949830ab892023adb82569801f04e8ffa879804b628-merged.mount: Deactivated successfully.
Oct 02 13:38:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:04.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:05 compute-0 nova_compute[256940]: 2025-10-02 13:38:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:05 compute-0 nova_compute[256940]: 2025-10-02 13:38:05.214 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:05 compute-0 nova_compute[256940]: 2025-10-02 13:38:05.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:05 compute-0 podman[434317]: 2025-10-02 13:38:05.474950734 +0000 UTC m=+1.022043797 container remove 1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:38:05 compute-0 systemd[1]: libpod-conmon-1d753bf0323047ce9a936385daf26d8f68052b18c4cff26d3c9e979d0f134bab.scope: Deactivated successfully.
Oct 02 13:38:05 compute-0 sudo[434185]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:05 compute-0 sudo[434333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:05 compute-0 sudo[434333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434333]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:05 compute-0 sudo[434358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:38:05 compute-0 sudo[434358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434358]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:05 compute-0 ceph-mon[73668]: pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:05 compute-0 sudo[434383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:05 compute-0 sudo[434383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434383]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:05 compute-0 sudo[434388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:05 compute-0 sudo[434388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434388]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:05 compute-0 sudo[434433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:38:05 compute-0 sudo[434433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:05 compute-0 sudo[434438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:05 compute-0 sudo[434438]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:06.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.275770929 +0000 UTC m=+0.042722206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.372414266 +0000 UTC m=+0.139365483 container create caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 13:38:06 compute-0 systemd[1]: Started libpod-conmon-caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755.scope.
Oct 02 13:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.588683129 +0000 UTC m=+0.355634386 container init caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.595173656 +0000 UTC m=+0.362124873 container start caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:38:06 compute-0 trusting_tu[434538]: 167 167
Oct 02 13:38:06 compute-0 systemd[1]: libpod-caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755.scope: Deactivated successfully.
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.607473181 +0000 UTC m=+0.374424418 container attach caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.608330153 +0000 UTC m=+0.375281550 container died caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf55cb52dd9b43c5d47dff2ac4bb97b710c675345c6030b7968b63fa0b4b1926-merged.mount: Deactivated successfully.
Oct 02 13:38:06 compute-0 podman[434522]: 2025-10-02 13:38:06.690586101 +0000 UTC m=+0.457537318 container remove caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:38:06 compute-0 systemd[1]: libpod-conmon-caffb6d96d986d0072ea903beb3c71d0ee2e6af79f9ad9722467617a87604755.scope: Deactivated successfully.
Oct 02 13:38:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:38:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:38:06 compute-0 podman[434564]: 2025-10-02 13:38:06.84661609 +0000 UTC m=+0.021985894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:38:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:06 compute-0 podman[434564]: 2025-10-02 13:38:06.954843885 +0000 UTC m=+0.130213659 container create c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:38:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:07 compute-0 systemd[1]: Started libpod-conmon-c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b.scope.
Oct 02 13:38:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fff49abb1a2e98ebbc7d1d9d5e2915efc282cf0088f66c83a2750f5c0c70f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fff49abb1a2e98ebbc7d1d9d5e2915efc282cf0088f66c83a2750f5c0c70f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fff49abb1a2e98ebbc7d1d9d5e2915efc282cf0088f66c83a2750f5c0c70f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fff49abb1a2e98ebbc7d1d9d5e2915efc282cf0088f66c83a2750f5c0c70f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:38:07 compute-0 podman[434564]: 2025-10-02 13:38:07.196870327 +0000 UTC m=+0.372240161 container init c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:38:07 compute-0 podman[434564]: 2025-10-02 13:38:07.207169121 +0000 UTC m=+0.382538905 container start c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:38:07 compute-0 podman[434564]: 2025-10-02 13:38:07.417185354 +0000 UTC m=+0.592555158 container attach c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:38:07 compute-0 ceph-mon[73668]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]: {
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:         "osd_id": 1,
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:         "type": "bluestore"
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]:     }
Oct 02 13:38:08 compute-0 exciting_lamarr[434582]: }
Oct 02 13:38:08 compute-0 systemd[1]: libpod-c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b.scope: Deactivated successfully.
Oct 02 13:38:08 compute-0 podman[434564]: 2025-10-02 13:38:08.148705733 +0000 UTC m=+1.324075537 container died c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 13:38:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:08.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96fff49abb1a2e98ebbc7d1d9d5e2915efc282cf0088f66c83a2750f5c0c70f-merged.mount: Deactivated successfully.
Oct 02 13:38:08 compute-0 podman[434564]: 2025-10-02 13:38:08.504750169 +0000 UTC m=+1.680119973 container remove c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:38:08 compute-0 systemd[1]: libpod-conmon-c4ee00404f093af8cb88f1898346746bea85c2f15b071ca442073c4b93ad940b.scope: Deactivated successfully.
Oct 02 13:38:08 compute-0 sudo[434433]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:38:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.579384) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288579407, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 752, "num_deletes": 251, "total_data_size": 1049279, "memory_usage": 1068664, "flush_reason": "Manual Compaction"}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288600221, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1037761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84761, "largest_seqno": 85512, "table_properties": {"data_size": 1033885, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8768, "raw_average_key_size": 19, "raw_value_size": 1026112, "raw_average_value_size": 2295, "num_data_blocks": 73, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412229, "oldest_key_time": 1759412229, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 20899 microseconds, and 3750 cpu microseconds.
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:38:08 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 70d99717-95b6-4934-8076-331179a2d32e does not exist
Oct 02 13:38:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c3577bed-e88d-4ccd-a59b-c9e9c5951aa1 does not exist
Oct 02 13:38:08 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9ffe6f8f-24aa-4762-a9f5-3bb59a9afe8c does not exist
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.600274) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1037761 bytes OK
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.600300) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.626668) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.626696) EVENT_LOG_v1 {"time_micros": 1759412288626688, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.626720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1045540, prev total WAL file size 1086632, number of live WAL files 2.
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.627793) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1013KB)], [194(13MB)]
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288627859, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 15147366, "oldest_snapshot_seqno": -1}
Oct 02 13:38:08 compute-0 sudo[434615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:08 compute-0 sudo[434615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:08 compute-0 sudo[434615]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 11278 keys, 13208060 bytes, temperature: kUnknown
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288726457, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 13208060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13136746, "index_size": 42017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299420, "raw_average_key_size": 26, "raw_value_size": 12941253, "raw_average_value_size": 1147, "num_data_blocks": 1589, "num_entries": 11278, "num_filter_entries": 11278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.727161) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13208060 bytes
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.733052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.0 rd, 133.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.5 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 11794, records dropped: 516 output_compression: NoCompression
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.733083) EVENT_LOG_v1 {"time_micros": 1759412288733069, "job": 122, "event": "compaction_finished", "compaction_time_micros": 99002, "compaction_time_cpu_micros": 41132, "output_level": 6, "num_output_files": 1, "total_output_size": 13208060, "num_input_records": 11794, "num_output_records": 11278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288733559, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288738455, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.627684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.738544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.738552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.738556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.738559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:38:08.738563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-0 sudo[434640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:38:08 compute-0 sudo[434640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:08 compute-0 sudo[434640]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:08.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:09 compute-0 nova_compute[256940]: 2025-10-02 13:38:09.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:09 compute-0 nova_compute[256940]: 2025-10-02 13:38:09.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:09 compute-0 nova_compute[256940]: 2025-10-02 13:38:09.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:38:09 compute-0 nova_compute[256940]: 2025-10-02 13:38:09.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:38:09 compute-0 nova_compute[256940]: 2025-10-02 13:38:09.235 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:38:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:09 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:09 compute-0 ceph-mon[73668]: pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:10.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:10 compute-0 nova_compute[256940]: 2025-10-02 13:38:10.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:10.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:12.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:12 compute-0 ceph-mon[73668]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:13 compute-0 ceph-mon[73668]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:14 compute-0 nova_compute[256940]: 2025-10-02 13:38:14.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:14 compute-0 nova_compute[256940]: 2025-10-02 13:38:14.228 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:38:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:38:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:15 compute-0 nova_compute[256940]: 2025-10-02 13:38:15.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:16 compute-0 ceph-mon[73668]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:16.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:16 compute-0 podman[434669]: 2025-10-02 13:38:16.385960088 +0000 UTC m=+0.056251483 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:38:16 compute-0 podman[434670]: 2025-10-02 13:38:16.453677374 +0000 UTC m=+0.123768464 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:38:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:18 compute-0 ceph-mon[73668]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:18.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:19 compute-0 nova_compute[256940]: 2025-10-02 13:38:19.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:19 compute-0 ceph-mon[73668]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:20 compute-0 nova_compute[256940]: 2025-10-02 13:38:20.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:20.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:21 compute-0 ceph-mon[73668]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:22.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:24 compute-0 nova_compute[256940]: 2025-10-02 13:38:24.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:24.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:24 compute-0 ceph-mon[73668]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:24.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:25 compute-0 ceph-mon[73668]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:25 compute-0 nova_compute[256940]: 2025-10-02 13:38:25.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:25 compute-0 sudo[434720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:25 compute-0 sudo[434720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:25 compute-0 sudo[434720]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:26 compute-0 sudo[434745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:26 compute-0 sudo[434745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:26 compute-0 sudo[434745]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:26.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:26.538 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:26.539 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:26.539 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:26.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:28 compute-0 ceph-mon[73668]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:38:28
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control']
Oct 02 13:38:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:38:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:29 compute-0 nova_compute[256940]: 2025-10-02 13:38:29.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:29 compute-0 ceph-mon[73668]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:38:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:38:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:30.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:30 compute-0 nova_compute[256940]: 2025-10-02 13:38:30.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:30.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:31 compute-0 ceph-mon[73668]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:32.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:33 compute-0 podman[434775]: 2025-10-02 13:38:33.398080787 +0000 UTC m=+0.066226469 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:38:33 compute-0 podman[434774]: 2025-10-02 13:38:33.413384989 +0000 UTC m=+0.083545112 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 13:38:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:34 compute-0 nova_compute[256940]: 2025-10-02 13:38:34.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:34 compute-0 ceph-mon[73668]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:34.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:38:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:34.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:38:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:35 compute-0 ceph-mon[73668]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:35 compute-0 nova_compute[256940]: 2025-10-02 13:38:35.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:36.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:36.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:38 compute-0 ceph-mon[73668]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:38.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:39 compute-0 nova_compute[256940]: 2025-10-02 13:38:39.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:38:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:38:40 compute-0 nova_compute[256940]: 2025-10-02 13:38:40.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:40 compute-0 ceph-mon[73668]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:38:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:41 compute-0 ceph-mon[73668]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:42 compute-0 nova_compute[256940]: 2025-10-02 13:38:42.358 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:42 compute-0 nova_compute[256940]: 2025-10-02 13:38:42.956 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:43 compute-0 nova_compute[256940]: 2025-10-02 13:38:43.006 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:44 compute-0 nova_compute[256940]: 2025-10-02 13:38:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:44.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:44 compute-0 ceph-mon[73668]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:45 compute-0 ceph-mon[73668]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:45 compute-0 nova_compute[256940]: 2025-10-02 13:38:45.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:46 compute-0 sudo[434821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:46 compute-0 sudo[434821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:46 compute-0 sudo[434821]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:46 compute-0 sudo[434846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:46 compute-0 sudo[434846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:46 compute-0 sudo[434846]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:46.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:47 compute-0 podman[434872]: 2025-10-02 13:38:47.38513464 +0000 UTC m=+0.057415122 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:38:47 compute-0 podman[434873]: 2025-10-02 13:38:47.417210922 +0000 UTC m=+0.083678945 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 13:38:47 compute-0 ceph-mon[73668]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:38:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:38:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:48.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.305 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.306 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.306 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.306 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.306 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:38:49 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120944310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.730 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.872 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.873 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4087MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.874 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.874 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.998 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:38:49 compute-0 nova_compute[256940]: 2025-10-02 13:38:49.998 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.026 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:50 compute-0 ceph-mon[73668]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4120944310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:50 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4086869222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:50.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:50 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:38:50 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682809650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.456 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.464 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.491 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.492 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.493 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:50 compute-0 nova_compute[256940]: 2025-10-02 13:38:50.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:38:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:38:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2682809650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2539287375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:51.511 158104 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:38:51 compute-0 nova_compute[256940]: 2025-10-02 13:38:51.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:51 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:51.512 158104 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:38:52 compute-0 ceph-mon[73668]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:52 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:38:52.515 158104 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=17f11839-42bc-4ba9-92b4-53d0d88b0404, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:38:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:53 compute-0 ceph-mon[73668]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:54 compute-0 nova_compute[256940]: 2025-10-02 13:38:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:38:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:38:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:55.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:55 compute-0 nova_compute[256940]: 2025-10-02 13:38:55.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:56 compute-0 ceph-mon[73668]: pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2831501368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:56.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:57.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3151770201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:58 compute-0 ceph-mon[73668]: pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:58 compute-0 nova_compute[256940]: 2025-10-02 13:38:58.494 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:38:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:38:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:38:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:59 compute-0 nova_compute[256940]: 2025-10-02 13:38:59.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:59 compute-0 nova_compute[256940]: 2025-10-02 13:38:59.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:59 compute-0 nova_compute[256940]: 2025-10-02 13:38:59.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:38:59 compute-0 nova_compute[256940]: 2025-10-02 13:38:59.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:59 compute-0 ceph-mon[73668]: pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:00 compute-0 nova_compute[256940]: 2025-10-02 13:39:00.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:02 compute-0 ceph-mon[73668]: pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:03.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:03 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:04 compute-0 nova_compute[256940]: 2025-10-02 13:39:04.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:04 compute-0 nova_compute[256940]: 2025-10-02 13:39:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:04 compute-0 ceph-mon[73668]: pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:04.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:04 compute-0 podman[434970]: 2025-10-02 13:39:04.410329484 +0000 UTC m=+0.085278646 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 13:39:04 compute-0 podman[434971]: 2025-10-02 13:39:04.413377892 +0000 UTC m=+0.078418340 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 02 13:39:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:05 compute-0 nova_compute[256940]: 2025-10-02 13:39:05.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:06 compute-0 nova_compute[256940]: 2025-10-02 13:39:06.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:06 compute-0 sudo[435009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:06 compute-0 sudo[435009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:06 compute-0 sudo[435009]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:06 compute-0 sudo[435034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:06 compute-0 sudo[435034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:06 compute-0 sudo[435034]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:06 compute-0 ceph-mon[73668]: pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:39:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:39:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:39:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:39:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:07.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:07 compute-0 nova_compute[256940]: 2025-10-02 13:39:07.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:07 compute-0 ceph-mon[73668]: pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:08.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:09 compute-0 sudo[435061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:09 compute-0 sudo[435061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-0 sudo[435061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-0 sudo[435086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:39:09 compute-0 sudo[435086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-0 sudo[435086]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-0 nova_compute[256940]: 2025-10-02 13:39:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:09 compute-0 nova_compute[256940]: 2025-10-02 13:39:09.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:39:09 compute-0 nova_compute[256940]: 2025-10-02 13:39:09.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:39:09 compute-0 nova_compute[256940]: 2025-10-02 13:39:09.224 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:39:09 compute-0 nova_compute[256940]: 2025-10-02 13:39:09.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:09 compute-0 sudo[435111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:09 compute-0 sudo[435111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-0 sudo[435111]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-0 sudo[435136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:39:09 compute-0 sudo[435136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-0 sudo[435136]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7a2ae8d7-2041-4ded-ab89-d12205af54ee does not exist
Oct 02 13:39:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e3be2d06-26bf-4816-afb5-bc70889810ef does not exist
Oct 02 13:39:09 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 355368b0-8e35-4422-a7dc-8a042c7ff19f does not exist
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:39:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:39:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:09 compute-0 sudo[435190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:09 compute-0 sudo[435190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-0 sudo[435190]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:10 compute-0 sudo[435215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:39:10 compute-0 sudo[435215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:10 compute-0 sudo[435215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:10 compute-0 sudo[435240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:10 compute-0 sudo[435240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:10 compute-0 sudo[435240]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:10 compute-0 sudo[435265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:39:10 compute-0 sudo[435265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:10 compute-0 ceph-mon[73668]: pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:39:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:10.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.534656093 +0000 UTC m=+0.039195705 container create c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:39:10 compute-0 systemd[1]: Started libpod-conmon-c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099.scope.
Oct 02 13:39:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:10 compute-0 nova_compute[256940]: 2025-10-02 13:39:10.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.517331379 +0000 UTC m=+0.021871011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.614739256 +0000 UTC m=+0.119278878 container init c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.621563181 +0000 UTC m=+0.126102793 container start c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.625235255 +0000 UTC m=+0.129774887 container attach c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:39:10 compute-0 systemd[1]: libpod-c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099.scope: Deactivated successfully.
Oct 02 13:39:10 compute-0 beautiful_hamilton[435346]: 167 167
Oct 02 13:39:10 compute-0 conmon[435346]: conmon c619ec51f9f19e45edbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099.scope/container/memory.events
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.629545396 +0000 UTC m=+0.134085008 container died c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-05dc8ff18ede7488ad1a22aa6bcd0def4c328ee4ec5a77a2437e862807b6bb2f-merged.mount: Deactivated successfully.
Oct 02 13:39:10 compute-0 podman[435330]: 2025-10-02 13:39:10.672953028 +0000 UTC m=+0.177492650 container remove c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 13:39:10 compute-0 systemd[1]: libpod-conmon-c619ec51f9f19e45edbeb89e6fc26c0d8ef3e54077c5b10e9c30c493dbd23099.scope: Deactivated successfully.
Oct 02 13:39:10 compute-0 podman[435371]: 2025-10-02 13:39:10.823785384 +0000 UTC m=+0.043177768 container create 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:39:10 compute-0 systemd[1]: Started libpod-conmon-4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433.scope.
Oct 02 13:39:10 compute-0 podman[435371]: 2025-10-02 13:39:10.8029438 +0000 UTC m=+0.022336224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:10 compute-0 podman[435371]: 2025-10-02 13:39:10.914762326 +0000 UTC m=+0.134154730 container init 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:39:10 compute-0 podman[435371]: 2025-10-02 13:39:10.926311522 +0000 UTC m=+0.145703936 container start 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:39:10 compute-0 podman[435371]: 2025-10-02 13:39:10.929827282 +0000 UTC m=+0.149219726 container attach 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:39:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:11 compute-0 sleepy_shannon[435387]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:39:11 compute-0 sleepy_shannon[435387]: --> relative data size: 1.0
Oct 02 13:39:11 compute-0 sleepy_shannon[435387]: --> All data devices are unavailable
Oct 02 13:39:11 compute-0 systemd[1]: libpod-4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433.scope: Deactivated successfully.
Oct 02 13:39:11 compute-0 podman[435403]: 2025-10-02 13:39:11.803608157 +0000 UTC m=+0.024726305 container died 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f906f3e046c4f0fe47a5de6dec038c2ed65dd80709aef31c39745a49419260-merged.mount: Deactivated successfully.
Oct 02 13:39:12 compute-0 podman[435403]: 2025-10-02 13:39:12.039246416 +0000 UTC m=+0.260364564 container remove 4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:39:12 compute-0 systemd[1]: libpod-conmon-4604da25845a04dae6cc4d0505bdb7610072d32bba7967275717c5cb42a8f433.scope: Deactivated successfully.
Oct 02 13:39:12 compute-0 sudo[435265]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:12 compute-0 sudo[435418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:12 compute-0 sudo[435418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:12 compute-0 sudo[435418]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:12 compute-0 sudo[435443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:39:12 compute-0 sudo[435443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:12 compute-0 sudo[435443]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:12 compute-0 ceph-mon[73668]: pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:12 compute-0 sudo[435468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:12 compute-0 sudo[435468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:12 compute-0 sudo[435468]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:12 compute-0 sudo[435493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:39:12 compute-0 sudo[435493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.671734097 +0000 UTC m=+0.059582978 container create e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:39:12 compute-0 systemd[1]: Started libpod-conmon-e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3.scope.
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.637380467 +0000 UTC m=+0.025229378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.817019361 +0000 UTC m=+0.204868292 container init e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.828940027 +0000 UTC m=+0.216788908 container start e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:39:12 compute-0 hungry_wiles[435576]: 167 167
Oct 02 13:39:12 compute-0 systemd[1]: libpod-e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3.scope: Deactivated successfully.
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.839424555 +0000 UTC m=+0.227273466 container attach e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.840030131 +0000 UTC m=+0.227879012 container died e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c505b352d7eda6e1bdeedf8e03e8a6b3e2838c39d5cbbf3fb5e1970ed27e4529-merged.mount: Deactivated successfully.
Oct 02 13:39:12 compute-0 podman[435559]: 2025-10-02 13:39:12.960358715 +0000 UTC m=+0.348207596 container remove e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wiles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:39:12 compute-0 systemd[1]: libpod-conmon-e2e4f75086a11ee7037e9cbad8fcf5baba102259c6c869b63accde16283cf1c3.scope: Deactivated successfully.
Oct 02 13:39:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:13.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:13 compute-0 podman[435602]: 2025-10-02 13:39:13.231048533 +0000 UTC m=+0.108597565 container create 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:39:13 compute-0 podman[435602]: 2025-10-02 13:39:13.161227373 +0000 UTC m=+0.038776455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:13 compute-0 systemd[1]: Started libpod-conmon-7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d.scope.
Oct 02 13:39:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a7396e97c03f2978a78bc4ea7c1d4dff3cd541bd4b10ae5e22afc0c7bd0049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a7396e97c03f2978a78bc4ea7c1d4dff3cd541bd4b10ae5e22afc0c7bd0049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a7396e97c03f2978a78bc4ea7c1d4dff3cd541bd4b10ae5e22afc0c7bd0049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a7396e97c03f2978a78bc4ea7c1d4dff3cd541bd4b10ae5e22afc0c7bd0049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:13 compute-0 podman[435602]: 2025-10-02 13:39:13.421019652 +0000 UTC m=+0.298568704 container init 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:39:13 compute-0 podman[435602]: 2025-10-02 13:39:13.437892394 +0000 UTC m=+0.315441436 container start 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:39:13 compute-0 ceph-mon[73668]: pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:13 compute-0 podman[435602]: 2025-10-02 13:39:13.474993675 +0000 UTC m=+0.352542727 container attach 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:39:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:14 compute-0 keen_curie[435618]: {
Oct 02 13:39:14 compute-0 keen_curie[435618]:     "1": [
Oct 02 13:39:14 compute-0 keen_curie[435618]:         {
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "devices": [
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "/dev/loop3"
Oct 02 13:39:14 compute-0 keen_curie[435618]:             ],
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "lv_name": "ceph_lv0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "lv_size": "7511998464",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "name": "ceph_lv0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "tags": {
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.cluster_name": "ceph",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.crush_device_class": "",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.encrypted": "0",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.osd_id": "1",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.type": "block",
Oct 02 13:39:14 compute-0 keen_curie[435618]:                 "ceph.vdo": "0"
Oct 02 13:39:14 compute-0 keen_curie[435618]:             },
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "type": "block",
Oct 02 13:39:14 compute-0 keen_curie[435618]:             "vg_name": "ceph_vg0"
Oct 02 13:39:14 compute-0 keen_curie[435618]:         }
Oct 02 13:39:14 compute-0 keen_curie[435618]:     ]
Oct 02 13:39:14 compute-0 keen_curie[435618]: }
Oct 02 13:39:14 compute-0 systemd[1]: libpod-7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d.scope: Deactivated successfully.
Oct 02 13:39:14 compute-0 podman[435602]: 2025-10-02 13:39:14.191957732 +0000 UTC m=+1.069506784 container died 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 02 13:39:14 compute-0 nova_compute[256940]: 2025-10-02 13:39:14.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a7396e97c03f2978a78bc4ea7c1d4dff3cd541bd4b10ae5e22afc0c7bd0049-merged.mount: Deactivated successfully.
Oct 02 13:39:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:14 compute-0 podman[435602]: 2025-10-02 13:39:14.394915254 +0000 UTC m=+1.272464316 container remove 7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:39:14 compute-0 systemd[1]: libpod-conmon-7a551edae82cd92fac5189df7ede9e50ee427040c652810435d0617ad90ce09d.scope: Deactivated successfully.
Oct 02 13:39:14 compute-0 sudo[435493]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:14 compute-0 sudo[435639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:14 compute-0 sudo[435639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:14 compute-0 sudo[435639]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:14 compute-0 sudo[435664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:39:14 compute-0 sudo[435664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:14 compute-0 sudo[435664]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:14 compute-0 sudo[435689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:14 compute-0 sudo[435689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:14 compute-0 sudo[435689]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:14 compute-0 sudo[435714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:39:14 compute-0 sudo[435714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:15.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.091498196 +0000 UTC m=+0.092036878 container create ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:39:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.033248604 +0000 UTC m=+0.033787296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:15 compute-0 systemd[1]: Started libpod-conmon-ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a.scope.
Oct 02 13:39:15 compute-0 nova_compute[256940]: 2025-10-02 13:39:15.219 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.260334674 +0000 UTC m=+0.260873346 container init ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.270963666 +0000 UTC m=+0.271502328 container start ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 13:39:15 compute-0 bold_tu[435797]: 167 167
Oct 02 13:39:15 compute-0 systemd[1]: libpod-ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a.scope: Deactivated successfully.
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.347028266 +0000 UTC m=+0.347566958 container attach ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.349095159 +0000 UTC m=+0.349633821 container died ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc08a3945c2b5bd46ad907ca1efb5c1ee309bfce322feb05d6e59c26c7714f3-merged.mount: Deactivated successfully.
Oct 02 13:39:15 compute-0 podman[435780]: 2025-10-02 13:39:15.595849293 +0000 UTC m=+0.596387955 container remove ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tu, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:39:15 compute-0 nova_compute[256940]: 2025-10-02 13:39:15.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:15 compute-0 systemd[1]: libpod-conmon-ea6d5f3102cf4d7a40aac367ad575fe730e7a64de1012d736c7c96c715b9095a.scope: Deactivated successfully.
Oct 02 13:39:15 compute-0 podman[435821]: 2025-10-02 13:39:15.824989126 +0000 UTC m=+0.116150068 container create 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:39:15 compute-0 podman[435821]: 2025-10-02 13:39:15.737833833 +0000 UTC m=+0.028994795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:39:15 compute-0 systemd[1]: Started libpod-conmon-08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1.scope.
Oct 02 13:39:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1fa22292ace1df509054682accd91392453461aa3da080b2c88b991049e48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1fa22292ace1df509054682accd91392453461aa3da080b2c88b991049e48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1fa22292ace1df509054682accd91392453461aa3da080b2c88b991049e48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1fa22292ace1df509054682accd91392453461aa3da080b2c88b991049e48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:39:15 compute-0 podman[435821]: 2025-10-02 13:39:15.937721796 +0000 UTC m=+0.228882778 container init 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:39:15 compute-0 podman[435821]: 2025-10-02 13:39:15.948301377 +0000 UTC m=+0.239462279 container start 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:39:15 compute-0 podman[435821]: 2025-10-02 13:39:15.952685639 +0000 UTC m=+0.243846541 container attach 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 13:39:16 compute-0 ceph-mon[73668]: pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:16 compute-0 quirky_bouman[435837]: {
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:         "osd_id": 1,
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:         "type": "bluestore"
Oct 02 13:39:16 compute-0 quirky_bouman[435837]:     }
Oct 02 13:39:16 compute-0 quirky_bouman[435837]: }
Oct 02 13:39:16 compute-0 systemd[1]: libpod-08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1.scope: Deactivated successfully.
Oct 02 13:39:16 compute-0 podman[435821]: 2025-10-02 13:39:16.825531461 +0000 UTC m=+1.116692373 container died 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b1fa22292ace1df509054682accd91392453461aa3da080b2c88b991049e48-merged.mount: Deactivated successfully.
Oct 02 13:39:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:17.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:17 compute-0 podman[435821]: 2025-10-02 13:39:17.115287688 +0000 UTC m=+1.406448590 container remove 08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 13:39:17 compute-0 systemd[1]: libpod-conmon-08102e7d1faa3b4cf699b9639a3431bfa66ef9d13bcc63f2be1a4dee86206af1.scope: Deactivated successfully.
Oct 02 13:39:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:17 compute-0 sudo[435714]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:39:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:39:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c0e6717f-dab6-43b5-8320-5ddc14b18498 does not exist
Oct 02 13:39:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 701ba5e3-3cfc-498d-bce1-6ce025ecc8a6 does not exist
Oct 02 13:39:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5fa1a23c-f251-4b76-86b9-4e8d7645ebab does not exist
Oct 02 13:39:17 compute-0 sudo[435873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:17 compute-0 sudo[435873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:17 compute-0 sudo[435873]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:17 compute-0 sudo[435898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:39:17 compute-0 sudo[435898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:17 compute-0 sudo[435898]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:18 compute-0 ceph-mon[73668]: pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:18 compute-0 podman[435923]: 2025-10-02 13:39:18.430256211 +0000 UTC m=+0.092670446 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:39:18 compute-0 podman[435924]: 2025-10-02 13:39:18.460153968 +0000 UTC m=+0.122720086 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:39:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:19 compute-0 nova_compute[256940]: 2025-10-02 13:39:19.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:20 compute-0 ceph-mon[73668]: pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:20 compute-0 nova_compute[256940]: 2025-10-02 13:39:20.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:21.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:21 compute-0 nova_compute[256940]: 2025-10-02 13:39:21.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:21 compute-0 ceph-mon[73668]: pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:22.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:23.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:24 compute-0 nova_compute[256940]: 2025-10-02 13:39:24.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:24 compute-0 ceph-mon[73668]: pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:25 compute-0 nova_compute[256940]: 2025-10-02 13:39:25.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:25 compute-0 ceph-mon[73668]: pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:26 compute-0 sudo[435973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:26 compute-0 sudo[435973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:26 compute-0 sudo[435973]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:26 compute-0 sudo[435998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:26 compute-0 sudo[435998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:26 compute-0 sudo[435998]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:39:26.540 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:39:26.540 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:39:26.540 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:27.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:28 compute-0 ceph-mon[73668]: pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:39:28
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', '.mgr']
Oct 02 13:39:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:39:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:29.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:29 compute-0 nova_compute[256940]: 2025-10-02 13:39:29.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:39:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:39:30 compute-0 ceph-mon[73668]: pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:30 compute-0 nova_compute[256940]: 2025-10-02 13:39:30.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:31.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:31 compute-0 ceph-mon[73668]: pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:39:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:39:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:33 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:34 compute-0 ceph-mon[73668]: pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:34 compute-0 nova_compute[256940]: 2025-10-02 13:39:34.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:35.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:35 compute-0 podman[436029]: 2025-10-02 13:39:35.386560329 +0000 UTC m=+0.055035982 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, tcib_managed=true)
Oct 02 13:39:35 compute-0 podman[436028]: 2025-10-02 13:39:35.391002893 +0000 UTC m=+0.058265605 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:39:35 compute-0 nova_compute[256940]: 2025-10-02 13:39:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:36 compute-0 ceph-mon[73668]: pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:36.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:37 compute-0 ceph-mon[73668]: pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:38.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:39.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:39 compute-0 nova_compute[256940]: 2025-10-02 13:39:39.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:40 compute-0 ceph-mon[73668]: pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:40.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:40 compute-0 nova_compute[256940]: 2025-10-02 13:39:40.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:39:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:41 compute-0 ceph-mon[73668]: pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:43 compute-0 ceph-mon[73668]: pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:43 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:44 compute-0 nova_compute[256940]: 2025-10-02 13:39:44.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:45.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:45 compute-0 nova_compute[256940]: 2025-10-02 13:39:45.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:46 compute-0 ceph-mon[73668]: pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:39:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:39:46 compute-0 sudo[436074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:46 compute-0 sudo[436074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:46 compute-0 sudo[436074]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:46 compute-0 sudo[436099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:46 compute-0 sudo[436099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:46 compute-0 sudo[436099]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:47.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:48.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:48 compute-0 ceph-mon[73668]: pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:48 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:49.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:49 compute-0 nova_compute[256940]: 2025-10-02 13:39:49.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:49 compute-0 podman[436126]: 2025-10-02 13:39:49.408095298 +0000 UTC m=+0.082794013 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 13:39:49 compute-0 podman[436127]: 2025-10-02 13:39:49.419542571 +0000 UTC m=+0.087487563 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:39:49 compute-0 ceph-mon[73668]: pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:39:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:39:50 compute-0 nova_compute[256940]: 2025-10-02 13:39:50.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:51.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:51 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3052700554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.238 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.239 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.239 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.240 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:39:51 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:39:51 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126593498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.703 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.834 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.835 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4104MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.836 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:51 compute-0 nova_compute[256940]: 2025-10-02 13:39:51.836 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:52 compute-0 ceph-mon[73668]: pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3126593498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/557114497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:52 compute-0 nova_compute[256940]: 2025-10-02 13:39:52.944 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:39:52 compute-0 nova_compute[256940]: 2025-10-02 13:39:52.944 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:39:52 compute-0 nova_compute[256940]: 2025-10-02 13:39:52.957 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:39:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:39:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482641165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:53 compute-0 nova_compute[256940]: 2025-10-02 13:39:53.380 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:39:53 compute-0 nova_compute[256940]: 2025-10-02 13:39:53.386 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:39:53 compute-0 ceph-mon[73668]: pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/482641165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:53 compute-0 nova_compute[256940]: 2025-10-02 13:39:53.483 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:39:53 compute-0 nova_compute[256940]: 2025-10-02 13:39:53.485 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:39:53 compute-0 nova_compute[256940]: 2025-10-02 13:39:53.486 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:54 compute-0 nova_compute[256940]: 2025-10-02 13:39:54.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:55 compute-0 nova_compute[256940]: 2025-10-02 13:39:55.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:56 compute-0 ceph-mon[73668]: pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2950983091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:56.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:39:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:39:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3887751941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:39:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:39:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:39:58 compute-0 ceph-mon[73668]: pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:39:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:59 compute-0 nova_compute[256940]: 2025-10-02 13:39:59.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:59 compute-0 ceph-mon[73668]: pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:00 compute-0 ceph-mon[73668]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 02 13:40:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:00 compute-0 nova_compute[256940]: 2025-10-02 13:40:00.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:00 compute-0 ceph-mon[73668]: overall HEALTH_OK
Oct 02 13:40:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:01.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:01 compute-0 nova_compute[256940]: 2025-10-02 13:40:01.486 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:01 compute-0 nova_compute[256940]: 2025-10-02 13:40:01.486 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:01 compute-0 nova_compute[256940]: 2025-10-02 13:40:01.487 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:01 compute-0 nova_compute[256940]: 2025-10-02 13:40:01.487 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:40:02 compute-0 ceph-mon[73668]: pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:03.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:04 compute-0 ceph-mon[73668]: pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:04 compute-0 nova_compute[256940]: 2025-10-02 13:40:04.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:05.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:05 compute-0 nova_compute[256940]: 2025-10-02 13:40:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:40:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:40:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:40:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:40:05 compute-0 ceph-mon[73668]: pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:40:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:40:05 compute-0 nova_compute[256940]: 2025-10-02 13:40:05.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:06 compute-0 nova_compute[256940]: 2025-10-02 13:40:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:06 compute-0 podman[436222]: 2025-10-02 13:40:06.390523496 +0000 UTC m=+0.063101359 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:40:06 compute-0 podman[436223]: 2025-10-02 13:40:06.404472343 +0000 UTC m=+0.068959338 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 13:40:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:06.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:06 compute-0 sudo[436261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:06 compute-0 sudo[436261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:06 compute-0 sudo[436261]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:06 compute-0 sudo[436286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:06 compute-0 sudo[436286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:06 compute-0 sudo[436286]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:08 compute-0 ceph-mon[73668]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:09.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:09 compute-0 nova_compute[256940]: 2025-10-02 13:40:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:09 compute-0 nova_compute[256940]: 2025-10-02 13:40:09.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:10 compute-0 nova_compute[256940]: 2025-10-02 13:40:10.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:10 compute-0 nova_compute[256940]: 2025-10-02 13:40:10.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:40:10 compute-0 nova_compute[256940]: 2025-10-02 13:40:10.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:40:10 compute-0 nova_compute[256940]: 2025-10-02 13:40:10.230 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:40:10 compute-0 ceph-mon[73668]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:10 compute-0 nova_compute[256940]: 2025-10-02 13:40:10.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:11.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:12 compute-0 ceph-mon[73668]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:12.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:13.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:14 compute-0 ceph-mon[73668]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:14 compute-0 nova_compute[256940]: 2025-10-02 13:40:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:14.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:15 compute-0 ceph-mon[73668]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:15 compute-0 nova_compute[256940]: 2025-10-02 13:40:15.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:16.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:17.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:17 compute-0 nova_compute[256940]: 2025-10-02 13:40:17.225 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:17 compute-0 sudo[436317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:17 compute-0 sudo[436317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-0 sudo[436317]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-0 sudo[436342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:40:17 compute-0 sudo[436342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-0 sudo[436342]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-0 sudo[436367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:17 compute-0 sudo[436367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-0 sudo[436367]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-0 sudo[436392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:40:17 compute-0 sudo[436392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:18 compute-0 ceph-mon[73668]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:18 compute-0 sudo[436392]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:40:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:18.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c817a99d-157a-497f-8d91-9b0d051e8626 does not exist
Oct 02 13:40:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a254ed63-937c-4165-a59b-f060eacec2c2 does not exist
Oct 02 13:40:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 345050b9-a984-4d04-a46f-7713091aaad1 does not exist
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:40:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:40:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:18 compute-0 sudo[436447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:18 compute-0 sudo[436447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:18 compute-0 sudo[436447]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:18 compute-0 sudo[436472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:40:18 compute-0 sudo[436472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:18 compute-0 sudo[436472]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:18 compute-0 sudo[436497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:18 compute-0 sudo[436497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:18 compute-0 sudo[436497]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:18 compute-0 sudo[436522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:40:18 compute-0 sudo[436522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:19.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.073822264 +0000 UTC m=+0.025393822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.171740984 +0000 UTC m=+0.123312452 container create bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:40:19 compute-0 nova_compute[256940]: 2025-10-02 13:40:19.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:40:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:19 compute-0 systemd[1]: Started libpod-conmon-bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4.scope.
Oct 02 13:40:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.607094782 +0000 UTC m=+0.558666280 container init bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.616960885 +0000 UTC m=+0.568532373 container start bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 13:40:19 compute-0 vibrant_goodall[436607]: 167 167
Oct 02 13:40:19 compute-0 systemd[1]: libpod-bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4.scope: Deactivated successfully.
Oct 02 13:40:19 compute-0 conmon[436607]: conmon bae189f9f41a3bcdd3b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4.scope/container/memory.events
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.653622154 +0000 UTC m=+0.605193652 container attach bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.656030146 +0000 UTC m=+0.607601634 container died bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-85c21121ecc47392f3316813f528df047cfd190b00081e71bd718d5f02295a40-merged.mount: Deactivated successfully.
Oct 02 13:40:19 compute-0 podman[436590]: 2025-10-02 13:40:19.956052546 +0000 UTC m=+0.907624004 container remove bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:40:19 compute-0 systemd[1]: libpod-conmon-bae189f9f41a3bcdd3b8ef700167827ae4a48e041df2732652359d0c55c514c4.scope: Deactivated successfully.
Oct 02 13:40:19 compute-0 podman[436608]: 2025-10-02 13:40:19.967044987 +0000 UTC m=+0.416639310 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:40:20 compute-0 podman[436610]: 2025-10-02 13:40:20.040205033 +0000 UTC m=+0.485387722 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:40:20 compute-0 podman[436679]: 2025-10-02 13:40:20.192470905 +0000 UTC m=+0.091411514 container create 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:40:20 compute-0 podman[436679]: 2025-10-02 13:40:20.125626302 +0000 UTC m=+0.024566951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:20 compute-0 systemd[1]: Started libpod-conmon-7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6.scope.
Oct 02 13:40:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:20 compute-0 podman[436679]: 2025-10-02 13:40:20.383536542 +0000 UTC m=+0.282477141 container init 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:20 compute-0 podman[436679]: 2025-10-02 13:40:20.390905061 +0000 UTC m=+0.289845620 container start 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:40:20 compute-0 podman[436679]: 2025-10-02 13:40:20.394218046 +0000 UTC m=+0.293158645 container attach 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:40:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:20.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:20 compute-0 ceph-mon[73668]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:20 compute-0 nova_compute[256940]: 2025-10-02 13:40:20.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:21 compute-0 focused_yonath[436695]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:40:21 compute-0 focused_yonath[436695]: --> relative data size: 1.0
Oct 02 13:40:21 compute-0 focused_yonath[436695]: --> All data devices are unavailable
Oct 02 13:40:21 compute-0 systemd[1]: libpod-7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6.scope: Deactivated successfully.
Oct 02 13:40:21 compute-0 podman[436679]: 2025-10-02 13:40:21.231595829 +0000 UTC m=+1.130536398 container died 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b881ff65f2495918d940b36308998f49e3d18322bcca5659e124d57d07c2271a-merged.mount: Deactivated successfully.
Oct 02 13:40:21 compute-0 ceph-mon[73668]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:22.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:22 compute-0 podman[436679]: 2025-10-02 13:40:22.55351643 +0000 UTC m=+2.452456999 container remove 7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:40:22 compute-0 sudo[436522]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:22 compute-0 systemd[1]: libpod-conmon-7de1d3bc4a5825cf2acf457f335fe53d2047674195f7534a6a277a92509b7cf6.scope: Deactivated successfully.
Oct 02 13:40:22 compute-0 sudo[436724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:22 compute-0 sudo[436724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:22 compute-0 sudo[436724]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:22 compute-0 sudo[436749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:40:22 compute-0 sudo[436749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:22 compute-0 sudo[436749]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:22 compute-0 sudo[436774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:22 compute-0 sudo[436774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:22 compute-0 sudo[436774]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:22 compute-0 sudo[436799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:40:22 compute-0 sudo[436799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:23.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:23 compute-0 podman[436867]: 2025-10-02 13:40:23.181914266 +0000 UTC m=+0.022335974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:23 compute-0 podman[436867]: 2025-10-02 13:40:23.418231543 +0000 UTC m=+0.258653251 container create 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:40:23 compute-0 ceph-mon[73668]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:23 compute-0 systemd[1]: Started libpod-conmon-967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708.scope.
Oct 02 13:40:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:23 compute-0 podman[436867]: 2025-10-02 13:40:23.96238888 +0000 UTC m=+0.802810568 container init 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:40:23 compute-0 podman[436867]: 2025-10-02 13:40:23.974179572 +0000 UTC m=+0.814601230 container start 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 13:40:23 compute-0 agitated_buck[436883]: 167 167
Oct 02 13:40:23 compute-0 systemd[1]: libpod-967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708.scope: Deactivated successfully.
Oct 02 13:40:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:24 compute-0 podman[436867]: 2025-10-02 13:40:24.124454344 +0000 UTC m=+0.964876032 container attach 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:40:24 compute-0 podman[436867]: 2025-10-02 13:40:24.125450799 +0000 UTC m=+0.965872467 container died 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 13:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-93488dafc1196e2a3ee26c667244e54fbe664ae2013da51bf11c5565ed030094-merged.mount: Deactivated successfully.
Oct 02 13:40:24 compute-0 podman[436867]: 2025-10-02 13:40:24.411396808 +0000 UTC m=+1.251818466 container remove 967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:24 compute-0 nova_compute[256940]: 2025-10-02 13:40:24.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:24.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:24 compute-0 systemd[1]: libpod-conmon-967090a596a61e51464dde7c79b2565e5a6babfb3662f0b4ed857b3de3a84708.scope: Deactivated successfully.
Oct 02 13:40:24 compute-0 podman[436908]: 2025-10-02 13:40:24.550151035 +0000 UTC m=+0.022865837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:24 compute-0 podman[436908]: 2025-10-02 13:40:24.659429266 +0000 UTC m=+0.132144078 container create 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:40:24 compute-0 systemd[1]: Started libpod-conmon-7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7.scope.
Oct 02 13:40:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26680a4776c65004b949d5e4c7af6805c92a386526270d3d34185ade041eb7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26680a4776c65004b949d5e4c7af6805c92a386526270d3d34185ade041eb7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26680a4776c65004b949d5e4c7af6805c92a386526270d3d34185ade041eb7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26680a4776c65004b949d5e4c7af6805c92a386526270d3d34185ade041eb7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:24 compute-0 podman[436908]: 2025-10-02 13:40:24.853440698 +0000 UTC m=+0.326155490 container init 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:40:24 compute-0 podman[436908]: 2025-10-02 13:40:24.860879929 +0000 UTC m=+0.333594741 container start 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:24 compute-0 podman[436908]: 2025-10-02 13:40:24.94908633 +0000 UTC m=+0.421801102 container attach 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:40:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:25 compute-0 keen_nash[436924]: {
Oct 02 13:40:25 compute-0 keen_nash[436924]:     "1": [
Oct 02 13:40:25 compute-0 keen_nash[436924]:         {
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "devices": [
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "/dev/loop3"
Oct 02 13:40:25 compute-0 keen_nash[436924]:             ],
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "lv_name": "ceph_lv0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "lv_size": "7511998464",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "name": "ceph_lv0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "tags": {
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.cluster_name": "ceph",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.crush_device_class": "",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.encrypted": "0",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.osd_id": "1",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.type": "block",
Oct 02 13:40:25 compute-0 keen_nash[436924]:                 "ceph.vdo": "0"
Oct 02 13:40:25 compute-0 keen_nash[436924]:             },
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "type": "block",
Oct 02 13:40:25 compute-0 keen_nash[436924]:             "vg_name": "ceph_vg0"
Oct 02 13:40:25 compute-0 keen_nash[436924]:         }
Oct 02 13:40:25 compute-0 keen_nash[436924]:     ]
Oct 02 13:40:25 compute-0 keen_nash[436924]: }
Oct 02 13:40:25 compute-0 systemd[1]: libpod-7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7.scope: Deactivated successfully.
Oct 02 13:40:25 compute-0 podman[436908]: 2025-10-02 13:40:25.638848889 +0000 UTC m=+1.111563741 container died 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:25 compute-0 nova_compute[256940]: 2025-10-02 13:40:25.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a26680a4776c65004b949d5e4c7af6805c92a386526270d3d34185ade041eb7b-merged.mount: Deactivated successfully.
Oct 02 13:40:26 compute-0 podman[436908]: 2025-10-02 13:40:26.085993239 +0000 UTC m=+1.558708031 container remove 7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:40:26 compute-0 systemd[1]: libpod-conmon-7e91c883748f106e1ac1e7c8815c95ed1336d9d7331c164a942f27b5ea2835b7.scope: Deactivated successfully.
Oct 02 13:40:26 compute-0 sudo[436799]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 sudo[436947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:26 compute-0 sudo[436947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 sudo[436947]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 ceph-mon[73668]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:26 compute-0 sudo[436972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:40:26 compute-0 sudo[436972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 sudo[436972]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 sudo[436997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:26 compute-0 sudo[436997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 sudo[436997]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:26.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:26 compute-0 sudo[437022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:40:26 compute-0 sudo[437022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:40:26.541 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:40:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:40:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:26 compute-0 sudo[437092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:26 compute-0 sudo[437092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 sudo[437092]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 podman[437086]: 2025-10-02 13:40:26.804489654 +0000 UTC m=+0.034044343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:26 compute-0 sudo[437125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:26 compute-0 sudo[437125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:26 compute-0 sudo[437125]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:26 compute-0 podman[437086]: 2025-10-02 13:40:26.940784388 +0000 UTC m=+0.170339047 container create 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 02 13:40:26 compute-0 systemd[1]: Started libpod-conmon-651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976.scope.
Oct 02 13:40:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:27 compute-0 podman[437086]: 2025-10-02 13:40:27.046517028 +0000 UTC m=+0.276071767 container init 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:40:27 compute-0 podman[437086]: 2025-10-02 13:40:27.055784485 +0000 UTC m=+0.285339144 container start 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:40:27 compute-0 priceless_golick[437153]: 167 167
Oct 02 13:40:27 compute-0 systemd[1]: libpod-651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976.scope: Deactivated successfully.
Oct 02 13:40:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:27 compute-0 podman[437086]: 2025-10-02 13:40:27.126630921 +0000 UTC m=+0.356185580 container attach 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 13:40:27 compute-0 podman[437086]: 2025-10-02 13:40:27.127020121 +0000 UTC m=+0.356574800 container died 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:40:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d0c8bc94d89517748e604d881e2f718eba70c4804cb09ba824098c2de2e51f7-merged.mount: Deactivated successfully.
Oct 02 13:40:27 compute-0 podman[437086]: 2025-10-02 13:40:27.426969829 +0000 UTC m=+0.656524478 container remove 651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:40:27 compute-0 systemd[1]: libpod-conmon-651f88fd2dd87dc65d3dbcbf4db13f6cb2c971b4be4a1f56246bc6fc76fc9976.scope: Deactivated successfully.
Oct 02 13:40:27 compute-0 ceph-mon[73668]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:27 compute-0 podman[437178]: 2025-10-02 13:40:27.613958851 +0000 UTC m=+0.052363763 container create c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 13:40:27 compute-0 systemd[1]: Started libpod-conmon-c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1.scope.
Oct 02 13:40:27 compute-0 podman[437178]: 2025-10-02 13:40:27.585906212 +0000 UTC m=+0.024311144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:40:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bf480ddd71f540771ef27642201d6f5fdbc467b647d0a3b78b7acc026dc95d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bf480ddd71f540771ef27642201d6f5fdbc467b647d0a3b78b7acc026dc95d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bf480ddd71f540771ef27642201d6f5fdbc467b647d0a3b78b7acc026dc95d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bf480ddd71f540771ef27642201d6f5fdbc467b647d0a3b78b7acc026dc95d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:40:27 compute-0 podman[437178]: 2025-10-02 13:40:27.761292978 +0000 UTC m=+0.199697880 container init c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:40:27 compute-0 podman[437178]: 2025-10-02 13:40:27.769274272 +0000 UTC m=+0.207679154 container start c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:40:27 compute-0 podman[437178]: 2025-10-02 13:40:27.826415767 +0000 UTC m=+0.264820669 container attach c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:40:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:28 compute-0 kind_shtern[437195]: {
Oct 02 13:40:28 compute-0 kind_shtern[437195]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:40:28 compute-0 kind_shtern[437195]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:40:28 compute-0 kind_shtern[437195]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:40:28 compute-0 kind_shtern[437195]:         "osd_id": 1,
Oct 02 13:40:28 compute-0 kind_shtern[437195]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:40:28 compute-0 kind_shtern[437195]:         "type": "bluestore"
Oct 02 13:40:28 compute-0 kind_shtern[437195]:     }
Oct 02 13:40:28 compute-0 kind_shtern[437195]: }
Oct 02 13:40:28 compute-0 systemd[1]: libpod-c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1.scope: Deactivated successfully.
Oct 02 13:40:28 compute-0 podman[437178]: 2025-10-02 13:40:28.669848895 +0000 UTC m=+1.108253777 container died c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bf480ddd71f540771ef27642201d6f5fdbc467b647d0a3b78b7acc026dc95d-merged.mount: Deactivated successfully.
Oct 02 13:40:28 compute-0 podman[437178]: 2025-10-02 13:40:28.72622499 +0000 UTC m=+1.164629872 container remove c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shtern, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 13:40:28 compute-0 systemd[1]: libpod-conmon-c2b4c434c62e13ea375d004047ca2a4e6e730cdb40536296e01a63911df017b1.scope: Deactivated successfully.
Oct 02 13:40:28 compute-0 sudo[437022]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:40:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:28 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:40:28 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6370ade0-f321-4221-b6fc-dd274a5a2dfa does not exist
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 2b8cbc05-2462-40a9-80b5-b6057c215049 does not exist
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e24374d3-2ea5-4efe-8bf2-44e5553474f4 does not exist
Oct 02 13:40:28 compute-0 sudo[437230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:28 compute-0 sudo[437230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:28 compute-0 sudo[437230]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:28 compute-0 sudo[437256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:40:28 compute-0 sudo[437256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:28 compute-0 sudo[437256]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:40:28
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'images', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Oct 02 13:40:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:40:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:29.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:29 compute-0 nova_compute[256940]: 2025-10-02 13:40:29.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:29 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:29 compute-0 ceph-mon[73668]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:40:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:40:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000051s ======
Oct 02 13:40:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:30.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Oct 02 13:40:30 compute-0 nova_compute[256940]: 2025-10-02 13:40:30.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:32 compute-0 ceph-mon[73668]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:33.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:34 compute-0 ceph-mon[73668]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:34 compute-0 nova_compute[256940]: 2025-10-02 13:40:34.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:34.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:35.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:35 compute-0 nova_compute[256940]: 2025-10-02 13:40:35.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:36 compute-0 ceph-mon[73668]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:37.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:37 compute-0 podman[437286]: 2025-10-02 13:40:37.418310951 +0000 UTC m=+0.079803425 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:40:37 compute-0 podman[437285]: 2025-10-02 13:40:37.422860548 +0000 UTC m=+0.083963362 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 13:40:37 compute-0 ceph-mon[73668]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:39 compute-0 nova_compute[256940]: 2025-10-02 13:40:39.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:40 compute-0 ceph-mon[73668]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:40 compute-0 nova_compute[256940]: 2025-10-02 13:40:40.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:40:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:41 compute-0 ceph-mon[73668]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:43.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:43 compute-0 ceph-mon[73668]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:44 compute-0 nova_compute[256940]: 2025-10-02 13:40:44.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:44.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:45 compute-0 ceph-mon[73668]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:45 compute-0 nova_compute[256940]: 2025-10-02 13:40:45.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:46.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:46 compute-0 sudo[437330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:46 compute-0 sudo[437330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:46 compute-0 sudo[437330]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:47 compute-0 sudo[437355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:47 compute-0 sudo[437355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:47 compute-0 sudo[437355]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:48 compute-0 ceph-mon[73668]: pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:48.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:49 compute-0 nova_compute[256940]: 2025-10-02 13:40:49.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:49 compute-0 ceph-mon[73668]: pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:50 compute-0 podman[437381]: 2025-10-02 13:40:50.411954125 +0000 UTC m=+0.074358606 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:40:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:50.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:50 compute-0 podman[437382]: 2025-10-02 13:40:50.468210327 +0000 UTC m=+0.133187524 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:40:50 compute-0 nova_compute[256940]: 2025-10-02 13:40:50.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.240 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.241 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.241 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.241 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:40:52 compute-0 ceph-mon[73668]: pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:52 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4087168575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:52.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:52 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:40:52 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254812133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.710 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.888 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.889 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4098MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.890 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:52 compute-0 nova_compute[256940]: 2025-10-02 13:40:52.890 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.064 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.064 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:40:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.156 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:40:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1254812133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:53 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/814707070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:53 compute-0 ceph-mon[73668]: pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:40:53 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730228373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.727 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.734 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.761 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.763 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:40:53 compute-0 nova_compute[256940]: 2025-10-02 13:40:53.764 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:54 compute-0 nova_compute[256940]: 2025-10-02 13:40:54.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:54.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:54 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/730228373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:55 compute-0 nova_compute[256940]: 2025-10-02 13:40:55.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:55 compute-0 ceph-mon[73668]: pgmap v3915: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:56.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/630243333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:58 compute-0 ceph-mon[73668]: pgmap v3916: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3725250433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:40:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:58.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:40:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:40:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:40:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:40:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:40:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:59 compute-0 nova_compute[256940]: 2025-10-02 13:40:59.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:59 compute-0 ceph-mon[73668]: pgmap v3917: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:00 compute-0 nova_compute[256940]: 2025-10-02 13:41:00.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:01.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:02 compute-0 ceph-mon[73668]: pgmap v3918: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:02.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:02 compute-0 nova_compute[256940]: 2025-10-02 13:41:02.765 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:02 compute-0 nova_compute[256940]: 2025-10-02 13:41:02.765 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:02 compute-0 nova_compute[256940]: 2025-10-02 13:41:02.765 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:41:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:03 compute-0 nova_compute[256940]: 2025-10-02 13:41:03.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:03 compute-0 ceph-mon[73668]: pgmap v3919: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:04 compute-0 nova_compute[256940]: 2025-10-02 13:41:04.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:04.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:05 compute-0 nova_compute[256940]: 2025-10-02 13:41:05.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:06 compute-0 nova_compute[256940]: 2025-10-02 13:41:06.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:06 compute-0 nova_compute[256940]: 2025-10-02 13:41:06.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:06 compute-0 ceph-mon[73668]: pgmap v3920: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:41:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:41:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:07 compute-0 sudo[437479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:07 compute-0 sudo[437479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:07 compute-0 sudo[437479]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:07 compute-0 sudo[437504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:07 compute-0 sudo[437504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:07 compute-0 sudo[437504]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:08 compute-0 ceph-mon[73668]: pgmap v3921: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:08 compute-0 podman[437529]: 2025-10-02 13:41:08.412568149 +0000 UTC m=+0.077091837 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 13:41:08 compute-0 podman[437530]: 2025-10-02 13:41:08.4126271 +0000 UTC m=+0.070628141 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:41:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:09 compute-0 nova_compute[256940]: 2025-10-02 13:41:09.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:09 compute-0 ceph-mon[73668]: pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:10 compute-0 nova_compute[256940]: 2025-10-02 13:41:10.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:10.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:10 compute-0 nova_compute[256940]: 2025-10-02 13:41:10.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:11 compute-0 nova_compute[256940]: 2025-10-02 13:41:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:11 compute-0 nova_compute[256940]: 2025-10-02 13:41:11.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:41:11 compute-0 nova_compute[256940]: 2025-10-02 13:41:11.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:41:11 compute-0 nova_compute[256940]: 2025-10-02 13:41:11.232 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:41:12 compute-0 ceph-mon[73668]: pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:13.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:14 compute-0 ceph-mon[73668]: pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:14 compute-0 nova_compute[256940]: 2025-10-02 13:41:14.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:15 compute-0 ceph-mon[73668]: pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:15 compute-0 nova_compute[256940]: 2025-10-02 13:41:15.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:18.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:18 compute-0 ceph-mon[73668]: pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:19 compute-0 nova_compute[256940]: 2025-10-02 13:41:19.227 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:19 compute-0 nova_compute[256940]: 2025-10-02 13:41:19.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:19 compute-0 ceph-mon[73668]: pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:20 compute-0 nova_compute[256940]: 2025-10-02 13:41:20.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:21 compute-0 podman[437577]: 2025-10-02 13:41:21.375963936 +0000 UTC m=+0.047033766 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:41:21 compute-0 podman[437578]: 2025-10-02 13:41:21.436058807 +0000 UTC m=+0.090274825 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:41:22 compute-0 ceph-mon[73668]: pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:23.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:23 compute-0 nova_compute[256940]: 2025-10-02 13:41:23.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:24 compute-0 ceph-mon[73668]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:24 compute-0 nova_compute[256940]: 2025-10-02 13:41:24.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:24.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:25 compute-0 nova_compute[256940]: 2025-10-02 13:41:25.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:26 compute-0 ceph-mon[73668]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:41:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:41:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:41:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:26.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:27.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:27 compute-0 sudo[437626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:27 compute-0 sudo[437626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:27 compute-0 sudo[437626]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:27 compute-0 sudo[437651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:27 compute-0 sudo[437651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:27 compute-0 sudo[437651]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:27 compute-0 ceph-mon[73668]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:28.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:41:28
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Oct 02 13:41:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:41:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:29 compute-0 sudo[437677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:29 compute-0 sudo[437677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-0 sudo[437677]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-0 sudo[437702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:29 compute-0 sudo[437702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-0 sudo[437702]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-0 nova_compute[256940]: 2025-10-02 13:41:29.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:29 compute-0 sudo[437727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:29 compute-0 sudo[437727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-0 sudo[437727]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-0 sudo[437752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:41:29 compute-0 sudo[437752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:41:29 compute-0 sudo[437752]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:41:29 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 sudo[437796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:30 compute-0 sudo[437796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-0 sudo[437796]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-0 ceph-mon[73668]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 sudo[437821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:30 compute-0 sudo[437821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-0 sudo[437821]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-0 sudo[437846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:30 compute-0 sudo[437846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-0 sudo[437846]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-0 sudo[437871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:41:30 compute-0 sudo[437871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:30.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:30 compute-0 nova_compute[256940]: 2025-10-02 13:41:30.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:30 compute-0 sudo[437871]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 94cf9fb6-626e-4f24-ae7c-4d71143c061b does not exist
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev e0057bd6-29c9-420a-9e85-8a9516232e78 does not exist
Oct 02 13:41:30 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 29930d30-ea14-4fc4-9498-a95e6632e8be does not exist
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:41:30 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:41:30 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:31 compute-0 sudo[437928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:31 compute-0 sudo[437928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:31 compute-0 sudo[437928]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:31 compute-0 sudo[437953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:31 compute-0 sudo[437953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:31 compute-0 sudo[437953]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:31 compute-0 sudo[437978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:31 compute-0 sudo[437978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:31 compute-0 sudo[437978]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:31 compute-0 sudo[438003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:41:31 compute-0 sudo[438003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:41:31 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.563714533 +0000 UTC m=+0.049260694 container create 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:41:31 compute-0 systemd[1]: Started libpod-conmon-4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5.scope.
Oct 02 13:41:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.542569071 +0000 UTC m=+0.028115252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.645037627 +0000 UTC m=+0.130583808 container init 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.653458743 +0000 UTC m=+0.139004904 container start 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.658637686 +0000 UTC m=+0.144183877 container attach 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 13:41:31 compute-0 thirsty_colden[438085]: 167 167
Oct 02 13:41:31 compute-0 systemd[1]: libpod-4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5.scope: Deactivated successfully.
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.661972251 +0000 UTC m=+0.147518412 container died 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-44cdbfb0f9cb0ed34daa3f314eb0f681748ec151f4884db713f45551c0639b5a-merged.mount: Deactivated successfully.
Oct 02 13:41:31 compute-0 podman[438069]: 2025-10-02 13:41:31.705598649 +0000 UTC m=+0.191144810 container remove 4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 13:41:31 compute-0 systemd[1]: libpod-conmon-4f75372ef919b335e901fea2eb8924cdff9d7ef6fc1b923e1c2b8c9657f922c5.scope: Deactivated successfully.
Oct 02 13:41:31 compute-0 podman[438110]: 2025-10-02 13:41:31.870612389 +0000 UTC m=+0.040096709 container create 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 13:41:31 compute-0 systemd[1]: Started libpod-conmon-34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d.scope.
Oct 02 13:41:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:31 compute-0 podman[438110]: 2025-10-02 13:41:31.945238081 +0000 UTC m=+0.114722421 container init 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:41:31 compute-0 podman[438110]: 2025-10-02 13:41:31.853617713 +0000 UTC m=+0.023102053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:31 compute-0 podman[438110]: 2025-10-02 13:41:31.952850947 +0000 UTC m=+0.122335287 container start 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:41:31 compute-0 podman[438110]: 2025-10-02 13:41:31.958878111 +0000 UTC m=+0.128362451 container attach 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 13:41:32 compute-0 ceph-mon[73668]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:32.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:32 compute-0 unruffled_booth[438127]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:41:32 compute-0 unruffled_booth[438127]: --> relative data size: 1.0
Oct 02 13:41:32 compute-0 unruffled_booth[438127]: --> All data devices are unavailable
Oct 02 13:41:32 compute-0 systemd[1]: libpod-34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d.scope: Deactivated successfully.
Oct 02 13:41:32 compute-0 podman[438110]: 2025-10-02 13:41:32.717653769 +0000 UTC m=+0.887138099 container died 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ff3f27a44252dabca8ace8f259302da5b2b6c951d0415959f65acf2c18f525-merged.mount: Deactivated successfully.
Oct 02 13:41:32 compute-0 podman[438110]: 2025-10-02 13:41:32.768765639 +0000 UTC m=+0.938249959 container remove 34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_booth, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:41:32 compute-0 systemd[1]: libpod-conmon-34194d2dac54557b9922e213724a5b862b356e2033bdfbe5726660a71da32c5d.scope: Deactivated successfully.
Oct 02 13:41:32 compute-0 sudo[438003]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:32 compute-0 sudo[438157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:32 compute-0 sudo[438157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:32 compute-0 sudo[438157]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:32 compute-0 sudo[438182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:32 compute-0 sudo[438182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:32 compute-0 sudo[438182]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:32 compute-0 sudo[438208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:32 compute-0 sudo[438208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:32 compute-0 sudo[438208]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:33 compute-0 sudo[438233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:41:33 compute-0 sudo[438233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.392591088 +0000 UTC m=+0.054057956 container create 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 13:41:33 compute-0 systemd[1]: Started libpod-conmon-9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7.scope.
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.362882367 +0000 UTC m=+0.024349255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.507641507 +0000 UTC m=+0.169108405 container init 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.514344049 +0000 UTC m=+0.175810927 container start 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:41:33 compute-0 systemd[1]: libpod-9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7.scope: Deactivated successfully.
Oct 02 13:41:33 compute-0 pensive_jepsen[438315]: 167 167
Oct 02 13:41:33 compute-0 conmon[438315]: conmon 9c637edd49a6f8aa0730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7.scope/container/memory.events
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.532049353 +0000 UTC m=+0.193516231 container attach 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.532489394 +0000 UTC m=+0.193956272 container died 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4730a23af09a5f5f85ab4d6fe8eb9c52ca61f3d74f99a4be9b24606163ef3e45-merged.mount: Deactivated successfully.
Oct 02 13:41:33 compute-0 podman[438299]: 2025-10-02 13:41:33.593503528 +0000 UTC m=+0.254970396 container remove 9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:41:33 compute-0 systemd[1]: libpod-conmon-9c637edd49a6f8aa0730b3b9f5945449acb296ef746c8fd189e29f16539a9ac7.scope: Deactivated successfully.
Oct 02 13:41:33 compute-0 podman[438342]: 2025-10-02 13:41:33.75435267 +0000 UTC m=+0.046144013 container create c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:41:33 compute-0 systemd[1]: Started libpod-conmon-c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b.scope.
Oct 02 13:41:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49649cc12c04342826a49b0a915b4d66d4c56c7187663b31d6c8cf0cb3c1f9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49649cc12c04342826a49b0a915b4d66d4c56c7187663b31d6c8cf0cb3c1f9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49649cc12c04342826a49b0a915b4d66d4c56c7187663b31d6c8cf0cb3c1f9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49649cc12c04342826a49b0a915b4d66d4c56c7187663b31d6c8cf0cb3c1f9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:33 compute-0 podman[438342]: 2025-10-02 13:41:33.824660552 +0000 UTC m=+0.116451945 container init c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:41:33 compute-0 podman[438342]: 2025-10-02 13:41:33.730905009 +0000 UTC m=+0.022696322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:33 compute-0 podman[438342]: 2025-10-02 13:41:33.831271632 +0000 UTC m=+0.123062955 container start c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:41:33 compute-0 podman[438342]: 2025-10-02 13:41:33.83470024 +0000 UTC m=+0.126491573 container attach c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:41:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:34 compute-0 ceph-mon[73668]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:34 compute-0 nova_compute[256940]: 2025-10-02 13:41:34.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:34 compute-0 lucid_colden[438359]: {
Oct 02 13:41:34 compute-0 lucid_colden[438359]:     "1": [
Oct 02 13:41:34 compute-0 lucid_colden[438359]:         {
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "devices": [
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "/dev/loop3"
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             ],
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "lv_name": "ceph_lv0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "lv_size": "7511998464",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "name": "ceph_lv0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "tags": {
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.cluster_name": "ceph",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.crush_device_class": "",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.encrypted": "0",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.osd_id": "1",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.type": "block",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:                 "ceph.vdo": "0"
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             },
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "type": "block",
Oct 02 13:41:34 compute-0 lucid_colden[438359]:             "vg_name": "ceph_vg0"
Oct 02 13:41:34 compute-0 lucid_colden[438359]:         }
Oct 02 13:41:34 compute-0 lucid_colden[438359]:     ]
Oct 02 13:41:34 compute-0 lucid_colden[438359]: }
Oct 02 13:41:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:34 compute-0 systemd[1]: libpod-c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b.scope: Deactivated successfully.
Oct 02 13:41:34 compute-0 podman[438342]: 2025-10-02 13:41:34.601302538 +0000 UTC m=+0.893093881 container died c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49649cc12c04342826a49b0a915b4d66d4c56c7187663b31d6c8cf0cb3c1f9c-merged.mount: Deactivated successfully.
Oct 02 13:41:34 compute-0 podman[438342]: 2025-10-02 13:41:34.676907826 +0000 UTC m=+0.968699149 container remove c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:41:34 compute-0 systemd[1]: libpod-conmon-c00fed977690b381ee47bb90329e4c4941df2a02c148a42cef719ee0845e992b.scope: Deactivated successfully.
Oct 02 13:41:34 compute-0 sudo[438233]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:34 compute-0 sudo[438382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:34 compute-0 sudo[438382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:34 compute-0 sudo[438382]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:34 compute-0 sudo[438407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:34 compute-0 sudo[438407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:34 compute-0 sudo[438407]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:34 compute-0 sudo[438432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:34 compute-0 sudo[438432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:34 compute-0 sudo[438432]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:34 compute-0 sudo[438458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:41:34 compute-0 sudo[438458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.275389125 +0000 UTC m=+0.039039592 container create 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:41:35 compute-0 systemd[1]: Started libpod-conmon-7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0.scope.
Oct 02 13:41:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.347039111 +0000 UTC m=+0.110689588 container init 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.354155704 +0000 UTC m=+0.117806181 container start 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.258656986 +0000 UTC m=+0.022307483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.35715013 +0000 UTC m=+0.120800607 container attach 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 13:41:35 compute-0 zen_hodgkin[438540]: 167 167
Oct 02 13:41:35 compute-0 systemd[1]: libpod-7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0.scope: Deactivated successfully.
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.359175852 +0000 UTC m=+0.122826349 container died 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-23131e19430da587931fba3a35056a53eb657ed5629dfda0061bb7a1a006dacf-merged.mount: Deactivated successfully.
Oct 02 13:41:35 compute-0 podman[438523]: 2025-10-02 13:41:35.397936566 +0000 UTC m=+0.161587043 container remove 7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:41:35 compute-0 systemd[1]: libpod-conmon-7bb9746439dda91024dae7409b047290859c03dcc1aed0e78455605f3dca70f0.scope: Deactivated successfully.
Oct 02 13:41:35 compute-0 podman[438564]: 2025-10-02 13:41:35.545937519 +0000 UTC m=+0.038459107 container create 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:41:35 compute-0 systemd[1]: Started libpod-conmon-965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8.scope.
Oct 02 13:41:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a969de92541125cefebd37b99f07072d2e8d9c8b7304ef0c1f7135e713341d96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a969de92541125cefebd37b99f07072d2e8d9c8b7304ef0c1f7135e713341d96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a969de92541125cefebd37b99f07072d2e8d9c8b7304ef0c1f7135e713341d96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a969de92541125cefebd37b99f07072d2e8d9c8b7304ef0c1f7135e713341d96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:41:35 compute-0 podman[438564]: 2025-10-02 13:41:35.526678655 +0000 UTC m=+0.019200263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:41:35 compute-0 podman[438564]: 2025-10-02 13:41:35.633340989 +0000 UTC m=+0.125862597 container init 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 13:41:35 compute-0 podman[438564]: 2025-10-02 13:41:35.639048285 +0000 UTC m=+0.131569873 container start 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:41:35 compute-0 podman[438564]: 2025-10-02 13:41:35.641848867 +0000 UTC m=+0.134370455 container attach 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:41:35 compute-0 nova_compute[256940]: 2025-10-02 13:41:35.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:36 compute-0 ceph-mon[73668]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]: {
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:         "osd_id": 1,
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:         "type": "bluestore"
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]:     }
Oct 02 13:41:36 compute-0 gracious_northcutt[438580]: }
Oct 02 13:41:36 compute-0 systemd[1]: libpod-965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8.scope: Deactivated successfully.
Oct 02 13:41:36 compute-0 podman[438564]: 2025-10-02 13:41:36.491035903 +0000 UTC m=+0.983557521 container died 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a969de92541125cefebd37b99f07072d2e8d9c8b7304ef0c1f7135e713341d96-merged.mount: Deactivated successfully.
Oct 02 13:41:36 compute-0 podman[438564]: 2025-10-02 13:41:36.546235147 +0000 UTC m=+1.038756755 container remove 965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 13:41:36 compute-0 systemd[1]: libpod-conmon-965e6e41e24198f3bd7cd859c14d8c682a676ec27140c01528f83151a3cc7aa8.scope: Deactivated successfully.
Oct 02 13:41:36 compute-0 sudo[438458]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:41:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:36 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:41:36 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f367a0a9-58f1-487d-84f5-9f71ccbef8db does not exist
Oct 02 13:41:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 3ba3f1c5-b1dd-4eb8-971a-8aa4d89127bf does not exist
Oct 02 13:41:36 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5ee1fc19-f94f-4f03-95f7-483d55efaf91 does not exist
Oct 02 13:41:36 compute-0 sudo[438615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:36 compute-0 sudo[438615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:36 compute-0 sudo[438615]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:36 compute-0 sudo[438641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:41:36 compute-0 sudo[438641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:36 compute-0 sudo[438641]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:37 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:37 compute-0 ceph-mon[73668]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:39 compute-0 podman[438667]: 2025-10-02 13:41:39.385501749 +0000 UTC m=+0.059661710 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:41:39 compute-0 podman[438668]: 2025-10-02 13:41:39.385866318 +0000 UTC m=+0.058623023 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct 02 13:41:39 compute-0 nova_compute[256940]: 2025-10-02 13:41:39.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:40 compute-0 ceph-mon[73668]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:40 compute-0 nova_compute[256940]: 2025-10-02 13:41:40.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:41:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:42 compute-0 ceph-mon[73668]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:44 compute-0 nova_compute[256940]: 2025-10-02 13:41:44.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:44 compute-0 ceph-mon[73668]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:45 compute-0 ceph-mon[73668]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:45 compute-0 nova_compute[256940]: 2025-10-02 13:41:45.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:47.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:47 compute-0 sudo[438712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:47 compute-0 sudo[438712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:47 compute-0 sudo[438712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:47 compute-0 sudo[438737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:47 compute-0 sudo[438737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:47 compute-0 sudo[438737]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:48 compute-0 ceph-mon[73668]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:49 compute-0 ceph-mon[73668]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:49 compute-0 nova_compute[256940]: 2025-10-02 13:41:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:50 compute-0 nova_compute[256940]: 2025-10-02 13:41:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:51.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:52 compute-0 ceph-mon[73668]: pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:52 compute-0 podman[438764]: 2025-10-02 13:41:52.385938567 +0000 UTC m=+0.054051636 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:41:52 compute-0 podman[438765]: 2025-10-02 13:41:52.442023465 +0000 UTC m=+0.106705136 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:41:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:41:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:41:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:53.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:53 compute-0 ceph-mon[73668]: pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.246 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.247 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.247 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.247 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:54.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:41:54 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216893789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.702 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.844 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.845 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4071MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.845 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.846 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.896 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.897 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:41:54 compute-0 nova_compute[256940]: 2025-10-02 13:41:54.910 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.010 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.011 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.026 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:41:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2271838928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:55 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3216893789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.050 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.069 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.088708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515088747, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2102, "num_deletes": 251, "total_data_size": 3967682, "memory_usage": 4036496, "flush_reason": "Manual Compaction"}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515120562, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3880001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85513, "largest_seqno": 87614, "table_properties": {"data_size": 3870469, "index_size": 6089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18979, "raw_average_key_size": 20, "raw_value_size": 3851624, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412288, "oldest_key_time": 1759412288, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 31941 microseconds, and 7979 cpu microseconds.
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.120643) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3880001 bytes OK
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.120666) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.131447) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.131493) EVENT_LOG_v1 {"time_micros": 1759412515131483, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.131519) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3959281, prev total WAL file size 3959281, number of live WAL files 2.
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.132664) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3789KB)], [197(12MB)]
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515132741, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 17088061, "oldest_snapshot_seqno": -1}
Oct 02 13:41:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:41:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:41:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11700 keys, 15047415 bytes, temperature: kUnknown
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515272379, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 15047415, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14971875, "index_size": 45155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 308829, "raw_average_key_size": 26, "raw_value_size": 14767525, "raw_average_value_size": 1262, "num_data_blocks": 1719, "num_entries": 11700, "num_filter_entries": 11700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.272736) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 15047415 bytes
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.276719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.3 rd, 107.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12217, records dropped: 517 output_compression: NoCompression
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.276780) EVENT_LOG_v1 {"time_micros": 1759412515276739, "job": 124, "event": "compaction_finished", "compaction_time_micros": 139759, "compaction_time_cpu_micros": 40385, "output_level": 6, "num_output_files": 1, "total_output_size": 15047415, "num_input_records": 12217, "num_output_records": 11700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515277735, "job": 124, "event": "table_file_deletion", "file_number": 199}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515280920, "job": 124, "event": "table_file_deletion", "file_number": 197}
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.132478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.281158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.281167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.281170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.281173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:41:55.281175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:41:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2404797557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.509 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.514 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.539 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.541 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.541 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:55 compute-0 nova_compute[256940]: 2025-10-02 13:41:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/123264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:56 compute-0 ceph-mon[73668]: pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2404797557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/96427201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:57 compute-0 ceph-mon[73668]: pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2919220754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:41:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:41:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:41:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:59 compute-0 nova_compute[256940]: 2025-10-02 13:41:59.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:59 compute-0 ceph-mon[73668]: pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:00.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:00 compute-0 nova_compute[256940]: 2025-10-02 13:42:00.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:02 compute-0 ceph-mon[73668]: pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:02 compute-0 nova_compute[256940]: 2025-10-02 13:42:02.542 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:02 compute-0 nova_compute[256940]: 2025-10-02 13:42:02.543 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:42:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:42:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:42:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:03 compute-0 nova_compute[256940]: 2025-10-02 13:42:03.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:03 compute-0 ceph-mon[73668]: pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:04 compute-0 nova_compute[256940]: 2025-10-02 13:42:04.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:04.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:05 compute-0 nova_compute[256940]: 2025-10-02 13:42:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:05 compute-0 nova_compute[256940]: 2025-10-02 13:42:05.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:06 compute-0 ceph-mon[73668]: pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:42:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:42:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:07.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:07 compute-0 nova_compute[256940]: 2025-10-02 13:42:07.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:07 compute-0 ceph-mon[73668]: pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:07 compute-0 sudo[438858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:07 compute-0 sudo[438858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:07 compute-0 sudo[438858]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:07 compute-0 sudo[438883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:07 compute-0 sudo[438883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:07 compute-0 sudo[438883]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:08 compute-0 nova_compute[256940]: 2025-10-02 13:42:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:08.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:09 compute-0 nova_compute[256940]: 2025-10-02 13:42:09.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:10 compute-0 ceph-mon[73668]: pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:10 compute-0 podman[438910]: 2025-10-02 13:42:10.400387986 +0000 UTC m=+0.070099457 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:42:10 compute-0 podman[438909]: 2025-10-02 13:42:10.402658714 +0000 UTC m=+0.075874525 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:42:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:10.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:10 compute-0 nova_compute[256940]: 2025-10-02 13:42:10.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:11.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:11 compute-0 nova_compute[256940]: 2025-10-02 13:42:11.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:12 compute-0 nova_compute[256940]: 2025-10-02 13:42:12.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:12 compute-0 nova_compute[256940]: 2025-10-02 13:42:12.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:42:12 compute-0 nova_compute[256940]: 2025-10-02 13:42:12.213 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:42:12 compute-0 nova_compute[256940]: 2025-10-02 13:42:12.241 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:42:12 compute-0 ceph-mon[73668]: pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:12.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:14 compute-0 ceph-mon[73668]: pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:14 compute-0 nova_compute[256940]: 2025-10-02 13:42:14.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:15 compute-0 nova_compute[256940]: 2025-10-02 13:42:15.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:16 compute-0 ceph-mon[73668]: pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:17 compute-0 ceph-mon[73668]: pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:18.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.101260) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539101619, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 435, "num_deletes": 250, "total_data_size": 408741, "memory_usage": 417736, "flush_reason": "Manual Compaction"}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539107435, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 312887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87615, "largest_seqno": 88049, "table_properties": {"data_size": 310473, "index_size": 513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6550, "raw_average_key_size": 20, "raw_value_size": 305594, "raw_average_value_size": 952, "num_data_blocks": 23, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412516, "oldest_key_time": 1759412516, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 6204 microseconds, and 1473 cpu microseconds.
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.107470) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 312887 bytes OK
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.107485) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.109202) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.109214) EVENT_LOG_v1 {"time_micros": 1759412539109210, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.109229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 406094, prev total WAL file size 406094, number of live WAL files 2.
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.109781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323539' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(305KB)], [200(14MB)]
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539109812, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 15360302, "oldest_snapshot_seqno": -1}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 11518 keys, 11628427 bytes, temperature: kUnknown
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539185043, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 11628427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11558698, "index_size": 39856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 305255, "raw_average_key_size": 26, "raw_value_size": 11362014, "raw_average_value_size": 986, "num_data_blocks": 1497, "num_entries": 11518, "num_filter_entries": 11518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.185457) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11628427 bytes
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.186894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.5 rd, 154.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(86.3) write-amplify(37.2) OK, records in: 12021, records dropped: 503 output_compression: NoCompression
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.186910) EVENT_LOG_v1 {"time_micros": 1759412539186903, "job": 126, "event": "compaction_finished", "compaction_time_micros": 75471, "compaction_time_cpu_micros": 30421, "output_level": 6, "num_output_files": 1, "total_output_size": 11628427, "num_input_records": 12021, "num_output_records": 11518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539187532, "job": 126, "event": "table_file_deletion", "file_number": 202}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539190603, "job": 126, "event": "table_file_deletion", "file_number": 200}
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.109721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.190783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.190788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.190790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.190792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:42:19.190794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:19 compute-0 nova_compute[256940]: 2025-10-02 13:42:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:20 compute-0 ceph-mon[73668]: pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:20 compute-0 nova_compute[256940]: 2025-10-02 13:42:20.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:21 compute-0 nova_compute[256940]: 2025-10-02 13:42:21.235 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:22 compute-0 ceph-mon[73668]: pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:23 compute-0 nova_compute[256940]: 2025-10-02 13:42:23.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:23 compute-0 nova_compute[256940]: 2025-10-02 13:42:23.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:42:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:23 compute-0 nova_compute[256940]: 2025-10-02 13:42:23.226 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:42:23 compute-0 podman[438958]: 2025-10-02 13:42:23.371220734 +0000 UTC m=+0.047837768 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:42:23 compute-0 podman[438959]: 2025-10-02 13:42:23.402223188 +0000 UTC m=+0.076228625 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 13:42:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:24 compute-0 ceph-mon[73668]: pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:24 compute-0 nova_compute[256940]: 2025-10-02 13:42:24.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:25 compute-0 ceph-mon[73668]: pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:25 compute-0 nova_compute[256940]: 2025-10-02 13:42:25.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:42:26.542 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:42:26.543 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:42:26.543 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:27 compute-0 sudo[439001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:27 compute-0 sudo[439001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:27 compute-0 sudo[439001]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:27 compute-0 sudo[439026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:27 compute-0 sudo[439026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:27 compute-0 sudo[439026]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:28 compute-0 ceph-mon[73668]: pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:42:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:42:28
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes']
Oct 02 13:42:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:42:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:29 compute-0 nova_compute[256940]: 2025-10-02 13:42:29.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:29 compute-0 ceph-mon[73668]: pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:42:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:42:30 compute-0 nova_compute[256940]: 2025-10-02 13:42:30.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:30 compute-0 nova_compute[256940]: 2025-10-02 13:42:30.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:42:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:30 compute-0 nova_compute[256940]: 2025-10-02 13:42:30.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:31 compute-0 ceph-mon[73668]: pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:42:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:42:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:33.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:33 compute-0 ceph-mon[73668]: pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:34 compute-0 nova_compute[256940]: 2025-10-02 13:42:34.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:34.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:35.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:35 compute-0 ceph-mon[73668]: pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:35 compute-0 nova_compute[256940]: 2025-10-02 13:42:35.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:37 compute-0 sudo[439056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:37 compute-0 sudo[439056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-0 sudo[439056]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-0 sudo[439081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:42:37 compute-0 sudo[439081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-0 sudo[439081]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-0 sudo[439106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:37 compute-0 sudo[439106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-0 sudo[439106]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-0 sudo[439131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:42:37 compute-0 sudo[439131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Oct 02 13:42:37 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Oct 02 13:42:37 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:37 compute-0 ceph-mon[73668]: pgmap v3966: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:38 compute-0 sudo[439131]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 13:42:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:38 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Oct 02 13:42:38 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:38.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:39 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Oct 02 13:42:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:42:39 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:42:39 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:42:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:42:39 compute-0 nova_compute[256940]: 2025-10-02 13:42:39.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8dbbe818-b318-481f-aada-f1fc71189c06 does not exist
Oct 02 13:42:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev a30580c2-8422-4430-8298-112df0e25376 does not exist
Oct 02 13:42:40 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 12819e35-98f7-4638-9350-301c16ff669a does not exist
Oct 02 13:42:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:42:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:42:40 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:42:40 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:42:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:40.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:42:40 compute-0 sudo[439188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:40 compute-0 sudo[439188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:40 compute-0 sudo[439188]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:40 compute-0 podman[439213]: 2025-10-02 13:42:40.744919049 +0000 UTC m=+0.050146066 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:40 compute-0 sudo[439225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:42:40 compute-0 sudo[439225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:40 compute-0 nova_compute[256940]: 2025-10-02 13:42:40.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:40 compute-0 sudo[439225]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:40 compute-0 podman[439212]: 2025-10-02 13:42:40.756036134 +0000 UTC m=+0.063677933 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:40 compute-0 sudo[439274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:40 compute-0 sudo[439274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:40 compute-0 sudo[439274]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:40 compute-0 sudo[439300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:42:40 compute-0 sudo[439300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:41 compute-0 ceph-mon[73668]: pgmap v3967: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:42:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:41 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:42:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:41 compute-0 podman[439367]: 2025-10-02 13:42:41.15462782 +0000 UTC m=+0.020540467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:41 compute-0 podman[439367]: 2025-10-02 13:42:41.684893392 +0000 UTC m=+0.550806059 container create b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:42:42 compute-0 systemd[1]: Started libpod-conmon-b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042.scope.
Oct 02 13:42:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:42 compute-0 podman[439367]: 2025-10-02 13:42:42.196881715 +0000 UTC m=+1.062794392 container init b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:42 compute-0 podman[439367]: 2025-10-02 13:42:42.206341357 +0000 UTC m=+1.072254004 container start b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:42:42 compute-0 tender_euler[439383]: 167 167
Oct 02 13:42:42 compute-0 systemd[1]: libpod-b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042.scope: Deactivated successfully.
Oct 02 13:42:42 compute-0 podman[439367]: 2025-10-02 13:42:42.352701228 +0000 UTC m=+1.218613905 container attach b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 13:42:42 compute-0 podman[439367]: 2025-10-02 13:42:42.354975526 +0000 UTC m=+1.220888193 container died b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:42:42 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:42 compute-0 ceph-mon[73668]: pgmap v3968: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b2edc38cb60da1b4c2e41e074dca6161b39881460892d09480ad3a7a52fed35-merged.mount: Deactivated successfully.
Oct 02 13:42:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:43.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:43 compute-0 ceph-mon[73668]: pgmap v3969: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:43 compute-0 podman[439367]: 2025-10-02 13:42:43.722774744 +0000 UTC m=+2.588687381 container remove b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 13:42:43 compute-0 systemd[1]: libpod-conmon-b5de2e41cdad13148480855dd9a98661a67bc77098390e4bc216cbdcbbdc1042.scope: Deactivated successfully.
Oct 02 13:42:43 compute-0 podman[439407]: 2025-10-02 13:42:43.882154549 +0000 UTC m=+0.024564831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:44 compute-0 podman[439407]: 2025-10-02 13:42:44.039171363 +0000 UTC m=+0.181581605 container create de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:44 compute-0 systemd[1]: Started libpod-conmon-de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728.scope.
Oct 02 13:42:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:44 compute-0 podman[439407]: 2025-10-02 13:42:44.569769892 +0000 UTC m=+0.712180244 container init de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:44 compute-0 podman[439407]: 2025-10-02 13:42:44.577464709 +0000 UTC m=+0.719875001 container start de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:44 compute-0 nova_compute[256940]: 2025-10-02 13:42:44.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:44 compute-0 podman[439407]: 2025-10-02 13:42:44.769695096 +0000 UTC m=+0.912105408 container attach de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 13:42:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:45.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:45 compute-0 elastic_brattain[439425]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:42:45 compute-0 elastic_brattain[439425]: --> relative data size: 1.0
Oct 02 13:42:45 compute-0 elastic_brattain[439425]: --> All data devices are unavailable
Oct 02 13:42:45 compute-0 systemd[1]: libpod-de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728.scope: Deactivated successfully.
Oct 02 13:42:45 compute-0 podman[439407]: 2025-10-02 13:42:45.493874437 +0000 UTC m=+1.636284699 container died de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:42:45 compute-0 nova_compute[256940]: 2025-10-02 13:42:45.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:45 compute-0 ceph-mon[73668]: pgmap v3970: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d009ebc8a0da8d0b8f30779195eac224e5356b473fc23b70b880ab80e02fa0a2-merged.mount: Deactivated successfully.
Oct 02 13:42:46 compute-0 podman[439407]: 2025-10-02 13:42:46.355022678 +0000 UTC m=+2.497432970 container remove de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_brattain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:42:46 compute-0 systemd[1]: libpod-conmon-de31ec19626832b6afff703dc85041690c5f3ddfa54526c6db5e9e8907a09728.scope: Deactivated successfully.
Oct 02 13:42:46 compute-0 sudo[439300]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:46 compute-0 sudo[439454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:46 compute-0 sudo[439454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:46 compute-0 sudo[439454]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:46 compute-0 sudo[439479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:42:46 compute-0 sudo[439479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:46 compute-0 sudo[439479]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:46 compute-0 sudo[439504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:46 compute-0 sudo[439504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:46 compute-0 sudo[439504]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:42:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:46.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:42:46 compute-0 sudo[439529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:42:46 compute-0 sudo[439529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.016455411 +0000 UTC m=+0.022817005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.128980705 +0000 UTC m=+0.135342289 container create 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:42:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:47 compute-0 systemd[1]: Started libpod-conmon-65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e.scope.
Oct 02 13:42:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.448010643 +0000 UTC m=+0.454372247 container init 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.462627047 +0000 UTC m=+0.468988641 container start 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:42:47 compute-0 flamboyant_wu[439611]: 167 167
Oct 02 13:42:47 compute-0 systemd[1]: libpod-65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e.scope: Deactivated successfully.
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.620926804 +0000 UTC m=+0.627288478 container attach 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:42:47 compute-0 podman[439594]: 2025-10-02 13:42:47.621560441 +0000 UTC m=+0.627922055 container died 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:42:47 compute-0 ceph-mon[73668]: pgmap v3971: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:47 compute-0 sudo[439627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:47 compute-0 sudo[439627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:47 compute-0 sudo[439627]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:47 compute-0 sudo[439652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:47 compute-0 sudo[439652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:47 compute-0 sudo[439652]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-37fb045c94c3bc63d7794c43db52d471050d7a9290aba4840a3d196f731b737b-merged.mount: Deactivated successfully.
Oct 02 13:42:48 compute-0 podman[439594]: 2025-10-02 13:42:48.151182255 +0000 UTC m=+1.157543829 container remove 65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:42:48 compute-0 systemd[1]: libpod-conmon-65f68f79c00b405542e8bb6755c7b1e08336c96055cabd7a49b318abe4dc7c1e.scope: Deactivated successfully.
Oct 02 13:42:48 compute-0 podman[439685]: 2025-10-02 13:42:48.366772201 +0000 UTC m=+0.069908043 container create 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:42:48 compute-0 podman[439685]: 2025-10-02 13:42:48.31913647 +0000 UTC m=+0.022272332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:48 compute-0 systemd[1]: Started libpod-conmon-692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127.scope.
Oct 02 13:42:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef175d3afdec0fa218df5939e118d384737221119261a388427849893e94dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef175d3afdec0fa218df5939e118d384737221119261a388427849893e94dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef175d3afdec0fa218df5939e118d384737221119261a388427849893e94dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef175d3afdec0fa218df5939e118d384737221119261a388427849893e94dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:48 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:42:48 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:42:48 compute-0 podman[439685]: 2025-10-02 13:42:48.520348157 +0000 UTC m=+0.223484039 container init 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:48 compute-0 podman[439685]: 2025-10-02 13:42:48.528766913 +0000 UTC m=+0.231902795 container start 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:42:48 compute-0 podman[439685]: 2025-10-02 13:42:48.564433067 +0000 UTC m=+0.267568909 container attach 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:48.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:49 compute-0 musing_ritchie[439700]: {
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:     "1": [
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:         {
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "devices": [
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "/dev/loop3"
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             ],
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "lv_name": "ceph_lv0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "lv_size": "7511998464",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "name": "ceph_lv0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "tags": {
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.cluster_name": "ceph",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.crush_device_class": "",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.encrypted": "0",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.osd_id": "1",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.type": "block",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:                 "ceph.vdo": "0"
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             },
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "type": "block",
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:             "vg_name": "ceph_vg0"
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:         }
Oct 02 13:42:49 compute-0 musing_ritchie[439700]:     ]
Oct 02 13:42:49 compute-0 musing_ritchie[439700]: }
Oct 02 13:42:49 compute-0 systemd[1]: libpod-692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127.scope: Deactivated successfully.
Oct 02 13:42:49 compute-0 podman[439685]: 2025-10-02 13:42:49.293679589 +0000 UTC m=+0.996815451 container died 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:49 compute-0 ceph-mon[73668]: pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:49 compute-0 nova_compute[256940]: 2025-10-02 13:42:49.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ef175d3afdec0fa218df5939e118d384737221119261a388427849893e94dd7-merged.mount: Deactivated successfully.
Oct 02 13:42:49 compute-0 podman[439685]: 2025-10-02 13:42:49.844351971 +0000 UTC m=+1.547487814 container remove 692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:42:49 compute-0 systemd[1]: libpod-conmon-692fcb8c822a1e84b9072ead179824fe8e6f445e498e17e45d4d2cdc0d197127.scope: Deactivated successfully.
Oct 02 13:42:49 compute-0 sudo[439529]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:49 compute-0 sudo[439723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:49 compute-0 sudo[439723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:49 compute-0 sudo[439723]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:49 compute-0 sudo[439748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:42:49 compute-0 sudo[439748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:50 compute-0 sudo[439748]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:50 compute-0 sudo[439773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:50 compute-0 sudo[439773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:50 compute-0 sudo[439773]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:50 compute-0 sudo[439798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:42:50 compute-0 sudo[439798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.488560513 +0000 UTC m=+0.023557345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.674522549 +0000 UTC m=+0.209519391 container create 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 13:42:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:50 compute-0 nova_compute[256940]: 2025-10-02 13:42:50.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:50 compute-0 systemd[1]: Started libpod-conmon-63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea.scope.
Oct 02 13:42:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.899367752 +0000 UTC m=+0.434364574 container init 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.907204933 +0000 UTC m=+0.442201735 container start 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:42:50 compute-0 elastic_banzai[439879]: 167 167
Oct 02 13:42:50 compute-0 systemd[1]: libpod-63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea.scope: Deactivated successfully.
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.969771767 +0000 UTC m=+0.504768569 container attach 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:42:50 compute-0 podman[439863]: 2025-10-02 13:42:50.971668895 +0000 UTC m=+0.506665717 container died 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31aaf62ec770f9bdb1c8065327484aa0d7dcf34047ead089d356a5c01269dd6-merged.mount: Deactivated successfully.
Oct 02 13:42:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:51 compute-0 podman[439863]: 2025-10-02 13:42:51.494761393 +0000 UTC m=+1.029758185 container remove 63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:42:51 compute-0 systemd[1]: libpod-conmon-63fe92b513534cea6fa175e9b0d39d546c34b0069271b8670c8633909708daea.scope: Deactivated successfully.
Oct 02 13:42:51 compute-0 ceph-mon[73668]: pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:51 compute-0 podman[439906]: 2025-10-02 13:42:51.641604237 +0000 UTC m=+0.025385642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:42:51 compute-0 podman[439906]: 2025-10-02 13:42:51.746264009 +0000 UTC m=+0.130045394 container create 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:42:51 compute-0 systemd[1]: Started libpod-conmon-904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf.scope.
Oct 02 13:42:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb078214b547b363561127865ecd71b08efb5b957f19bbebec51bbd6492aa42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb078214b547b363561127865ecd71b08efb5b957f19bbebec51bbd6492aa42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb078214b547b363561127865ecd71b08efb5b957f19bbebec51bbd6492aa42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb078214b547b363561127865ecd71b08efb5b957f19bbebec51bbd6492aa42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:42:51 compute-0 podman[439906]: 2025-10-02 13:42:51.991247039 +0000 UTC m=+0.375028444 container init 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 13:42:51 compute-0 podman[439906]: 2025-10-02 13:42:51.997483509 +0000 UTC m=+0.381264894 container start 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 13:42:52 compute-0 podman[439906]: 2025-10-02 13:42:52.206037524 +0000 UTC m=+0.589818909 container attach 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 13:42:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]: {
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:         "osd_id": 1,
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:         "type": "bluestore"
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]:     }
Oct 02 13:42:52 compute-0 quirky_vaughan[439923]: }
Oct 02 13:42:52 compute-0 systemd[1]: libpod-904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf.scope: Deactivated successfully.
Oct 02 13:42:52 compute-0 podman[439906]: 2025-10-02 13:42:52.893959445 +0000 UTC m=+1.277740830 container died 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbb078214b547b363561127865ecd71b08efb5b957f19bbebec51bbd6492aa42-merged.mount: Deactivated successfully.
Oct 02 13:42:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:53.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:53 compute-0 podman[439906]: 2025-10-02 13:42:53.361815707 +0000 UTC m=+1.745597102 container remove 904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:42:53 compute-0 systemd[1]: libpod-conmon-904ed39f8589f7b6f76e810028a2b3e789004ef46260da523a38d70fe85edeaf.scope: Deactivated successfully.
Oct 02 13:42:53 compute-0 sudo[439798]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:42:53 compute-0 ceph-mon[73668]: pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:53 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:42:53 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 143508f6-abab-47d5-b047-157575b9c495 does not exist
Oct 02 13:42:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f4cf1c0f-cc7d-4c91-bb34-26422951369d does not exist
Oct 02 13:42:54 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev bb169356-72b5-499f-83f5-6b3e9f3151f3 does not exist
Oct 02 13:42:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:54 compute-0 sudo[439959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:54 compute-0 sudo[439959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:54 compute-0 sudo[439959]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:54 compute-0 sudo[439986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:42:54 compute-0 sudo[439986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:54 compute-0 podman[439983]: 2025-10-02 13:42:54.17117042 +0000 UTC m=+0.065301634 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:42:54 compute-0 sudo[439986]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:54 compute-0 podman[439984]: 2025-10-02 13:42:54.210892068 +0000 UTC m=+0.100119677 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:54 compute-0 nova_compute[256940]: 2025-10-02 13:42:54.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:54 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:42:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:42:55 compute-0 nova_compute[256940]: 2025-10-02 13:42:55.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:56 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/281986870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:56 compute-0 ceph-mon[73668]: pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.225 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.321 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.321 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.321 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.322 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.322 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:42:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:56 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:42:56 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059668800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.772 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.907 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.908 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4029MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.908 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:56 compute-0 nova_compute[256940]: 2025-10-02 13:42:56.908 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.048 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.049 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.074 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:42:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/805743753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1059668800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:57.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:57 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:42:57 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568033478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.562 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.567 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.614 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.615 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.615 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:57 compute-0 nova_compute[256940]: 2025-10-02 13:42:57.616 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:58 compute-0 ceph-mon[73668]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3426009518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1568033478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:42:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:42:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:42:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:59.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2860197508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:59 compute-0 nova_compute[256940]: 2025-10-02 13:42:59.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:00 compute-0 ceph-mon[73668]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:00 compute-0 nova_compute[256940]: 2025-10-02 13:43:00.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:01.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:01 compute-0 ceph-mon[73668]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:03.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:04 compute-0 ceph-mon[73668]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:04 compute-0 nova_compute[256940]: 2025-10-02 13:43:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:04 compute-0 nova_compute[256940]: 2025-10-02 13:43:04.653 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:04 compute-0 nova_compute[256940]: 2025-10-02 13:43:04.653 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:43:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:05 compute-0 nova_compute[256940]: 2025-10-02 13:43:05.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:05 compute-0 nova_compute[256940]: 2025-10-02 13:43:05.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:05.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:43:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:43:05 compute-0 nova_compute[256940]: 2025-10-02 13:43:05.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:06 compute-0 ceph-mon[73668]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:07 compute-0 ceph-mon[73668]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:07 compute-0 sudo[440101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:07 compute-0 sudo[440101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:07 compute-0 sudo[440101]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:08 compute-0 sudo[440126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:08 compute-0 sudo[440126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:08 compute-0 sudo[440126]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:08 compute-0 nova_compute[256940]: 2025-10-02 13:43:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:08 compute-0 nova_compute[256940]: 2025-10-02 13:43:08.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:08.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:09 compute-0 nova_compute[256940]: 2025-10-02 13:43:09.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:10 compute-0 ceph-mon[73668]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:10.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:10 compute-0 nova_compute[256940]: 2025-10-02 13:43:10.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:11 compute-0 nova_compute[256940]: 2025-10-02 13:43:11.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:11.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:11 compute-0 podman[440153]: 2025-10-02 13:43:11.398435143 +0000 UTC m=+0.068729483 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:43:11 compute-0 podman[440154]: 2025-10-02 13:43:11.431727346 +0000 UTC m=+0.101818810 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 02 13:43:12 compute-0 ceph-mon[73668]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:13 compute-0 nova_compute[256940]: 2025-10-02 13:43:13.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:13 compute-0 nova_compute[256940]: 2025-10-02 13:43:13.211 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:43:13 compute-0 nova_compute[256940]: 2025-10-02 13:43:13.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:43:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:13 compute-0 nova_compute[256940]: 2025-10-02 13:43:13.241 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:43:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:13.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:14 compute-0 ceph-mon[73668]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:14 compute-0 nova_compute[256940]: 2025-10-02 13:43:14.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:14.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:15.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:15 compute-0 nova_compute[256940]: 2025-10-02 13:43:15.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:16 compute-0 ceph-mon[73668]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:17 compute-0 ceph-mon[73668]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:19.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:19 compute-0 nova_compute[256940]: 2025-10-02 13:43:19.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:20 compute-0 ceph-mon[73668]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:20.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:20 compute-0 nova_compute[256940]: 2025-10-02 13:43:20.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:21 compute-0 nova_compute[256940]: 2025-10-02 13:43:21.236 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:22 compute-0 ceph-mon[73668]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:23.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:24 compute-0 ceph-mon[73668]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:24 compute-0 podman[440198]: 2025-10-02 13:43:24.424923239 +0000 UTC m=+0.093412185 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 13:43:24 compute-0 podman[440199]: 2025-10-02 13:43:24.493811274 +0000 UTC m=+0.149265236 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:43:24 compute-0 nova_compute[256940]: 2025-10-02 13:43:24.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:24.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:25 compute-0 nova_compute[256940]: 2025-10-02 13:43:25.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:25.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:25 compute-0 nova_compute[256940]: 2025-10-02 13:43:25.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:26 compute-0 ceph-mon[73668]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:43:26.543 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:43:26.543 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:43:26.543 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:43:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:26.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:27.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:28 compute-0 sudo[440242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:28 compute-0 sudo[440242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:28 compute-0 sudo[440242]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:28 compute-0 sudo[440267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:28 compute-0 sudo[440267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:28 compute-0 sudo[440267]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:28 compute-0 ceph-mon[73668]: pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:43:28
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms']
Oct 02 13:43:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:43:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:29 compute-0 nova_compute[256940]: 2025-10-02 13:43:29.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:43:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:43:30 compute-0 ceph-mon[73668]: pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:30 compute-0 nova_compute[256940]: 2025-10-02 13:43:30.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:31.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:31 compute-0 ceph-mon[73668]: pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:34 compute-0 ceph-mon[73668]: pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:34 compute-0 nova_compute[256940]: 2025-10-02 13:43:34.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:35 compute-0 nova_compute[256940]: 2025-10-02 13:43:35.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:36 compute-0 ceph-mon[73668]: pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:38 compute-0 ceph-mon[73668]: pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:43:39 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 88K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1492 writes, 6920 keys, 1492 commit groups, 1.0 writes per commit group, ingest: 10.37 MB, 0.02 MB/s
                                           Interval WAL: 1492 writes, 1492 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.6      3.19              0.35        63    0.051       0      0       0.0       0.0
                                             L6      1/0   11.09 MB   0.0      0.8     0.1      0.6       0.6      0.0       0.0   5.5     80.5     69.1      9.57              1.91        62    0.154    505K    33K       0.0       0.0
                                            Sum      1/0   11.09 MB   0.0      0.8     0.1      0.6       0.8      0.1       0.0   6.5     60.4     61.2     12.75              2.26       125    0.102    505K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    119.8    117.7      0.72              0.23        12    0.060     70K   3018       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.6       0.6      0.0       0.0   0.0     80.5     69.1      9.57              1.91        62    0.154    505K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     37.6      3.18              0.35        62    0.051       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.117, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.76 GB write, 0.11 MB/s write, 0.75 GB read, 0.11 MB/s read, 12.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563da8e551f0#2 capacity: 304.00 MB usage: 84.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.001313 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5270,80.71 MB,26.5493%) FilterBlock(126,1.36 MB,0.448423%) IndexBlock(126,2.22 MB,0.730625%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:43:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:39 compute-0 nova_compute[256940]: 2025-10-02 13:43:39.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:40 compute-0 ceph-mon[73668]: pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:40 compute-0 nova_compute[256940]: 2025-10-02 13:43:40.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:43:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:41 compute-0 ceph-mon[73668]: pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:42 compute-0 podman[440300]: 2025-10-02 13:43:42.417507477 +0000 UTC m=+0.082855215 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:43:42 compute-0 podman[440299]: 2025-10-02 13:43:42.447531306 +0000 UTC m=+0.110740239 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:43:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:43.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:43 compute-0 ceph-mon[73668]: pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:44 compute-0 nova_compute[256940]: 2025-10-02 13:43:44.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:45.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:45 compute-0 ceph-mon[73668]: pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:45 compute-0 nova_compute[256940]: 2025-10-02 13:43:45.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:47.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:47 compute-0 ceph-mon[73668]: pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:48 compute-0 sudo[440342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:48 compute-0 sudo[440342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:48 compute-0 sudo[440342]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:48 compute-0 sudo[440367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:48 compute-0 sudo[440367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:48 compute-0 sudo[440367]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:49.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:49 compute-0 nova_compute[256940]: 2025-10-02 13:43:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:50 compute-0 ceph-mon[73668]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:50 compute-0 nova_compute[256940]: 2025-10-02 13:43:50.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:51.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:51 compute-0 ceph-mon[73668]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:43:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:43:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:53.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:54 compute-0 ceph-mon[73668]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:54 compute-0 sudo[440395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:54 compute-0 sudo[440395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-0 sudo[440395]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-0 nova_compute[256940]: 2025-10-02 13:43:54.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:54 compute-0 sudo[440433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:43:54 compute-0 sudo[440433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-0 sudo[440433]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-0 podman[440419]: 2025-10-02 13:43:54.703904862 +0000 UTC m=+0.104344425 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:43:54 compute-0 podman[440420]: 2025-10-02 13:43:54.761609461 +0000 UTC m=+0.152280174 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:43:54 compute-0 sudo[440484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:54 compute-0 sudo[440484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-0 sudo[440484]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-0 sudo[440512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:43:54 compute-0 sudo[440512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:55 compute-0 sudo[440512]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:55.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:55 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:43:55 compute-0 nova_compute[256940]: 2025-10-02 13:43:55.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:43:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev f78fdf32-8eac-43fc-bd99-be13daac87eb does not exist
Oct 02 13:43:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 11aa085e-3061-4bc8-bcb9-4c4252d0a1ab does not exist
Oct 02 13:43:55 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7bb18992-182c-41b8-8021-ea27ee5299fb does not exist
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:43:55 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:43:55 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:55 compute-0 sudo[440567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:55 compute-0 sudo[440567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:55 compute-0 sudo[440567]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:56 compute-0 sudo[440592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:43:56 compute-0 sudo[440592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:56 compute-0 sudo[440592]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:56 compute-0 sudo[440617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:56 compute-0 sudo[440617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:56 compute-0 sudo[440617]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:56 compute-0 sudo[440642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:43:56 compute-0 sudo[440642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:56 compute-0 podman[440707]: 2025-10-02 13:43:56.643039014 +0000 UTC m=+0.023212946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:43:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:43:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:43:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:43:57 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:57 compute-0 podman[440707]: 2025-10-02 13:43:57.108752661 +0000 UTC m=+0.488926573 container create 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:43:57 compute-0 nova_compute[256940]: 2025-10-02 13:43:57.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:57.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:57 compute-0 systemd[1]: Started libpod-conmon-815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3.scope.
Oct 02 13:43:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:43:57 compute-0 podman[440707]: 2025-10-02 13:43:57.73491351 +0000 UTC m=+1.115087472 container init 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:43:57 compute-0 podman[440707]: 2025-10-02 13:43:57.744583667 +0000 UTC m=+1.124757579 container start 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:43:57 compute-0 systemd[1]: libpod-815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3.scope: Deactivated successfully.
Oct 02 13:43:57 compute-0 admiring_sutherland[440725]: 167 167
Oct 02 13:43:57 compute-0 conmon[440725]: conmon 815fbaff0e15f5d7a11f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3.scope/container/memory.events
Oct 02 13:43:57 compute-0 podman[440707]: 2025-10-02 13:43:57.842730683 +0000 UTC m=+1.222904635 container attach 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:43:57 compute-0 podman[440707]: 2025-10-02 13:43:57.844908549 +0000 UTC m=+1.225082501 container died 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e7d950488931d1d99c978b18e42c3693ab6e44ebffc733ee0b2ac17f4434f3-merged.mount: Deactivated successfully.
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.254 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.256 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.256 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.257 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.257 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:43:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/86670586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:58 compute-0 ceph-mon[73668]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:43:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:43:58 compute-0 podman[440707]: 2025-10-02 13:43:58.576320765 +0000 UTC m=+1.956494677 container remove 815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:43:58 compute-0 systemd[1]: libpod-conmon-815fbaff0e15f5d7a11fa398f6fe499f7c6a2c4d04762423019fb1615030e3d3.scope: Deactivated successfully.
Oct 02 13:43:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:43:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:43:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:43:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553964524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.826 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:43:58 compute-0 podman[440769]: 2025-10-02 13:43:58.790863403 +0000 UTC m=+0.032468483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.990 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.991 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4072MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.991 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:58 compute-0 nova_compute[256940]: 2025-10-02 13:43:58.991 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:59 compute-0 podman[440769]: 2025-10-02 13:43:59.021530505 +0000 UTC m=+0.263135575 container create 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.060 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.060 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.086 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:43:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:59 compute-0 systemd[1]: Started libpod-conmon-79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9.scope.
Oct 02 13:43:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:43:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:43:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:59.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:59 compute-0 podman[440769]: 2025-10-02 13:43:59.489365396 +0000 UTC m=+0.730970496 container init 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:43:59 compute-0 podman[440769]: 2025-10-02 13:43:59.497071884 +0000 UTC m=+0.738676954 container start 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:43:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:43:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738925671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.585 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.595 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:43:59 compute-0 podman[440769]: 2025-10-02 13:43:59.648272429 +0000 UTC m=+0.889877519 container attach 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2452559841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3553964524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.745 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.748 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:43:59 compute-0 nova_compute[256940]: 2025-10-02 13:43:59.748 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:44:00 compute-0 charming_elion[440789]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:44:00 compute-0 charming_elion[440789]: --> relative data size: 1.0
Oct 02 13:44:00 compute-0 charming_elion[440789]: --> All data devices are unavailable
Oct 02 13:44:00 compute-0 systemd[1]: libpod-79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9.scope: Deactivated successfully.
Oct 02 13:44:00 compute-0 podman[440769]: 2025-10-02 13:44:00.42977159 +0000 UTC m=+1.671376680 container died 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 13:44:00 compute-0 nova_compute[256940]: 2025-10-02 13:44:00.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:00 compute-0 ceph-mon[73668]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3738925671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/955581800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1389360451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a187419417b4893955e69e5707e9ae5f2a535f1f3a19a2ccf1e8c220e1a815c0-merged.mount: Deactivated successfully.
Oct 02 13:44:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:01.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:01 compute-0 podman[440769]: 2025-10-02 13:44:01.610305377 +0000 UTC m=+2.851910447 container remove 79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:44:01 compute-0 systemd[1]: libpod-conmon-79e016c021289f795fc13111b18e11922d01e6c5a63fd5a6559c78c15fd2f7d9.scope: Deactivated successfully.
Oct 02 13:44:01 compute-0 sudo[440642]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:01 compute-0 sudo[440839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:01 compute-0 sudo[440839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:01 compute-0 sudo[440839]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:01 compute-0 sudo[440864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:44:01 compute-0 sudo[440864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:01 compute-0 sudo[440864]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:01 compute-0 sudo[440889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:01 compute-0 sudo[440889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:01 compute-0 sudo[440889]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:01 compute-0 sudo[440914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:44:01 compute-0 sudo[440914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:02 compute-0 ceph-mon[73668]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:02 compute-0 podman[440979]: 2025-10-02 13:44:02.348130818 +0000 UTC m=+0.035607144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:44:02 compute-0 podman[440979]: 2025-10-02 13:44:02.476687493 +0000 UTC m=+0.164163779 container create 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 13:44:02 compute-0 systemd[1]: Started libpod-conmon-6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417.scope.
Oct 02 13:44:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:44:02 compute-0 podman[440979]: 2025-10-02 13:44:02.762909299 +0000 UTC m=+0.450385615 container init 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:44:02 compute-0 podman[440979]: 2025-10-02 13:44:02.775168643 +0000 UTC m=+0.462644929 container start 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:44:02 compute-0 competent_hermann[440995]: 167 167
Oct 02 13:44:02 compute-0 systemd[1]: libpod-6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417.scope: Deactivated successfully.
Oct 02 13:44:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:03 compute-0 podman[440979]: 2025-10-02 13:44:03.031805271 +0000 UTC m=+0.719281657 container attach 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:44:03 compute-0 podman[440979]: 2025-10-02 13:44:03.032436937 +0000 UTC m=+0.719913263 container died 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 13:44:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c925afb91cb4119e40dd9692a085585e4f6b623b4dd1c3c79046eeb3b475b2d-merged.mount: Deactivated successfully.
Oct 02 13:44:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:03 compute-0 ceph-mon[73668]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:03 compute-0 podman[440979]: 2025-10-02 13:44:03.739791148 +0000 UTC m=+1.427267434 container remove 6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:44:03 compute-0 systemd[1]: libpod-conmon-6b284f2981177fa2cde825a6be43b1c2dd9882d36510b9d15c14f6e3e0dfd417.scope: Deactivated successfully.
Oct 02 13:44:04 compute-0 podman[441020]: 2025-10-02 13:44:03.935238397 +0000 UTC m=+0.029055796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:44:04 compute-0 podman[441020]: 2025-10-02 13:44:04.084467511 +0000 UTC m=+0.178284910 container create a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 02 13:44:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:04 compute-0 systemd[1]: Started libpod-conmon-a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c.scope.
Oct 02 13:44:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c41963b16c164bd621bf957e95923ef2424da014f8b79804e80e578deb74b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c41963b16c164bd621bf957e95923ef2424da014f8b79804e80e578deb74b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c41963b16c164bd621bf957e95923ef2424da014f8b79804e80e578deb74b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c41963b16c164bd621bf957e95923ef2424da014f8b79804e80e578deb74b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:04 compute-0 podman[441020]: 2025-10-02 13:44:04.400793399 +0000 UTC m=+0.494610808 container init a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:44:04 compute-0 podman[441020]: 2025-10-02 13:44:04.414914601 +0000 UTC m=+0.508732000 container start a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:44:04 compute-0 podman[441020]: 2025-10-02 13:44:04.518836264 +0000 UTC m=+0.612653743 container attach a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 13:44:04 compute-0 nova_compute[256940]: 2025-10-02 13:44:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:04.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:05 compute-0 zen_haslett[441037]: {
Oct 02 13:44:05 compute-0 zen_haslett[441037]:     "1": [
Oct 02 13:44:05 compute-0 zen_haslett[441037]:         {
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "devices": [
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "/dev/loop3"
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             ],
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "lv_name": "ceph_lv0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "lv_size": "7511998464",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "name": "ceph_lv0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "tags": {
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.cluster_name": "ceph",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.crush_device_class": "",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.encrypted": "0",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.osd_id": "1",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.type": "block",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:                 "ceph.vdo": "0"
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             },
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "type": "block",
Oct 02 13:44:05 compute-0 zen_haslett[441037]:             "vg_name": "ceph_vg0"
Oct 02 13:44:05 compute-0 zen_haslett[441037]:         }
Oct 02 13:44:05 compute-0 zen_haslett[441037]:     ]
Oct 02 13:44:05 compute-0 zen_haslett[441037]: }
Oct 02 13:44:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:05 compute-0 systemd[1]: libpod-a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c.scope: Deactivated successfully.
Oct 02 13:44:05 compute-0 podman[441047]: 2025-10-02 13:44:05.317529385 +0000 UTC m=+0.029563339 container died a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 13:44:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-52c41963b16c164bd621bf957e95923ef2424da014f8b79804e80e578deb74b4-merged.mount: Deactivated successfully.
Oct 02 13:44:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:44:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:44:05 compute-0 podman[441047]: 2025-10-02 13:44:05.369062525 +0000 UTC m=+0.081096459 container remove a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:44:05 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:44:05 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:44:05 compute-0 systemd[1]: libpod-conmon-a2ab06abcc0682f89b0bb27b92509302df0072acacfc322e4959c7309587178c.scope: Deactivated successfully.
Oct 02 13:44:05 compute-0 sudo[440914]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:05 compute-0 sudo[441062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:05 compute-0 sudo[441062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:05 compute-0 sudo[441062]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:05 compute-0 sudo[441087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:44:05 compute-0 sudo[441087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:05 compute-0 sudo[441087]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:05 compute-0 sudo[441112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:05 compute-0 sudo[441112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:05 compute-0 sudo[441112]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:05 compute-0 sudo[441137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:44:05 compute-0 sudo[441137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:05 compute-0 nova_compute[256940]: 2025-10-02 13:44:05.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.035520527 +0000 UTC m=+0.046827651 container create 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 13:44:06 compute-0 systemd[1]: Started libpod-conmon-638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f.scope.
Oct 02 13:44:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.11796819 +0000 UTC m=+0.129275304 container init 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.021078317 +0000 UTC m=+0.032385451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.124912438 +0000 UTC m=+0.136219542 container start 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.127784502 +0000 UTC m=+0.139091606 container attach 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:44:06 compute-0 vibrant_joliot[441220]: 167 167
Oct 02 13:44:06 compute-0 systemd[1]: libpod-638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f.scope: Deactivated successfully.
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.132345669 +0000 UTC m=+0.143652783 container died 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-04acc2c7961518fea170770a6a2abed14f784948a64808905df1de6c4e5ab622-merged.mount: Deactivated successfully.
Oct 02 13:44:06 compute-0 podman[441203]: 2025-10-02 13:44:06.166761011 +0000 UTC m=+0.178068115 container remove 638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 13:44:06 compute-0 systemd[1]: libpod-conmon-638a8a85ee04ec61652b831e797829a3205bfd53e027ee000799c00ed9a0515f.scope: Deactivated successfully.
Oct 02 13:44:06 compute-0 podman[441243]: 2025-10-02 13:44:06.33216355 +0000 UTC m=+0.043638979 container create 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:44:06 compute-0 ceph-mon[73668]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:44:06 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:44:06 compute-0 systemd[1]: Started libpod-conmon-00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82.scope.
Oct 02 13:44:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c921860ddac9ca22667d151994c1bd5d7797841f975d5df196557a8fa97c8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c921860ddac9ca22667d151994c1bd5d7797841f975d5df196557a8fa97c8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c921860ddac9ca22667d151994c1bd5d7797841f975d5df196557a8fa97c8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c921860ddac9ca22667d151994c1bd5d7797841f975d5df196557a8fa97c8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:44:06 compute-0 podman[441243]: 2025-10-02 13:44:06.315568625 +0000 UTC m=+0.027044054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:44:06 compute-0 podman[441243]: 2025-10-02 13:44:06.417072027 +0000 UTC m=+0.128547486 container init 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 13:44:06 compute-0 podman[441243]: 2025-10-02 13:44:06.423615844 +0000 UTC m=+0.135091253 container start 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:44:06 compute-0 podman[441243]: 2025-10-02 13:44:06.428439228 +0000 UTC m=+0.139914687 container attach 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 13:44:06 compute-0 nova_compute[256940]: 2025-10-02 13:44:06.749 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:06 compute-0 nova_compute[256940]: 2025-10-02 13:44:06.751 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:06 compute-0 nova_compute[256940]: 2025-10-02 13:44:06.751 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:06 compute-0 nova_compute[256940]: 2025-10-02 13:44:06.751 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:44:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:07 compute-0 hopeful_buck[441259]: {
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:         "osd_id": 1,
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:         "type": "bluestore"
Oct 02 13:44:07 compute-0 hopeful_buck[441259]:     }
Oct 02 13:44:07 compute-0 hopeful_buck[441259]: }
Oct 02 13:44:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:07 compute-0 podman[441243]: 2025-10-02 13:44:07.352665767 +0000 UTC m=+1.064141206 container died 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 02 13:44:07 compute-0 systemd[1]: libpod-00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82.scope: Deactivated successfully.
Oct 02 13:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c921860ddac9ca22667d151994c1bd5d7797841f975d5df196557a8fa97c8b-merged.mount: Deactivated successfully.
Oct 02 13:44:07 compute-0 podman[441243]: 2025-10-02 13:44:07.433250142 +0000 UTC m=+1.144725541 container remove 00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_buck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:44:07 compute-0 systemd[1]: libpod-conmon-00b5edbff617d0671224a3a465f098ff091427e072ff4a6db6702a8232cf5e82.scope: Deactivated successfully.
Oct 02 13:44:07 compute-0 sudo[441137]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:44:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:44:07 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev fceb661b-536d-4df7-833e-c194d6e854ca does not exist
Oct 02 13:44:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev d0640232-d491-4b2a-9547-99c76d377010 does not exist
Oct 02 13:44:07 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c992d6c2-497f-4347-bcfb-992cda40896f does not exist
Oct 02 13:44:07 compute-0 sudo[441294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:07 compute-0 sudo[441294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:07 compute-0 sudo[441294]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:07 compute-0 sudo[441319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:44:07 compute-0 sudo[441319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:07 compute-0 sudo[441319]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:08 compute-0 ceph-mon[73668]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:08 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:08 compute-0 sudo[441344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:08 compute-0 sudo[441344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:08 compute-0 sudo[441344]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:08 compute-0 sudo[441369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:08 compute-0 sudo[441369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:08 compute-0 sudo[441369]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:09 compute-0 nova_compute[256940]: 2025-10-02 13:44:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:09 compute-0 nova_compute[256940]: 2025-10-02 13:44:09.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:09 compute-0 ceph-mon[73668]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:09 compute-0 nova_compute[256940]: 2025-10-02 13:44:09.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:10 compute-0 nova_compute[256940]: 2025-10-02 13:44:10.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:44:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:11.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:44:12 compute-0 nova_compute[256940]: 2025-10-02 13:44:12.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:12 compute-0 ceph-mon[73668]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:13 compute-0 podman[441398]: 2025-10-02 13:44:13.392268044 +0000 UTC m=+0.056420997 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 02 13:44:13 compute-0 podman[441397]: 2025-10-02 13:44:13.392336246 +0000 UTC m=+0.056572201 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Oct 02 13:44:13 compute-0 ceph-mon[73668]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:14 compute-0 nova_compute[256940]: 2025-10-02 13:44:14.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:15 compute-0 nova_compute[256940]: 2025-10-02 13:44:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:15 compute-0 nova_compute[256940]: 2025-10-02 13:44:15.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:44:15 compute-0 nova_compute[256940]: 2025-10-02 13:44:15.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:44:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:15 compute-0 nova_compute[256940]: 2025-10-02 13:44:15.318 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:44:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:15 compute-0 ceph-mon[73668]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:15 compute-0 nova_compute[256940]: 2025-10-02 13:44:15.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:17.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:18 compute-0 ceph-mon[73668]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:19.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:19 compute-0 nova_compute[256940]: 2025-10-02 13:44:19.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:19 compute-0 ceph-mon[73668]: pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:20 compute-0 nova_compute[256940]: 2025-10-02 13:44:20.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:21 compute-0 ceph-mon[73668]: pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:22 compute-0 nova_compute[256940]: 2025-10-02 13:44:22.313 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:44:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:44:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:23 compute-0 ceph-mon[73668]: pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:24 compute-0 nova_compute[256940]: 2025-10-02 13:44:24.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:24.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:25 compute-0 podman[441439]: 2025-10-02 13:44:25.410149309 +0000 UTC m=+0.069247696 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:44:25 compute-0 podman[441440]: 2025-10-02 13:44:25.420829743 +0000 UTC m=+0.085956054 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:44:25 compute-0 nova_compute[256940]: 2025-10-02 13:44:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:26 compute-0 ceph-mon[73668]: pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:44:26.544 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:44:26.545 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:44:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:44:26.545 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:44:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:44:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:44:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:27.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:27 compute-0 ceph-mon[73668]: pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:28 compute-0 sudo[441483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:28 compute-0 sudo[441483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:28 compute-0 sudo[441483]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:28 compute-0 sudo[441508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:28 compute-0 sudo[441508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:28 compute-0 sudo[441508]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:44:28
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'backups']
Oct 02 13:44:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:44:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:29.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:29 compute-0 ceph-mon[73668]: pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:29 compute-0 nova_compute[256940]: 2025-10-02 13:44:29.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:44:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:44:30 compute-0 nova_compute[256940]: 2025-10-02 13:44:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:31 compute-0 ceph-mon[73668]: pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:32.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:33.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:33 compute-0 ceph-mon[73668]: pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:34 compute-0 nova_compute[256940]: 2025-10-02 13:44:34.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:34.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:35.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:35 compute-0 ceph-mon[73668]: pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:35 compute-0 nova_compute[256940]: 2025-10-02 13:44:35.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:36.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:37.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:37 compute-0 ceph-mon[73668]: pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:38.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:39 compute-0 nova_compute[256940]: 2025-10-02 13:44:39.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:40 compute-0 ceph-mon[73668]: pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:40 compute-0 nova_compute[256940]: 2025-10-02 13:44:40.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:44:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:44:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:41.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:44:41 compute-0 ceph-mon[73668]: pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:43.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:44 compute-0 ceph-mon[73668]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:44 compute-0 podman[441541]: 2025-10-02 13:44:44.38100958 +0000 UTC m=+0.057277478 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:44:44 compute-0 podman[441542]: 2025-10-02 13:44:44.408905915 +0000 UTC m=+0.082046473 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:44:44 compute-0 nova_compute[256940]: 2025-10-02 13:44:44.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:45 compute-0 nova_compute[256940]: 2025-10-02 13:44:45.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:46 compute-0 ceph-mon[73668]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:47.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:47 compute-0 ceph-mon[73668]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:48 compute-0 sudo[441583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:48 compute-0 sudo[441583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:48 compute-0 sudo[441583]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:48 compute-0 sudo[441608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:48 compute-0 sudo[441608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:48 compute-0 sudo[441608]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.157731) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689157807, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1483, "num_deletes": 255, "total_data_size": 2612207, "memory_usage": 2651008, "flush_reason": "Manual Compaction"}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689180277, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 2572239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88051, "largest_seqno": 89532, "table_properties": {"data_size": 2565284, "index_size": 4025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14215, "raw_average_key_size": 19, "raw_value_size": 2551404, "raw_average_value_size": 3543, "num_data_blocks": 177, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412539, "oldest_key_time": 1759412539, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 22588 microseconds, and 8471 cpu microseconds.
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.180330) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 2572239 bytes OK
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.180355) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.182399) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.182421) EVENT_LOG_v1 {"time_micros": 1759412689182414, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.182444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 2605915, prev total WAL file size 2605915, number of live WAL files 2.
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.183570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373732' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(2511KB)], [203(11MB)]
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689183629, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 14200666, "oldest_snapshot_seqno": -1}
Oct 02 13:44:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 11709 keys, 14066432 bytes, temperature: kUnknown
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689288530, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 14066432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992714, "index_size": 43333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 310120, "raw_average_key_size": 26, "raw_value_size": 13790025, "raw_average_value_size": 1177, "num_data_blocks": 1645, "num_entries": 11709, "num_filter_entries": 11709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.288833) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14066432 bytes
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.290562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.3 rd, 134.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 12238, records dropped: 529 output_compression: NoCompression
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.290589) EVENT_LOG_v1 {"time_micros": 1759412689290577, "job": 128, "event": "compaction_finished", "compaction_time_micros": 104987, "compaction_time_cpu_micros": 48785, "output_level": 6, "num_output_files": 1, "total_output_size": 14066432, "num_input_records": 12238, "num_output_records": 11709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689291429, "job": 128, "event": "table_file_deletion", "file_number": 205}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689294532, "job": 128, "event": "table_file_deletion", "file_number": 203}
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.183496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.294624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.294632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.294634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.294636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:44:49.294638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:49 compute-0 nova_compute[256940]: 2025-10-02 13:44:49.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:50 compute-0 ceph-mon[73668]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:50 compute-0 nova_compute[256940]: 2025-10-02 13:44:50.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:52 compute-0 ceph-mon[73668]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:53.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:54 compute-0 ceph-mon[73668]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:54 compute-0 nova_compute[256940]: 2025-10-02 13:44:54.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:54 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:54 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:54 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:54.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:55.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:55 compute-0 ceph-mon[73668]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:55 compute-0 nova_compute[256940]: 2025-10-02 13:44:55.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:56 compute-0 podman[441637]: 2025-10-02 13:44:56.42294011 +0000 UTC m=+0.076933073 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:44:56 compute-0 podman[441638]: 2025-10-02 13:44:56.455592827 +0000 UTC m=+0.106763578 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:44:56 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:56 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:44:56 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:56.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:44:57 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/290588798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:44:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:58 compute-0 ceph-mon[73668]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3055510341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.307 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.308 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.308 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.308 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.308 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:44:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:44:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:44:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410477897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.783 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:44:58 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:58 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:58 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:58.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.982 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.984 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4094MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.985 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:44:58 compute-0 nova_compute[256940]: 2025-10-02 13:44:58.985 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.093 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.093 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/410477897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:44:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:59.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.519 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:44:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357204207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.979 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:44:59 compute-0 nova_compute[256940]: 2025-10-02 13:44:59.987 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:45:00 compute-0 nova_compute[256940]: 2025-10-02 13:45:00.019 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:45:00 compute-0 nova_compute[256940]: 2025-10-02 13:45:00.021 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:45:00 compute-0 nova_compute[256940]: 2025-10-02 13:45:00.021 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:00 compute-0 ceph-mon[73668]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/357204207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:00 compute-0 nova_compute[256940]: 2025-10-02 13:45:00.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:00 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:00 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:00 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:00.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:01 compute-0 ceph-mon[73668]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2734375241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2899976604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:02 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:02 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:45:02 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:45:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:03 compute-0 ceph-mon[73668]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:04 compute-0 nova_compute[256940]: 2025-10-02 13:45:04.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:04 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:04 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:04 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:05.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:05 compute-0 nova_compute[256940]: 2025-10-02 13:45:05.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:06 compute-0 ceph-mon[73668]: pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:06 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:06 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:06 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:08 compute-0 sudo[441732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:08 compute-0 sudo[441732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-0 sudo[441732]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-0 nova_compute[256940]: 2025-10-02 13:45:08.023 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:08 compute-0 nova_compute[256940]: 2025-10-02 13:45:08.023 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:08 compute-0 nova_compute[256940]: 2025-10-02 13:45:08.023 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:08 compute-0 nova_compute[256940]: 2025-10-02 13:45:08.023 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:45:08 compute-0 sudo[441757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:08 compute-0 sudo[441757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-0 sudo[441757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-0 sudo[441782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:08 compute-0 sudo[441782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-0 sudo[441782]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-0 sudo[441807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:45:08 compute-0 sudo[441807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-0 ceph-mon[73668]: pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:08 compute-0 podman[441906]: 2025-10-02 13:45:08.668513493 +0000 UTC m=+0.058890539 container exec 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:45:08 compute-0 podman[441906]: 2025-10-02 13:45:08.774460633 +0000 UTC m=+0.164837639 container exec_died 59b10e0ac16514577faaf1a75ecc9b2500053126049fa48acd80fe8a7aaf1e05 (image=quay.io/ceph/ceph:v18, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:45:08 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:08 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:08 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:08 compute-0 sudo[441957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:08 compute-0 sudo[441957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-0 sudo[441957]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-0 sudo[441999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:09 compute-0 sudo[441999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-0 sudo[441999]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:09 compute-0 nova_compute[256940]: 2025-10-02 13:45:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:09 compute-0 nova_compute[256940]: 2025-10-02 13:45:09.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:45:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:45:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:09 compute-0 podman[442092]: 2025-10-02 13:45:09.353520888 +0000 UTC m=+0.065673923 container exec 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:45:09 compute-0 podman[442092]: 2025-10-02 13:45:09.370160717 +0000 UTC m=+0.082313732 container exec_died 27298eed99f54b7f5d8319036e1a523d978dc749aac3d51cb1553574fce41791 (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-0-zhecum)
Oct 02 13:45:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:09.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:09 compute-0 podman[442157]: 2025-10-02 13:45:09.591800509 +0000 UTC m=+0.056767654 container exec 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, release=1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public)
Oct 02 13:45:09 compute-0 podman[442157]: 2025-10-02 13:45:09.604612209 +0000 UTC m=+0.069579354 container exec_died 09b3831fc007962c4d37ee988221572d1c4444a246c6de342c04e1e25836bf47 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-0-nghmbz, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, release=1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 02 13:45:09 compute-0 sudo[441807]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:45:09 compute-0 nova_compute[256940]: 2025-10-02 13:45:09.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:45:09 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:09 compute-0 sudo[442208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:09 compute-0 sudo[442208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-0 sudo[442208]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-0 sudo[442233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:09 compute-0 sudo[442233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-0 sudo[442233]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-0 sudo[442258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:09 compute-0 sudo[442258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-0 sudo[442258]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-0 sudo[442283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:45:10 compute-0 sudo[442283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-0 ceph-mon[73668]: pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-0 sudo[442283]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b75b6804-86fd-44b1-ae35-6837ce751864 does not exist
Oct 02 13:45:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 7e54b68f-7c6e-4569-9cf9-ffa04051b7ad does not exist
Oct 02 13:45:10 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 249acbdf-ed23-44af-a784-4a27572a3cec does not exist
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:45:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:45:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:10 compute-0 sudo[442339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:10 compute-0 sudo[442339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-0 sudo[442339]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-0 sudo[442364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:10 compute-0 sudo[442364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-0 sudo[442364]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-0 sudo[442389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:10 compute-0 sudo[442389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-0 sudo[442389]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-0 nova_compute[256940]: 2025-10-02 13:45:10.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:10 compute-0 sudo[442414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:45:10 compute-0 sudo[442414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:10 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:10 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.287402151 +0000 UTC m=+0.049344863 container create 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:45:11 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:11 compute-0 systemd[1]: Started libpod-conmon-13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649.scope.
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.262493219 +0000 UTC m=+0.024435961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:11.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.404728305 +0000 UTC m=+0.166671067 container init 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.418959702 +0000 UTC m=+0.180902454 container start 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.423638832 +0000 UTC m=+0.185581554 container attach 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 13:45:11 compute-0 zen_ptolemy[442498]: 167 167
Oct 02 13:45:11 compute-0 systemd[1]: libpod-13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649.scope: Deactivated successfully.
Oct 02 13:45:11 compute-0 conmon[442498]: conmon 13cdb317deadb86b1e57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649.scope/container/memory.events
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.430588502 +0000 UTC m=+0.192531224 container died 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 13:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab79bcb6ebbd856eebc240d9025a92b4c3043206a6cffee63740aefb1d1be7d-merged.mount: Deactivated successfully.
Oct 02 13:45:11 compute-0 podman[442481]: 2025-10-02 13:45:11.507200116 +0000 UTC m=+0.269142828 container remove 13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:45:11 compute-0 systemd[1]: libpod-conmon-13cdb317deadb86b1e57efaa41c92da5d5139ea62c12a3348422d0f943d70649.scope: Deactivated successfully.
Oct 02 13:45:11 compute-0 podman[442522]: 2025-10-02 13:45:11.704246425 +0000 UTC m=+0.060063499 container create 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 13:45:11 compute-0 systemd[1]: Started libpod-conmon-9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a.scope.
Oct 02 13:45:11 compute-0 podman[442522]: 2025-10-02 13:45:11.683442709 +0000 UTC m=+0.039259833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:11 compute-0 podman[442522]: 2025-10-02 13:45:11.807420264 +0000 UTC m=+0.163237318 container init 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 13:45:11 compute-0 podman[442522]: 2025-10-02 13:45:11.819770512 +0000 UTC m=+0.175587546 container start 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 13:45:11 compute-0 podman[442522]: 2025-10-02 13:45:11.82357028 +0000 UTC m=+0.179387344 container attach 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:12 compute-0 ceph-mon[73668]: pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:12 compute-0 focused_mendeleev[442538]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:45:12 compute-0 focused_mendeleev[442538]: --> relative data size: 1.0
Oct 02 13:45:12 compute-0 focused_mendeleev[442538]: --> All data devices are unavailable
Oct 02 13:45:12 compute-0 systemd[1]: libpod-9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a.scope: Deactivated successfully.
Oct 02 13:45:12 compute-0 podman[442522]: 2025-10-02 13:45:12.636554724 +0000 UTC m=+0.992371798 container died 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b8b93585a94c23ebf88d11cfc01d8044b2c9eec8349d76b166d93f90cce1203-merged.mount: Deactivated successfully.
Oct 02 13:45:12 compute-0 podman[442522]: 2025-10-02 13:45:12.743966453 +0000 UTC m=+1.099783487 container remove 9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 13:45:12 compute-0 systemd[1]: libpod-conmon-9f616d680e00763a130db5e385727af98b32f31e4b6baeacd3f3149ad0fadb2a.scope: Deactivated successfully.
Oct 02 13:45:12 compute-0 sudo[442414]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:12 compute-0 sudo[442567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:12 compute-0 sudo[442567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:12 compute-0 sudo[442567]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:12 compute-0 sudo[442592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:12 compute-0 sudo[442592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:12 compute-0 sudo[442592]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:12 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:12 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:12 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:12.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:12 compute-0 sudo[442617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:12 compute-0 sudo[442617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:12 compute-0 sudo[442617]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:13 compute-0 sudo[442643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:45:13 compute-0 sudo[442643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.329175865 +0000 UTC m=+0.040767552 container create 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 13:45:13 compute-0 systemd[1]: Started libpod-conmon-0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7.scope.
Oct 02 13:45:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:13.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.311618302 +0000 UTC m=+0.023210019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.412757199 +0000 UTC m=+0.124348906 container init 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.420082688 +0000 UTC m=+0.131674375 container start 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.42365702 +0000 UTC m=+0.135248737 container attach 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 02 13:45:13 compute-0 pedantic_cori[442726]: 167 167
Oct 02 13:45:13 compute-0 systemd[1]: libpod-0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7.scope: Deactivated successfully.
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.426200075 +0000 UTC m=+0.137791762 container died 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da39a8b66bb1875310193b137099634f00c5a66d1c7525bbf05271afa736edb1-merged.mount: Deactivated successfully.
Oct 02 13:45:13 compute-0 podman[442710]: 2025-10-02 13:45:13.464665567 +0000 UTC m=+0.176257284 container remove 0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 13:45:13 compute-0 systemd[1]: libpod-conmon-0be75821c1d1f9746bfb30c127872c729689d14b3ae557475a3075450f0604e7.scope: Deactivated successfully.
Oct 02 13:45:13 compute-0 podman[442751]: 2025-10-02 13:45:13.629156977 +0000 UTC m=+0.051177341 container create 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:13 compute-0 systemd[1]: Started libpod-conmon-9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883.scope.
Oct 02 13:45:13 compute-0 podman[442751]: 2025-10-02 13:45:13.605743433 +0000 UTC m=+0.027763897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad191b13d6fece4589b73bca96138c5c4473b812cbb092a1435b7b5cd3252cc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad191b13d6fece4589b73bca96138c5c4473b812cbb092a1435b7b5cd3252cc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad191b13d6fece4589b73bca96138c5c4473b812cbb092a1435b7b5cd3252cc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad191b13d6fece4589b73bca96138c5c4473b812cbb092a1435b7b5cd3252cc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:13 compute-0 podman[442751]: 2025-10-02 13:45:13.762660397 +0000 UTC m=+0.184680801 container init 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:45:13 compute-0 podman[442751]: 2025-10-02 13:45:13.777641213 +0000 UTC m=+0.199661587 container start 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:45:13 compute-0 podman[442751]: 2025-10-02 13:45:13.781677548 +0000 UTC m=+0.203697942 container attach 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:14 compute-0 nova_compute[256940]: 2025-10-02 13:45:14.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:14 compute-0 ceph-mon[73668]: pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:14 compute-0 elegant_neumann[442768]: {
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:     "1": [
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:         {
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "devices": [
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "/dev/loop3"
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             ],
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "lv_name": "ceph_lv0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "lv_size": "7511998464",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "name": "ceph_lv0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "tags": {
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.cluster_name": "ceph",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.crush_device_class": "",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.encrypted": "0",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.osd_id": "1",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.type": "block",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:                 "ceph.vdo": "0"
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             },
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "type": "block",
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:             "vg_name": "ceph_vg0"
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:         }
Oct 02 13:45:14 compute-0 elegant_neumann[442768]:     ]
Oct 02 13:45:14 compute-0 elegant_neumann[442768]: }
Oct 02 13:45:14 compute-0 systemd[1]: libpod-9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883.scope: Deactivated successfully.
Oct 02 13:45:14 compute-0 podman[442751]: 2025-10-02 13:45:14.570256322 +0000 UTC m=+0.992276696 container died 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad191b13d6fece4589b73bca96138c5c4473b812cbb092a1435b7b5cd3252cc7-merged.mount: Deactivated successfully.
Oct 02 13:45:14 compute-0 podman[442751]: 2025-10-02 13:45:14.638527702 +0000 UTC m=+1.060548066 container remove 9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:45:14 compute-0 systemd[1]: libpod-conmon-9350bd640a56c97bb1b008d18ce17c849993b0a990d2ba0e1c81c2608126b883.scope: Deactivated successfully.
Oct 02 13:45:14 compute-0 sudo[442643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:14 compute-0 podman[442785]: 2025-10-02 13:45:14.684849546 +0000 UTC m=+0.077971551 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:45:14 compute-0 podman[442778]: 2025-10-02 13:45:14.701156846 +0000 UTC m=+0.094918207 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:45:14 compute-0 sudo[442830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:14 compute-0 sudo[442830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:14 compute-0 sudo[442830]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:14 compute-0 nova_compute[256940]: 2025-10-02 13:45:14.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:14 compute-0 sudo[442855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:14 compute-0 sudo[442855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:14 compute-0 sudo[442855]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:14 compute-0 sudo[442880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:14 compute-0 sudo[442880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:14 compute-0 sudo[442880]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:14 compute-0 sudo[442905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:45:14 compute-0 sudo[442905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:14 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:14 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:14 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.265714027 +0000 UTC m=+0.045209666 container create fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:45:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:15 compute-0 systemd[1]: Started libpod-conmon-fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98.scope.
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.244462149 +0000 UTC m=+0.023957818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.368894236 +0000 UTC m=+0.148389895 container init fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.378552905 +0000 UTC m=+0.158048544 container start fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.382824575 +0000 UTC m=+0.162320234 container attach fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 13:45:15 compute-0 optimistic_williams[442988]: 167 167
Oct 02 13:45:15 compute-0 systemd[1]: libpod-fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98.scope: Deactivated successfully.
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.385948936 +0000 UTC m=+0.165444575 container died fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b19466b35defe29fa07765f65fd2172bb65f3b37f0537b4c10eba138f7f4ef33-merged.mount: Deactivated successfully.
Oct 02 13:45:15 compute-0 podman[442972]: 2025-10-02 13:45:15.421128613 +0000 UTC m=+0.200624242 container remove fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 13:45:15 compute-0 systemd[1]: libpod-conmon-fe1902607f9ed59a9698a506f05a3bd350d80b9d24c38d30c3f5fe5a0369ad98.scope: Deactivated successfully.
Oct 02 13:45:15 compute-0 podman[443011]: 2025-10-02 13:45:15.608014829 +0000 UTC m=+0.054968427 container create aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 13:45:15 compute-0 systemd[1]: Started libpod-conmon-aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297.scope.
Oct 02 13:45:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbefea7fa893d31dbe5d20f206ea2b820092d32bec049d6d5e831e3c83392b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbefea7fa893d31dbe5d20f206ea2b820092d32bec049d6d5e831e3c83392b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbefea7fa893d31dbe5d20f206ea2b820092d32bec049d6d5e831e3c83392b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbefea7fa893d31dbe5d20f206ea2b820092d32bec049d6d5e831e3c83392b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:45:15 compute-0 podman[443011]: 2025-10-02 13:45:15.578612162 +0000 UTC m=+0.025565800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:45:15 compute-0 podman[443011]: 2025-10-02 13:45:15.680652732 +0000 UTC m=+0.127606310 container init aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:45:15 compute-0 podman[443011]: 2025-10-02 13:45:15.689854529 +0000 UTC m=+0.136808087 container start aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:45:15 compute-0 podman[443011]: 2025-10-02 13:45:15.692885037 +0000 UTC m=+0.139838625 container attach aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:45:15 compute-0 nova_compute[256940]: 2025-10-02 13:45:15.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:16 compute-0 ceph-mon[73668]: pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:16 compute-0 exciting_panini[443027]: {
Oct 02 13:45:16 compute-0 exciting_panini[443027]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:45:16 compute-0 exciting_panini[443027]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:45:16 compute-0 exciting_panini[443027]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:45:16 compute-0 exciting_panini[443027]:         "osd_id": 1,
Oct 02 13:45:16 compute-0 exciting_panini[443027]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:45:16 compute-0 exciting_panini[443027]:         "type": "bluestore"
Oct 02 13:45:16 compute-0 exciting_panini[443027]:     }
Oct 02 13:45:16 compute-0 exciting_panini[443027]: }
Oct 02 13:45:16 compute-0 systemd[1]: libpod-aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297.scope: Deactivated successfully.
Oct 02 13:45:16 compute-0 podman[443011]: 2025-10-02 13:45:16.571225585 +0000 UTC m=+1.018179143 container died aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 13:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ecbefea7fa893d31dbe5d20f206ea2b820092d32bec049d6d5e831e3c83392b-merged.mount: Deactivated successfully.
Oct 02 13:45:16 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:16 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:16 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:16.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:17 compute-0 podman[443011]: 2025-10-02 13:45:17.019387045 +0000 UTC m=+1.466340633 container remove aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:45:17 compute-0 systemd[1]: libpod-conmon-aae9ed77efbd332bf23ddec181dd676e4eb1d074bbcfdc35b550532433933297.scope: Deactivated successfully.
Oct 02 13:45:17 compute-0 sudo[442905]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:45:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:45:17 compute-0 nova_compute[256940]: 2025-10-02 13:45:17.213 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:17 compute-0 nova_compute[256940]: 2025-10-02 13:45:17.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:45:17 compute-0 nova_compute[256940]: 2025-10-02 13:45:17.214 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:45:17 compute-0 nova_compute[256940]: 2025-10-02 13:45:17.238 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:45:17 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev b75f81d6-68dd-4d26-a062-c516044c283d does not exist
Oct 02 13:45:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev ae3ca467-d059-4974-918c-a8cb09a2ee58 does not exist
Oct 02 13:45:17 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 6e8fc152-b06b-4cf3-97da-b9940de5f566 does not exist
Oct 02 13:45:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:17 compute-0 sudo[443063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:17 compute-0 sudo[443063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:17 compute-0 sudo[443063]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:17 compute-0 sudo[443088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:45:17 compute-0 sudo[443088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:17 compute-0 sudo[443088]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:18 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:18 compute-0 ceph-mon[73668]: pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:18 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:18 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:18 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:19 compute-0 nova_compute[256940]: 2025-10-02 13:45:19.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:20 compute-0 ceph-mon[73668]: pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:20 compute-0 nova_compute[256940]: 2025-10-02 13:45:20.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:20 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:20 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:20 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:22 compute-0 ceph-mon[73668]: pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:22 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:22 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:22 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:23 compute-0 nova_compute[256940]: 2025-10-02 13:45:23.232 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:23 compute-0 ceph-mon[73668]: pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:24 compute-0 nova_compute[256940]: 2025-10-02 13:45:24.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:24 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:24 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:24 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:25 compute-0 nova_compute[256940]: 2025-10-02 13:45:25.206 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:25 compute-0 nova_compute[256940]: 2025-10-02 13:45:25.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:26 compute-0 ceph-mon[73668]: pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:45:26.545 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:45:26.546 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:45:26.546 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:26 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:26 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:26 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:27 compute-0 podman[443118]: 2025-10-02 13:45:27.406982959 +0000 UTC m=+0.064797871 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:45:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:27 compute-0 podman[443119]: 2025-10-02 13:45:27.437905166 +0000 UTC m=+0.096636071 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:45:28 compute-0 ceph-mon[73668]: pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:28 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:28 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:28 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:45:28
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'default.rgw.meta']
Oct 02 13:45:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:45:29 compute-0 sudo[443164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:29 compute-0 sudo[443164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:29 compute-0 sudo[443164]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:29 compute-0 sudo[443189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:29 compute-0 sudo[443189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:29 compute-0 sudo[443189]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:29 compute-0 nova_compute[256940]: 2025-10-02 13:45:29.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:45:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:45:30 compute-0 ceph-mon[73668]: pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:30 compute-0 nova_compute[256940]: 2025-10-02 13:45:30.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:30 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:30 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:30 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:30.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:32 compute-0 ceph-mon[73668]: pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:32 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:32 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:32 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:33 compute-0 ceph-mon[73668]: pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:34 compute-0 nova_compute[256940]: 2025-10-02 13:45:34.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:34 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:34 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:34 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:34.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.484231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735484760, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 695, "num_deletes": 251, "total_data_size": 934589, "memory_usage": 947520, "flush_reason": "Manual Compaction"}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735491717, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 913877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89533, "largest_seqno": 90227, "table_properties": {"data_size": 910235, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8415, "raw_average_key_size": 19, "raw_value_size": 902881, "raw_average_value_size": 2104, "num_data_blocks": 65, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412690, "oldest_key_time": 1759412690, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 7529 microseconds, and 3056 cpu microseconds.
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.491763) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 913877 bytes OK
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.491786) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494492) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494548) EVENT_LOG_v1 {"time_micros": 1759412735494536, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 931021, prev total WAL file size 931021, number of live WAL files 2.
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.495267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(892KB)], [206(13MB)]
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735495326, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14980309, "oldest_snapshot_seqno": -1}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 11621 keys, 12970926 bytes, temperature: kUnknown
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735607223, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12970926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12898968, "index_size": 41829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29061, "raw_key_size": 308965, "raw_average_key_size": 26, "raw_value_size": 12698840, "raw_average_value_size": 1092, "num_data_blocks": 1573, "num_entries": 11621, "num_filter_entries": 11621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405416, "oldest_key_time": 0, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804ca3a6-55d1-491a-b05c-fd7bfcd46561", "db_session_id": "J3W12KU9TJU0P77F61TZ", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.607739) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12970926 bytes
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.609686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 116.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(30.6) write-amplify(14.2) OK, records in: 12138, records dropped: 517 output_compression: NoCompression
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.609757) EVENT_LOG_v1 {"time_micros": 1759412735609710, "job": 130, "event": "compaction_finished", "compaction_time_micros": 111852, "compaction_time_cpu_micros": 54889, "output_level": 6, "num_output_files": 1, "total_output_size": 12970926, "num_input_records": 12138, "num_output_records": 11621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735610444, "job": 130, "event": "table_file_deletion", "file_number": 208}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735617132, "job": 130, "event": "table_file_deletion", "file_number": 206}
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.495181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.617203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.617212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.617215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.617218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 ceph-mon[73668]: rocksdb: (Original Log Time 2025/10/02-13:45:35.617221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-0 nova_compute[256940]: 2025-10-02 13:45:35.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:36 compute-0 ceph-mon[73668]: pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:36 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:36 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:36 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:38 compute-0 ceph-mon[73668]: pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:38 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:38 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:38 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:38.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:45:39 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 60K writes, 232K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.82 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 701 writes, 1075 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
                                           Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:45:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:39.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:39 compute-0 nova_compute[256940]: 2025-10-02 13:45:39.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:40 compute-0 ceph-mon[73668]: pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:40 compute-0 nova_compute[256940]: 2025-10-02 13:45:40.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:40 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:40 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:40 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:45:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:42 compute-0 ceph-mon[73668]: pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:42 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:42 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:42 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:42.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:43.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:43 compute-0 ceph-mon[73668]: pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:44 compute-0 nova_compute[256940]: 2025-10-02 13:45:44.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:44 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:44 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:44 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:44.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:45 compute-0 podman[443222]: 2025-10-02 13:45:45.385514083 +0000 UTC m=+0.051865067 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct 02 13:45:45 compute-0 podman[443223]: 2025-10-02 13:45:45.417334343 +0000 UTC m=+0.069571734 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct 02 13:45:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:45.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:45 compute-0 nova_compute[256940]: 2025-10-02 13:45:45.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:46 compute-0 ceph-mon[73668]: pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:46 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:46 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:46 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:46.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:47.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:47 compute-0 ceph-mgr[73961]: [devicehealth INFO root] Check health
Oct 02 13:45:48 compute-0 ceph-mon[73668]: pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:48 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:48 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:48 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:48.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:49 compute-0 sudo[443260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:49 compute-0 sudo[443260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:49 compute-0 sudo[443260]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:49 compute-0 sudo[443285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:49 compute-0 sudo[443285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:49 compute-0 sudo[443285]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:49 compute-0 nova_compute[256940]: 2025-10-02 13:45:49.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:50 compute-0 ceph-mon[73668]: pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:50 compute-0 nova_compute[256940]: 2025-10-02 13:45:50.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:50 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:50 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:50 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:50.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:52 compute-0 ceph-mon[73668]: pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:52 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:52 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:52 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:54 compute-0 ceph-mon[73668]: pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:54 compute-0 nova_compute[256940]: 2025-10-02 13:45:54.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:55 compute-0 nova_compute[256940]: 2025-10-02 13:45:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:56 compute-0 ceph-mon[73668]: pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:57 compute-0 ceph-mon[73668]: pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.277 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.278 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.278 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.278 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.279 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:45:58 compute-0 podman[443315]: 2025-10-02 13:45:58.394702111 +0000 UTC m=+0.061087546 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 13:45:58 compute-0 podman[443316]: 2025-10-02 13:45:58.433970733 +0000 UTC m=+0.097490384 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:45:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:45:58 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:45:58 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038663619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.728 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:45:58 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4038663619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.882 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.883 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4080MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.883 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:58 compute-0 nova_compute[256940]: 2025-10-02 13:45:58.883 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:45:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:45:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.248 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.249 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.276 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:45:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:45:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:45:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:59.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:45:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:45:59 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648727320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.725 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.733 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.770 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.772 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.772 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4274614728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:59 compute-0 ceph-mon[73668]: pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:59 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1648727320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:59 compute-0 nova_compute[256940]: 2025-10-02 13:45:59.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:00 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2203131589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:00 compute-0 nova_compute[256940]: 2025-10-02 13:46:00.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:01 compute-0 ceph-mon[73668]: pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3288497061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3746623552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:03.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:03 compute-0 ceph-mon[73668]: pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:04 compute-0 nova_compute[256940]: 2025-10-02 13:46:04.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:05.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:05 compute-0 nova_compute[256940]: 2025-10-02 13:46:05.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:06 compute-0 ceph-mon[73668]: pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:08 compute-0 ceph-mon[73668]: pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:08 compute-0 nova_compute[256940]: 2025-10-02 13:46:08.773 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:08 compute-0 nova_compute[256940]: 2025-10-02 13:46:08.774 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:08 compute-0 nova_compute[256940]: 2025-10-02 13:46:08.774 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:08 compute-0 nova_compute[256940]: 2025-10-02 13:46:08.774 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:46:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:09 compute-0 sudo[443407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:09 compute-0 sudo[443407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:09 compute-0 sudo[443407]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:09 compute-0 sudo[443432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:09 compute-0 sudo[443432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:09 compute-0 sudo[443432]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:09 compute-0 nova_compute[256940]: 2025-10-02 13:46:09.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:10 compute-0 nova_compute[256940]: 2025-10-02 13:46:10.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:10 compute-0 ceph-mon[73668]: pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:10 compute-0 nova_compute[256940]: 2025-10-02 13:46:10.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:11 compute-0 nova_compute[256940]: 2025-10-02 13:46:11.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:12 compute-0 ceph-mon[73668]: pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:13 compute-0 ceph-mon[73668]: pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:14 compute-0 nova_compute[256940]: 2025-10-02 13:46:14.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:14 compute-0 nova_compute[256940]: 2025-10-02 13:46:14.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:15.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:16 compute-0 nova_compute[256940]: 2025-10-02 13:46:16.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:16 compute-0 podman[443460]: 2025-10-02 13:46:16.379998308 +0000 UTC m=+0.053660564 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 13:46:16 compute-0 ceph-mon[73668]: pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:16 compute-0 podman[443461]: 2025-10-02 13:46:16.420522893 +0000 UTC m=+0.089435297 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 13:46:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:46:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:17.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:46:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:17 compute-0 sudo[443501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:17 compute-0 sudo[443501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-0 sudo[443501]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-0 sudo[443526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:46:17 compute-0 sudo[443526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-0 sudo[443526]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-0 sudo[443551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:17 compute-0 sudo[443551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-0 sudo[443551]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-0 sudo[443576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:46:17 compute-0 sudo[443576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:18 compute-0 nova_compute[256940]: 2025-10-02 13:46:18.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:18 compute-0 nova_compute[256940]: 2025-10-02 13:46:18.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:46:18 compute-0 nova_compute[256940]: 2025-10-02 13:46:18.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:46:18 compute-0 nova_compute[256940]: 2025-10-02 13:46:18.232 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:46:18 compute-0 sudo[443576]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:18 compute-0 ceph-mon[73668]: pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 9f91678c-10a2-4d61-97df-f70ada2cd9ce does not exist
Oct 02 13:46:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 974aef93-ca2d-483c-b385-da82eeffad15 does not exist
Oct 02 13:46:18 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 5b8e7046-1a15-4dea-b673-a7e2a5e818a9 does not exist
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:46:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:46:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:18 compute-0 sudo[443631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:18 compute-0 sudo[443631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:18 compute-0 sudo[443631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:18 compute-0 sudo[443656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:46:18 compute-0 sudo[443656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:18 compute-0 sudo[443656]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:18 compute-0 sudo[443681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:18 compute-0 sudo[443681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:18 compute-0 sudo[443681]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:18 compute-0 sudo[443706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:46:18 compute-0 sudo[443706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.075903741 +0000 UTC m=+0.041011448 container create 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 13:46:19 compute-0 systemd[1]: Started libpod-conmon-6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c.scope.
Oct 02 13:46:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.058633196 +0000 UTC m=+0.023740913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.168053286 +0000 UTC m=+0.133160993 container init 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.176756101 +0000 UTC m=+0.141863808 container start 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.180650461 +0000 UTC m=+0.145758188 container attach 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:46:19 compute-0 happy_newton[443787]: 167 167
Oct 02 13:46:19 compute-0 systemd[1]: libpod-6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c.scope: Deactivated successfully.
Oct 02 13:46:19 compute-0 conmon[443787]: conmon 6a5a795585e4227d2810 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c.scope/container/memory.events
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.184365157 +0000 UTC m=+0.149472864 container died 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-31262616b129ad19598dd066de698b4ccde4f0fd8c2158ee2366e15240399b9b-merged.mount: Deactivated successfully.
Oct 02 13:46:19 compute-0 podman[443772]: 2025-10-02 13:46:19.217775648 +0000 UTC m=+0.182883355 container remove 6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 13:46:19 compute-0 systemd[1]: libpod-conmon-6a5a795585e4227d28103bbf7add159283aa120475cb42e64794acdf7647d21c.scope: Deactivated successfully.
Oct 02 13:46:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:19 compute-0 podman[443812]: 2025-10-02 13:46:19.374184509 +0000 UTC m=+0.037569499 container create 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:46:19 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:19 compute-0 systemd[1]: Started libpod-conmon-8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0.scope.
Oct 02 13:46:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:19 compute-0 podman[443812]: 2025-10-02 13:46:19.359883591 +0000 UTC m=+0.023268611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:19 compute-0 podman[443812]: 2025-10-02 13:46:19.46228428 +0000 UTC m=+0.125669310 container init 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:46:19 compute-0 podman[443812]: 2025-10-02 13:46:19.468984593 +0000 UTC m=+0.132369603 container start 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:46:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:19.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:19 compute-0 podman[443812]: 2025-10-02 13:46:19.472932975 +0000 UTC m=+0.136317995 container attach 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:46:19 compute-0 nova_compute[256940]: 2025-10-02 13:46:19.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:20 compute-0 keen_dubinsky[443829]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:46:20 compute-0 keen_dubinsky[443829]: --> relative data size: 1.0
Oct 02 13:46:20 compute-0 keen_dubinsky[443829]: --> All data devices are unavailable
Oct 02 13:46:20 compute-0 systemd[1]: libpod-8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0.scope: Deactivated successfully.
Oct 02 13:46:20 compute-0 podman[443812]: 2025-10-02 13:46:20.229837823 +0000 UTC m=+0.893222823 container died 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 13:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0444c56d6dde65db35503307cfcb3e1eb496c5498afcf2c63c0d71f956c90a3-merged.mount: Deactivated successfully.
Oct 02 13:46:20 compute-0 podman[443812]: 2025-10-02 13:46:20.282844339 +0000 UTC m=+0.946229339 container remove 8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:46:20 compute-0 systemd[1]: libpod-conmon-8541fecfb388ceb4d79e270f9affad50d98d55a09e5d0aef3f8a07ec6da6aee0.scope: Deactivated successfully.
Oct 02 13:46:20 compute-0 sudo[443706]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:20 compute-0 sudo[443855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:20 compute-0 sudo[443855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:20 compute-0 sudo[443855]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:20 compute-0 ceph-mon[73668]: pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:20 compute-0 sudo[443880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:46:20 compute-0 sudo[443880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:20 compute-0 sudo[443880]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:20 compute-0 sudo[443905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:20 compute-0 sudo[443905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:20 compute-0 sudo[443905]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:20 compute-0 sudo[443930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:46:20 compute-0 sudo[443930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:20 compute-0 podman[443995]: 2025-10-02 13:46:20.937932924 +0000 UTC m=+0.086573683 container create 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:46:20 compute-0 podman[443995]: 2025-10-02 13:46:20.877058575 +0000 UTC m=+0.025699364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:21 compute-0 systemd[1]: Started libpod-conmon-521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33.scope.
Oct 02 13:46:21 compute-0 nova_compute[256940]: 2025-10-02 13:46:21.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:21.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:21 compute-0 podman[443995]: 2025-10-02 13:46:21.058024098 +0000 UTC m=+0.206664887 container init 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:46:21 compute-0 podman[443995]: 2025-10-02 13:46:21.067234355 +0000 UTC m=+0.215875114 container start 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:46:21 compute-0 cranky_haslett[444013]: 167 167
Oct 02 13:46:21 compute-0 systemd[1]: libpod-521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33.scope: Deactivated successfully.
Oct 02 13:46:21 compute-0 podman[443995]: 2025-10-02 13:46:21.082397666 +0000 UTC m=+0.231038465 container attach 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 13:46:21 compute-0 podman[443995]: 2025-10-02 13:46:21.08372252 +0000 UTC m=+0.232363319 container died 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 13:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4b2edde9bb7d28bd45665ec8ea78e113aa3f36ec4e771fb8140f04a8d6225ca-merged.mount: Deactivated successfully.
Oct 02 13:46:21 compute-0 podman[443995]: 2025-10-02 13:46:21.174446498 +0000 UTC m=+0.323087257 container remove 521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:46:21 compute-0 systemd[1]: libpod-conmon-521c33a9b514e37cc1dd6d3f966710206105a8f0287d94c257cf5101175cad33.scope: Deactivated successfully.
Oct 02 13:46:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:21 compute-0 podman[444040]: 2025-10-02 13:46:21.319847356 +0000 UTC m=+0.042488716 container create 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 13:46:21 compute-0 systemd[1]: Started libpod-conmon-78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727.scope.
Oct 02 13:46:21 compute-0 podman[444040]: 2025-10-02 13:46:21.298734642 +0000 UTC m=+0.021376022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b14614d9c6195f3e7c3f5fcf2cd893ac4f48fae9a40e3c6bf4d0f98ef01cc6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b14614d9c6195f3e7c3f5fcf2cd893ac4f48fae9a40e3c6bf4d0f98ef01cc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b14614d9c6195f3e7c3f5fcf2cd893ac4f48fae9a40e3c6bf4d0f98ef01cc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b14614d9c6195f3e7c3f5fcf2cd893ac4f48fae9a40e3c6bf4d0f98ef01cc6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:21 compute-0 podman[444040]: 2025-10-02 13:46:21.438407912 +0000 UTC m=+0.161049292 container init 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:46:21 compute-0 podman[444040]: 2025-10-02 13:46:21.451049688 +0000 UTC m=+0.173691048 container start 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 13:46:21 compute-0 podman[444040]: 2025-10-02 13:46:21.457066263 +0000 UTC m=+0.179707633 container attach 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 13:46:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:21.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:22 compute-0 mystifying_carson[444056]: {
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:     "1": [
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:         {
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "devices": [
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "/dev/loop3"
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             ],
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "lv_name": "ceph_lv0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "lv_size": "7511998464",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3e590da2-9176-4197-8be9-66fc8d360a0c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "lv_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "name": "ceph_lv0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "tags": {
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.block_uuid": "vqgCwg-70MB-4E8q-3fw7-r7n0-ekRZ-0UEeR2",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.cluster_name": "ceph",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.crush_device_class": "",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.encrypted": "0",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.osd_fsid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.osd_id": "1",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.type": "block",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:                 "ceph.vdo": "0"
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             },
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "type": "block",
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:             "vg_name": "ceph_vg0"
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:         }
Oct 02 13:46:22 compute-0 mystifying_carson[444056]:     ]
Oct 02 13:46:22 compute-0 mystifying_carson[444056]: }
Oct 02 13:46:22 compute-0 systemd[1]: libpod-78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727.scope: Deactivated successfully.
Oct 02 13:46:22 compute-0 podman[444040]: 2025-10-02 13:46:22.239119109 +0000 UTC m=+0.961760469 container died 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 13:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b14614d9c6195f3e7c3f5fcf2cd893ac4f48fae9a40e3c6bf4d0f98ef01cc6b-merged.mount: Deactivated successfully.
Oct 02 13:46:22 compute-0 podman[444040]: 2025-10-02 13:46:22.448329942 +0000 UTC m=+1.170971302 container remove 78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:46:22 compute-0 systemd[1]: libpod-conmon-78647a84339a777a4a237f8fb9658e9c257a0b62b6a8827c6f181808531f2727.scope: Deactivated successfully.
Oct 02 13:46:22 compute-0 ceph-mon[73668]: pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:22 compute-0 sudo[443930]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:22 compute-0 sudo[444079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:22 compute-0 sudo[444079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:22 compute-0 sudo[444079]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:22 compute-0 sudo[444104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:46:22 compute-0 sudo[444104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:22 compute-0 sudo[444104]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:22 compute-0 sudo[444129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:22 compute-0 sudo[444129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:22 compute-0 sudo[444129]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:22 compute-0 sudo[444154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 13:46:22 compute-0 sudo[444154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.112613823 +0000 UTC m=+0.051096658 container create b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:46:23 compute-0 systemd[1]: Started libpod-conmon-b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9.scope.
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.085482563 +0000 UTC m=+0.023965438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.223656545 +0000 UTC m=+0.162139470 container init b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.233247382 +0000 UTC m=+0.171730207 container start b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:46:23 compute-0 laughing_chatelet[444235]: 167 167
Oct 02 13:46:23 compute-0 systemd[1]: libpod-b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9.scope: Deactivated successfully.
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.256872541 +0000 UTC m=+0.195355396 container attach b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.258430051 +0000 UTC m=+0.196912876 container died b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1314398d2bbe236b4795cc538010683d5aa8ab90f5fbbc94ec22d3208536f225-merged.mount: Deactivated successfully.
Oct 02 13:46:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:23 compute-0 podman[444219]: 2025-10-02 13:46:23.33405159 +0000 UTC m=+0.272534415 container remove b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:46:23 compute-0 systemd[1]: libpod-conmon-b83c4d84ef03d4dcbb02167101e890ac1c4787b9757c043d8ce4830749f6b1a9.scope: Deactivated successfully.
Oct 02 13:46:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:23 compute-0 ceph-mon[73668]: pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:23 compute-0 podman[444264]: 2025-10-02 13:46:23.556968315 +0000 UTC m=+0.055778588 container create a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:46:23 compute-0 systemd[1]: Started libpod-conmon-a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51.scope.
Oct 02 13:46:23 compute-0 podman[444264]: 2025-10-02 13:46:23.532627308 +0000 UTC m=+0.031437631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:46:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee52ea050c1e3d014103f6a6f44fc25df1a69f8200433743559f4e4830e2e87b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee52ea050c1e3d014103f6a6f44fc25df1a69f8200433743559f4e4830e2e87b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee52ea050c1e3d014103f6a6f44fc25df1a69f8200433743559f4e4830e2e87b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee52ea050c1e3d014103f6a6f44fc25df1a69f8200433743559f4e4830e2e87b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:46:23 compute-0 podman[444264]: 2025-10-02 13:46:23.669086135 +0000 UTC m=+0.167896468 container init a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 13:46:23 compute-0 podman[444264]: 2025-10-02 13:46:23.676984549 +0000 UTC m=+0.175794832 container start a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:46:23 compute-0 podman[444264]: 2025-10-02 13:46:23.683922388 +0000 UTC m=+0.182732671 container attach a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:46:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:24 compute-0 unruffled_jang[444280]: {
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:     "3e590da2-9176-4197-8be9-66fc8d360a0c": {
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:         "osd_id": 1,
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:         "osd_uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c",
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:         "type": "bluestore"
Oct 02 13:46:24 compute-0 unruffled_jang[444280]:     }
Oct 02 13:46:24 compute-0 unruffled_jang[444280]: }
Oct 02 13:46:24 compute-0 systemd[1]: libpod-a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51.scope: Deactivated successfully.
Oct 02 13:46:24 compute-0 podman[444264]: 2025-10-02 13:46:24.55115855 +0000 UTC m=+1.049968843 container died a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:46:24 compute-0 nova_compute[256940]: 2025-10-02 13:46:24.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee52ea050c1e3d014103f6a6f44fc25df1a69f8200433743559f4e4830e2e87b-merged.mount: Deactivated successfully.
Oct 02 13:46:24 compute-0 podman[444264]: 2025-10-02 13:46:24.987405463 +0000 UTC m=+1.486215756 container remove a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 13:46:25 compute-0 sudo[444154]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:46:25 compute-0 systemd[1]: libpod-conmon-a2f95d53642e042f4cc783f0ddfdb0c37688cb32ac8cbfe580a0f6a82a2eaf51.scope: Deactivated successfully.
Oct 02 13:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:46:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:25 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev c2bbce56-3f08-4b6f-8dde-144c2129acbf does not exist
Oct 02 13:46:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 33abacbc-7d19-47fa-a10f-075258a897e4 does not exist
Oct 02 13:46:25 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 90c99ecf-98b4-4aef-b30a-e28dba15fc00 does not exist
Oct 02 13:46:25 compute-0 sudo[444316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:25 compute-0 sudo[444316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:25 compute-0 sudo[444316]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:25 compute-0 sudo[444341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:46:25 compute-0 sudo[444341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:25 compute-0 sudo[444341]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:25 compute-0 nova_compute[256940]: 2025-10-02 13:46:25.227 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:25.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:26 compute-0 nova_compute[256940]: 2025-10-02 13:46:26.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:26 compute-0 ceph-mon[73668]: pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:46:26.546 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:46:26.547 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:46:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:46:26.547 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:46:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:27.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:28 compute-0 ceph-mon[73668]: pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:46:28
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data']
Oct 02 13:46:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:46:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:29 compute-0 podman[444368]: 2025-10-02 13:46:29.401999943 +0000 UTC m=+0.063875637 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:46:29 compute-0 podman[444369]: 2025-10-02 13:46:29.446518951 +0000 UTC m=+0.099673520 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:46:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:29 compute-0 sudo[444411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:29 compute-0 sudo[444411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:29 compute-0 sudo[444411]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:29 compute-0 sudo[444437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:29 compute-0 sudo[444437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:29 compute-0 sudo[444437]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:29 compute-0 nova_compute[256940]: 2025-10-02 13:46:29.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:46:30 compute-0 ceph-mgr[73961]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 13:46:30 compute-0 ceph-mon[73668]: pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:31 compute-0 nova_compute[256940]: 2025-10-02 13:46:31.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:31 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:31 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:31 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:31 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:32 compute-0 ceph-mon[73668]: pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:33 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:33 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:33 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:33 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:33.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:34 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:34 compute-0 ceph-mon[73668]: pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:34 compute-0 nova_compute[256940]: 2025-10-02 13:46:34.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:35 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:35 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:35 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:35 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:36 compute-0 nova_compute[256940]: 2025-10-02 13:46:36.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:36 compute-0 ceph-mon[73668]: pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:37.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:37 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:37 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:37 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:37 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:37 compute-0 ceph-mon[73668]: pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:39 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:39 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:39 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:39 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:39 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:39 compute-0 nova_compute[256940]: 2025-10-02 13:46:39.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:40 compute-0 ceph-mon[73668]: pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:41 compute-0 nova_compute[256940]: 2025-10-02 13:46:41.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:46:41 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:41 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:41 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:41 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:41 compute-0 ceph-mon[73668]: pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:43.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:43 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:43 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:43 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:43 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:43.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:44 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:44 compute-0 ceph-mon[73668]: pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:44 compute-0 nova_compute[256940]: 2025-10-02 13:46:44.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:45.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:45 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:45 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:45 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:45 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:45 compute-0 ceph-mon[73668]: pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:46 compute-0 nova_compute[256940]: 2025-10-02 13:46:46.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:47.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:47 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:47 compute-0 podman[444472]: 2025-10-02 13:46:47.449917846 +0000 UTC m=+0.098345006 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:46:47 compute-0 podman[444471]: 2025-10-02 13:46:47.463938677 +0000 UTC m=+0.118145836 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:46:47 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:47 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:47 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:47.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:47 compute-0 ceph-mon[73668]: pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:49.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:49 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:49 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:49 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:49 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:49 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:49 compute-0 sudo[444513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:49 compute-0 sudo[444513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:49 compute-0 sudo[444513]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:49 compute-0 ceph-mon[73668]: pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:49 compute-0 sudo[444538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:49 compute-0 sudo[444538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:49 compute-0 sudo[444538]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:49 compute-0 nova_compute[256940]: 2025-10-02 13:46:49.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:51 compute-0 nova_compute[256940]: 2025-10-02 13:46:51.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:51.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:51 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:51 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:51 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:51 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:51.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:51 compute-0 ceph-mon[73668]: pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000025s ======
Oct 02 13:46:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:53.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 02 13:46:53 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:53 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:53 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:53 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:53.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:53 compute-0 ceph-mon[73668]: pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:54 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:54 compute-0 nova_compute[256940]: 2025-10-02 13:46:54.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:55 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:55 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:55 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:55 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:56 compute-0 nova_compute[256940]: 2025-10-02 13:46:56.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:56 compute-0 ceph-mon[73668]: pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:57.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:57 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:57 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:57 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:46:57 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:46:57 compute-0 ceph-mon[73668]: pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:46:58 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:46:58 compute-0 sshd-session[444567]: Accepted publickey for zuul from 192.168.122.10 port 41582 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:46:58 compute-0 systemd-logind[820]: New session 78 of user zuul.
Oct 02 13:46:58 compute-0 systemd[1]: Started Session 78 of User zuul.
Oct 02 13:46:58 compute-0 sshd-session[444567]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:46:58 compute-0 sudo[444571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:46:58 compute-0 sudo[444571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:46:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:59.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:59 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:59 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:59 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:46:59 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:59 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:59 compute-0 nova_compute[256940]: 2025-10-02 13:46:59.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:00 compute-0 podman[444606]: 2025-10-02 13:47:00.028744479 +0000 UTC m=+0.086366847 container health_status 17fc3d20ba092c5a2dfcf71f62368df04bc2da8fc53aee2b5480fe08645f3012 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:47:00 compute-0 podman[444607]: 2025-10-02 13:47:00.06329522 +0000 UTC m=+0.127384574 container health_status aff4ff581119d3bc782777d096b503225ba9cfe77eb19f33e6a37478a2333d68 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.210 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.315 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.316 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.316 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.316 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.316 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:47:00 compute-0 ceph-mon[73668]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:00 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:47:00 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/425010776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.768 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.960 2 WARNING nova.virt.libvirt.driver [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.962 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4046MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.963 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:00 compute-0 nova_compute[256940]: 2025-10-02 13:47:00.963 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.105 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.106 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:47:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:47:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:01.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.125 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing inventories for resource provider 8733289a-aa77-4139-9e88-bac686174c8d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:47:01 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.326 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating ProviderTree inventory for provider 8733289a-aa77-4139-9e88-bac686174c8d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.326 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Updating inventory in ProviderTree for provider 8733289a-aa77-4139-9e88-bac686174c8d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:47:01 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.351 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing aggregate associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.380 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Refreshing trait associations for resource provider 8733289a-aa77-4139-9e88-bac686174c8d, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.414 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:47:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/425010776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:01 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/811443912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:01 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:01 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:01 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:01 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38202 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:01 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47950 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:01 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47231 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:01 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:47:01 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073257260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.955 2 DEBUG oslo_concurrency.processutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:47:01 compute-0 nova_compute[256940]: 2025-10-02 13:47:01.961 2 DEBUG nova.compute.provider_tree [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed in ProviderTree for provider: 8733289a-aa77-4139-9e88-bac686174c8d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:47:02 compute-0 nova_compute[256940]: 2025-10-02 13:47:02.020 2 DEBUG nova.scheduler.client.report [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Inventory has not changed for provider 8733289a-aa77-4139-9e88-bac686174c8d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:47:02 compute-0 nova_compute[256940]: 2025-10-02 13:47:02.022 2 DEBUG nova.compute.resource_tracker [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:47:02 compute-0 nova_compute[256940]: 2025-10-02 13:47:02.022 2 DEBUG oslo_concurrency.lockutils [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:02 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47965 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mon[73668]: from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mon[73668]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1073257260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/853717363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/473589523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:02 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:47:02 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402390930' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:47:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:03.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:47:03 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.38202 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.47950 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.47231 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.38214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.47965 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1498764034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3447872947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:03 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1402390930' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:03 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:03 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:47:03 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:03.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:47:04 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:04 compute-0 ceph-mon[73668]: pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:04 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/240607502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:04 compute-0 nova_compute[256940]: 2025-10-02 13:47:04.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:05.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:05 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:05 compute-0 ceph-mon[73668]: pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:47:05 compute-0 ceph-mon[73668]: from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:47:05 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:05 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:05 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:05.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:05 compute-0 ovs-vsctl[444948]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:47:06 compute-0 nova_compute[256940]: 2025-10-02 13:47:06.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:06 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:47:06 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:47:06 compute-0 virtqemud[257589]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:47:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:07.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:07 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:47:07 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47995 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:07 compute-0 lvm[445277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:47:07 compute-0 lvm[445277]: VG ceph_vg0 finished
Oct 02 13:47:07 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:07 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:07 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:07.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:07 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:47:07 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47288 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:07 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:47:07 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:07 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38256 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:47:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879570657' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38268 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47312 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:08.342+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:08 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.47276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.47995 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3622600288' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3618812975' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1778923128' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1511379920' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/571185985' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1879570657' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4261183795' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48046 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:08 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:08.535+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:08 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:47:08 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739593765' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:47:08 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47333 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:09.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48070 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38310 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:09 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:09.219+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.47288 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.48016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.38256 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.38268 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.47312 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2739593765' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2798236164' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3224226010' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2575496155' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2997921120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3094434344' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1393102153' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3431451301' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47357 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48088 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:09 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:09 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: ops {prefix=ops} (starting...)
Oct 02 13:47:09 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046414643' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621607900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:09 compute-0 sudo[445612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:09 compute-0 sudo[445612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:09 compute-0 sudo[445612]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:09 compute-0 nova_compute[256940]: 2025-10-02 13:47:09.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:09 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:47:09 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:09 compute-0 sudo[445638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:09 compute-0 sudo[445638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:09 compute-0 sudo[445638]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38343 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403507624' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:47:10 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns Can't run that command on an inactive MDS!
Oct 02 13:47:10 compute-0 ceph-mds[95523]: mds.cephfs.compute-0.yqiqns asok_command: status {prefix=status} (starting...)
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.48046 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.47333 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.48070 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.38310 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.47357 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3569749802' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4046414643' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1621607900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/356698630' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/165376371' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2328061460' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1056629558' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/649391731' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2403507624' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2648762803' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1861714374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38355 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48142 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:10 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:10.710+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:10 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:47:10 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185257415' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47405 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:10.812+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:10 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.024 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.025 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.025 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.025 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.025 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635537241' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:11 compute-0 nova_compute[256940]: 2025-10-02 13:47:11.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406744526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.48088 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.38343 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2714691879' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3411214287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/185257415' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2721180045' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3505966985' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/635537241' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3993059270' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2406744526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2241957471' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4215150955' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1295786025' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046619135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:11 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:11 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:11 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:11 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:47:11 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500950542' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47444 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38397 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:11.931+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:11 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47456 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212330445' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:12 compute-0 nova_compute[256940]: 2025-10-02 13:47:12.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48208 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013296607' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.38355 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.48142 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.47405 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4046619135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/809149563' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3500950542' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1977807061' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2425987190' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4212330445' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1645271435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2013296607' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153172606' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48220 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47489 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:47:12 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677689488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48235 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38445 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38460 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:47:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513305581' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.47444 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.48193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.38397 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.47456 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.48208 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.47474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/402183147' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4153172606' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/392775669' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3677689488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1967728647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3564654829' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1513305581' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4214691663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:13 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:13 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:13 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38469 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:47:13 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783132319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47531 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4464792 data_alloc: 234881024 data_used: 26546176
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a41c2000/0x0/0x1bfc00000, data 0x3542132/0x374c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:53.782450+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a070c06960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3ddc000/0x0/0x1bfc00000, data 0x3518132/0x3722000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:54.782656+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:55.782804+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:56.782966+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:57.783086+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070a2a1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456486 data_alloc: 234881024 data_used: 26411008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:58.783278+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a070a2b4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.273836136s of 12.963358879s, submitted: 48
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070ac2f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.783425+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3ddc000/0x0/0x1bfc00000, data 0x3518132/0x3722000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.783598+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 48545792 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.783693+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4b8e000/0x0/0x1bfc00000, data 0x2766132/0x2970000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401588224 unmapped: 54165504 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.783820+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401604608 unmapped: 54149120 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4b8e000/0x0/0x1bfc00000, data 0x2766132/0x2970000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311350 data_alloc: 218103808 data_used: 20557824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.783982+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 54140928 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dce2800 session 0x55a0709ee960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.784156+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.784304+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.784453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401620992 unmapped: 54132736 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.784599+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301623 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb9000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.784737+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 54K writes, 215K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
                                           Cumulative WAL: 54K writes, 19K syncs, 2.88 writes per sync, written: 0.20 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7295 writes, 25K keys, 7295 commit groups, 1.0 writes per commit group, ingest: 25.84 MB, 0.04 MB/s
                                           Interval WAL: 7295 writes, 2838 syncs, 2.57 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.784912+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets getting new tickets!
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.785120+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _finish_auth 0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.785754+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.785211+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.785351+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301623 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.785455+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb9000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.785703+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 54124544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc ms_handle_reset ms_handle_reset con 0x55a0708e1800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: get_auth_request con 0x55a075699000 auth_method 0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.785900+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dce2800 session 0x55a06fca5680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070e10780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a071dd0780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a07087e000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 54042624 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.029085159s of 17.251653671s, submitted: 36
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.786072+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 52994048 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706d8400 session 0x55a070b41c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dce2800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.786237+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070b414a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402751488 unmapped: 53002240 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338959 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a06dd474a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a0707c5c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a07015b2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a07085a3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.786395+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402767872 unmapped: 52985856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47b7000/0x0/0x1bfc00000, data 0x2b3e161/0x2d47000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.786516+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402767872 unmapped: 52985856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.786618+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402776064 unmapped: 52977664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.786711+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 402776064 unmapped: 52977664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a07068d860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.786854+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f1800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403128320 unmapped: 52625408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4341645 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.787017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403128320 unmapped: 52625408 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.787237+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.787437+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.787598+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.787759+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4372045 data_alloc: 218103808 data_used: 24625152
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.787956+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.788215+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.788358+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.788500+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.788663+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4372045 data_alloc: 218103808 data_used: 24625152
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a478d000/0x0/0x1bfc00000, data 0x2b68161/0x2d71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.788816+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 403136512 unmapped: 52617216 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.274131775s of 18.334613800s, submitted: 38
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.789608+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 405454848 unmapped: 50298880 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.789749+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408371200 unmapped: 47382528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e6b000/0x0/0x1bfc00000, data 0x348a161/0x3693000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.789871+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.790000+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4457259 data_alloc: 218103808 data_used: 24952832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.790137+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.790276+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3f000/0x0/0x1bfc00000, data 0x34b6161/0x36bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.790411+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 47284224 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.790530+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.790646+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4451799 data_alloc: 218103808 data_used: 24956928
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a071dd10e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.790870+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 47464448 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3b000/0x0/0x1bfc00000, data 0x34b81d3/0x36c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:44.791043+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 47456256 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.791199+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447254181s of 11.133079529s, submitted: 94
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 47439872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.791456+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 47439872 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.791626+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 46391296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456859 data_alloc: 218103808 data_used: 24956928
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a06f9f45a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.791763+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 46391296 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3a000/0x0/0x1bfc00000, data 0x34b91d3/0x36c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070ba6f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a0706f1800 session 0x55a07068d0e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.791967+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3e3a000/0x0/0x1bfc00000, data 0x34b91d3/0x36c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 46366720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.792164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a070a2b680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 46358528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.792287+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.792488+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.792702+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.792871+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.793409+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.793913+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.794428+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.794766+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.795193+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408215552 unmapped: 47538176 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.795461+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.795828+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a47f5000/0x0/0x1bfc00000, data 0x273c171/0x2946000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.795970+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316919 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.796268+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 47529984 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.796454+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408190976 unmapped: 47562752 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.548774719s of 19.727108002s, submitted: 40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.796590+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a075249e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 47218688 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.796810+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.797079+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361005 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.797307+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a471e000/0x0/0x1bfc00000, data 0x2bd6171/0x2de0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.797564+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.797756+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 47210496 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.797927+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a072f36000 session 0x55a06f786780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408715264 unmapped: 47038464 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.798095+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408715264 unmapped: 47038464 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363639 data_alloc: 218103808 data_used: 20422656
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.798300+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.798474+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.798605+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.798756+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.798915+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4388919 data_alloc: 218103808 data_used: 23932928
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.799085+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.799288+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.799455+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.799602+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408731648 unmapped: 47022080 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.799735+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408739840 unmapped: 47013888 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4388919 data_alloc: 218103808 data_used: 23932928
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.799909+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x2c00171/0x2e0a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408739840 unmapped: 47013888 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.091243744s of 19.163955688s, submitted: 16
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.800074+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 43646976 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.800207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f39000/0x0/0x1bfc00000, data 0x33ba171/0x35c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.800345+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.800472+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461973 data_alloc: 234881024 data_used: 25284608
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.800650+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.800820+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.800970+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.801157+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c6171/0x35d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.801319+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461989 data_alloc: 234881024 data_used: 25284608
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.801453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a070a2b860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.801619+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.801751+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 44318720 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.801906+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07049bc00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.064500809s of 12.350695610s, submitted: 83
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.802017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4463005 data_alloc: 234881024 data_used: 25284608
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c61d3/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.802186+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a07049bc00 session 0x55a0707c54a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411443200 unmapped: 44310528 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.802316+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411451392 unmapped: 44302336 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a075698c00 session 0x55a070a2a960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06fabb000 session 0x55a06ec6cb40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.802467+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a3f2d000/0x0/0x1bfc00000, data 0x33c61d3/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.802618+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a06f9f4f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.802750+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4328499 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.802896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.803288+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 47079424 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.803499+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408682496 unmapped: 47071232 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.803628+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.785040855s of 10.047825813s, submitted: 25
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408698880 unmapped: 47054848 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a0752483c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bb7000/0x0/0x1bfc00000, data 0x273c1d3/0x2947000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,3,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.803798+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a06dd554a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408797184 unmapped: 46956544 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327235 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.803974+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a070709000 session 0x55a070def860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06dde1400 session 0x55a07087e5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 46874624 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.804121+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.804241+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06ddea400 session 0x55a0752494a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.804447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.804739+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325249 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.804867+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.805017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.805147+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.805273+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.805818+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325249 data_alloc: 218103808 data_used: 20418560
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.805939+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.806268+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085711479s of 13.555994987s, submitted: 392
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.806407+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a4bba000/0x0/0x1bfc00000, data 0x273c0ff/0x2944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.806580+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408961024 unmapped: 46792704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.806739+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 396 ms_handle_reset con 0x55a075698c00 session 0x55a070b405a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408969216 unmapped: 46784512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330920 data_alloc: 218103808 data_used: 20426752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.806878+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 396 ms_handle_reset con 0x55a075698c00 session 0x55a0708ec3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 46776320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06dde1400 session 0x55a070ac21e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.807197+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06ddea400 session 0x55a0706bfa40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.807339+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf45a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.807498+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.807625+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4332582 data_alloc: 218103808 data_used: 20430848
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.807834+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a4bb4000/0x0/0x1bfc00000, data 0x273fa05/0x294a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.807993+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 46841856 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.808140+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 46833664 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.808300+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.481833458s of 11.654898643s, submitted: 48
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.808447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.808603+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 ms_handle_reset con 0x55a06faa6c00 session 0x55a06de9ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.808804+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.808994+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409976832 unmapped: 45776896 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.809172+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 45768704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.809344+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 45768704 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.809493+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409993216 unmapped: 45760512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.809633+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409993216 unmapped: 45760512 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.809842+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.809997+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.810165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.810320+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.810493+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.810626+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a4bb0000/0x0/0x1bfc00000, data 0x2741544/0x294d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.810768+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.810896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410001408 unmapped: 45752320 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 20439040
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.811033+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 45744128 heap: 455753728 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.811207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.665527344s of 17.685497284s, submitted: 14
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 55238656 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.811334+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a070dee960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.811610+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.811860+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429647 data_alloc: 218103808 data_used: 20447232
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.812171+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 55222272 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.812443+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd47860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a075698c00 session 0x55a070a3c3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a06eb0b680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 55214080 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.812700+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072f36000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a072f36000 session 0x55a0708ecf00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a07087eb40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418308096 unmapped: 45842432 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.812899+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418308096 unmapped: 45842432 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.813176+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf41e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a06dd49c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a075698c00 session 0x55a070a3c3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a0708dd400 session 0x55a06de9ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06ddea400 session 0x55a070bf45a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4520248 data_alloc: 234881024 data_used: 30212096
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.813383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3f3b000/0x0/0x1bfc00000, data 0x33b31e3/0x35c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.813582+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.813820+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.814007+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 45539328 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.814165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.855843544s of 13.100606918s, submitted: 40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a070709000 session 0x55a070def860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418619392 unmapped: 45531136 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.814302+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521449 data_alloc: 234881024 data_used: 30220288
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 45580288 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.814522+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fe000/0x0/0x1bfc00000, data 0x3af01f3/0x3d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.814671+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.814956+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.815143+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.815251+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4567209 data_alloc: 234881024 data_used: 36593664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.815386+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375a000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 ms_handle_reset con 0x55a07375a000 session 0x55a0709ee960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.815531+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.815690+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.815844+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708e5c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.815973+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568991 data_alloc: 234881024 data_used: 36593664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a37fd000/0x0/0x1bfc00000, data 0x3af0255/0x3d01000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.379405022s of 11.574952126s, submitted: 8
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.816190+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418488320 unmapped: 45662208 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 ms_handle_reset con 0x55a072d03800 session 0x55a070bf50e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.816317+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420610048 unmapped: 43540480 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a3149000/0x0/0x1bfc00000, data 0x41a2eae/0x43b5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.816466+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422387712 unmapped: 41762816 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.816615+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.816790+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.816962+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.817130+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.817257+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.817412+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.817531+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.817763+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.817950+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.818149+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 41705472 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.818305+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.818496+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4652820 data_alloc: 234881024 data_used: 38694912
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.818622+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.818806+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.818931+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30b9000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.468400955s of 17.711011887s, submitted: 78
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.819070+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 ms_handle_reset con 0x55a06ddea400 session 0x55a06e6aa3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422453248 unmapped: 41697280 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 heartbeat osd_stat(store_statfs(0x1a30ba000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a06fabb000 session 0x55a06fca43c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.819215+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422461440 unmapped: 41689088 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651650 data_alloc: 234881024 data_used: 38731776
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30ba000/0x0/0x1bfc00000, data 0x4231eae/0x4444000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.819433+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422461440 unmapped: 41689088 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.819612+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.819781+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.819919+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.820076+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4650418 data_alloc: 234881024 data_used: 38731776
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a075698c00 session 0x55a06ec6cb40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a0708df400 session 0x55a07085b2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a0708e5c00 session 0x55a070a2b4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.820248+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 ms_handle_reset con 0x55a06ddea400 session 0x55a070e00b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.820383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422469632 unmapped: 41680896 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a30b7000/0x0/0x1bfc00000, data 0x4233b5b/0x4447000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075698c00
Oct 02 13:47:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.820595+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 41664512 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.820741+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 41664512 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.820909+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655300 data_alloc: 234881024 data_used: 38797312
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.821152+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.821281+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.821493+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.821625+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.821766+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655300 data_alloc: 234881024 data_used: 38797312
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.821906+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b3000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.822050+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.822200+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422494208 unmapped: 41656320 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.822353+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422502400 unmapped: 41648128 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.606676102s of 21.199974060s, submitted: 36
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.822523+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4659344 data_alloc: 234881024 data_used: 39182336
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.822695+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.822857+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.822985+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.823089+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.823261+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661904 data_alloc: 234881024 data_used: 39387136
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.823387+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.830775+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.830901+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.831061+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.831208+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662672 data_alloc: 234881024 data_used: 39415808
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.831417+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.831576+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.831733+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.831873+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.832008+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662672 data_alloc: 234881024 data_used: 39415808
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.832342+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.832452+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 41639936 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.832566+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a0708df400 session 0x55a070a2b2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422518784 unmapped: 41631744 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.832701+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.710981369s of 20.009254456s, submitted: 5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.832825+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a070709000 session 0x55a0708ed4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a30b4000/0x0/0x1bfc00000, data 0x423569a/0x444a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661800 data_alloc: 234881024 data_used: 39411712
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.833024+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a06fabb000 session 0x55a07068c5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a075698c00 session 0x55a070e01680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422526976 unmapped: 41623552 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.833171+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 ms_handle_reset con 0x55a06ddea400 session 0x55a070ba41e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.833333+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.833500+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a3f33000/0x0/0x1bfc00000, data 0x33b8628/0x35cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.833649+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4480010 data_alloc: 234881024 data_used: 30236672
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.833875+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.834089+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 42754048 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.834321+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 42745856 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a3f31000/0x0/0x1bfc00000, data 0x33ba2b2/0x35cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a3f31000/0x0/0x1bfc00000, data 0x33ba2b2/0x35cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.834488+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 403 ms_handle_reset con 0x55a06fabb000 session 0x55a070a2b0e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 49537024 heap: 464150528 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a4731000/0x0/0x1bfc00000, data 0x274a28f/0x295c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.192452431s of 10.011085510s, submitted: 87
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.834698+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 57925632 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456133 data_alloc: 218103808 data_used: 20021248
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 404 ms_handle_reset con 0x55a070709000 session 0x55a070deeb40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.834888+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.835006+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.835187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.835383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.835585+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461899 data_alloc: 218103808 data_used: 20029440
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.835757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.835999+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 57917440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.836239+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3abb000/0x0/0x1bfc00000, data 0x33bbf37/0x35d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.836534+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.836807+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 57982976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464345 data_alloc: 218103808 data_used: 20029440
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.417022705s of 10.853311539s, submitted: 21
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a071dd12c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07375a000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f29000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.837042+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a07375a000 session 0x55a06dd6d4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.837258+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.837506+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f2a000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.837768+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422256640 unmapped: 50290688 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.837969+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422281216 unmapped: 50266112 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4499663 data_alloc: 234881024 data_used: 31506432
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a07087fa40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a0708ed2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070709000 session 0x55a070b41a40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a07068d0e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.838161+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe4c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a075fe4c00 session 0x55a06d1d6f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.838365+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.838580+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.838737+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 422486016 unmapped: 50061312 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.838907+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421871616 unmapped: 50675712 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521353 data_alloc: 234881024 data_used: 31506432
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.839163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.839349+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.839565+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.839746+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.839974+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540713 data_alloc: 234881024 data_used: 34197504
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.272889137s of 14.806378365s, submitted: 38
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a06ec6c960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.840233+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c97000/0x0/0x1bfc00000, data 0x364fad8/0x3867000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.840396+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.840643+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.840830+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.841044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3c96000/0x0/0x1bfc00000, data 0x364fae8/0x3868000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541725 data_alloc: 234881024 data_used: 34197504
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.841381+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421879808 unmapped: 50667520 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.841567+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424501248 unmapped: 48046080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.841769+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424714240 unmapped: 47833088 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.841942+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 47742976 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.842184+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4587119 data_alloc: 234881024 data_used: 34201600
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.842350+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c7000/0x0/0x1bfc00000, data 0x3c1eae8/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.107211113s of 11.299701691s, submitted: 54
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.842543+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c7000/0x0/0x1bfc00000, data 0x3c1eae8/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.842757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36c5000/0x0/0x1bfc00000, data 0x3c20ae8/0x3e39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,3])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.842932+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.843216+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 48144384 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4590479 data_alloc: 234881024 data_used: 34512896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36aa000/0x0/0x1bfc00000, data 0x3c3bae8/0x3e54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.843456+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 48136192 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.850543+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.850691+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.850836+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.851026+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594277 data_alloc: 234881024 data_used: 34508800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.851182+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36a2000/0x0/0x1bfc00000, data 0x3c43ae8/0x3e5c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.851348+0000)
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38487 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.851494+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.190292358s of 11.736324310s, submitted: 18
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 47079424 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a36a2000/0x0/0x1bfc00000, data 0x3c43ae8/0x3e5c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.851633+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.851802+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369f000/0x0/0x1bfc00000, data 0x3c46ae8/0x3e5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594765 data_alloc: 234881024 data_used: 34508800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.851993+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.852162+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.852307+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369f000/0x0/0x1bfc00000, data 0x3c46ae8/0x3e5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.852432+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.852564+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 47071232 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34512896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.852686+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.852867+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.853007+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.853156+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.853364+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34512896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.853567+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.853752+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.853951+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.854073+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.854323+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.575996399s of 16.818254471s, submitted: 3
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369e000/0x0/0x1bfc00000, data 0x3c47ae8/0x3e60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593881 data_alloc: 234881024 data_used: 34512896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.854483+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.854627+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 47046656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.854823+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a0709ef860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070709000 session 0x55a06dd6ad20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.854975+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070849400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.855194+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593525 data_alloc: 234881024 data_used: 34516992
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.855459+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.855618+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.855752+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.855899+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.856075+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34545664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.856217+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.856386+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 47038464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.856512+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:22.856612+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:23.856796+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 47030272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593845 data_alloc: 234881024 data_used: 34545664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.512025833s of 15.597708702s, submitted: 6
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:24.856942+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:25.857152+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:26.857280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:27.857378+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a369d000/0x0/0x1bfc00000, data 0x3c48ae8/0x3e61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:28.857517+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4598009 data_alloc: 234881024 data_used: 35102720
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:29.857677+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:30.857846+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:31.858007+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 47022080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:32.858202+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 47063040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:33.858378+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4600341 data_alloc: 234881024 data_used: 35106816
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:34.858620+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:35.858780+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:36.858970+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:37.859186+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:38.859431+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4600341 data_alloc: 234881024 data_used: 35106816
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:39.859622+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3697000/0x0/0x1bfc00000, data 0x3c4eae8/0x3e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.862111092s of 16.040271759s, submitted: 8
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:40.859816+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a070849400 session 0x55a07087fc20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:41.859978+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fb47c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fb47c00 session 0x55a070ac2f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06ddea400 session 0x55a070c06960
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:42.860160+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a0708df400 session 0x55a070ac2000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 47054848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:43.860268+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4504435 data_alloc: 234881024 data_used: 31506432
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 ms_handle_reset con 0x55a06fabb000 session 0x55a0708ecb40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:44.860722+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f27000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:45.860840+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:46.861057+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:47.861187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 heartbeat osd_stat(store_statfs(0x1a3f27000/0x0/0x1bfc00000, data 0x33bda76/0x35d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:48.861290+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420077568 unmapped: 52469760 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4503609 data_alloc: 234881024 data_used: 31502336
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:49.861411+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420110336 unmapped: 52436992 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566287041s of 10.001653671s, submitted: 74
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:50.861540+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 406 ms_handle_reset con 0x55a070709000 session 0x55a0708ed4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:51.861686+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a4b99000/0x0/0x1bfc00000, data 0x274f6f0/0x2965000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:52.861842+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:53.861984+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389934 data_alloc: 218103808 data_used: 20037632
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:54.862171+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a4b99000/0x0/0x1bfc00000, data 0x274f6f0/0x2965000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:55.862325+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:56.862471+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:57.862604+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:58.862711+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:59.862824+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:00.863071+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 60350464 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:01.863247+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:02.863372+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:03.863500+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:04.863662+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:05.863788+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:06.863934+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:07.864080+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:08.864231+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:09.864427+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:10.864582+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:11.864717+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 60342272 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:12.864930+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:13.865068+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:14.865231+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:15.865360+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:16.865465+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:17.865575+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 60334080 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:18.865713+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:19.865852+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:20.865960+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:21.866136+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:22.866224+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:23.866389+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:24.866569+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:25.866722+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 60325888 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:26.866860+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:27.867020+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:28.867135+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:29.867289+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:30.867428+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:31.867565+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:32.867686+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:33.867837+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 60317696 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:34.868056+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:35.868205+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:36.868369+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:37.868565+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:38.868728+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394108 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:39.868934+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b95000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:40.869192+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 60309504 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070849400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.996181488s of 51.030910492s, submitted: 17
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:41.869375+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a070849400 session 0x55a070a27e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 60252160 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:42.869532+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412303360 unmapped: 60243968 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:43.869679+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428190 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:44.869882+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:45.870049+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:46.870184+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:47.870378+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:48.870515+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 60235776 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428190 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:49.870992+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:50.872151+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:51.873234+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:52.874157+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:53.874453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456510 data_alloc: 218103808 data_used: 23871488
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:54.875321+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:55.875983+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:56.876264+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:57.876640+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:58.876854+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 60063744 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456510 data_alloc: 218103808 data_used: 23871488
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:59.877204+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4766000/0x0/0x1bfc00000, data 0x2b8122f/0x2d98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.503690720s of 18.558368683s, submitted: 8
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 59154432 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:00.877360+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:01.877618+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 59154432 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:02.877848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:03.878072+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:04.878335+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476274 data_alloc: 218103808 data_used: 23891968
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4575000/0x0/0x1bfc00000, data 0x2d6a22f/0x2f81000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:05.878529+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:06.878654+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 58679296 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:07.878871+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 59351040 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a455b000/0x0/0x1bfc00000, data 0x2d8c22f/0x2fa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:08.879003+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:09.879215+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4472746 data_alloc: 218103808 data_used: 23891968
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:10.879469+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a455b000/0x0/0x1bfc00000, data 0x2d8c22f/0x2fa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:11.879600+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 59342848 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:12.879752+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.390573502s of 12.535014153s, submitted: 38
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:13.879883+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:14.880062+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4473622 data_alloc: 218103808 data_used: 23900160
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd505a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:15.880185+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 59334656 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4550000/0x0/0x1bfc00000, data 0x2d9722f/0x2fae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a06fabb000 session 0x55a0707c4000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:16.880323+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:17.880484+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:18.880661+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:19.880864+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:20.881024+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:21.881811+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:22.882022+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:23.882490+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 59138048 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:24.883561+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:25.884017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:26.884343+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:27.884565+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:28.884925+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:29.885566+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:30.885860+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 59129856 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:31.886014+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:32.886303+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:33.886566+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:34.886841+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:35.887165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:36.887582+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:37.887786+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 59121664 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:38.887961+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:39.888245+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:40.888423+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:41.888607+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:42.888763+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:43.888944+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:44.889164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:45.889371+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 59113472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:46.889578+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:47.889780+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:48.889952+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:49.890185+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 59105280 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:50.890351+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413450240 unmapped: 59097088 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:51.890511+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:52.890680+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:53.890800+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 59088896 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:54.890949+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:55.891063+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:56.891134+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:57.891279+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:58.891456+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:59.891611+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:00.891757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:01.891919+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413466624 unmapped: 59080704 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:02.892053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 59072512 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:03.892263+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 59072512 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:04.892484+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:05.892647+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:06.892776+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:07.892911+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:08.893049+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:09.893161+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:10.893322+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 59064320 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:11.893473+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a4b96000/0x0/0x1bfc00000, data 0x275122f/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:12.893863+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:13.893991+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 59056128 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:14.894186+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397148 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.355045319s of 62.387897491s, submitted: 8
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 ms_handle_reset con 0x55a070709000 session 0x55a06de9b860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:15.894331+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:16.894463+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:17.894611+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 59727872 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:18.894805+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:19.894946+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429796 data_alloc: 218103808 data_used: 20045824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:20.895093+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 59719680 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708df400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:21.895268+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 59711488 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:22.895435+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:23.895561+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:24.895726+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451236 data_alloc: 218103808 data_used: 22974464
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:25.895849+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:26.895991+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:27.896209+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:28.896346+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:29.896475+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451236 data_alloc: 218103808 data_used: 22974464
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:30.896603+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412901376 unmapped: 59645952 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:31.896738+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.108156204s of 17.133529663s, submitted: 12
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414777344 unmapped: 57769984 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a486d000/0x0/0x1bfc00000, data 0x2a7a22f/0x2c91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:32.896884+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 56041472 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:33.897019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:34.897250+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4532264 data_alloc: 218103808 data_used: 24707072
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:35.897481+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 55869440 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:36.897668+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3b0e000/0x0/0x1bfc00000, data 0x33c922f/0x35e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:37.897850+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:38.897999+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:39.898202+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3ada000/0x0/0x1bfc00000, data 0x33fd22f/0x3614000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4533836 data_alloc: 234881024 data_used: 24936448
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:40.898383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:41.898533+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.606778145s of 10.013916016s, submitted: 99
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416645120 unmapped: 55902208 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:42.898758+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a3ada000/0x0/0x1bfc00000, data 0x33fd22f/0x3614000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:43.899005+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:44.899312+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4534560 data_alloc: 234881024 data_used: 24973312
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 55894016 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070707400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:45.899528+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a06eabd400 session 0x55a06fca4780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a070707400 session 0x55a070e005a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417710080 unmapped: 54837248 heap: 472547328 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:46.899699+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 439975936 unmapped: 39198720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 408 heartbeat osd_stat(store_statfs(0x1a2d9d000/0x0/0x1bfc00000, data 0x4134f5c/0x4350000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [0,0,0,0,0,1,0,0,2,6])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:47.899966+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 408 ms_handle_reset con 0x55a06ddea400 session 0x55a0708ede00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:48.900183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 409 ms_handle_reset con 0x55a06eabd400 session 0x55a06e6aad20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416997376 unmapped: 62177280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:49.900399+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811272 data_alloc: 234881024 data_used: 33505280
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06fabb000 session 0x55a07087fe00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417038336 unmapped: 62136320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1fbd000/0x0/0x1bfc00000, data 0x4f1187e/0x512f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:50.900602+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a070709000 session 0x55a071dd05a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddefc00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06ddefc00 session 0x55a07068c1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 ms_handle_reset con 0x55a06ddea400 session 0x55a070a2bc20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417054720 unmapped: 62119936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:51.900763+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417062912 unmapped: 62111744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:52.900980+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:53.901164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:54.901400+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1fbd000/0x0/0x1bfc00000, data 0x4f1187e/0x512f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4810976 data_alloc: 234881024 data_used: 33505280
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:55.901542+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 62103552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:56.901730+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591431618s of 14.673515320s, submitted: 91
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd474a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a070bf5c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a06e5b7860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 69304320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070707000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070707000 session 0x55a06dd48f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06fca6780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca61e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:57.901911+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a070b40f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a070a27c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075fe5800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a075fe5800 session 0x55a0709670e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409968640 unmapped: 69206016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:58.902172+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409968640 unmapped: 69206016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:59.902336+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4793558 data_alloc: 234881024 data_used: 33509376
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 409985024 unmapped: 69189632 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:00.902687+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fbb000/0x0/0x1bfc00000, data 0x4f133cd/0x5133000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:01.902847+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06ddea400 session 0x55a071dd14a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:02.903046+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd47a40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 410009600 unmapped: 69165056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:03.903199+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a06fabb000 session 0x55a0706be5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a070709000 session 0x55a06ec3bc20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411074560 unmapped: 68100096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:04.903371+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4795396 data_alloc: 234881024 data_used: 33509376
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 411074560 unmapped: 68100096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:05.903646+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412622848 unmapped: 66551808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:06.903849+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:07.904042+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:08.904219+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:09.904355+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886756 data_alloc: 251658240 data_used: 44552192
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:10.904547+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:11.904678+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:12.904838+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:13.904975+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:14.905216+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886756 data_alloc: 251658240 data_used: 44552192
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:15.905350+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1fba000/0x0/0x1bfc00000, data 0x4f133dd/0x5134000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417013760 unmapped: 62160896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:16.905532+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.954988480s of 20.212965012s, submitted: 12
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:17.905767+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:18.905997+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418545664 unmapped: 60628992 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:19.906209+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934062 data_alloc: 251658240 data_used: 50483200
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:20.906353+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:21.906546+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:22.906698+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:23.906926+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:24.907161+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934766 data_alloc: 251658240 data_used: 50483200
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:25.907302+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:26.907450+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:27.907587+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:28.907878+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.713869095s of 11.773273468s, submitted: 7
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:29.908044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934414 data_alloc: 251658240 data_used: 50483200
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:30.908189+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:31.908361+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 59039744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:32.908492+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420143104 unmapped: 59031552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:33.908634+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:34.908819+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934238 data_alloc: 251658240 data_used: 50483200
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:35.908994+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:36.909185+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:37.909315+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420208640 unmapped: 58966016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:38.909437+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a1d4c000/0x0/0x1bfc00000, data 0x51813dd/0x53a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420216832 unmapped: 58957824 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:39.909577+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4934238 data_alloc: 251658240 data_used: 50483200
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 58941440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:40.909729+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420495360 unmapped: 58679296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:41.909857+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 ms_handle_reset con 0x55a07036c800 session 0x55a070bf4d20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 420560896 unmapped: 58613760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:42.909992+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.115175247s of 14.167896271s, submitted: 6
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06ddea400 session 0x55a070e01e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06fabb000 session 0x55a06dd44b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a06eabd400 session 0x55a06f9f4f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 440082432 unmapped: 39092224 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:43.910182+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a07036c800 session 0x55a070e00780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070709000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 ms_handle_reset con 0x55a070709000 session 0x55a06dd55e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 47538176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:44.910402+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 handle_osd_map epochs [414,414], i have 412, src has [1,414]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 412 handle_osd_map epochs [413,414], i have 412, src has [1,414]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd86b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5240270 data_alloc: 268435456 data_used: 62140416
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f93f000/0x0/0x1bfc00000, data 0x75869ba/0x77ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 431718400 unmapped: 47456256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:45.910524+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06eabd400 session 0x55a070deed20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a06fabb000 session 0x55a070b41a40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 ms_handle_reset con 0x55a07036c800 session 0x55a0709ee1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432889856 unmapped: 46284800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:46.910668+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708dcc00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432889856 unmapped: 46284800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:47.910833+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432922624 unmapped: 46252032 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:48.910950+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0708dcc00 session 0x55a0706be5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:49.911080+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1d40000/0x0/0x1bfc00000, data 0x5188621/0x53ae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0706f0400 session 0x55a06da49e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 415 ms_handle_reset con 0x55a0706ef000 session 0x55a070e11860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4996972 data_alloc: 268435456 data_used: 62132224
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:50.911230+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432930816 unmapped: 46243840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:51.911366+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:52.911645+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 416 ms_handle_reset con 0x55a06ddea400 session 0x55a070def4a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1fab000/0x0/0x1bfc00000, data 0x4f1c16c/0x5142000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1fac000/0x0/0x1bfc00000, data 0x4f1c15c/0x5141000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:53.911777+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432955392 unmapped: 46219264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:54.912000+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.380544662s of 12.168486595s, submitted: 103
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4968539 data_alloc: 251658240 data_used: 59604992
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432963584 unmapped: 46211072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:55.912173+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 432988160 unmapped: 46186496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:56.912280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a06eabd400 session 0x55a06d1d6f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 54345728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:57.912400+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a47b6000/0x0/0x1bfc00000, data 0x3439890/0x365e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a0708df400 session 0x55a06de9b2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 54345728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:58.912509+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:59.912644+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469598 data_alloc: 218103808 data_used: 18591744
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca61e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:00.912848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:01.913000+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a57a5000/0x0/0x1bfc00000, data 0x2764890/0x2989000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:02.913201+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:03.913348+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:04.913529+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469598 data_alloc: 218103808 data_used: 18591744
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:05.913679+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:06.913811+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.185250282s of 11.454839706s, submitted: 82
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:07.913949+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:08.914090+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 57K writes, 226K keys, 57K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s
                                           Cumulative WAL: 57K writes, 20K syncs, 2.85 writes per sync, written: 0.21 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2950 writes, 10K keys, 2950 commit groups, 1.0 writes per commit group, ingest: 9.51 MB, 0.02 MB/s
                                           Interval WAL: 2950 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a06c3fb610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:09.914251+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:10.914380+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:11.914529+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:12.914653+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:13.914776+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:14.914989+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:15.915174+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:16.915298+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:17.915429+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:18.915578+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:19.915734+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:20.915896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:21.916031+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:22.916154+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:23.916306+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:24.916482+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:25.916679+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:26.916868+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:27.917019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:28.917192+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:29.917308+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:30.917499+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:31.917668+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:32.917805+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:33.917938+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:34.918521+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:35.919355+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:36.919518+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:37.920035+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:38.920450+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:39.920661+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473772 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:40.921414+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:41.921552+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.615318298s of 35.640926361s, submitted: 13
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:42.921784+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x27663de/0x298d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:43.922090+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a07087fe00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:44.922670+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479523 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:45.922810+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:46.922974+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:47.923113+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:48.923436+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:49.923557+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479523 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:50.923921+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:51.924086+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:52.924288+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:53.924488+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:54.924683+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.724076271s of 12.753091812s, submitted: 7
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5761000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a0707c4000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481350 data_alloc: 218103808 data_used: 18599936
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:55.924843+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:56.925004+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:57.925231+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:58.925377+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:59.925527+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 65404928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a6401/0x29ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706f0400 session 0x55a070a27e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482262 data_alloc: 218103808 data_used: 18870272
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:00.925636+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a070ac2f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:01.925797+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:02.926055+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x27a63de/0x29cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:03.926267+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06ddea400 session 0x55a070e101e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:04.926469+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.236925125s of 10.163439751s, submitted: 26
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480511 data_alloc: 218103808 data_used: 18862080
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:05.926674+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a5762000/0x0/0x1bfc00000, data 0x27a63cf/0x29cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:06.926839+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a06dd49e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:07.926966+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:08.927156+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:09.927310+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 65396736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a07085ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a06dd54b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474155 data_alloc: 218103808 data_used: 21745664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:10.927448+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:11.927560+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706f0400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706f0400 session 0x55a070e00000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:12.927710+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:13.927839+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:14.928019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06ddea400 session 0x55a06e7d0d20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.611102104s of 10.015885353s, submitted: 25
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:15.928151+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475892 data_alloc: 218103808 data_used: 21745664
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a57a2000/0x0/0x1bfc00000, data 0x27663cf/0x298c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:16.928265+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06eabd400 session 0x55a0708ec780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 63168512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:17.928423+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a06fabb000 session 0x55a07068d860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 ms_handle_reset con 0x55a0706ef000 session 0x55a06f9e4000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:18.928560+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:19.928667+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07036c800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a07036c800 session 0x55a070a2ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:20.928801+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539361 data_alloc: 218103808 data_used: 21753856
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:21.928959+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5009000/0x0/0x1bfc00000, data 0x2efc08a/0x3124000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:22.929082+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:23.929282+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:24.929516+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:25.929677+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539361 data_alloc: 218103808 data_used: 21753856
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5009000/0x0/0x1bfc00000, data 0x2efc08a/0x3124000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:26.929917+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:27.930074+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.243913651s of 12.955587387s, submitted: 41
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:28.930227+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06ddea400 session 0x55a070defa40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:29.930383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:30.930543+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541143 data_alloc: 218103808 data_used: 21753856
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:31.930676+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc0ec/0x3125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:32.930880+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:33.931062+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06eabd400 session 0x55a070966b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:34.931378+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416038912 unmapped: 63135744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:35.931596+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541900 data_alloc: 218103808 data_used: 21757952
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416047104 unmapped: 63127552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:36.931741+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc10f/0x3126000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:37.931890+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06fabb000 session 0x55a070e114a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.679343224s of 10.163791656s, submitted: 5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:38.932183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a0706ef000 session 0x55a070ac2000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416055296 unmapped: 63119360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:39.932490+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a5008000/0x0/0x1bfc00000, data 0x2efc0ec/0x3125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a072d03400 session 0x55a06d5623c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:40.932644+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4588443 data_alloc: 234881024 data_used: 28573696
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 ms_handle_reset con 0x55a06ddea400 session 0x55a06f9f5e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:41.933150+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a06eabd400 session 0x55a07015b2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:42.933495+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416063488 unmapped: 63111168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:43.933700+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a06fabb000 session 0x55a070ac3860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 66928640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:44.934436+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 66928640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579b000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:45.934626+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4489055 data_alloc: 218103808 data_used: 21762048
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 66912256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:46.934795+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412303360 unmapped: 66871296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:47.934983+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 66830336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.170576572s of 10.021444321s, submitted: 241
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:48.935144+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,2,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412327936 unmapped: 66846720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a0706ef000 session 0x55a070e101e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db91800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:49.935305+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 ms_handle_reset con 0x55a07db91800 session 0x55a06f7870e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,1,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 65814528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x2769cd5/0x2992000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [0,2] op hist [0,0,1,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:50.935466+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4488807 data_alloc: 218103808 data_used: 21762048
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 65781760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:51.935659+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd543c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 65798144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:52.935777+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:53.935929+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:54.936165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:55.936311+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4495387 data_alloc: 218103808 data_used: 21774336
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 65789952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06eabd400 session 0x55a070e01a40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:56.936502+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:57.936728+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f8000/0x0/0x1bfc00000, data 0x276b824/0x2996000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.733456612s of 10.062119484s, submitted: 194
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06fabb000 session 0x55a06e6aa3c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:58.936869+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a45f9000/0x0/0x1bfc00000, data 0x276b814/0x2995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:59.937017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413401088 unmapped: 65773568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a0706ef000 session 0x55a06e5b7e00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070891400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:00.937183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a070891400 session 0x55a070ba4d20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:01.937372+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:02.937524+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:03.937665+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:04.937864+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:05.938009+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:06.938153+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:07.938290+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:08.938395+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:09.938523+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:10.938662+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:11.938866+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:12.939003+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:13.939196+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:14.939376+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:15.939531+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4546365 data_alloc: 218103808 data_used: 21770240
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 65765376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x2dfe823/0x3029000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:16.939704+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:17.939830+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:18.939957+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 65757184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.165819168s of 20.921848297s, submitted: 22
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 ms_handle_reset con 0x55a06ddea400 session 0x55a06fca65a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:19.940076+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412450816 unmapped: 66723840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:20.940217+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547958 data_alloc: 218103808 data_used: 21774336
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 66330624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:21.940331+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:22.940468+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:23.940591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:24.940741+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:25.940900+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596438 data_alloc: 234881024 data_used: 28655616
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:26.941046+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:27.941180+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:28.941349+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:29.941488+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:30.941661+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596438 data_alloc: 234881024 data_used: 28655616
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415268864 unmapped: 63905792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:31.941825+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.558998108s of 12.588422775s, submitted: 5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a3f64000/0x0/0x1bfc00000, data 0x2dfe846/0x302a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:32.941982+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:33.942160+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:34.942383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418168832 unmapped: 61005824 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:35.942557+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4664224 data_alloc: 234881024 data_used: 28803072
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:36.942725+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:37.942935+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37e2000/0x0/0x1bfc00000, data 0x3580846/0x37ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:38.943160+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:39.943352+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:40.943503+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4664224 data_alloc: 234881024 data_used: 28803072
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:41.943633+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:42.943757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37df000/0x0/0x1bfc00000, data 0x3583846/0x37af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.099544525s of 10.999442101s, submitted: 52
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:43.943978+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a37df000/0x0/0x1bfc00000, data 0x3583846/0x37af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 422 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:44.944133+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06fabb000 session 0x55a071dd0780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:45.944275+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668876 data_alloc: 234881024 data_used: 28811264
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:46.944446+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:47.944672+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:48.944875+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:49.945034+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:50.945227+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668876 data_alloc: 234881024 data_used: 28811264
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:51.945375+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:52.945649+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:53.945838+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160892487s of 11.599664688s, submitted: 7
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:54.946014+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a072d03c00 session 0x55a070e103c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a0706ef000 session 0x55a07085b0e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585653/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:55.946192+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669648 data_alloc: 234881024 data_used: 28811264
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:56.946419+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:57.946591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:58.946733+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:59.946878+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:00.947075+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669648 data_alloc: 234881024 data_used: 28811264
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:01.947247+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:02.947432+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:03.947560+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:04.947709+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:05.947867+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.391080856s of 11.623667717s, submitted: 3
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670128 data_alloc: 234881024 data_used: 28868608
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:06.948001+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:07.948155+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:08.948324+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:09.948458+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:10.948623+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669952 data_alloc: 234881024 data_used: 28868608
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37da000/0x0/0x1bfc00000, data 0x3585663/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:11.948746+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:12.948868+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:13.948987+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419209216 unmapped: 59965440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:14.949165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,2])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 61759488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:15.949282+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4737360 data_alloc: 234881024 data_used: 29294592
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417415168 unmapped: 61759488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:16.949415+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:17.949673+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:18.949824+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:19.949971+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:20.950152+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742480 data_alloc: 234881024 data_used: 30220288
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:21.950331+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:22.950592+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:23.950902+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:24.951179+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:25.951428+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742480 data_alloc: 234881024 data_used: 30220288
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:26.951658+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:27.951951+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:28.952166+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a3319000/0x0/0x1bfc00000, data 0x3c70663/0x3c75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417447936 unmapped: 61726720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:29.952362+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.808780670s of 23.742141724s, submitted: 5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 57483264 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:30.952606+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419414016 unmapped: 59760640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:31.952837+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:32.953019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:33.953206+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:34.953419+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:35.953636+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:36.953815+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:37.953990+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:38.954207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06eabc400 session 0x55a07085af00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:39.954397+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:40.954553+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290309906s of 11.381837845s, submitted: 6
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4765360 data_alloc: 234881024 data_used: 31952896
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06ddea400 session 0x55a0709ee5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:41.954774+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:42.954990+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:43.955228+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:44.955535+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06eabc400 session 0x55a0706bed20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:45.955684+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419446784 unmapped: 59727872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a30ee000/0x0/0x1bfc00000, data 0x3e9b663/0x3ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4704432 data_alloc: 234881024 data_used: 31891456
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:46.955901+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a06fabb000 session 0x55a06e6aba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:47.956257+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:48.956462+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 ms_handle_reset con 0x55a0706ef000 session 0x55a06e6450e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a072d03c00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:49.956651+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:50.956822+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 62005248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4703691 data_alloc: 234881024 data_used: 31887360
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.943298340s of 10.327015877s, submitted: 15
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a37db000/0x0/0x1bfc00000, data 0x37b05f1/0x37b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:51.956952+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417193984 unmapped: 61980672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a072d03c00 session 0x55a06debb680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:52.957172+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417193984 unmapped: 61980672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:53.957441+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a06eabd400 session 0x55a07068d2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:54.957737+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:55.957885+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4695700 data_alloc: 234881024 data_used: 31891456
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:56.958087+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 424 ms_handle_reset con 0x55a06ddea400 session 0x55a06de9af00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x358727b/0x37b4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:57.958354+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:58.958590+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417210368 unmapped: 61964288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:59.958753+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 ms_handle_reset con 0x55a06eabc400 session 0x55a070ac30e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:00.958955+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4699921 data_alloc: 234881024 data_used: 31899648
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3588dba/0x37b7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:01.959207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.219140530s of 10.156544685s, submitted: 50
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:02.959424+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:03.959571+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 61939712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a37d8000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,8])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:04.959839+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 ms_handle_reset con 0x55a06fabb000 session 0x55a070e01860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:05.960022+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521128 data_alloc: 218103808 data_used: 21794816
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:06.960256+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a45f0000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a45f0000/0x0/0x1bfc00000, data 0x2770dab/0x299e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:07.960452+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0706ef000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:08.960652+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:09.960873+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 63381504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:10.961066+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 63373312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525302 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:11.961284+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.447560787s of 10.297828674s, submitted: 36
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a45ec000/0x0/0x1bfc00000, data 0x2772a58/0x29a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:12.973881+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 426 ms_handle_reset con 0x55a0706ef000 session 0x55a070a2a1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:13.974053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:14.974282+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:15.974471+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a45ec000/0x0/0x1bfc00000, data 0x2772a58/0x29a1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525302 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:16.974654+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:17.974888+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 63365120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:18.975173+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:19.975412+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:20.975623+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528276 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:21.975814+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:22.975995+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 63356928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:23.976165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 63348736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541569710s of 12.292300224s, submitted: 15
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a070deed20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:24.976337+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45ea000/0x0/0x1bfc00000, data 0x2774597/0x29a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a070e11680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:25.976500+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528196 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a06eb121e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:26.976692+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 63275008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:27.976916+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 63266816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x27745a7/0x29a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:28.977059+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 63258624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fabb000 session 0x55a070a3c1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fa1b000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a45e9000/0x0/0x1bfc00000, data 0x27745a7/0x29a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fa1b000 session 0x55a070a3c1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:29.977176+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 63250432 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a070e11680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:30.977314+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a070e01860
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:31.977464+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:32.977592+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:33.977813+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:34.978017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:35.978184+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 63209472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:36.978392+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:37.978574+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:38.978714+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:39.978867+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:40.979044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:41.979178+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:42.979307+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 63201280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:43.979456+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:44.979614+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:45.979880+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 63193088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:46.980021+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:47.980163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:48.980302+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a07068d2c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:49.980523+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 63184896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06fabb000 session 0x55a06debb680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070842800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a070842800 session 0x55a06e6450e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:50.980715+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627014 data_alloc: 218103808 data_used: 21803008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:51.980879+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06ddea400 session 0x55a06e6aba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:52.981149+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:53.981409+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 63176704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [1])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:54.981668+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 414916608 unmapped: 64258048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:55.981822+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713894 data_alloc: 234881024 data_used: 32583680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:56.981973+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:57.982153+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:58.982315+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:59.982443+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:00.982589+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:01.982771+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713894 data_alloc: 234881024 data_used: 32583680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:02.982955+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:03.983163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:04.983373+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 64028672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.079414368s of 40.885025024s, submitted: 18
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x34725a7/0x36a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,10,17,3])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:05.983530+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 59334656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:06.983668+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815398 data_alloc: 234881024 data_used: 33091584
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:07.983863+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:08.984008+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:09.984163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c42000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:10.984298+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:11.984450+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824148 data_alloc: 234881024 data_used: 32972800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:12.984613+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:13.984748+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c42000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419356672 unmapped: 59817984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:14.985019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabc400 session 0x55a0709ee5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc ms_handle_reset ms_handle_reset con 0x55a075699000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: get_auth_request con 0x55a070842800 auth_method 0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.103345871s of 10.333523750s, submitted: 100
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06eabd400 session 0x55a070a2ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:15.985187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:16.985391+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a06dce2800 session 0x55a070b41680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06fabb000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:17.985569+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:18.985706+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:19.985930+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:20.986199+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:21.986400+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:22.986546+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:23.986782+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:24.986964+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:25.987141+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:26.987327+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4818180 data_alloc: 234881024 data_used: 32956416
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:27.987468+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:28.987644+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c58000/0x0/0x1bfc00000, data 0x41055a7/0x4336000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:29.987768+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:30.987915+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.739142418s of 15.747713089s, submitted: 1
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a075699800 session 0x55a070ba63c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:31.988039+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816663 data_alloc: 234881024 data_used: 32956416
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a07db93000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:32.988179+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418881536 unmapped: 60293120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:33.988255+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:34.988420+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:35.988548+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:36.988711+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816183 data_alloc: 234881024 data_used: 33579008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:37.988897+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:38.989183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:39.989333+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:40.989467+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:41.989611+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816183 data_alloc: 234881024 data_used: 33579008
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:42.989772+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419086336 unmapped: 60088320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.251277924s of 12.339883804s, submitted: 4
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:43.989912+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:44.990095+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x41055ca/0x4337000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:45.990276+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:46.990435+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824999 data_alloc: 234881024 data_used: 34119680
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:47.990606+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:48.990774+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:49.990906+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:50.991089+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:51.991266+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4828535 data_alloc: 234881024 data_used: 34115584
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:52.991389+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:53.991506+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:54.991699+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:55.991839+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.064384460s of 12.121880531s, submitted: 26
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x41115ca/0x4343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:56.991951+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4836247 data_alloc: 234881024 data_used: 35917824
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a074fdf000
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:57.992098+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 ms_handle_reset con 0x55a074fdf000 session 0x55a070a2a5a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:58.992282+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _renew_subs
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:59.992438+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a0752483c0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a070b40f00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c40000/0x0/0x1bfc00000, data 0x41c8223/0x434d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:00.992562+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c3f000/0x0/0x1bfc00000, data 0x41c8295/0x434f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:01.992698+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4867651 data_alloc: 234881024 data_used: 35926016
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c35000/0x0/0x1bfc00000, data 0x427e295/0x4359000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:02.992853+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:03.993036+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419217408 unmapped: 59957248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:04.993272+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:05.993446+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:06.993601+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4867651 data_alloc: 234881024 data_used: 35926016
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a075248d20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:07.993781+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c35000/0x0/0x1bfc00000, data 0x427e295/0x4359000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a075699800 session 0x55a070b40b40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:08.993934+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a0708eb800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a0708eb800 session 0x55a070b401e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.019895554s of 13.079443932s, submitted: 13
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd44780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:09.994069+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419225600 unmapped: 59949056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:10.994195+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419241984 unmapped: 59932672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:11.994437+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4870129 data_alloc: 234881024 data_used: 35942400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:12.994591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:13.994777+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:14.994948+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:15.995183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:16.995324+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4870129 data_alloc: 234881024 data_used: 35942400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:17.995462+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:18.995669+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:19.995817+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2c34000/0x0/0x1bfc00000, data 0x427e2a5/0x435a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:20.995979+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:21.996180+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.991645813s of 13.058052063s, submitted: 1
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4887340 data_alloc: 234881024 data_used: 36724736
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419340288 unmapped: 59834368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:22.996330+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:23.996501+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:24.996658+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:25.996848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:26.997055+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889596 data_alloc: 234881024 data_used: 36868096
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:27.997315+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:28.997528+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:29.997709+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419373056 unmapped: 59801600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:30.997986+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:31.998184+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889628 data_alloc: 234881024 data_used: 36868096
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.439553261s of 10.471271515s, submitted: 10
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:32.998360+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x42c42a5/0x43a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:33.998563+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:34.998751+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:35.998911+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:36.999044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892634 data_alloc: 234881024 data_used: 36872192
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:37.999192+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:38.999380+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:39.999475+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:40.999629+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:41.999779+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4894314 data_alloc: 234881024 data_used: 37044224
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:42.999920+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:44.000091+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:45.000305+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:46.000460+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:47.000627+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be7000/0x0/0x1bfc00000, data 0x42c42a5/0x43a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4894314 data_alloc: 234881024 data_used: 37044224
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:48.000766+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a07085a780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.469573975s of 15.737050056s, submitted: 14
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a07068c1e0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a075699800 session 0x55a070defe00
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:49.000929+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be8000/0x0/0x1bfc00000, data 0x42c4295/0x43a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:50.001227+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:51.001412+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:52.001591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a2be8000/0x0/0x1bfc00000, data 0x42c4295/0x43a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892077 data_alloc: 234881024 data_used: 37048320
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a070843400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a070843400 session 0x55a06f786d20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419381248 unmapped: 59793408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:53.001776+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06ddea400 session 0x55a070e005a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabc400 session 0x55a06da485a0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:54.001906+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:55.002139+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 ms_handle_reset con 0x55a06eabd400 session 0x55a07068da40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:56.002359+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:57.002534+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4877579 data_alloc: 234881024 data_used: 36933632
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a2833000/0x0/0x1bfc00000, data 0x4115ed0/0x434a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:58.002726+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 ms_handle_reset con 0x55a07db93000 session 0x55a070ba8780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a075699800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.269157410s of 10.401799202s, submitted: 32
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:59.002862+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:00.003018+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 ms_handle_reset con 0x55a075699800 session 0x55a06dd54780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:01.003259+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:02.003409+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4876004 data_alloc: 234881024 data_used: 36929536
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419389440 unmapped: 59785216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:03.003531+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a2834000/0x0/0x1bfc00000, data 0x4115ead/0x4349000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 429 handle_osd_map epochs [430,430], i have 430, src has [1,430]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419397632 unmapped: 59777024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:04.003679+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419397632 unmapped: 59777024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06ddea400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06ddea400 session 0x55a06dd47a40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:05.003891+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabc400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06eabc400 session 0x55a075248780
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a2831000/0x0/0x1bfc00000, data 0x41179ec/0x434c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416833536 unmapped: 62341120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:06.004166+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416833536 unmapped: 62341120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:07.004294+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:08.004499+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:09.004665+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:10.004930+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:11.005277+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:12.005481+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:13.005696+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:14.005842+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06dde1400 session 0x55a070b41c20
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06eabd400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416841728 unmapped: 62332928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:15.006006+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:16.006207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:17.006368+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:18.006489+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:19.006688+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:20.006841+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:21.007054+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:22.007213+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416849920 unmapped: 62324736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:23.007417+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:24.007607+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:25.007779+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:26.007973+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:27.008184+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:28.008369+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:29.008523+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:30.008667+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416858112 unmapped: 62316544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:31.008875+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416866304 unmapped: 62308352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:32.009053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:33.009244+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:34.009370+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:35.009580+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:36.009722+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:37.009874+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:38.010150+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:39.010332+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416874496 unmapped: 62300160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:40.010468+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:41.010642+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:42.010804+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:43.010945+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:44.011159+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 62291968 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:45.011356+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 62283776 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:46.011547+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 62283776 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:47.011710+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416899072 unmapped: 62275584 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:48.011883+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:49.012075+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:50.012228+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:51.012541+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:52.012678+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:53.012811+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:54.012948+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:55.013164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:56.013333+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:57.013458+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:58.013645+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:59.013771+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:00.013950+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:01.014074+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:02.014229+0000)
Oct 02 13:47:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416907264 unmapped: 62267392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:03.014452+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416915456 unmapped: 62259200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:04.014620+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:05.014759+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:06.014897+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814393998' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:07.015053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 62251008 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:08.015237+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:09.015476+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:10.015653+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:11.015809+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416931840 unmapped: 62242816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:12.015965+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:13.016130+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:14.016318+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:15.016535+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:16.016692+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:17.016832+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:18.016977+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:19.017091+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416940032 unmapped: 62234624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:20.017239+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:21.017375+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:22.017582+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:23.017732+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:24.017896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:25.018073+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:26.018209+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:27.018399+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416956416 unmapped: 62218240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:28.018529+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:29.018665+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:30.018762+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416964608 unmapped: 62210048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:31.018863+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:32.019033+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:33.019160+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:34.019298+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416972800 unmapped: 62201856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:35.019522+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416980992 unmapped: 62193664 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:36.019791+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:37.019955+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:38.020285+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:39.020422+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:40.020551+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:41.020678+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:42.020826+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:43.021154+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416989184 unmapped: 62185472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:44.021341+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416997376 unmapped: 62177280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:45.021591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416825344 unmapped: 62349312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:46.021708+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416612352 unmapped: 62562304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:47.021832+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 62521344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:48.022000+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 62521344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:49.022148+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416137216 unmapped: 63037440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:50.022319+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416145408 unmapped: 63029248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:51.022453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416145408 unmapped: 63029248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:52.022597+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416145408 unmapped: 63029248 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:53.022742+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:54.022878+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:55.023085+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:56.023266+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:57.023438+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:58.023584+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:59.023729+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:00.023878+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416153600 unmapped: 63021056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:01.024000+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:02.024187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:03.024304+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:04.024426+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:05.024575+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:06.024697+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416169984 unmapped: 63004672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:07.024879+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416178176 unmapped: 62996480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:08.025017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416178176 unmapped: 62996480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:09.025266+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:10.025468+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:11.025683+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:12.025860+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:13.026047+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:14.026196+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416194560 unmapped: 62980096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:15.026504+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416202752 unmapped: 62971904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:16.026894+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416202752 unmapped: 62971904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:17.027035+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416202752 unmapped: 62971904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:18.027213+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416202752 unmapped: 62971904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:19.027330+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:20.027466+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:21.027605+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:22.027758+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:23.027913+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:24.028027+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416210944 unmapped: 62963712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:25.028160+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:26.028331+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:27.028481+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:28.028745+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:29.028942+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:30.029164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416219136 unmapped: 62955520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:31.029329+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:32.029486+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:33.029670+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:34.029902+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:35.030167+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:36.030372+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:37.030518+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:38.030710+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416235520 unmapped: 62939136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:39.030849+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416243712 unmapped: 62930944 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:40.031021+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416243712 unmapped: 62930944 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:41.031178+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:42.031320+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:43.031456+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:44.031600+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:45.031782+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:46.031933+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:47.032063+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416251904 unmapped: 62922752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:48.032222+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416260096 unmapped: 62914560 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:49.032402+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:50.032591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:51.032721+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:52.032804+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:53.032914+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:54.033012+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:55.033231+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416268288 unmapped: 62906368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:56.033375+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:57.033578+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:58.033742+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:59.033894+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:00.034084+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:01.034268+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:02.034447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416276480 unmapped: 62898176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:03.034581+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:04.034747+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:05.035001+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:06.035318+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:07.035505+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:08.035655+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:09.035811+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 59K writes, 231K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 59K writes, 21K syncs, 2.83 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1753 writes, 5167 keys, 1753 commit groups, 1.0 writes per commit group, ingest: 3.97 MB, 0.01 MB/s
                                           Interval WAL: 1753 writes, 742 syncs, 2.36 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:10.035943+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416284672 unmapped: 62889984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:11.036147+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416301056 unmapped: 62873600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:12.036303+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416301056 unmapped: 62873600 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:13.036436+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:14.036558+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:15.036723+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:16.036853+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:17.037015+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:18.037205+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:19.037324+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 62865408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:20.037478+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 62857216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:21.037603+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 62857216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:22.037754+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 62857216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:23.037893+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 62857216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:24.038052+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 62857216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:25.038259+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416325632 unmapped: 62849024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:26.038375+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416325632 unmapped: 62849024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:27.038504+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416342016 unmapped: 62832640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:28.038687+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416342016 unmapped: 62832640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:29.038835+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:30.039007+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:31.039193+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:32.039338+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:33.039463+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:34.039626+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 62824448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:35.039805+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416358400 unmapped: 62816256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:36.039983+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:37.040139+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:38.040302+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:39.040477+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:40.040772+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:41.041008+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:42.041252+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 62808064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:43.041417+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:44.041943+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:45.042159+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:46.043230+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:47.043577+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:48.044389+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:49.044577+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:50.045199+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416382976 unmapped: 62791680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:51.045695+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:52.046138+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:53.046361+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:54.046712+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:55.046880+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:56.047195+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416391168 unmapped: 62783488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:57.047402+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 62775296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:58.047616+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 62775296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:59.047772+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416407552 unmapped: 62767104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:00.048008+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:01.048143+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:02.048306+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:03.048465+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:04.048785+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:05.049013+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:06.049295+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 62758912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:07.049417+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 62742528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:08.049782+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:09.049940+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:10.050176+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:11.050307+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:12.050619+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:13.050778+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:14.051021+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:15.051189+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416440320 unmapped: 62734336 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:16.051455+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:17.052004+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:18.052364+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:19.053738+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:20.054053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:21.054185+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:22.054334+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416448512 unmapped: 62726144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:23.054477+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:24.054958+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:25.055138+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:26.055260+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:27.055571+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:28.055900+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:29.056207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:30.056501+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416464896 unmapped: 62709760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:31.056764+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416473088 unmapped: 62701568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:32.056981+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416473088 unmapped: 62701568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:33.057357+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416473088 unmapped: 62701568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:34.057595+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416473088 unmapped: 62701568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:35.057848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416481280 unmapped: 62693376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:36.058096+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416481280 unmapped: 62693376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:37.058379+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416481280 unmapped: 62693376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:38.058536+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416481280 unmapped: 62693376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:39.058718+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416489472 unmapped: 62685184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:40.058860+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:41.059065+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:42.059281+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:43.059481+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548368 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:44.059697+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d0000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:45.060014+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:46.060280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 286.179077148s of 288.188110352s, submitted: 37
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416505856 unmapped: 62668800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:47.060469+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416522240 unmapped: 62652416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:48.060677+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62636032 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:49.060844+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416579584 unmapped: 62595072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:50.061163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416620544 unmapped: 62554112 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:51.061422+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416628736 unmapped: 62545920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:52.061883+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:53.062486+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416636928 unmapped: 62537728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:54.062859+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 416653312 unmapped: 62521344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:55.063323+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 61497344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:56.063540+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:57.063722+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:58.063874+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:59.064044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:00.064388+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:01.064705+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:02.064964+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:03.065270+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:04.065465+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:05.065761+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:06.066012+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:07.066212+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:08.066561+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:09.066773+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:10.066989+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:11.067173+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:12.067374+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:13.067539+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:14.067747+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:15.067920+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:16.068138+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:17.068303+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 61472768 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:18.068447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417710080 unmapped: 61464576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:19.068591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 61456384 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:20.068763+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 61456384 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:21.068994+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 61456384 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:22.069163+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 61456384 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:23.069310+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 61456384 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:24.069437+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417726464 unmapped: 61448192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:25.069655+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417726464 unmapped: 61448192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:26.069832+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417726464 unmapped: 61448192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:27.069950+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417726464 unmapped: 61448192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:28.070164+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:29.070294+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:30.070427+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:31.070590+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:32.070746+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:33.070867+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:34.071029+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:35.071174+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 61440000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:36.071325+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:37.071480+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:38.071631+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:39.071750+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:40.071894+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:41.072030+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:42.072198+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417742848 unmapped: 61431808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:43.072317+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:44.072453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:45.072645+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:46.072802+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:47.072956+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:48.073146+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:49.073278+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:50.073428+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417751040 unmapped: 61423616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:51.073553+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:52.073747+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:53.073924+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:54.074284+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:55.074577+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:56.074836+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:57.075075+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:58.075283+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417759232 unmapped: 61415424 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:59.075462+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:00.075592+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:01.075775+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:02.075981+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:03.076202+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:04.076433+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:05.076677+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417767424 unmapped: 61407232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:06.076864+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417775616 unmapped: 61399040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:07.076999+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417775616 unmapped: 61399040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:08.077171+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417775616 unmapped: 61399040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:09.077368+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:10.077558+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:11.077711+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:12.077935+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:13.078127+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:14.078365+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417783808 unmapped: 61390848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:15.078701+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:16.078866+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:17.079010+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:18.079224+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:19.079438+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:20.079740+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:21.079986+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:22.080202+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417792000 unmapped: 61382656 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:23.080339+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417800192 unmapped: 61374464 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:24.080474+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417800192 unmapped: 61374464 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:25.080702+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:26.080898+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:27.081052+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:28.081175+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:29.081280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:30.081455+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:31.081537+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:32.081677+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:33.081813+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:34.082001+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:35.082212+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:36.082385+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:37.082564+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:38.082783+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417808384 unmapped: 61366272 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:39.083204+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417816576 unmapped: 61358080 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:40.083365+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417816576 unmapped: 61358080 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:41.083515+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:42.083673+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:43.083794+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:44.083952+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:45.084608+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:46.084809+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:47.084944+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:48.085053+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417824768 unmapped: 61349888 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:49.085227+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:50.085401+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:51.085537+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:52.085733+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:53.085885+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:54.086037+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417832960 unmapped: 61341696 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:55.086195+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417841152 unmapped: 61333504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:56.086330+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417841152 unmapped: 61333504 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:57.086512+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:58.086655+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:59.086844+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:00.088542+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:01.089518+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:02.089861+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417849344 unmapped: 61325312 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:03.090269+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:04.090678+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:05.090915+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:06.091240+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:07.091447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:08.091702+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:09.091939+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:10.092218+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417857536 unmapped: 61317120 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:11.092467+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417865728 unmapped: 61308928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:12.092642+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417865728 unmapped: 61308928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:13.092838+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417865728 unmapped: 61308928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:14.093193+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417865728 unmapped: 61308928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:15.093382+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417865728 unmapped: 61308928 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:16.093631+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417873920 unmapped: 61300736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:17.093841+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417873920 unmapped: 61300736 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:18.094066+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417882112 unmapped: 61292544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:19.094232+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417882112 unmapped: 61292544 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:20.094446+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:21.094636+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:22.094848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:23.095005+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:24.095211+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:25.095545+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:26.095808+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417890304 unmapped: 61284352 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:27.096071+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:28.096355+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:29.096584+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:30.096886+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:31.097056+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:32.097209+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:33.097385+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:34.097537+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417898496 unmapped: 61276160 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:35.097744+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417914880 unmapped: 61259776 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:36.097941+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417923072 unmapped: 61251584 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:37.098243+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:38.098469+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:39.098699+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:40.098846+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:41.099009+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:42.099183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417931264 unmapped: 61243392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:43.099294+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:44.099416+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:45.099620+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:46.099779+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:47.099890+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:48.100042+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:49.100366+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:50.100492+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417939456 unmapped: 61235200 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:51.100698+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417955840 unmapped: 61218816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:52.100858+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417955840 unmapped: 61218816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:53.100997+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417955840 unmapped: 61218816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:54.101147+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417955840 unmapped: 61218816 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:55.101331+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417964032 unmapped: 61210624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:56.101472+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417964032 unmapped: 61210624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:57.101613+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417964032 unmapped: 61210624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:58.101755+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417964032 unmapped: 61210624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:59.101919+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417964032 unmapped: 61210624 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:00.102133+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417972224 unmapped: 61202432 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:01.102291+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:02.102477+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:03.102617+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:04.103241+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:05.104213+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:06.104647+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417980416 unmapped: 61194240 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:07.105062+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417988608 unmapped: 61186048 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:08.105338+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:09.105665+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:10.106052+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:11.106380+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:12.106500+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:13.106744+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:14.106955+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:15.107280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 417996800 unmapped: 61177856 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:16.107577+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418004992 unmapped: 61169664 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:17.107851+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:18.108056+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:19.108243+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:20.108493+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:21.108749+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:22.108924+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418013184 unmapped: 61161472 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:23.109122+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418021376 unmapped: 61153280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:24.109260+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418021376 unmapped: 61153280 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:25.111731+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:26.116671+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:27.116922+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:28.117208+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:29.117415+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:30.117546+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:31.117700+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:32.117831+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418029568 unmapped: 61145088 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:33.117975+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:34.118153+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:35.118340+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:36.118461+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:37.118566+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:38.118734+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:39.118874+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418037760 unmapped: 61136896 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:40.119038+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:41.119187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:42.119337+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:43.119591+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:44.119757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:45.119930+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418045952 unmapped: 61128704 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:46.120250+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418054144 unmapped: 61120512 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:47.120385+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418062336 unmapped: 61112320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:48.120561+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418062336 unmapped: 61112320 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:49.120739+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:50.120949+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:51.121096+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:52.121353+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:53.121505+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:54.121666+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418078720 unmapped: 61095936 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:55.121921+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418086912 unmapped: 61087744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:56.122165+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418086912 unmapped: 61087744 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:57.122359+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:58.122559+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:59.122732+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:00.122859+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:01.123001+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:02.123222+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418095104 unmapped: 61079552 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:03.123426+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418103296 unmapped: 61071360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:04.123578+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418103296 unmapped: 61071360 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:05.123739+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:06.123906+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:07.124023+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:08.124231+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:09.124598+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:10.124841+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418111488 unmapped: 61063168 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:11.125139+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:12.125403+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:13.125670+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:14.125949+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:15.126156+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:16.126400+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:17.126645+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:18.126865+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418119680 unmapped: 61054976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:19.127081+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 61046784 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:20.127257+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418127872 unmapped: 61046784 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:21.127578+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:22.127875+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:23.128019+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:24.128183+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:25.128360+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:26.128567+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:27.128854+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:28.129068+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418144256 unmapped: 61030400 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:29.129311+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:30.129534+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:31.129737+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:32.129984+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:33.130239+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:34.130483+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418152448 unmapped: 61022208 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:35.130757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:36.131252+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:37.131514+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:38.131717+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:39.131863+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:40.132017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:41.132859+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418160640 unmapped: 61014016 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:42.133612+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418168832 unmapped: 61005824 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:43.134259+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418177024 unmapped: 60997632 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:44.134921+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418177024 unmapped: 60997632 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:45.135497+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:46.135896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:47.136176+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:48.136424+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:49.136546+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:50.136717+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 60989440 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:51.136873+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:52.137277+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:53.137748+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:54.138224+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:55.138596+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:56.138977+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:57.139566+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:58.140068+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418201600 unmapped: 60973056 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:59.140487+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 60964864 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:00.140769+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 60964864 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:01.141180+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 60964864 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:02.141598+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 60964864 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:03.141765+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 60964864 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:04.142093+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:05.142397+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:06.142725+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:07.143032+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:08.143178+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418217984 unmapped: 60956672 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:09.143484+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:10.143726+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:11.143841+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:12.143988+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:13.144296+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:14.144476+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418226176 unmapped: 60948480 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:15.145150+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:16.145307+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418234368 unmapped: 60940288 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:17.145454+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:18.145587+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:19.145707+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:20.145913+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:21.146080+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:22.146250+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418242560 unmapped: 60932096 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:23.146809+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:24.146939+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 60923904 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:25.147161+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418258944 unmapped: 60915712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:26.147300+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418258944 unmapped: 60915712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:27.147410+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418258944 unmapped: 60915712 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:28.147565+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418267136 unmapped: 60907520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:29.147642+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418267136 unmapped: 60907520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:30.147764+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418267136 unmapped: 60907520 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:31.147881+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:32.148024+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:33.148152+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:34.148311+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:35.148534+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:36.148674+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:37.148811+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:38.148977+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418275328 unmapped: 60899328 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:39.149201+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418283520 unmapped: 60891136 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:40.149364+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:41.149550+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:42.149751+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:43.150014+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:44.150186+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:45.150395+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:46.150707+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418299904 unmapped: 60874752 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:47.150947+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418308096 unmapped: 60866560 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:48.151211+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:49.151494+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:50.151779+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:51.152161+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:52.153806+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:53.154384+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:54.154714+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418316288 unmapped: 60858368 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:55.155173+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418324480 unmapped: 60850176 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:56.155441+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:57.155862+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:58.156037+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:59.157517+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:00.157655+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:01.157975+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:02.158277+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:03.158644+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 60841984 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:04.159303+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:05.159664+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:06.159968+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:07.160175+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:08.160613+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:09.160928+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:10.161188+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 60833792 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:11.161440+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:12.161797+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:13.162044+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:14.162245+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:15.162507+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:16.162948+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:17.163173+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:18.163796+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418357248 unmapped: 60817408 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:19.164208+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418365440 unmapped: 60809216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:20.164342+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418365440 unmapped: 60809216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:21.164620+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418365440 unmapped: 60809216 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:22.164896+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418373632 unmapped: 60801024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:23.165090+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418373632 unmapped: 60801024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:24.165277+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418373632 unmapped: 60801024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:25.165450+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418373632 unmapped: 60801024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:26.165590+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418373632 unmapped: 60801024 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:27.165913+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418390016 unmapped: 60784640 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:28.166330+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:29.166583+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:30.166825+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:31.167174+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:32.167326+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:33.167499+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:34.167736+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:35.168012+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418398208 unmapped: 60776448 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:36.168153+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418406400 unmapped: 60768256 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T13:43:37.168418+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _finish_auth 0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:37.169593+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 60760064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:38.168705+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 60760064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:39.168932+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 60760064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:40.169207+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 60760064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:41.169429+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 60760064 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:42.169695+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418422784 unmapped: 60751872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:43.169848+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418422784 unmapped: 60751872 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:44.170085+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418430976 unmapped: 60743680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:45.170320+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418430976 unmapped: 60743680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:46.170488+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418430976 unmapped: 60743680 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:47.170634+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418439168 unmapped: 60735488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:48.170888+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418439168 unmapped: 60735488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:49.171187+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418439168 unmapped: 60735488 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:50.171599+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418447360 unmapped: 60727296 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:51.171969+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:52.172429+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:53.172658+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:54.173092+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:55.173537+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:56.173901+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:57.174544+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:58.175057+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418455552 unmapped: 60719104 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:59.175436+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:00.175697+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418463744 unmapped: 60710912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:01.175974+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418463744 unmapped: 60710912 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:02.176283+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418471936 unmapped: 60702720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:03.176561+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418471936 unmapped: 60702720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:04.176757+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418471936 unmapped: 60702720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:05.177174+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418471936 unmapped: 60702720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:06.177442+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418471936 unmapped: 60702720 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:07.177676+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:08.177948+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:09.178206+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:10.178388+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:11.178614+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:12.178898+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:13.179138+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:14.179413+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418480128 unmapped: 60694528 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:15.179697+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418496512 unmapped: 60678144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:16.179979+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418496512 unmapped: 60678144 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:17.180193+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:18.180461+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:19.180611+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:20.180793+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:21.180951+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:22.181244+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418504704 unmapped: 60669952 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:23.181503+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418512896 unmapped: 60661760 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:24.181656+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418521088 unmapped: 60653568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:25.181920+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418521088 unmapped: 60653568 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:26.182191+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418529280 unmapped: 60645376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:27.182358+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418529280 unmapped: 60645376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:28.182573+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418529280 unmapped: 60645376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:29.182816+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418529280 unmapped: 60645376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:30.183008+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418529280 unmapped: 60645376 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:31.183225+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418537472 unmapped: 60637184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:32.183460+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418537472 unmapped: 60637184 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:33.183690+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:34.183861+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:35.184156+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:36.184280+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:37.184432+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:38.184550+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60620800 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:39.184694+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:40.184882+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:41.185032+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:42.185232+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:43.185364+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:44.185553+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:45.185732+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:46.185922+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418562048 unmapped: 60612608 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:47.186189+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 60604416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:48.186445+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 60604416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:49.186646+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 60604416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:50.186845+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 60604416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:51.187054+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418570240 unmapped: 60604416 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:52.187216+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418578432 unmapped: 60596224 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:53.187392+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418578432 unmapped: 60596224 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:54.187570+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418578432 unmapped: 60596224 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:55.187739+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418586624 unmapped: 60588032 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:56.187884+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418586624 unmapped: 60588032 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:57.188051+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:58.188210+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:59.188416+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:00.188615+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:01.188806+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:02.188949+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418594816 unmapped: 60579840 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:03.189240+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 60563456 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:04.189405+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418611200 unmapped: 60563456 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:05.189645+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:06.189803+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:07.189973+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:08.190224+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:09.190381+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 60K writes, 232K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.82 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 701 writes, 1075 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
                                           Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:10.190546+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418627584 unmapped: 60547072 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:11.190683+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:12.190854+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:13.191023+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:14.191169+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:15.191400+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc ms_handle_reset ms_handle_reset con 0x55a070842800
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: get_auth_request con 0x55a0708eb800 auth_method 0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:16.191524+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:17.191684+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 ms_handle_reset con 0x55a06fabb000 session 0x55a06de9ba40
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: handle_auth_request added challenge on 0x55a06dde1400
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:18.191865+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 60538880 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:19.192017+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:20.192209+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:21.192314+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:22.192489+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:23.192639+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:24.192752+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418652160 unmapped: 60522496 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:25.192928+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418660352 unmapped: 60514304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:26.193084+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418660352 unmapped: 60514304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:27.193305+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418660352 unmapped: 60514304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:28.193478+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418660352 unmapped: 60514304 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:29.193682+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:30.193854+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:31.194011+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:32.194170+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:33.194343+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:34.194518+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:35.194707+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:36.194840+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418676736 unmapped: 60497920 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:37.194986+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:38.195158+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:39.195275+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:40.195416+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:41.195516+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:42.195674+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:43.195814+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:44.196026+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418684928 unmapped: 60489728 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:45.196169+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418693120 unmapped: 60481536 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:46.196261+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418693120 unmapped: 60481536 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:47.196403+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418701312 unmapped: 60473344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:48.196540+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418701312 unmapped: 60473344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:49.196669+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418701312 unmapped: 60473344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:50.196813+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418701312 unmapped: 60473344 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:51.196944+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:52.197142+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:53.197275+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:54.197442+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418709504 unmapped: 60465152 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:55.197639+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:56.197790+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:57.197958+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:58.198142+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:59.198313+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:00.198467+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 60456960 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:01.198728+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:02.198886+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:03.199047+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:04.199167+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:05.199383+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:06.199510+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418734080 unmapped: 60440576 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:07.199640+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:08.199816+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:09.199955+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:10.200142+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:11.200281+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:12.200424+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:13.200573+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:14.200744+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 60424192 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:15.200997+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 60416000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:16.201248+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 60416000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:17.201454+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 60416000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:18.201587+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:19.201742+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 60407808 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:20.201913+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 60399616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:21.202057+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 60399616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:22.202278+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 60399616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:23.202447+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 60399616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:24.202626+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 60399616 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:25.202939+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:26.203149+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:27.203338+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:28.203513+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:29.203700+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:30.203959+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 60383232 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:31.204157+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:32.204309+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:33.204453+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:34.204616+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:35.204888+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:36.205003+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:37.205237+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:38.205401+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 60375040 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:39.205530+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 60366848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:40.205690+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 60366848 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a41d1000/0x0/0x1bfc00000, data 0x27799dc/0x29ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [0,2] op hist [])
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:14 compute-0 ceph-osd[84115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore.MempoolThread(0x55a06c4d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547488 data_alloc: 218103808 data_used: 19402752
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:41.205843+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 60416000 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:42.206047+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 60219392 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: tick
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_tickets
Oct 02 13:47:14 compute-0 ceph-osd[84115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:43.206242+0000)
Oct 02 13:47:14 compute-0 ceph-osd[84115]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 594.258422852s of 596.282348633s, submitted: 354
Oct 02 13:47:14 compute-0 ceph-osd[84115]: prioritycache tune_memory target: 4294967296 mapped: 419143680 unmapped: 60030976 heap: 479174656 old mem: 2845415832 new mem: 2845415832
Oct 02 13:47:14 compute-0 ceph-osd[84115]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47546 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48298 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.48220 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.47489 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.48235 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.38445 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.47504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.48244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.38460 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3678720938' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.47519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.38469 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3783132319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/325351810' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3968431496' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.47531 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.48280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.38487 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/531916238' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2814393998' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3951671801' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: from='client.47546 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:47:14 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444893857' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47567 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:14 compute-0 nova_compute[256940]: 2025-10-02 13:47:14.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:14 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48316 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38514 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:47:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895258866' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:15 compute-0 nova_compute[256940]: 2025-10-02 13:47:15.211 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38529 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:15 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:47:15 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.48298 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.48304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1444893857' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.47567 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.48316 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.38514 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2664765077' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3895258866' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1067755395' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mon[73668]: pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 13:47:15 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1775043447' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:15.621+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:15 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:47:15 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684971148' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48349 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:15 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:15.691+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:15 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38541 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 crontab[446644]: (root) LIST (root)
Oct 02 13:47:16 compute-0 nova_compute[256940]: 2025-10-02 13:47:16.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:16 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38559 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:47:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715167348' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.38529 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.47591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/684971148' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1739021306' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.48349 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2071889290' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.38541 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1965547968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3440469174' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4037444600' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1088956816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.38559 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2969688570' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3474037624' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3042551414' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1715167348' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:16 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:47:16 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668710871' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.001000026s ======
Oct 02 13:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:17.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 02 13:47:17 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38589 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-0-unmtoh[73957]: 2025-10-02T13:47:17.223+0000 7f9ff4535640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:17 compute-0 ceph-mgr[73961]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 13:47:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:47:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469539752' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:17 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:17 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:17 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:17 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:47:17 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224356891' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3164670167' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2672268410' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2775357721' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3946441983' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2668710871' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1460811360' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3676004763' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.38589 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2540075283' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3469539752' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3512761745' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/868572869' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2457636684' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2370018337' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:17 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1224356891' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650286015' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73184818' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 podman[446960]: 2025-10-02 13:47:18.419372634 +0000 UTC m=+0.078158496 container health_status 23ddc5e305e050b3fc3f961f27eebac3610ca43db13f8a193a70fa57a2a03125 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 13:47:18 compute-0 podman[446959]: 2025-10-02 13:47:18.419469116 +0000 UTC m=+0.078871754 container health_status 1f8f04dd37e8ba06fca42b260cb81a951a6b4e22748de9826c6489947b1c3718 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 13:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 13:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061298051' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48481 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017930756' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47735 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1780767086' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3501681092' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2065031953' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2951718361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1650286015' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4160799232' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3500533200' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/73184818' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3340405755' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1209891436' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4061298051' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1571223080' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4017930756' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/4036422266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:47:18 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880924643' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:18 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48496 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220108605' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47744 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:19.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:19 compute-0 nova_compute[256940]: 2025-10-02 13:47:19.212 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:19 compute-0 nova_compute[256940]: 2025-10-02 13:47:19.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:47:19 compute-0 nova_compute[256940]: 2025-10-02 13:47:19.212 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:47:19 compute-0 nova_compute[256940]: 2025-10-02 13:47:19.265 2 DEBUG nova.compute.manager [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48511 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758411455' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47759 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218426517' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:19 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:19 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48532 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.48481 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.47735 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.48502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3880924643' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.48496 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3220108605' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.47744 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.47750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.48511 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2758411455' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.47759 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2218426517' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2948375085' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47783 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:47:19 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025455943' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-0 nova_compute[256940]: 2025-10-02 13:47:19.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:19 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48547 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 systemd[1]: Started Hostname Service.
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47795 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38697 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:47:20 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707473037' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48559 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38709 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38715 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.48532 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2948375085' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.47783 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3856099125' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1025455943' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3080644985' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.48547 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.47795 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.38697 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2042272111' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/707473037' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.48559 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2646368920' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3100741757' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:47:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:47:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:47:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48577 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47819 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.002000052s ======
Oct 02 13:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:21.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38727 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 nova_compute[256940]: 2025-10-02 13:47:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47834 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38745 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:21 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:21 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:21.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:21 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:47:21 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386771092' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.47807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.38709 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.38715 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.48577 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/791524135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.47819 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2931492052' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.38727 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.48592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2251973729' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.47834 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2386771092' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38760 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:47:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2592065937' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48646 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47888 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:47:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634751391' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38784 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.38745 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/889082410' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.38760 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2592065937' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1550219910' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.48646 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.38772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.47888 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1634751391' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4290154286' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:22 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:47:22 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695843270' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:23.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:23 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 0 B/s wr, 205 op/s
Oct 02 13:47:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:47:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4108506579' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:23 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:23 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:23.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:23 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:47:23 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913213451' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.38784 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/500212777' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1695843270' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/803408113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3923046428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 0 B/s wr, 205 op/s
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/4108506579' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/3015238990' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1913213451' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38844 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:47:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/863202803' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48688 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 13:47:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235792891' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47936 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2522795157' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.38844 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/863202803' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.48688 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1235792891' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3111663973' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:24 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:47:24 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906699497' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:24 compute-0 nova_compute[256940]: 2025-10-02 13:47:24.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:25 compute-0 nova_compute[256940]: 2025-10-02 13:47:25.259 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:25 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Oct 02 13:47:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:47:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016534467' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:25 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:25 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:25 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48721 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:25 compute-0 sudo[447875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:25 compute-0 sudo[447875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-0 sudo[447875]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-0 sudo[447921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:25 compute-0 sudo[447921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-0 sudo[447921]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47963 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:25 compute-0 sudo[447954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:25 compute-0 sudo[447954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-0 sudo[447954]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-0 sudo[447983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:47:25 compute-0 sudo[447983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:47:25 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501585160' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.47936 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2906699497' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2935842323' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/3319394505' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/712644003' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3016534467' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:25 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/1501585160' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47975 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 nova_compute[256940]: 2025-10-02 13:47:26.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48736 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 sudo[447983]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-0 sudo[448099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:26 compute-0 sudo[448099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-0 sudo[448099]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-0 sudo[448143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:26 compute-0 sudo[448143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-0 sudo[448143]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.47984 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 sudo[448172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:26 compute-0 sudo[448172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-0 sudo[448172]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:47:26.548 158104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:47:26.548 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:26 compute-0 ovn_metadata_agent[158078]: 2025-10-02 13:47:26.548 158104 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:26 compute-0 sudo[448203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 13:47:26 compute-0 sudo[448203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48742 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839049197' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 sudo[448203]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 69fb28a7-8d9b-4800-8168-3921c36a64e1 does not exist
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 8a165f9a-8008-477a-8030-260784641198 does not exist
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: [progress WARNING root] complete: ev 1fa89e28-502e-4c3f-b4bf-d35c52d7c434 does not exist
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:47:26 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.48721 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.47963 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2724286883' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/1164119169' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.47975 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.48736 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/3839049197' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:47:26 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:47:26 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38898 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:26 compute-0 sudo[448305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:26 compute-0 sudo[448305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-0 sudo[448305]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:27 compute-0 sudo[448336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:27 compute-0 sudo[448336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:27 compute-0 sudo[448336]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:27 compute-0 sudo[448361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:27 compute-0 sudo[448361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:27 compute-0 sudo[448361]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:27 compute-0 sudo[448386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 13:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:27 compute-0 sudo[448386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663022645' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:27 compute-0 nova_compute[256940]: 2025-10-02 13:47:27.258 2 DEBUG oslo_service.periodic_task [None req-468c7856-5ab4-413d-9914-7d0a889ffe73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:27 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 120 KiB/s rd, 0 B/s wr, 199 op/s
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.513574806 +0000 UTC m=+0.042490506 container create 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 13:47:27 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:27 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:27 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:27 compute-0 systemd[1]: Started libpod-conmon-75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762.scope.
Oct 02 13:47:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38910 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.496176397 +0000 UTC m=+0.025092097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.608758309 +0000 UTC m=+0.137674019 container init 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.615598675 +0000 UTC m=+0.144514395 container start 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.618965222 +0000 UTC m=+0.147880942 container attach 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 13:47:27 compute-0 stoic_chatelet[448512]: 167 167
Oct 02 13:47:27 compute-0 systemd[1]: libpod-75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762.scope: Deactivated successfully.
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.621576079 +0000 UTC m=+0.150491799 container died 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54a19b2a5be0ba251eb7362414bdceb666a0e9233db8dbdca83996f97f8c578-merged.mount: Deactivated successfully.
Oct 02 13:47:27 compute-0 podman[448486]: 2025-10-02 13:47:27.667851142 +0000 UTC m=+0.196766852 container remove 75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 13:47:27 compute-0 systemd[1]: libpod-conmon-75027f3d5ebde5ab514c633cc79bcb15a579048f32a8830a1f528f92e09f2762.scope: Deactivated successfully.
Oct 02 13:47:27 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48766 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:27 compute-0 podman[448579]: 2025-10-02 13:47:27.818563327 +0000 UTC m=+0.034966713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:47:27 compute-0 podman[448579]: 2025-10-02 13:47:27.917778854 +0000 UTC m=+0.134182260 container create a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.47984 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.48742 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.38898 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/359332251' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/2663022645' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2174606874' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 120 KiB/s rd, 0 B/s wr, 199 op/s
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2634325384' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/2344641372' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:47:27 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:47:27 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170762306' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:47:27 compute-0 systemd[1]: Started libpod-conmon-a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803.scope.
Oct 02 13:47:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 13:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 13:47:28 compute-0 podman[448579]: 2025-10-02 13:47:28.014285121 +0000 UTC m=+0.230688517 container init a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 13:47:28 compute-0 podman[448579]: 2025-10-02 13:47:28.020795779 +0000 UTC m=+0.237199165 container start a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:47:28 compute-0 podman[448579]: 2025-10-02 13:47:28.0251254 +0000 UTC m=+0.241528806 container attach a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48026 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48772 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38928 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48032 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161596466126931 of space, bias 1.0, pg target 0.6484789398380792 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.38937 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 naughty_nobel[448620]: --> passed data devices: 0 physical, 1 LVM
Oct 02 13:47:28 compute-0 naughty_nobel[448620]: --> relative data size: 1.0
Oct 02 13:47:28 compute-0 naughty_nobel[448620]: --> All data devices are unavailable
Oct 02 13:47:28 compute-0 systemd[1]: libpod-a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803.scope: Deactivated successfully.
Oct 02 13:47:28 compute-0 podman[448579]: 2025-10-02 13:47:28.839153671 +0000 UTC m=+1.055557057 container died a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-177f332bac094a2323db93d08bc039b00ecdd3b18bb762c8f5c98675a86cd275-merged.mount: Deactivated successfully.
Oct 02 13:47:28 compute-0 podman[448579]: 2025-10-02 13:47:28.912549363 +0000 UTC m=+1.128952749 container remove a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:47:28 compute-0 systemd[1]: libpod-conmon-a520d6ec3b800232bc2750cf178d8656744b3ddcf1e47394dfe9d09de8200803.scope: Deactivated successfully.
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.38910 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.48766 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.100:0/4170762306' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.48026 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.48772 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.38928 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.48032 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/2908794196' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.101:0/91705223' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:47:28 compute-0 ceph-mon[73668]: from='client.? 192.168.122.102:0/1677529663' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:47:28 compute-0 sudo[448386]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Optimize plan auto_2025-10-02_13:47:28
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] do_upmap
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 13:47:28 compute-0 ceph-mgr[73961]: [balancer INFO root] prepared 0/10 changes
Oct 02 13:47:29 compute-0 sudo[448882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:29 compute-0 sudo[448882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:29 compute-0 sudo[448882]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:29 compute-0 sudo[448921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:29 compute-0 sudo[448921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:29 compute-0 sudo[448921]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:47:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651399609' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:47:29 compute-0 sudo[448954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:29 compute-0 sudo[448954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:29 compute-0 sudo[448954]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48799 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:29 compute-0 sudo[449000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 13:47:29 compute-0 sudo[449000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:29 compute-0 ceph-mgr[73961]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Oct 02 13:47:29 compute-0 radosgw[92108]: ====== starting new request req=0x7fc55654c6f0 =====
Oct 02 13:47:29 compute-0 radosgw[92108]: ====== req done req=0x7fc55654c6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:29 compute-0 radosgw[92108]: beast: 0x7fc55654c6f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:29 compute-0 ceph-mon[73668]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 13:47:29 compute-0 ceph-mon[73668]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532563355' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:47:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48059 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:29 compute-0 ceph-mgr[73961]: log_channel(audit) log [DBG] : from='client.48808 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
